Unlocking the promise of edge offload with 5G requires active validation and assurance
15/11/2022

Edge offload is a key technique that will enable the delivery of a new class of high-performance applications. But it also introduces significant challenges in terms of service validation and assurance – to meet demanding SLAs, for example. How can you meet this challenge head-on?

One of the key new services that the 5G architecture unlocks is commonly known as URLLC – Ultra Reliable Low Latency Communications. URLLC will be the foundation of a range of new services that depend on low latency and is enabled by the Control and User Plane Separation (CUPS) that was introduced by 3GPP in its Release 14 documents.

Essentially, CUPS supports a decomposed architecture in which user plane processing functions can be moved towards the edge of the network (or, indeed, anywhere they are needed), thereby reducing the distance between the source of demand for such capabilities and the entities that provide them, and enabling the low-latency performance that URLLC services will need.
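
To make the latency argument concrete, here is a minimal back-of-the-envelope sketch in Python. The figures are assumptions for illustration only: roughly 5 microseconds of one-way propagation per kilometre of fibre, plus a nominal 1 ms of fixed radio and processing overhead.

```python
# Back-of-the-envelope round-trip latency estimate for user plane placement.
# Figures are illustrative assumptions, not measurements: light in fibre
# travels at roughly 200,000 km/s, i.e. about 5 microseconds per km one way.

PROPAGATION_US_PER_KM = 5.0    # assumed one-way fibre delay per kilometre
FIXED_OVERHEAD_MS = 1.0        # assumed radio access + processing overhead (round trip)

def round_trip_ms(distance_km: float) -> float:
    """Estimate round-trip latency for a user plane function at the given fibre distance."""
    propagation_ms = 2 * distance_km * PROPAGATION_US_PER_KM / 1000.0
    return FIXED_OVERHEAD_MS + propagation_ms

for label, km in [("central data centre", 400), ("regional site", 80), ("edge site", 5)]:
    print(f"{label:>19} ({km:>3} km): ~{round_trip_ms(km):.2f} ms RTT")
```

Even with these rough numbers, moving the user plane function from a distant data centre to a nearby edge site removes most of the propagation component of the round trip.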

The need for such capabilities has been apparent for some time so, while it's early days, a wide range of sectors has expressed interest in adopting new 5G wireless technologies to support intensive industrial and enterprise applications via this separation. There's also growing demand from the public sector: blue-light services and the military have begun to explore how edge offload can support their own specialised applications.

Add network slicing to the mix, and we can foresee a slew of services that depend on fine-grained control of routing, enabling certain data to be directed to edge processing functions while other data is sent to different destinations, such as the global internet. You probably already know this: it is exactly these use cases that have attracted media attention over the last couple of years.

How do you coordinate all of this to ensure offload delivers?

But if this is all going to work correctly, there is still quite a lot to coordinate. Without diving into the details of how offload is actually controlled, we're basically talking about a mechanism that (sketched in outline after this list):

  • Identifies traffic and packets
  • Supports rules (and dynamic configuration / selection criteria) for routing
  • Filters and directs traffic accordingly
  • Interfaces correctly with all related system entities, orchestration engines, and control points
  • Delivers the right bandwidth and capacity, as required
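
As a rough illustration of the kind of logic this implies, the Python sketch below classifies flows against a small ordered rule set and steers them either to a local edge breakout or towards the central core. The rule fields, prefixes, and egress names are made up for the example; they do not reflect any particular UPF implementation.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

# Hypothetical steering rule: match a flow, direct it to a named egress.
@dataclass
class SteeringRule:
    description: str
    dst_network: str            # destination prefix to match
    dst_port: Optional[int]     # optional L4 port match (None = any port)
    egress: str                 # "edge-breakout" or "central-core"

# Ordered rule set; first match wins, the last rule is the default route.
RULES = [
    SteeringRule("AR/VR rendering service", "10.20.0.0/16", 443, "edge-breakout"),
    SteeringRule("Factory control traffic", "10.30.5.0/24", None, "edge-breakout"),
    SteeringRule("Everything else", "0.0.0.0/0", None, "central-core"),
]

def classify(dst_ip: str, dst_port: int) -> str:
    """Return the egress for a flow, applying the rules in order."""
    for rule in RULES:
        if ip_address(dst_ip) in ip_network(rule.dst_network) and (
            rule.dst_port is None or rule.dst_port == dst_port
        ):
            return rule.egress
    return "central-core"

print(classify("10.20.1.7", 443))    # -> edge-breakout
print(classify("142.250.74.4", 80))  # -> central-core
```

In a real deployment these rules would be pushed and updated dynamically by the control plane, which is exactly why the interfaces to orchestration engines and control points in the list above matter as much as the packet handling itself.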

With that in mind, it's important to prepare for this evolution of existing 5G infrastructure. While early adopters may have finagled this into LTE-based systems (perfectly possible), most are experimenting and awaiting the more widespread deployment of 5G standalone (SA) networks, within which this will be a native functional capability.

This momentum is confirmed by numerous trials and PoCs, as well as some deployments of 5G SA in private networks (for example, see this recent news flash, which includes our friends from Radtonics; if a 5G SA private network for a sawmill doesn't demonstrate how useful these advances will be, we don't know what will!). So, things are moving.

How do we get the basics right?

First, though, we need to get the basics right. So, there are questions to ask (a sketch of how these checks might be automated follows the list):

  • Have the right packets been sent to the correct destination?
  • Have the correct rules and policies been applied?
  • Is the latency that’s delivered what was expected?
  • Are the interfaces handling all communication and interaction correctly?
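
One way to turn those questions into repeatable checks is sketched below as a pytest-style Python test. The `run_probe` helper, the rule name, and the 5 ms budget are placeholders standing in for whatever traffic-generation tooling and policy are actually in use.

```python
from dataclasses import dataclass

LATENCY_BUDGET_MS = 5.0              # assumed SLA target for this service
EXPECTED_EGRESS = "edge-breakout"    # where the steering policy should send the flow

@dataclass
class ProbeResult:
    egress: str          # where the injected test flow actually surfaced
    matched_rule: str    # which steering rule the network reported applying
    rtt_ms: float        # measured round-trip latency

def run_probe(dst_ip: str, dst_port: int) -> ProbeResult:
    """Stand-in for real traffic injection; replace with your test tooling."""
    return ProbeResult(egress="edge-breakout",
                       matched_rule="AR/VR rendering service",
                       rtt_ms=3.2)

def test_edge_offload_routing_and_latency():
    result = run_probe(dst_ip="10.20.1.7", dst_port=443)

    # Have the right packets been sent to the correct destination?
    assert result.egress == EXPECTED_EGRESS

    # Have the correct rules and policies been applied?
    assert result.matched_rule == "AR/VR rendering service"

    # Is the latency that's delivered what was expected?
    assert result.rtt_ms <= LATENCY_BUDGET_MS
```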

These need testing. And we don't just need to test in labs; we also need to actively monitor live networks to ensure that these services perform consistently, because users will expect nothing less. If you think your service needs 5 ms latency, then you expect it to be delivered, all the time. So testing isn't simply an ad hoc or scheduled event; it's a continuous practice, in keeping with CI/CD methodologies, so that customers paying for these new levels of performance can be sure they are getting their money's worth.
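
In the same spirit, continuous assurance might look something like the following minimal sketch. The probe here is simulated, and the 5 ms budget and 60-second cadence are placeholder assumptions; the point is simply that the measurement runs on a schedule and any breach is flagged immediately rather than waiting for a customer to complain.

```python
import logging
import random
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("offload-assurance")

LATENCY_BUDGET_MS = 5.0    # assumed per-service latency budget
PROBE_INTERVAL_S = 60      # assumed probing cadence

def measure_rtt_ms() -> float:
    """Stand-in for an active probe; replace with a real measurement."""
    return random.uniform(2.0, 7.0)   # simulated RTT so the sketch runs end to end

def assurance_loop(iterations: int = 5) -> None:
    """Probe the service on a schedule and flag any SLA breach."""
    for _ in range(iterations):
        rtt = measure_rtt_ms()
        if rtt > LATENCY_BUDGET_MS:
            log.warning("latency budget breached: %.2f ms > %.2f ms", rtt, LATENCY_BUDGET_MS)
        else:
            log.info("latency within budget: %.2f ms", rtt)
        time.sleep(PROBE_INTERVAL_S)

if __name__ == "__main__":
    assurance_loop()
```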

Test, validate and assure edge offload and data processing with Emblasoft

That’s where we come in. Emblasoft provides a complete toolkit to help validate and assure any edge offload service, covering all the key interfaces (N6/SGi and N3/S1-U, for example). It allows users to inject traffic, verify that it is routed correctly, and confirm, for instance, that latency requirements are actually being met.

It’s not just for industrial or safety-critical applications, either: the same techniques are also valuable for CDNs, as well as for enterprises that simply want to offload data for efficiency rather than to attain the very highest levels of performance. So, if you are researching or deploying wireless edge offload applications, we can help you ensure their success, both pre-launch and in-service, with continuous, real-time assurance.