Understanding edge computing for telecommunications

Adopting edge computing is a high priority for many telecommunications service providers as they modernize their networks and seek new sources of revenue. Specifically, many service providers are moving workloads and services out of the core network (in datacenters) toward the network’s edge, to points of presence and central offices.

One of the primary benefits of edge computing is that it greatly reduces the effects of latency on applications. This enables new latency-sensitive applications and services on the network and improves the experience of existing apps, especially following advancements in 5G.
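
To put the benefit in rough numbers, the short sketch below compares round-trip propagation delay in fiber for a distant regional datacenter and a nearby edge site. The distances and the roughly 200 km-per-millisecond speed of light in fiber are illustrative assumptions; real-world latency also includes radio access, queuing, and processing delays not modeled here.

```python
# Rough propagation-delay comparison: regional datacenter vs. nearby edge site.
# Distances are illustrative assumptions, not measurements.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # light travels ~200 km per millisecond in fiber


def round_trip_ms(distance_km: float) -> float:
    """Return the round-trip propagation delay in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS


for label, km in [("regional datacenter", 1500), ("nearby edge site", 50)]:
    print(f"{label:<20} {km:>5} km  ~{round_trip_ms(km):.2f} ms round trip")
```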

For telcos, the apps and services their customers want to consume on edge networks are the key to revenue generation, but success depends on building the right ecosystem and coordinating among stakeholders and technology partners.

Telco providers in particular have the credibility, skills, and relationships to capture these revenue-generating opportunities in the edge market. As they develop platforms and services that can exploit the ubiquitous, high-bandwidth connectivity of edge computing, their customers are better positioned to meet demands in areas such as healthcare delivery, emergency response, manufacturing efficiency, traffic congestion, and industrial safety.

Telco providers face complex challenges that provide the impetus for modernizing their networks. These include simplifying network operations and improving flexibility, availability, efficiency, resilience, and scalability, while reducing latency and delivering better application response times by processing and storing data closer to users and devices.

To improve flexibility, telcos can optimize and integrate workloads that consist of virtual machines, containers, and bare-metal nodes running network functions, video streaming, gaming, artificial intelligence and machine learning (AI/ML), and business-critical applications.

The distributed nature of edge computing can enhance both availability and resiliency for telcos. When a common function or application runs locally at multiple edge sites, a failure at one site does not affect availability at the others, whereas an outage in a centralized solution would affect every location it serves. Resiliency improves as well: when a function or application fails at one location, resources at one or more nearby edge cloud sites can back it up while it recovers, reducing or eliminating any outage.
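
As a minimal sketch of this failover pattern (not a production design), the snippet below prefers a local edge endpoint and falls back to nearby edge sites when the local one is unreachable. The endpoint URLs and timeout value are hypothetical placeholders.

```python
import urllib.error
import urllib.request

# Hypothetical endpoints: the local edge site first, then nearby backup sites.
EDGE_ENDPOINTS = [
    "https://edge-local.example.net/api/v1/process",
    "https://edge-nearby-1.example.net/api/v1/process",
    "https://edge-nearby-2.example.net/api/v1/process",
]


def call_with_failover(payload: bytes, timeout_s: float = 0.5) -> bytes:
    """Try the local edge site first, then fall back to nearby edge sites."""
    last_error = None
    for url in EDGE_ENDPOINTS:
        try:
            request = urllib.request.Request(url, data=payload, method="POST")
            with urllib.request.urlopen(request, timeout=timeout_s) as response:
                return response.read()
        except (urllib.error.URLError, TimeoutError) as err:
            last_error = err  # this site is unavailable; try the next-closest one
    raise RuntimeError(f"all edge sites unreachable: {last_error}")
```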

Telcos also need to manage complex data sovereignty compliance requirements that restrict moving and storing locally processed edge data across state and national boundaries. And because the amount of data being produced is rapidly increasing, organizations must improve scalability by distributing computing power to the edge, which reduces bandwidth costs and strain on networks, connections, and core datacenters.
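
As a simple illustration of how a sovereignty constraint might be enforced in application logic, the sketch below maps each record's region of origin to an in-region storage endpoint and refuses cross-border placement. The region names and endpoints are hypothetical, and real compliance programs also rely on policy, encryption, and auditing controls beyond application code.

```python
# Hypothetical mapping of data-origin regions to in-region storage endpoints.
REGION_STORAGE = {
    "eu-de": "https://storage.eu-de.example.net",
    "us-tx": "https://storage.us-tx.example.net",
}


class SovereigntyError(Exception):
    """Raised when data would leave its region of origin."""


def storage_endpoint_for(record_region: str) -> str:
    """Return an in-region storage endpoint, refusing cross-border placement."""
    endpoint = REGION_STORAGE.get(record_region)
    if endpoint is None:
        raise SovereigntyError(f"no in-region storage configured for {record_region}")
    return endpoint
```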

Virtualizing network functions allows telcos to abstract functions away from hardware, so standard servers can be used for functions that once required expensive, proprietary hardware. The development of Linux containers and cloud-native development practices has further expanded the opportunity for telcos to abstract functions away from hardware and modernize their networks.

Simply put, network functions virtualization (NFV) applies the principles of enterprise IT virtualization to network functions. Just as virtualization allows many kinds of workloads to run on any given server, NFV allows network functions to run on standard servers by abstracting those functions into software.

Containers similarly abstract functions away from hardware, but they do so with far less compute and memory overhead than virtual machines and are more easily deallocated or moved across environments. This is because a container packages only the app and the files it needs to run, rather than also packaging a discrete operating system as a virtual machine usually does.

In summary, whether network function applications run as virtual machines or containers, network operators no longer need dedicated, often proprietary hardware for each network function. NFV improves scalability and agility by allowing service providers to deliver new network services and applications on demand using computing hardware that is already available.

Radio access networks (RAN) are crucial connection points between end-user devices and the rest of an operator's network. They represent significant overall network expenses, perform intensive and complex processing, and now face rapidly increasing demand as more edge and 5G use cases emerge for telco customers.

Just as the virtualization of network functions has enabled telcos to modernize their networks, similar principles can be applied to RAN. This is especially important as the industry focuses on the transition to 5G: the ongoing 5G network transformation often depends on virtualized RAN (vRAN) and increasingly assumes that it is container-based and cloud-native.

Through open RAN, telcos can simplify network operations and improve flexibility, availability, and efficiency, all while serving an increasing number of devices and bandwidth-hungry applications. Cloud-native, container-based open RAN solutions often provide lower costs, easier upgrades and modifications, the ability to scale horizontally, and less vendor lock-in than VM-based solutions.

Implementing edge solutions at scale brings its own challenges for telco providers. For example, while edge technologies can, with the right approach, be managed using the same tools and processes deployed in centralized infrastructure, new needs arise: automating the provisioning, management, and orchestration of hundreds (and sometimes tens of thousands) of sites with minimal or no on-site IT staff.

Beyond this, different edge tiers have different requirements, including the size of the hardware footprint, challenging physical environments, and cost. Often, no single vendor can provide an end-to-end solution, making interoperability among components sourced from various vendors a critical success factor.

To help organizations plan for, adopt, and implement the technology transformation necessary to be competitive in today’s marketplace, Red Hat has extended our open hybrid cloud solutions to the edge with our Red Hat Enterprise Linux (RHEL) and OpenShift platforms. These capabilities include the creation of small-footprint images and topology options for edge deployments, remote device mirroring to stage updates and apply them at power cycles or reboots (limiting downtime), over-the-air updates for low-connectivity devices, and intelligent rollbacks to help prevent downtime when updates cause production issues.

When a service provider moves mobile workloads closer to the end user to increase throughput and reduce latency, the result can be considered a new kind of mobile architecture. This architecture, called mobile edge computing or multi-access edge computing (MEC), provides an application service environment for telco customers at the edge of the mobile network, in close proximity to mobile users.

The result is that MEC makes the RAN accessible to app developers and content providers, allowing them to use edge computing not just at the app level but also at the lower level of network functions and information processing.

Beyond this, OpenShift now provides more choices for telcos adopting edge deployments by offering expanded support for eventing and remote worker nodes, which allow single worker nodes to be placed in remote locations and managed from a centralized control plane (for example, in a datacenter). These capabilities further build on our expanding edge partner ecosystem, which includes Samsung and NVIDIA, across a wide variety of enterprise use cases, including AI and 5G. And they expand our broad spectrum of supported environments, including leading public clouds and multiple datacenter architectures.
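
As a small sketch of what centralized visibility into remote worker nodes can look like from the Kubernetes API side, the snippet below uses the Kubernetes Python client to list nodes that carry a site label and report their readiness. The label key is a hypothetical, deployment-specific choice, and the example assumes the kubernetes Python package and a kubeconfig with access to the cluster.

```python
# Sketch: list worker nodes at remote sites from a central control plane.
# Assumes the `kubernetes` Python client and a valid kubeconfig; the
# "example.com/edge-site" label key is a hypothetical, site-specific label.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
core_v1 = client.CoreV1Api()

# Select only nodes that carry the (hypothetical) edge-site label.
nodes = core_v1.list_node(label_selector="example.com/edge-site")
for node in nodes.items:
    site = node.metadata.labels.get("example.com/edge-site", "unknown")
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    print(f"{node.metadata.name}: site={site} Ready={ready}")
```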

Edge computing solutions include a variety of technologies spanning multiple hardware and software platforms. While many vendors offer edge solutions that only work on their own stack or platform, Red Hat’s open source approach features RHEL as the edge-optimized OS, OpenShift as the container platform for the edge, and Red Hat Advanced Cluster Management (ACM) as the multi-cluster control plane. This portfolio emphasizes manageability at the edge, with zero-touch and lights-out functionality, along with interoperability that works against lock-in, giving customers the freedom to mix and match components from third parties and build a better, more customized solution.

Our open source solutions can help support changes to core networks and supporting systems with simplicity, flexibility, scalability, and improved security, and they work on top of all relevant public clouds and compute hardware. We also collaborate with a vast technology and community ecosystem to meet the needs of our customers and their unique environments.

