Virtualization and elastic provisioning in carrier-grade solutions

Austin discusses packet processing, encryption/decryption, load balancing, transcoding, and general purpose processing challenges. He then describes a response to these challenges that can be deployed on IP-based telecommunications platforms to save power and real estate while minimizing capital expenditure.

Modern carrier-grade platforms comprise unprecedented amounts of processing, memory, and network I/O resources. For developers, though, these goodies also come with the mandate to make the most effective use of modern platforms through scaling and other techniques. Through the intelligent use of carrier-class virtualization and elastic provisioning, developers can create highly scalable platforms and often eliminate unnecessary over-provisioning of resources for peak usage.

Current advances in multicore processors, cryptography accelerators, and high-throughput Ethernet silicon make it possible to consolidate what previously required multiple specialized server platforms into a single private cloud. 4G wireless deployments, HD-quality video to all devices, the continuing transition to VoIP technologies, increased security concerns, and power efficiency requirements are all driving the need for more flexible solutions.

What is a private cloud?

In this article, the term private cloud refers to a pool of resources a telecom equipment provider has designed and developed for a specific set of purposes. This article is not using private cloud to mean the general pool of resources found in an IT department and made possible by the department’s general server capabilities.

To deliver the desired level of service, high availability, management, and capacity, telecom service providers will need to tightly control equipment resources for most deployments. Depending on the services and scale required, the private cloud infrastructure could encompass anything from a single AdvancedTCA chassis deployment all the way through multiframe solutions that include AdvancedTCA platforms, storage subsystems, and networking equipment.

Driving factors

Four key areas are driving telecom OEMs toward private cloud infrastructure:

  • Hardware advances
  • Software efficiency
  • Design flexibility
  • Power efficiency

Each one plays a role in an OEM’s decision to expand development resources to move into new areas.

Hardware advances

Single Board Computers (SBCs), multicore processors, and dense memory configurations are becoming mainstream. By early 2012, multiple manufacturers will be producing next-generation SBCs that support 16 physical CPU cores and 128 GB or more of installed RAM.

In addition to raw compute power, SBCs and their supporting AdvancedTCA switches are also rapidly moving to support 40G Ethernet fabrics in standard AdvancedTCA platforms. When combined, these advances allow for large increases in Deep Packet Inspection (DPI) and transcoding, as well as general control plane and data plane performance.

System-level acceleration of virtual machine (VM) functions provides near-native performance when accessing shared hardware resources, as Figure 1 depicts. Technologies such as Intel’s VT-d allow direct assignment of I/O devices to VMs, greatly reducing the performance penalty of the emulated virtual devices used in the past.
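As a concrete illustration, direct device assignment is commonly configured in a KVM/libvirt environment with a `<hostdev>` entry in the guest definition; the PCI address below is hypothetical:

```xml
<!-- Hypothetical libvirt domain fragment: hands the PCI device at
     0000:05:00.0 (e.g., a NIC) directly to the guest, bypassing the
     emulated virtual device layer. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

This is a sketch, not a full domain definition; the host must support an IOMMU (VT-d) and the device must be unbound from its host driver before assignment.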

Both hardware and software crypto acceleration are reducing the CPU utilization of the encryption so widely sought in telecom applications. Intel’s new AES-NI instructions for its latest generation of Xeon processors, for example, can yield a two to three times increase in AES encryption throughput on a standard SBC. Additionally, hardware accelerators like Cavium’s Nitrox III and Intel’s Cave Creek are likely to achieve 20-40 Gbps encryption throughput by year’s end.
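Whether a given SBC’s processor exposes AES-NI can be checked from software; on Linux the `aes` flag in `/proc/cpuinfo` indicates support. A minimal sketch (the parsing function and sample string are illustrative):

```python
def has_aes_ni(cpuinfo_text: str) -> bool:
    """Return True if the 'aes' CPU flag appears in /proc/cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            _, _, flags = line.partition(":")
            return "aes" in flags.split()
    return False

# Example with a synthetic flags line; in practice, pass the contents
# of /proc/cpuinfo read from the node being qualified.
sample = "flags\t\t: fpu vme de aes avx sse2"
print(has_aes_ni(sample))  # -> True
```

A deployment tool could run this check per node to decide where to place encryption-heavy VM instances.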

Figure 1: VM acceleration via direct device assignment

Software efficiency

Many control and data plane applications do not scale linearly across multiple CPU cores, often plateauing between two and six cores, as Figure 2 illustrates. Such leveling off prevents these applications from utilizing the full potential of modern AdvancedTCA SBCs without the use of virtualization.
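The plateau effect can be approximated with Amdahl’s law: if a fraction p of an application’s work parallelizes, its speedup on n cores is bounded by 1/((1 − p) + p/n). A short sketch (p = 0.85 is an illustrative value, not a measurement of any real application):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Upper bound on speedup for parallel fraction p running on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# With 85% of the work parallelizable, gains flatten well before 16 cores;
# the speedup can never exceed 1 / (1 - p), about 6.7x here.
for cores in (1, 2, 4, 8, 16):
    print(cores, round(amdahl_speedup(0.85, cores), 2))
```

Running this shows the jump from 8 to 16 cores adds only about one extra unit of speedup, which is why a single monolithic instance leaves most of a 16-core SBC idle.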

Figure 2: Application plateau effect

Some applications are more I/O-intensive, whereas others are more CPU-intensive. By mixing such applications on the same compute node in different VMs, it’s possible to keep their resources separate while utilizing the physical infrastructure more efficiently.

Most multi-socket SBCs deployed today have memory controllers embedded in each processor. To reach memory hosted on one physical processor, the neighboring processor on the same SBC must go through the hosting processor via an extra interconnect. This effect, known as Non-Uniform Memory Access (NUMA), can cause performance variance in memory-intensive applications deployed in a monolithic fashion on a single SBC. Using VMs, multiple instances of the same application can run on an SBC, each with its own virtual CPU cores and memory tied to a single physical processor, helping maintain a uniform performance level.
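In KVM/libvirt, for instance, a guest can be confined to one socket by pinning its vCPUs and memory; a hypothetical fragment for a four-vCPU instance tied to NUMA node 0:

```xml
<!-- Hypothetical libvirt fragment: 4 vCPUs pinned to host cores 0-3
     on socket 0, with guest memory allocated only from NUMA node 0. -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
```

A second instance of the same application would use the mirror-image fragment for the other socket, so neither instance ever crosses the inter-processor interconnect.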

VMs can create a uniform development environment as well as a testing environment for programmers. Such an environment makes it possible to use a range of tested hardware platforms with little or no additional application development. Being able to skip further application development makes faster adoption of new hardware possible, as well as making it easier for OEMs to source computing elements from multiple vendors.

Design flexibility

By deploying a private cloud with virtual machine infrastructure, your hardware becomes a pool of resources available to be provisioned as needed. The control plane, data plane, and networking can all share the same pool of common hardware.

Deployments can be easily upgraded simply by adding physical resources to the managed pool. Also, migrating VM instances from one compute node to another (Figure 3) can be nondisruptive.

Figure 3: VM migration and load balancing

Many telecom solutions require multiple hardware platforms simply because they comprise applications that run on different operating systems. In a private cloud deployment, multiple operating systems can run on the same physical hardware, eliminating this requirement.

A private cloud enables running instances (virtual machines tailored to a specific function) to be matched to different workload environments. For example, you can assign a dedicated service level to each instance, and as demand rises or falls, additional instances can be spawned or decommissioned as necessary. Because each process workload can be tailored to moment-in-time demand (Figure 4), the practice of over-provisioning all resources for a “peak workload” can go by the wayside. As resources are no longer needed, they are simply returned to the pool for use by other instances that may need to be spawned.
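The spawn/decommission decision can be as simple as threshold rules evaluated each monitoring cycle. A minimal sketch, where the thresholds and the +1/-1 decisions stand in for calls to a hypervisor manager’s provisioning API (all values are illustrative):

```python
SCALE_UP, SCALE_DOWN = 0.80, 0.30   # illustrative utilization thresholds
MIN_INSTANCES = 2                   # floor preserving 1+1 redundancy

def scaling_decision(avg_utilization: float, instances: int) -> int:
    """Return the change in instance count (+1, -1, or 0) for one cycle."""
    if avg_utilization > SCALE_UP:
        return +1                   # spawn another instance from the pool
    if avg_utilization < SCALE_DOWN and instances > MIN_INSTANCES:
        return -1                   # decommission, returning resources
    return 0

# Simulated demand over several monitoring cycles:
instances = 2
for load in (0.85, 0.90, 0.75, 0.25, 0.20):
    instances += scaling_decision(load, instances)
print(instances)  # -> 2 (grew to 4 under load, shrank back to the floor)
```

Real hypervisor managers layer cooldown timers and per-service policies on top of rules like these, but the pattern of elastic growth and shrinkage is the same.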

Figure 4: Dynamic migration of VMs

Power efficiency

Led by Verizon’s Telecommunications Equipment Energy Efficiency Ratings (TEEER), telecom service providers are requiring higher levels of power efficiency out of the equipment they purchase.

Virtual machines allow for the more efficient use of hardware resources by allowing multiple instances to share the same physical hardware, maximizing the use of those resources and increasing the work per watt of power consumed when compared to traditional infrastructure.

VMs also allow for 1+1 and N+1 redundancy through multiple virtual instances running on fewer independent hardware nodes, such as AdvancedTCA SBCs. Because fewer physical nodes are needed to achieve the same uptime goals, less power is consumed overall.

Figure 5: Private cloud architecture

AdvancedTCA and the private cloud

Now that we’ve explored some of the reasons why it makes sense to utilize carrier-grade private cloud infrastructure for many telecom solutions, what is required to begin development of a private cloud environment for your solution?

Choosing AdvancedTCA chassis with SBCs for the compute node (the most common core element in any private cloud) makes sense based on their commonality, variety, manageability, and ease of deployment.

Network switches with Layer 3 functionality are the glue that holds the private cloud together. The selection of AdvancedTCA switches will depend largely on the internal and external bandwidth required for each compute node. Video streaming or deep packet inspection, for example, typically requires far more bandwidth (and thus higher-bandwidth switches) than SMSC messaging.

The last necessity is also one of the most critical: shared storage. For an instance to be launched on or migrated to any physical node, all nodes must have access to the same storage. In private cloud infrastructure, a high-performance SAN and a cluster file system often supply this access. Connectivity options typically include Fibre Channel, SAS, and iSCSI. With link speeds of up to 10 Gbps, iSCSI is the least intrusive option to implement at each node, as the SAN can be connected to the AdvancedTCA fabric switches to provide storage connectivity to every node.
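On a Linux-based node, attaching the iSCSI SAN typically takes only two open-iscsi commands; the portal address and target IQN below are placeholders:

```shell
# Discover targets exported by the SAN portal (address is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Log in to a discovered target so its LUN appears as a local block device
iscsiadm -m node -T iqn.2011-01.com.example:storage.lun1 \
         -p 192.0.2.10:3260 --login
```

Once every node has logged in, the cluster file system can be mounted on the shared LUN from all compute nodes simultaneously.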

To avoid gobbling fabric bandwidth for storage traffic, SAS or Fibre Channel links attached externally to each node via RTMs are a viable option. With multiple manufacturers now making AdvancedTCA blade-based SANs as well as NEBS-certified external SANs, many options are available to meet the SAN requirements of a carrier-grade private cloud.

Hypervisor choice is key

One of the most important decisions in private cloud development is the choice of hypervisor. Hypervisors form the underlying framework for running multiple VMs on a single physical server. Each one carries its own strengths and list of supported hardware. It is advisable to study the characteristics of the main hypervisors available before making a selection to ensure the best fit to requirements for features, hardware support, high availability, and cost. The main hypervisors used today are VMware ESXi, Citrix XenServer, and Microsoft Hyper-V, but many others are available as well (Figure 5).

Next is the selection of a hypervisor manager. These go by many names, but essentially a hypervisor manager is a cluster-aware layer that monitors the hypervisors in its care and transforms a collection of hypervisors running on independent compute nodes into a true private cloud, allowing VM instances to migrate across physical hardware (Figure 6). Often the hypervisor selection will drive the choice of manager, but several third-party tools are specifically designed to manage the most popular virtual environments.

Figure 6: Private cloud management

Each running VM must have its own OS. One of the strengths of a private cloud is that a single physical node can run multiple operating systems in guest VMs at the same time. It is important to remember, however, that each running VM typically requires its own license from the OS provider.

Lastly, the solution will need shelf management to monitor the physical hardware provided for the private cloud. This, of course, is another strength of AdvancedTCA infrastructure for private cloud deployments, since shelf management with standardized functions is a required element in any AdvancedTCA solution.

Armed with this basic understanding of what is driving private clouds for carrier-grade solution deployments and what is required to develop one, let’s look at a basic private cloud example (Figure 7).

  • SBCs become “generic.”
  • The same SBC can run both packet processor and control plane applications.
  • Intelligent monitoring and elastic provisioning can optimize active blade configuration for existing workload.
  • One chassis can handle compute, switching, and storage functions.

Figure 7: AdvancedTCA private cloud example

To increase the port density or service capacity, it is simply a matter of adding more resources into the private cloud pool. This can be done by adding SBCs into an existing AdvancedTCA chassis or adding another populated chassis to the cluster. Once the resources are added to the hypervisor manager and made available, the provisioning rules already established will utilize the newly available resources as they become required.


Capacity needs in the carrier space are growing at an ever-faster pace, as is the mix of services demanded by end users. At the same time, service providers are requiring more power-efficient equipment in smaller footprints to fit within their existing central office infrastructure. Telecom equipment manufacturers are thus being driven to increase features and capacity while delivering upgrades at the same relentless pace. It’s important to take advantage of the latest technical improvements today’s hardware offers and to squeeze every bit of performance out of existing solutions. By deploying as a private cloud, an OEM can simplify application development environments, drive greater efficiency in physical infrastructure, increase application flexibility, and allow for a seamless upgrade path. With all these advantages, it is not so much a matter of if, but when, migration to this approach takes place.

Austin Hipes is Vice President, Technology at NEI. In this role, he manages field applications engineers, supports sales design activities, and educates customers on hardware and the latest technology trends. Over the last eight years, Austin has been focused on designing systems for network equipment providers requiring carrier-grade solutions. He was previously Director of Technology at Alliance Systems and a Field Applications Engineer for Arrow Electronics.


[email protected]