Rob Pettigrew, Director of Communications, Embedded Market, Emerson Network Power

Future-proofing 40 gig: Q&A with Rob Pettigrew of Emerson Network Power, who provides keys to scalability, interoperability, and infrastructure as telecom moves into 40 Gbps.

Rob’s in-depth telecom knowledge stems from more than fifteen years’ experience that began when he joined Motorola Computer as a field applications engineer in 1996.

How are Emerson 40 Gbps solutions more than just the sum of their parts?

One factor distinguishing us from competitors is that we are experts in delivering full, telco-grade platforms that include enclosures, cooling subsystems, shelf management, switches, and payload blades. Many of our competitors focus on only one or a few of those elements, leaving the responsibility for integrating the end platform to the Telecom Equipment Manufacturers (TEMs).

We have system-level expertise and know what is appropriate to put on a payload blade such that it can be cooled and delivered reliably.

When the Nehalem processor was new, most of our competition delivered single-processor Nehalem blades because they did not believe a dual-processor blade could be cooled. Those companies that did deliver a dual-processor blade offered limited memory capacity. The Emerson blade [ATCA-7360] based on the Nehalem processor, however, is dual-processor, with 96 gigabytes of memory capacity and 12 DIMM sockets.

We knew it could be cooled because we have individuals with expertise in system design, thermal simulation, and testing. Emerson developed many of those cooling specs working collaboratively with CP-TA.

For 40 gig, we are introducing new technologies and much higher fabric bandwidth, so interoperability issues will arise when those come to market. Customers will want to source those components from companies such as Emerson that can deliver switches, systems, and payload blades. And we have done a substantial amount of system-level architecture work and testing to make sure it all works well together.

40 gig technologies will be entering the market at different rates, and three fundamental technologies and components need to be addressed. First, are the system and the backplane capable of carrying those signals? Second, is the switch capable of switching 40 gig payloads? And third, are the payload blades capable of processing 40 Gbps data flows? We have had 40 gig-ready systems shipping for more than two years now, we announced a 40 gig switch last summer, and we will be announcing 40 gig payload blades throughout 2011.

Customers can plan for extra capacity by using chassis that are ready for 40 gig, as well as by starting their testing with 40 gig switches. When the payload blades become available, customers can integrate them into their existing systems without forklift upgrades or any significant changes to their platform management software.

Is the ‘plan ahead’ argument reaching a receptive audience?

Absolutely it is. Quite a few applications, primarily in the data plane, such as security gateways [bump-in-the-wire systems], intercept a data stream, process the packets, and then either modify them or perform various types of packet processing and grooming to support a pretty big set of applications. Developers using 10 gig AdvancedTCA are already planning how they will migrate to 40 gig. I cannot think of anyone building applications that sit in the data path or touch packets who is not interested in 40 gig.
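To make the bump-in-the-wire idea concrete, here is a minimal sketch in C using Linux AF_PACKET raw sockets: frames are pulled off one side of the wire, handed to application code that may inspect or rewrite them, and pushed out the other side. The interface names eth0 and eth1 are assumptions for illustration only; production gateways of the kind discussed here use dedicated packet-processing frameworks and hardware acceleration rather than raw sockets.

```c
/*
 * Minimal bump-in-the-wire sketch (Linux, AF_PACKET raw sockets).
 * Illustrative only: "eth0" and "eth1" are assumed interface names.
 * Run as root.
 */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

static int open_raw(const char *ifname)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); exit(1); }

    struct sockaddr_ll sll;
    memset(&sll, 0, sizeof sll);
    sll.sll_family   = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex  = (int)if_nametoindex(ifname);
    if (bind(fd, (struct sockaddr *)&sll, sizeof sll) < 0) {
        perror("bind"); exit(1);
    }
    return fd;
}

int main(void)
{
    int in  = open_raw("eth0");   /* ingress side of the "wire" */
    int out = open_raw("eth1");   /* egress side */
    unsigned char frame[2048];

    for (;;) {
        struct sockaddr_ll from;
        socklen_t flen = sizeof from;
        ssize_t n = recvfrom(in, frame, sizeof frame, 0,
                             (struct sockaddr *)&from, &flen);
        if (n <= 0)
            continue;
        if (from.sll_pkttype == PACKET_OUTGOING)
            continue;             /* ignore frames we sent ourselves */

        /* Packet-touching work goes here: classify, groom,
         * rewrite headers, count, or drop the frame. */

        if (send(out, frame, (size_t)n, 0) < 0)
            perror("send");
    }
}
```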

Are we going to see scalability issues?

Simply migrating to 40 gig with exactly the same kinds of technologies and software is probably going to result in scalability issues. However, new technologies coming from Intel and others, both hardware and software, will enable those types of applications to be deployed on 40 gig.

Those products have generally not been introduced yet, but to cite an example with current products, Emerson is working closely with Wind River and Intel on the data path software for Westmere. Last September at the Intel Developer Forum, Wind River demonstrated its packet processing software for multicore, the Network Acceleration Platform (NAP). During a demonstration on an Emerson dual-Westmere AdvancedTCA blade, the ATCA-7365, Wind River showed 26 Gbps of packet forwarding. It was limited to 26 Gbps only by the available I/O on the blade; in fact, they were using only three of the twelve cores to do that.
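To put those demo numbers in perspective, a quick back-of-the-envelope projection follows. The linear-scaling assumption is purely illustrative; real packet workloads rarely scale perfectly with core count, but the arithmetic shows why blade I/O, not the processor, was the limiting factor.

```c
/*
 * Back-of-the-envelope check on the demo numbers above: 26 Gbps of
 * forwarding on 3 of 12 cores. The linear projection is an assumption
 * for illustration only.
 */
#include <stdio.h>

int main(void)
{
    double demo_gbps  = 26.0;   /* forwarding rate in the demo */
    double demo_cores = 3.0;    /* cores actually used */
    double all_cores  = 12.0;   /* cores on the dual-Westmere blade */

    double per_core  = demo_gbps / demo_cores;   /* ~8.7 Gbps per core */
    double projected = per_core * all_cores;     /* ~104 Gbps */

    printf("per core: %.1f Gbps, 12-core projection: %.0f Gbps\n",
           per_core, projected);
    return 0;
}
```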

What infrastructure had to be in place before the Centellis 4440 was introduced?

The 4440 introduced two new technologies into the market. One, it has a 40 gig-ready backplane, and we did very intensive development, testing, and simulation work to ensure it would be ready for a future 40 gig payload.

Two, the 4440 introduced compliance with the CP-TA B.4 thermal spec. When getting a system ready for next-generation 40 gig payloads, you can assume those payloads will be very hot. We had to put a lot of extra future-proofing into that chassis to deliver the most cost-effective, high-performance cooling design.

We worked very closely with the industry to develop the thermal specs that became the CP-TA B specs. We met the most stringent one, the B.4 spec, and arguably exceeded it with the cooling ability of the Centellis 4440.
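As a rough illustration of why a hot next-generation payload drives chassis cooling design, the sketch below applies the standard sea-level forced-air approximation, airflow (CFM) ≈ 1.76 × power (W) ÷ air temperature rise (°C). The 300 W blade power and 10 °C rise are assumed numbers for illustration, not published figures for any Emerson product or for the CP-TA B.4 spec.

```c
/*
 * Rough airflow estimate for a hot payload blade, using the common
 * sea-level forced-convection approximation:
 *     airflow (CFM) ~= 1.76 * power (W) / air temp rise (deg C)
 * Both input numbers below are assumptions for illustration only.
 */
#include <stdio.h>

int main(void)
{
    double blade_watts = 300.0;  /* assumed next-gen blade dissipation */
    double delta_c     = 10.0;   /* allowed inlet-to-outlet temp rise */

    double cfm = 1.76 * blade_watts / delta_c;

    printf("~%.0f CFM of airflow needed per blade slot\n", cfm);
    return 0;
}
```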

What’s the state of the AdvancedTCA ecosystem as it begins its second decade?

Almost every major carrier in the world is deploying AdvancedTCA, and many of them are spec’ing AdvancedTCA for their core network elements. The systems that have been deployed have proven to be among the most reliable ever deployed, even compared against proprietary platforms. And if you look at the cost involved in developing a proprietary platform, you are talking about tens of millions of dollars, or even more, for a TEM to develop something from scratch. The decision to adopt AdvancedTCA now is really a no-brainer.

Resources

40 G ATCA White Paper

www.emersonnetworkpower.com/embeddedcomputing