100GBASE-KR4 (yes, 100) powers the next iteration of AdvancedTCA
Amidst unrelenting bandwidth challenges, operators are continually in search of core infrastructure upgrades that can economically increase network capacity. In response, PICMG member companies have been working to apply 100 Gigabit per second (Gbps) backplane technology to AdvancedTCA (ATCA) systems since the release of the IEEE 802.3bj-2014 standard in June, and are already seeing results. Doug Sandy, CTO of Embedded Computing at Artesyn Embedded Technologies, discusses the development of a 100G ATCA-based platform and plans to advance a 100 Gb ATCA specification.
PICMG: Give us some background on Artesyn’s QuadStar product line.
At its heart, QuadStar is a switching architecture that has four switch blades or hubs, and each one of those fans out to all of the other cards within a rack. QuadStar could work with 10 Gigabit Ethernet (GbE), 40 GbE, or 100 GbE switching within an AdvancedTCA (ATCA) shelf (Figure 1).
The advantage of QuadStar is that three switches can run as active switches with the fourth on standby, giving you a 3+1 redundancy model. At 40G that delivers 120 Gb of usable redundant bandwidth to each payload blade, and at 100G it delivers 300 Gb; if you don’t need redundancy, it’s 160 Gb of usable bandwidth at 40G and 400 Gb at 100G (Figure 2). That really ups the usable bandwidth over the traditional dual-dual star model, which is two separate switching planes, each with an active switch plus a standby. In that dual-dual star configuration you get 20 Gb of usable bandwidth plus redundancy at 10G, 80 Gb plus redundancy at 40G, and 200 Gb plus redundancy at 100G. So with QuadStar we’ve increased the redundant bandwidth by 50 percent over what you could get with dual-dual star.
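The bandwidth arithmetic above can be expressed as a quick calculation. This is only an illustrative sketch; the hub counts and link rates come from the interview, and the function name is ours:

```python
# Usable per-blade bandwidth for the two ATCA switching topologies
# discussed in the interview: dual-dual star (2 active + 2 standby hubs)
# versus QuadStar (3 active + 1 standby, or all 4 hubs active).

def usable_bandwidth(active_hubs: int, link_gbps: int) -> int:
    """Aggregate usable bandwidth (Gbps) delivered to one payload blade."""
    return active_hubs * link_gbps

for link in (10, 40, 100):
    dual_dual = usable_bandwidth(2, link)           # 2 active planes
    quadstar_redundant = usable_bandwidth(3, link)  # 3+1 redundancy
    quadstar_full = usable_bandwidth(4, link)       # no standby hub
    print(f"{link}G links: dual-dual star {dual_dual} Gbps, "
          f"QuadStar 3+1 {quadstar_redundant} Gbps, "
          f"QuadStar 4+0 {quadstar_full} Gbps")
```

At every link rate the redundant QuadStar figure (3 active hubs) is 50 percent higher than dual-dual star (2 active hubs), which is the comparison made above.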
PICMG: What were some of the challenges of implementing 100G capability in an AdvancedTCA (ATCA) architecture?
Recognizing the need for higher bandwidth, what we did for 100G was leverage the signal integrity work, and the knowledge of the backplane’s behavior at the signaling level, that we had developed for 40G, and then run a number of simulations on the backplane to determine what would and wouldn’t work at 100G signaling rates. What it really came down to is that at 100 Gb, the backplane channel becomes very unforgiving of any losses in the traces on the front boards and switch cards, in the trace that crosses the backplane, and in the connectors it passes through – so any impedance discontinuities where the trace geometries change (Figure 3). The nominal impedance for a backplane Ethernet channel is 100 Ohms differential, but even just through normal manufacturing processes that tends to vary by plus or minus 5 or 10 percent.
For 40 GbE, the PICMG 3.1 committee found that the vias for the press-fit connectors on the backplane were actually an issue, and backdrilling – the process of drilling out the unused portion of a via so no stub remains – was required for the backplane. At 100G, all of that is a given: you have to control your impedance, and you have to limit your via stubs. But the one thing we found absolutely critical to control at 100G is crosstalk, even more so than losses. Insertion loss is what people tend to focus on, but crosstalk really was the most critical element. It can be controlled with trace geometries; with better grounding through connectors; with how you break in and out of your chips; and by making sure you have proper grounding when you change layers, if you have to change layers. All of that has to be done at 100G.
All of that makes it sound like it’s difficult to do, and don’t get me wrong, 100G is going to be very difficult compared to 40G or 10G, but it’s quite doable. We’ve seen a number of industry transitions over the years. I remember in my career going from the first 100 Mb signals that anyone had to deal with, and that was a big transition when we saw that. Artesyn has developed and proven the technology with our connector supplier, ERNI, and taken some of the arrows as pioneers in terms of figuring out how to make the technology work, but I am confident that over the next several years more people will be able to do this and that there will be an ecosystem around 100G.
Right now things are moving forward within PICMG, and Artesyn is absolutely committed to standardization of 100 Gb for ATCA. We see ourselves as industry stewards, and we committed the time and the effort to identifying and proving out the technology, but it has always been our plan to move it into PICMG standardization. The PICMG 3.1 R3 standardization effort is likely to begin by the end of 2014 or beginning of 2015, and Artesyn will be a supporter of that activity.
PICMG: What does the advent of 100 Gb backplanes mean for boards and blades that are currently on the market?
Just like the transitions to 10G and 40G in ATCA, what we’ve done very carefully with this backplane is create a migration path where our customers can determine the speed of adoption. Since the 100G components use KR signaling with NRZ encoding, 40 Gb blades should be able to operate with a 100 Gb switch – depending on the silicon selected – just signaling at 40 Gbps. And a 10GBASE-KR board should function as well, depending on the switch silicon.
In the IEEE specification there are actually two different 100 Gb backplane definitions: one uses NRZ encoding, which from an encoding standpoint is actually very similar to what we’re used to at 10G and 40G; the other uses PAM4 encoding, which is a brand-new encoding scheme for backplane Ethernet. Because PAM4 is a different encoding scheme, we felt that the silicon supply and its backward compatibility with the older NRZ-encoded equipment were going to be challenging or non-existent. That was one of the motivators for us to choose the 100GBASE-KR4 solution: to make sure that migration path was there. And we do see, just like before, a migration path where most customers start with the 100 Gb-enabled backplane but still use 40 Gb switches and 40 Gb payload cards, then at some point put in the 100 Gb switch and some payload cards running at 100 Gbps, and over time add more and more 100 Gbps until it’s pretty much a 100G platform.
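The difference between the two encodings comes down to bits per symbol: NRZ signals with two levels (one bit per symbol), while PAM4 uses four levels (two bits per symbol), so PAM4 can run at half the symbol rate for the same throughput. The sketch below shows that nominal per-lane arithmetic for a four-lane 100G channel; it deliberately ignores line-coding and FEC overhead, so the numbers are approximations, not the exact IEEE 802.3bj signaling rates:

```python
import math

# Bits per symbol for an amplitude-modulated signal:
# NRZ has 2 levels -> 1 bit/symbol; PAM4 has 4 levels -> 2 bits/symbol.
def bits_per_symbol(levels: int) -> int:
    return int(math.log2(levels))

LANES = 4  # both IEEE 802.3bj 100G backplane variants use four lanes

def baud_per_lane(total_gbps: float, levels: int) -> float:
    """Nominal symbol rate per lane (GBd), ignoring coding/FEC overhead."""
    return total_gbps / LANES / bits_per_symbol(levels)

print(baud_per_lane(100, 2))  # NRZ: 25.0 GBd nominal per lane
print(baud_per_lane(100, 4))  # PAM4: 12.5 GBd nominal at the same throughput
```

The halved symbol rate is PAM4’s appeal on lossy channels, but its four-level eye and new signaling are exactly why existing NRZ silicon cannot interoperate with it, which is the backward-compatibility concern raised above.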
PICMG: Is 100G the final frontier for copper-based interconnects?
I’ll give you two answers. The first answer is I don’t know. 10G ATCA wasn’t designed to go above 3 Gbps and change on the connector. No one would have envisioned going higher than that. But then advances came around on connector technology, on backplane materials technology, on transceiver technology. Looking back in time at the original ATCA I would have said there is no way this is possible. I have learned in my career never to say never, but I don’t see a path forward beyond 100 Gb signaling in ATCA today.
The other side of that is, at least currently, 100 Gb backplane technology is as far as the IEEE is on the record as being willing to take [copper] backplane technology. I don’t know that that rules out other advances that we might make to take the backplane technology higher, but at some point the laws of physics do take over, and we’ll just have to see. We’re committed to take ATCA signaling capacity as high as it makes sense, both technologically and economically.
Editor’s note: A call for participation in the 100G AdvancedTCA (PICMG 3.1 R3.0) working group is expected in late December 2014.
Artesyn Embedded Technologies