The rule of four

A while back I used an interview with Chuck Byers of Cisco Systems as the basis for a series of articles on the future of optical backplanes (you can find the summer, fall, and winter 2013 articles online at http://bit.ly/1h5cRw9, http://bit.ly/1bP3JY5, and http://bit.ly/1h5cGRt, respectively). But as standardized, backplane-based computing platforms such as AdvancedTCA (ATCA) attempt to push past the 100 Gigabit Ethernet (GbE) barrier using electrical interfaces and look onward toward 400G, it’s worth taking a deeper look at the developments at the silicon level that are making these backplane speeds economically possible.

As you may recall, in this year’s spring edition of PICMG Systems & Technology I interviewed John D’Ambrosia, Chief Ethernet Evangelist at Dell and Chairman of the P802.3bs 400 Gb/s Ethernet Task Force, who asserted that the IEEE “has decided that it will develop electrical interfaces for chip-to-chip (C2C) and chip-to-module (C2M)” in the hunt to support 400G speeds. To reach that benchmark, the 802.3bs Task Force is currently targeting 16x 25 GbE links that utilize NRZ signaling, as well as faster 8x 50 GbE links that could leverage either NRZ or PAM4 encoding.
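For readers who want to see where those signaling options come from, here is a quick back-of-the-envelope sketch (a minimal Python illustration of my own, not anything drawn from the 802.3bs documents). The key point is that NRZ carries one bit per symbol while PAM4 carries two, so a 50 Gbps electrical lane needs roughly twice the symbol rate with NRZ that it does with PAM4.

# Back-of-the-envelope symbol rates for NRZ vs. PAM4 electrical lanes.
# Illustrative only: real lanes also carry FEC and line-coding overhead.

BITS_PER_SYMBOL = {"NRZ": 1, "PAM4": 2}  # NRZ = 2 signal levels, PAM4 = 4 levels

def symbol_rate_gbd(lane_rate_gbps: float, modulation: str) -> float:
    """Approximate symbol (baud) rate required for one electrical lane."""
    return lane_rate_gbps / BITS_PER_SYMBOL[modulation]

for lane_rate in (25, 50):
    for mod in ("NRZ", "PAM4"):
        print(f"{lane_rate} Gbps lane, {mod}: ~{symbol_rate_gbd(lane_rate, mod):.1f} GBd")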

Those familiar with both the 40G ATCA specification and the work done on 100G will note that, in each instance, what I like to call the “rule of four” is applied: 4x 10 GbE lanes or 4x 25 GbE lanes are used across the backplane to reach the desired throughput (the IEEE 100GBASE-KR4 spec assumed the use of 25 GbE Ethernet chipsets given advances in SerDes technology, even though 25 GbE C2C and C2M interfaces had not yet been standardized). The reasons for this are twofold:

1. Port density – Taking 100G as an example, by using 25 GbE across the backplane as well as chipsets with 25 GbE I/O, designers were able to maximize port density, or the amount of throughput that could be achieved from a single chipset. In turn, this optimization minimizes the number of parallel paths, which means lower equipment costs, more efficient use of power, etc.

2. Design complexity – Another reason for these architectures is simply that they reduce the complexity of system design. Again taking 100G as an example, dealing with 4x 25 GbE links is far more manageable than dealing with 10x 10 GbE links; the short sketch below runs those numbers.
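To put rough numbers on that trade-off, here is a minimal sketch (my own illustration in Python, assuming ideal lanes with no encoding or protocol overhead) of how many parallel backplane lanes each combination implies:

# Rule-of-four arithmetic: parallel lanes needed for an aggregate target.
# Illustrative only; assumes ideal lanes with no encoding/protocol overhead.
from math import ceil

def lanes_needed(target_gbps: int, lane_rate_gbps: int) -> int:
    return ceil(target_gbps / lane_rate_gbps)

print(lanes_needed(40, 10))    # 4x 10 GbE lanes  -> 40G ATCA
print(lanes_needed(100, 25))   # 4x 25 GbE lanes  -> 100G (100GBASE-KR4)
print(lanes_needed(100, 10))   # 10x 10 GbE lanes -> the less manageable alternative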

However, the problem we now encounter with regard to 400G and current Ethernet developments at the C2C and C2M levels is that the “rule of four” no longer applies. As mentioned previously, 50 GbE interfaces at the chip level translate into 8x links, while 25 GbE amounts to a whopping 16x links to reach an overall 400 Gbps system throughput, which is challenging, to say the least, from both management and economic perspectives. Getting back to the rule of four at 400G would require 100 GbE electrical interfaces, which we unfortunately have not yet mastered.
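Running the same simple arithmetic at 400G makes the problem plain (again, just an illustrative sketch of my own; the lane counts correspond to Figure 1):

# Lane counts required to reach an aggregate 400 Gbps (see Figure 1).
from math import ceil

TARGET_GBPS = 400
for lane_rate in (100, 50, 25):                 # Gbps per electrical lane
    print(f"{lane_rate} Gbps lanes: {ceil(TARGET_GBPS / lane_rate)}x")
# 100 Gbps lanes: 4x  (rule of four, but not yet mastered electrically)
# 50 Gbps lanes:  8x
# 25 Gbps lanes:  16x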

Figure 1: Lanes required to reach 400G over 100 Gbps (A), 50 Gbps (B), and 25 Gbps (C) electrical interfaces.

Is it more reasonable, then, to lower our sights to 200G as the next inflection point?
