COM Express is heading towards "server-on-module"

With the newly revised Version 3.0 of the most successful computer-on-module (COM) standard, a new pinout type extends the reach of COM Express into server-type applications.

COM Express – a computer-on-module standard defined by the PICMG consortium (www.picmg.org) – saw its first version released in 2005, with updates in 2010 and 2012. The upcoming Revision 3.0 of the COM Express standard defines four different sizes and three pinouts.

The new Type 7, while not a replacement for the well-established Type 6 pinout, trades all audio and video interfaces for four 10 GBit Ethernet ports and a total of 32 PCI Express lanes. These changes were made to support enhanced micro servers and other server-type applications that allow only low power consumption but require high computing performance and communication throughput.

COM Express pinouts

The PICMG specification defines different pinout types in order to fulfill application-specific demands. The pinout Types 1, 3, 4, and 5 are considered "legacy" and are no longer used for new designs. Products featuring older pinout types are still available and conform to Revision 2.1 of the COM Express specification. (Table 1.) The Mini size was introduced with Rev. 2.1 and is only implemented for the single-connector pinout Type 10. The most popular pinout today is Type 6, which replaced the legacy Type 2 computing modules.

Table 1: The PICMG specification defines different pinout types in order to fulfill application-specific demands.

The Extended size definition of 110 x 155 mm has not become relevant in the market so far. With the new, server-oriented pinout Type 7 defined in COM Express specification Rev. 3.0, this size might finally come into play, as server-type applications require more capacity and more robust performance levels. COM Express supports a maximum power consumption of 137 W; the larger size adds real estate for more memory and allows for better heat transmission to support higher power consumption.

When comparing the new Type 7 pinout to the Type 6 pinout, it becomes clear that Type 7 is not a replacement for Type 6: Instead, it's a definition that clearly targets headless server applications with low power consumption, high computing density, and high I/O throughput. The new Type 7 definition removes all audio and video interfaces, the upper four USB 2.0 ports, ExpressCard, and the upper two SATA ports, a move that frees 60 pins on the AB connector and another 42 pins on the CD connector. These 102 newly released pins, in combination with some previously reserved pins, are used to add extra PCI Express lanes and four 10 GBit Ethernet KR lanes with a complete set of NC-SI sideband signals.

At a maximum, Type 7 COM Express modules can provide a host of features:

  • 4 x 10GBASE-KR Ethernet with NC-SI
  • 1 x 1 GBit Ethernet
  • 32 x PCI Express 3.0 lanes
  • 2 x SATA
  • 8 x GPIO shared with SDIO
  • 2 x serial shared with CAN
  • LPC bus shared with eSPI
  • SPI and I²C bus

10 GBit Ethernet

On top of the existing 1 GBit Ethernet, the COM Express Type 7 pinout defines up to four 10GBASE-KR ports with a maximum theoretical data rate of 10 GBit/s each. 10GBASE-KR defines single backplane lanes (see IEEE 802.3, Clause 49) in order to avoid being tied to predefined physical interfaces.
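As a sanity check on the 10 GBit/s figure, the payload rate of one 10GBASE-KR lane follows from its serial line rate and its 64b/66b coding; the line-rate and coding figures below are the standard IEEE 802.3 values, not taken from this article:

```python
# Effective payload rate of one 10GBASE-KR backplane lane.
# 10.3125 GBd serial rate and 64b/66b line coding are the
# standard IEEE 802.3 figures for 10GBASE-R/KR signalling.
LINE_RATE_GBD = 10.3125      # serial symbol rate per lane
CODING_EFFICIENCY = 64 / 66  # 64b/66b coding overhead

payload_gbit_s = LINE_RATE_GBD * CODING_EFFICIENCY
print(f"10GBASE-KR payload rate: {payload_gbit_s:.1f} GBit/s")  # → 10.0 GBit/s
```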

The PHY, which implements the physical transmission layer, is not located on the module; it needs to be implemented on the carrier board. The carrier board implementation ultimately decides whether the data is transmitted via copper or fiber cables. For even more flexibility, this can be implemented with exchangeable SFP+ (small form-factor pluggable) modules.

It’s also possible to combine the performance of multiple 10 GBit Ethernet signals: Four lanes of 10GBASE-KR can be bundled into one PHY for 40GBASE-KR4.

The feature set of the COM Express 10GBASE-KR interfaces also includes a software-defined pin for each of the four interfaces. This physical pin can be configured as input or output and is controlled by the corresponding Ethernet controller. A typical application is the implementation of hardware-based Precision Time Protocol (PTP) support for enhanced real-time applications.

NC-SI Ethernet sideband signals

The network controller sideband interface (NC-SI) defines the protocol and electrical interface for connecting a baseboard management controller (BMC), which is used to enable out-of-band remote manageability. This interface was defined by DMTF (Distributed Management Task Force, Inc., see http://www.dmtf.org) and is fully implemented for COM Express Type 7 modules.

NC-SI defines the connection between the network controller and the out-of-band management controller, which can be implemented on the carrier board. It supports the communication between the management controller and external management applications.

NC-SI signals

  • NCSI_CLK_IN : Clock Reference
  • NCSI_RXD[0:1] : Receive Data (Network Controller to Baseboard Management Controller)
  • NCSI_TXD[0:1] : Transmit Data (Baseboard Management Controller to Network Controller)
  • NCSI_CRS_DV : Carrier Sense/Receive Data Valid to Network Controller
  • NCSI_TX_EN : Transmit Enable
  • NCSI_RX_ER : Receive Error
  • NCSI_ARB_IN : Network Controller Hardware Arbitration Input
  • NCSI_ARB_OUT : Network Controller Hardware Arbitration Output

Mass storage interface

The removal of two SATA ports may seem surprising at first glance, since server applications are always hungry for large amounts of mass storage. However, current technology trends clearly show SATA hard drives being replaced by fast solid-state drives (SSDs). Because SSDs are much faster, the SATA interface becomes a performance bottleneck and is being replaced by NVMe (NVM Express/Non-Volatile Memory Host Controller Interface Specification – NVMHCI, see www.nvmexpress.org), which uses the PCI Express interface for mass storage devices. That approach is clearly supported by Type 7 with its increased number of PCIe lanes. (Table 2.)

Table 2: Performance comparison of SATA and PCI Express
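Table 2's comparison can be re-derived from the line rates and coding schemes alone; a back-of-the-envelope sketch, using the well-known figures for SATA III (6 GBit/s, 8b/10b coding) and PCIe 3.0 (8 GT/s per lane, 128b/130b coding):

```python
# Theoretical throughput: SATA III vs. PCI Express 3.0.
# SATA III: 6 Gbit/s line rate with 8b/10b coding (80 % efficiency).
# PCIe 3.0: 8 GT/s per lane with 128b/130b coding (~98.5 % efficiency).
def sata3_mb_per_s():
    return 6_000 * (8 / 10) / 8            # Mbit/s -> MB/s

def pcie3_mb_per_s(lanes=1):
    return 8_000 * (128 / 130) / 8 * lanes

print(f"SATA III:    {sata3_mb_per_s():.0f} MB/s")   # → 600 MB/s
print(f"PCIe 3.0 x4: {pcie3_mb_per_s(4):.0f} MB/s")  # → 3938 MB/s
```

Even a modest four-lane NVMe device thus has roughly 6.5 times the theoretical bandwidth of a SATA III port.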

Benefits of NVMe when compared to SATA

NVMe is the optimized PCI Express SSD interface, a logical device interface defined from the ground up to take advantage of the low latency and parallel internal structure of flash-based storage devices. The goal of NVMe is to unlock the full performance of PCIe-based SSDs and standardize the PCIe interface for fast SSDs. The NVMe specification defines an optimized register interface, command set, and feature set for PCI Express-based SSDs. NVMe reduces I/O overhead and brings various performance improvements, including multiple long command queues, improved interrupt processing, and reduced latency.
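The "multiple long command queues" advantage is easy to quantify from the two specifications' limits (AHCI/SATA: one queue with a depth of 32; NVMe: up to 64K queues of up to 64K commands each):

```python
# Outstanding-command headroom: legacy AHCI/SATA vs. NVMe.
# These are the specification limits, not measured values.
ahci_queues, ahci_depth = 1, 32           # AHCI: single queue, depth 32
nvme_queues, nvme_depth = 65_535, 65_536  # NVMe: up to 64K queues x 64K commands

print(f"AHCI outstanding commands: {ahci_queues * ahci_depth:,}")
print(f"NVMe outstanding commands: {nvme_queues * nvme_depth:,}")
```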

For classical server applications, NVMe mass storage devices are available as standard-sized PCI Express expansion cards. For mobile and embedded applications, the M.2 form factor with up to four PCIe lanes is typically used.

Upcoming NVM technologies

Intel has already announced its intention to release Optane SSD products based on 3D XPoint, a technology Intel and Micron unveiled in 2015. This new memory is based on phase-change principles and promises to boost performance and endurance by a factor of 1,000 compared to NAND flash. Thanks to 3D stacking of the cells, the density of 3D XPoint is expected to be 10 times that of DRAM.

The promised performance and endurance of this new technology will be constrained by SATA, however; NVMe leaves much more headroom to gain the expected maximum performance levels.

Heat management and power consumption

The high density of computing performance required for data-center applications correlates directly with power consumption, and this has a direct impact on energy costs, which are not expected to decrease in the future. It is not just the computer's own power draw; the energy required for cooling also goes into the total operating cost. The lower the computer's power consumption, the lower the cost of cooling it. Efficient cooling also increases the reliability and lifespan of the silicon. With the Turbo Boost features of current processors, a good cooling concept even allows for extra computing performance: Turbo Boost overclocks the processors as long as they are kept cool enough.

In order to be highly thermally efficient, most COM Express products are equipped with embedded technologies borrowed from mobile and low-power applications. The COM Express specification defines a heat spreader as a thermal interface to the system housing. This flat surface easily integrates into server applications and allows for a quick technology upgrade without having to change the mechanical and/or electrical system architecture. Following the ever-changing roadmaps of the chip vendors is no longer a challenge.

The COM Express specification also defines an I²C bus that allows environmental sensors, such as multiple temperature sensors, to be connected for enhanced system monitoring.
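On a Linux-based server module, such I²C sensors typically surface through the kernel's hwmon framework; a minimal sketch of polling them (sysfs paths and sensor labels vary per board and driver, so treat this as illustrative):

```python
from pathlib import Path

def read_temperatures(hwmon_root="/sys/class/hwmon"):
    """Return {sensor_label: degrees Celsius} for all hwmon temperature inputs."""
    readings = {}
    for temp_file in Path(hwmon_root).glob("hwmon*/temp*_input"):
        label_file = temp_file.with_name(temp_file.name.replace("_input", "_label"))
        # Fall back to the file name when the driver provides no label
        label = label_file.read_text().strip() if label_file.exists() else temp_file.name
        readings[label] = int(temp_file.read_text()) / 1000.0  # millidegrees -> °C
    return readings

if __name__ == "__main__":
    for name, celsius in sorted(read_temperatures().items()):
        print(f"{name}: {celsius:.1f} °C")
```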

Usable for Open Compute Project?

Some of the largest operators, such as Facebook and Google, are driving the Open Compute Project (http://www.opencompute.org) to make their server platforms more efficient, flexible, and scalable. This is one of many areas where COM Express could use its server-on-module capabilities to enhance telecom and data-center applications.

Compatibility with the Type 6 pinout

For headless applications it might be possible to use a Type 7 module with a Type 6 carrier board; such a combination will work if certain interfaces are not used by the carrier board.

Omitted interfaces

  • SATA[2:3]
  • AC97 / HDA Audio
  • USB[4:7]
  • EXCD[0:1]
  • eDP / LVDS
  • VGA
  • DDI[0:2]

Going forward

COM Express is the most successful computer-on-module standard ever, but it is not the best fit for the fast-growing area of edge computing servers. In this arena, requirements from the "classic" server markets combine with industrial requirements. It is clear that server processor technologies for low-power applications supporting a massive number of interfaces (e.g., PCI Express Gen 4 or Gen 5, 100 GBit Ethernet) will need to become available within the next few years.

COM-HD (the name is not yet fixed; it might be changed by the PICMG workgroup) is the forward-looking next generation of COMs. It’s the next standard on the COM roadmap, which started way back in 1999 with ETX, followed in 2005 by COM Express, and is about to be extended by COM-HD to ensure state-of-the-art computing modules for at least the next 10 years.

The task of PICMG's COM-HD workgroup is to define a standard that supports fifth-generation PCIe and 100 GBit Ethernet while doubling the number of interfaces. The maximum I/O performance of COM-HD will be about eight times that of the current COM Express Type 7: twice the lanes (64 instead of 32 PCIe lanes), multiplied by the roughly four times higher per-lane throughput of PCIe Gen 5 compared to Gen 3.
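The "about eight times" figure can be checked arithmetically: twice the lane count, with each lane at PCIe Gen 5 (32 GT/s) instead of Gen 3 (8 GT/s), both generations using 128b/130b coding:

```python
# Aggregate PCIe bandwidth: COM Express Type 7 vs. the COM-HD goal.
def pcie_gbit_s(gt_per_s, lanes):
    return gt_per_s * (128 / 130) * lanes  # 128b/130b coding (Gen 3 and later)

type7  = pcie_gbit_s(8, 32)    # Type 7: 32 lanes of PCIe Gen 3
com_hd = pcie_gbit_s(32, 64)   # COM-HD target: 64 lanes of PCIe Gen 5

print(f"Speed-up: {com_hd / type7:.0f}x")  # → 8x
```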

COM-HD will not replace COM Express. Instead, it will offer more choices for high-end edge computing.

Christian Eder is director of marketing for congatec AG. He has been an active participant in the COM Express workgroups as the specification editor for COM Express 2.0, COM Express 2.1, COM Express Design Guide 2.0, Embedded EEPROM, Embedded EAPI, and now the COM Express 3.0 workgroup.

congatec AG www.congatec.com
