AdvancedMCs adapt to Big Data network demands

With metatrends such as Big Data and the Internet of Things (IoT) promising to turn traditional notions of networking on their head, network hardware vendors have arrived at a crossroads between specialized, dedicated hardware appliances and general-purpose platforms with the ability to scale with bandwidth demands. In this PICMG Systems and Technology panel session, Drew Sproul, Director of Marketing, Adax Inc.; Ron Huizen, Vice President, Systems and Solutions, BittWare; Edward Young, Managing Director, CommAgility; and Nigel Forrester, Technical Product Marketing Manager, Concurrent Technologies, discuss how the Advanced Mezzanine Card (AMC) ecosystem is evolving to meet these challenges.

PICMG: How are trends like the Internet of Things (IoT) and Big Data affecting today’s networks, and where does the Advanced Mezzanine Card (AMC) fit within this context?

SPROUL: I think that Big Data, when it comes to datacom networks or whatever we want to call today’s communications networks, dominates. As a dominator, it comes in two kinds of packages with regard to packet processing. One is content. If I look at Big Data in terms of content, video streaming comes to mind, gaming comes to mind; Big Data in the sense that there are a lot more bits involved in the communication than there are with simple voice, texting, or email. So demands on the network in terms of capacity, performance, and latency just skyrocket. So there’s Big Data in that regard. Then there’s Big Data in the context of Big Brother, and you can say benign Big Brother or bad Big Brother. Benign Big Brother would be what we all acquiesce to as consumers, so let’s say benign Big Brother is just consumer data, and everywhere I go there are ads popping up for bicycle gear. So Big Data in the sense of analytics is also dominating the networks, and packet processing is required to deliver content with everything you need: high quality, responsiveness, big bandwidth, the ability to dynamically allocate bandwidth, and so on. That all requires intelligence, management, and knowing what goes on in the network. So, packet processing for both content and analytics defines the challenge of what needs to be done in today’s communications networks.

Software-defined networking (SDN) datacenter cores with pay-as-you-go, expand-as-you-need, dynamic computing resources, that’s jumping ahead like gangbusters. Network functions virtualization (NFV) is addressing this packet processing, and also security. But I think that in NFV over the next five years there will be two stages. One is that we’re not going to be able to virtualize certain Deep Packet Inspection (DPI) functions or security protocols in the same way that we can generalize datacenter functions and applications. There is a very specific requirement for what I’ve heard best described as “hardware-engineered solutions for network functions,” and I think that in those two areas of packet processing and security the hardware will still need to be task-specific. That being said, hardware in general won’t be able to remain as dedicated appliances, but there will be places for specialized hardware at the edge.

FORRESTER: From our viewpoint the only thing that’s really changed is the level of visibility that the IoT brings to the creation of Big Data. For many years, some of our product lines have been deployed in machine-to-machine (M2M) applications. Typically our devices are deployed to do some real-time analysis relatively near the source, store the original data, and pass on a summary to the final datacenter somewhere in the cloud.
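As a rough sketch of that edge pattern (a hypothetical illustration only, not Concurrent Technologies’ implementation; the file path, endpoint URL, and summary fields are invented for the example), an edge node might log the raw M2M readings locally and forward only a compact summary to the datacenter:

    # Hypothetical edge-node sketch: analyze near the source, store the
    # original data locally, and pass only a summary on to the datacenter.
    import json
    import statistics
    import time
    import urllib.request

    RAW_LOG = "/var/data/raw_readings.log"           # local store for the original data (invented path)
    UPSTREAM = "https://datacenter.example/summary"  # placeholder endpoint for the cloud datacenter

    def summarize(readings):
        """Reduce a window of raw readings to a small summary record."""
        return {
            "timestamp": time.time(),
            "count": len(readings),
            "mean": statistics.mean(readings),
            "min": min(readings),
            "max": max(readings),
        }

    def process_window(readings):
        # 1) Store the original data near the source.
        with open(RAW_LOG, "a") as f:
            for r in readings:
                f.write(f"{r}\n")
        # 2) Pass on only the summary to the final datacenter in the cloud.
        payload = json.dumps(summarize(readings)).encode("utf-8")
        req = urllib.request.Request(UPSTREAM, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)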

Specific to AMCs, we are seeing interest in their use as the basis of a small size, low-latency, high-performance embedded computing (HPEC) solution on the edge of wireless networks. Because of their small physical size and functional granularity, they fit well as edge-of-network-type solutions, especially with the diversity of AMCs available off the shelf from the wider ecosystem.

PICMG: Given that Big Data and the IoT incorporate a great deal of machine communications, how has the AMC ecosystem evolved to meet the requirements of M2M networking while still delivering the performance required for specialized, high-bandwidth applications?

FORRESTER: We’ve seen AMCs used in M2M aggregation or data processing or data collection types of applications for quite a while. And one of the things that’s come along more recently that’s a really good example of how it fits into that space is the advent of MicroTCA.4 (mTCA.4). Okay, it’s an mTCA standard, but it means that you have to develop some specific AMC modules because they’re double-wide (Figure 1). We’ve got modules developed for that standard and they’re basically being used in physics experiments.

Typically the physics guys like to do quite a lot of their own designs, and they really like the idea of AMCs because they’re not too big but there’s enough space to put some decent FPGAs and things like that down. [The physics applications] use rear I/O, which is why they developed the mTCA.4 standard because that introduces the concept of a rear transition module (RTM) with an AMC. So, we’ve got processor boards used to control those preprocessing-type data collection systems. Where they’re controlling that system is usually quite close to the experiment itself – they’re usually doing lots of local storage, and at the end of that they pass the data off to some other huge server, which is where they do all their post processing. So it’s a nice little M2M application where AMCs fit really nicely because of their size and granularity, and the fact that they’ve got pretty good performance. (Editor’s note: Read more on MicroTCA.4 in “MicroTCA tabbed for next-gen test” on page 18.)

Figure 1: MicroTCA.4 (mTCA.4) introduces a rear transition module into the specification, and also calls for double-wide Advanced Mezzanine Cards (AMCs). The larger board area allows AMCs to be outfitted with larger or multiple compute elements such as DSPs and FPGAs, making the modules applicable in a range of high-speed communications, storage, and signal processing applications. Image courtesy Nutaq.

Although mTCA.4 was designed and developed by the physics community, there are definitely some use cases outside that, even in the telecom space. Because it’s physically larger and you now have the concept of RTMs, it makes it easy to develop five-nines capability with mTCA, because you can now remove various bits and pieces of equipment without the whole application or system stopping, which you couldn’t really do before with AMCs. If you’ve only got AMCs with a front panel connection and you lose the AMC, then you lose everything associated with it. Now you can have a rear connection and actually move that data across the backplane to something else, so a cable is not specifically tied to a module in that case.

YOUNG: In our market space of high-performance signal processing we’re seeing increasing usage of mTCA and AMCs, especially in complex test and measurement applications as customers update older, proprietary systems, which is helping us grow as a company. We’re seeing an increasing trend towards double-width AMCs, including mTCA.4, to handle and integrate large, powerful silicon devices, such as the latest DSPs and FPGAs.

FORRESTER: Because miniaturization is ever increasing, you can get some really good performance now on an AMC, and that performance is not just a CPU. You might have a CPU with some GPU cores, or quite a few vendors do things like DSPs with FPGAs, or DSPs with FPGAs with CPU cores even. So there’s a lot of functionality that you can get in a very small space, which makes [AMCs] suitable for fitting into those small physical environments – one of the things in their favor is that you don’t need a huge box to put them in. It’s quite easy to package them, whereas some of the other core network type stuff is still better suited to AdvancedTCA (ATCA) or similarly large form factor-type products.

HUIZEN: On our side, in the last two years we’ve gotten into networking much more than we ever had. Traditionally we were doing more signal processing, and our AMCs were being used in base station development and in communications test systems, but it wasn’t general networking. It was all very specialized. But with the Stratix V, which is Altera’s current-generation part that can run 10 Gigabit Ethernet (GbE) straight into the chip on a single lane, we started putting optical cages down on our boards, and all of a sudden many of our customers with FPGAs were doing networking. The whole financial space came out of the blue, and it all seemed to be centered around the fact that when things went from 1 Gigabit to 10 Gigabit, the general “just put a 10G network interface card (NIC) in and let it go over to the Intel to process” approach started falling apart if you had really high-speed data rates or you needed really fast reaction times. And that’s when people started to say, “Let’s put something out there directly attached to the network,” and for us that’s an FPGA. So you can do some of the work in the FPGA and be able to deal with full line rates very quickly and not overwhelm the Intel. Now with most of our boards we can support multiple 10 Gigabit [lanes], we can support dual 40G, and later this year we’re hoping to support 100G (4x 25 Gigabits per second (Gbps)).
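Conceptually (a simplified software sketch only, not BittWare’s FPGA logic; the packet layout and the destination-port filter are invented), the work being pushed into the FPGA is a line-rate filter-and-reduce stage, so the host CPU only ever sees the packets the application actually cares about:

    # Conceptual sketch of the filter/reduce stage an FPGA would perform at
    # line rate so the host CPU is not overwhelmed by full 10G/40G traffic.
    # The packet layout below is invented; real designs do this in FPGA logic.

    INTERESTING_PORT = 9000  # e.g., the one feed the application cares about

    def offload_stage(raw_packet: bytes):
        """Return a payload for the host, or None to drop the packet early."""
        if len(raw_packet) < 24:
            return None                      # runt frame: drop before the host sees it
        dst_port = int.from_bytes(raw_packet[22:24], "big")
        if dst_port != INTERESTING_PORT:
            return None                      # not relevant: never reaches the CPU
        return raw_packet[24:]               # strip headers, pass the payload up

    def host_receive(stream, handle):
        """Host-side loop: only processes what the offload stage lets through."""
        for pkt in stream:
            payload = offload_stage(pkt)
            if payload is not None:
                handle(payload)              # application-level processing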

In mTCA, we’re going to be doing a new AMC board, which will be able to do multiple 10 Gigabit, dual 40 Gigabit, or dual 100 Gigabit. On those, because we’re FPGA-based, the customer can choose what fabric they want to run on the back side, so whether they want to run Ethernet or Serial RapidIO (SRIO) or PCI Express (PCIe), the same board can support all of them. Traditionally in AMC we used to get a lot of people running SRIO, because we had a lot of them in communications testing, and RapidIO was the fabric of choice if you had some big DSPs and PowerQUICC processors in there. I think more and more, though, we’re going to see PCIe for general systems, but for networking PICMG is working on 40 GbE fat pipes and our boards will be able to support that (Figure 2).

Figure 2: BittWare’s A10AM3-GX/GT/SX Advanced Mezzanine Card (AMC) is based on the Altera Arria 10 GX/GT/SX FPGA and SoC and boasts up to 26 full-duplex, multi-gigabit SerDes transceivers at up to 28 Gigabits per second (Gbps).

PICMG: As most network applications today are just beginning the migration to 10 GbE interfaces, does 40G mTCA and AMC represent technology that is ahead of the market curve?

HUIZEN: I think it’s gradually going to evolve, but it’s as needed. Right now our boards can support 40 Gigabit, but we have very few people running 40 Gigabit on them. Most are still at 10 Gigabit. The guys that are running 40 Gigabit are doing things like network-attached storage (NAS), so they’re using the FPGA to interface at 40 Gigabit to some storage array and doing some preprocessing with the FPGA to speed things up.

FORRESTER: If I look back in history at ATCA, when that started out it was dual 1 Gig channels, and they’ve gone from 1 Gigabit to 10 Gigabit to 40 Gigabit, and now they’re talking about 100 Gigabit. Actually the same is true of AMCs. Because the processors themselves are able to gobble up much more data and process it, there will be a need to bring in higher streams and move that around the backplane. One of the advantages of AMCs, and one of the reasons it’s very well thought of as a standard, is that it’s evolved to meet the application requirements of the future. So it may not be something that many people want to use today in an AMC format, but the fact that it’s coming and will be ready at the right time for people to use is a really nice thing to have up your sleeve. From our point of view, we’ve introduced an AMC recently with 10 Gigabit interfaces on it, and I’m sure we’ll do the same with 40 Gigabit at some point in the future; we’re just not quite there yet.

PICMG: What are your projections for network hardware over the next 5-10 years, and how do you see AMCs fitting into those architectures?

SPROUL: I think that today and into the future there will be a place for slimming down to address the concern that ATCA is way too expensive. So, slimmed-down ATCA and flexible AMCs, so you can have multi-functional, hardware-engineered appliances. Our architecture is great for that today, and there will be a marketplace for that into the future.

Long term, I think that there will be an additional layer in the network that will be edge-like, or metro-ish. So you can define that as the whole metro thing where you’re going to have packet processing and security done en masse. We’re going to find an architecture that’s going to be bigger than a breadbox but smaller than a refrigerator, and the trend is going to be more towards purpose-built appliances that will have more flexibility than you can get from discrete AMC modules, because each AMC module, at some level, is a compute environment unto itself, and you have to write applications or software to manage groups of AMCs. That can certainly be done, but at a certain point where hardware-engineered solutions on these communications appliances meet datacenter virtualization, I think there will be something like a hardware-engineered device that will allow for some dynamic allocation of its resources based on need. Today, these hardware-engineered data plane and security functions can pretty much be handled by 1U and 2U chassis, with today’s technology and today’s architecture. My touch points say that in NFV, the long-term money is all in research and proofs of concept for this next generation of data plane device – the blade-based, scalable, more virtualized ones. Lots and lots of pilot projects and proofs of concept, but at the same time we’re fielding and developing on what we have today, which are these ATCA/AMC 1U and 2U blades and chassis.

FORRESTER: We recently did a proof of concept on a slightly different sort of HPEC system based on AMCs, but it wasn’t actually in an mTCA chassis. We do see a few customers who are using AMCs, but not in either an ATCA or mTCA environment. They use a similar backplane arrangement, sometimes with their own switch, sometimes even with no switch, but they tend to be simpler, lower cost types of solutions that don’t need a MicroTCA Carrier Hub (MCH), for example. So if you need things like hot swap and the management capabilities, then the MCH is a really good thing to have. But if you don’t, because you basically need a box with some AMCs in it and if one of them goes faulty then you swap the box out, then you don’t need that level of capability (Figure 3). So we do see AMCs as sort of a building block for processor-intensive, sometimes storage-intensive applications where people just want a box of processors with some disks, and they’re prepared to put in a bit of effort to wire them up themselves.

Figure 3: Concurrent Technologies recently introduced an Advanced Mezzanine Card-based (AMC-based) proof of concept consisting of a 1U Data Center Compute and Networking (DCCN) enclosure configured with RapidIO for scalable, low-latency, high-performance embedded computing (HPEC).

Adax, Inc. www.adax.com sales@adax.com

BittWare www.bittware.com info@bittware.com

CommAgility www.commagility.com sales@commagility.com

Concurrent Technologies www.gocct.com sales@cct.co.uk