Switching at the speed of light

Solutions are emerging to address the latency that traditional interconnect schemes add between participating nodes, a problem that is especially acute for multimedia applications.

Multiprocessor, multinode, and multicore systems and the applications that drive them are at a technical crossroads. Increasing processor performance by riding Moore’s law isn’t yielding nearly what it was. Now the path to increased computing performance appears to lie with multicore processor technology and distributed computing. At the same time, services and data access through networks continue to increase at an alarming rate. For example, a recent IDC study reports that 40 percent of organizations see data warehouses growing at 50 percent annually. And 18 percent of organizations report their data warehouse size doubles annually. The key to moving past this crossroads lies in making the interconnect in multiprocessor or multinode systems more efficient and higher performing. This will enable multiple compute nodes to work together to achieve higher overall performance.

In this month's Software Corner column we'll look at a new interconnect technology from Lightfleet Corporation called Corowave. Corowave uses a unique laser technology with special broadcast properties that promises to be the foundation for the interconnect of the future, enabling systems to achieve next-level performance through more efficient, scalable distribution of computations between nodes.

Today's interconnect issues

The slower performance gains in processing power and doubling of data warehouse growth aren't the only issues putting increased pressure on today's point-to-point interconnect. Varying latency and bandwidth expansion problems arise when running broadcast-oriented applications. Traditional interconnects add latency between participating nodes that makes multimedia applications such as video meetings difficult to use. And large latencies are not the only issue. Differences in latency between nodes working on a common application can also lead to problems and inefficiencies for that application.

Bandwidth expansion is another big concern, creating capacity problems within the interconnect itself. When a videoconferencing application runs between multiple sites, each endpoint sends out a packet stream that must then be copied and sent to each of the other endpoints in the conference. These point-to-multipoint data streams are transmitted in the form of multicast packets. From the source endpoint's perspective this is a single packet going to a single multicast address. The interconnect is responsible for expanding a multicast address packet into multiple packets that go out multiple ports to the multiple destinations that make up the multicast group. This results in a one-packet-in, many-packets-out problem that can congest traditional methods of connecting multiple processors or nodes.
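The expansion described above can be sketched in a few lines. This is an illustrative model, not any vendor's switching code; the function names and the 4 Mb/s stream rate are assumptions chosen to show how aggregate egress bandwidth grows with conference size.

```python
# Illustrative sketch of the one-packet-in, many-packets-out expansion a
# traditional interconnect performs for multicast traffic.

def expand_multicast(packet: bytes, group_ports: list, ingress_port: int) -> dict:
    """Copy one inbound multicast packet to every member port except the sender's."""
    return {port: packet for port in group_ports if port != ingress_port}

def egress_load(n_endpoints: int, stream_rate_mbps: float) -> float:
    """Aggregate egress bandwidth for a full-mesh conference: each of n
    endpoints' streams is replicated to the other n - 1 endpoints."""
    return n_endpoints * (n_endpoints - 1) * stream_rate_mbps

# One packet in from port 1 becomes three packets out to ports 2, 3, and 4.
copies = expand_multicast(b"frame", group_ports=[1, 2, 3, 4], ingress_port=1)
print(len(copies))          # 3
# A 4-site conference at an assumed 4 Mb/s per stream already needs 48 Mb/s
# of aggregate egress -- and the cost grows quadratically with endpoints.
print(egress_load(4, 4.0))  # 48.0
```

The quadratic term in `egress_load` is the congestion pressure the column describes: a broadcast medium delivers the packet to all receivers at once instead of manufacturing n - 1 copies.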

What does all this mean? In a nutshell, we're using a fundamentally point-to-point interconnect for complex applications that are becoming increasingly point-to-multipoint and perhaps should be multipoint-to-multipoint (or all-to-all).

The Corowave switching interconnect solution provides the foundation for the development of a broadcast switching interconnect. The fundamental concept behind the technology is the use of lasers to send packets from a single source to multiple destinations simultaneously. This eliminates the latency problems discussed previously.

How it works

Figure 1 illustrates how the laser technology component works. A single transmitter beam is sent through a spreading lens. The broadcast light bounces off a mirror into a focusing lens destined for multiple receivers. So each node receives the data from a single source simultaneously using spreading and focusing lenses. The resulting transmission affords a flat and constant latency to each receiving node in the system.

Figure 1

It's also important to note that this laser broadcast technology is different from fiber initiatives being employed today. Fiber is a point-to-point transmission medium that uses laser light for data transmission. The Corowave technology uses light and lenses to achieve simultaneous broadcast interconnect.

Software impact

As with most disruptive technologies, it's not enough to have the physical layer solution by itself. Disruptive technology requires a software architecture that takes advantage of new capabilities yet can seamlessly interface to legacy software applications for graceful transition. Geoff Smith, Director of Software at Lightfleet, is chartered with that task.

Geoff described the new software architecture as creating a distributed shared memory environment without the shared memory overhead. Nodes can subscribe to the shared memory groups (called wavegroups) they are interested in and ignore the rest. Each node is allocated a single wavegroup for transmitting its information to the other nodes in the system. The rest of the wavegroups are used as the receive areas from the other transmitting nodes in the system. This kind of software architecture nicely parallels the broadcast architecture of the hardware interconnect component.
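The wavegroup model described above can be sketched as a toy publish/subscribe fabric. The `Fabric` class and its method names are assumptions for illustration, not Lightfleet's actual API; the point is that each node owns one transmit wavegroup, subscribes only to the groups it cares about, and a single write reaches every subscriber at once.

```python
# Minimal sketch of the wavegroup idea: one transmit group per node,
# subscription-based receive. Class and method names are hypothetical.

class Fabric:
    def __init__(self):
        self.wavegroups = {}    # wavegroup id -> latest payload
        self.subscribers = {}   # wavegroup id -> set of subscribed node ids

    def subscribe(self, node_id, wavegroup):
        """A node opts in to the shared-memory groups it is interested in."""
        self.subscribers.setdefault(wavegroup, set()).add(node_id)

    def publish(self, owner_wavegroup, payload):
        """A write lands in the owner's wavegroup and becomes visible to
        every subscriber simultaneously -- broadcast, not n separate copies."""
        self.wavegroups[owner_wavegroup] = payload
        return self.subscribers.get(owner_wavegroup, set())

fabric = Fabric()
fabric.subscribe("nodeB", wavegroup="wg-A")   # B listens to A's transmit group
fabric.subscribe("nodeC", wavegroup="wg-A")   # C does too; others ignore it
readers = fabric.publish("wg-A", b"state update")
print(sorted(readers))   # ['nodeB', 'nodeC']
```

Note how this mirrors the hardware: the publisher does one write regardless of how many nodes are listening, which is exactly the property the laser broadcast provides at the physical layer.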

You'll notice in Figure 2 that there are three paths to the hardware. The first is through the block called "cwnet." The "cwnet" driver acts just like an Ethernet driver that lives below a TCP/IP stack. This way, socket applications can operate over the Lightfleet interconnect without any changes.

The "cwblk" component implements the block I/O path, the second path shown in Figure 2. This way, standard block I/O and/or file system applications have a standard way of communicating through the Lightfleet interconnect.

Finally there is a direct path through the hardware. The application programming interface exposes the wavegroup concept to the application. This enables applications to be written that take full advantage of the highly parallel reader/writer environment provided by the hardware.

The Figure 2 block labeled "cwfm" stands for "Corowave Fabric Manager." This component performs the initialization, assignment, and shared memory initialization of all the wavegroups between the nodes in the system.

Figure 2

Geoff described the fabric manager as performing the following functions:

  • Fabric mastership and allocation - The fabric manager of one node is designated the "master"; from there, it coordinates and arbitrates the allocation of nodes to wavegroups.
  • Maintaining coherency - Locking wavegroups and updating the comings and goings of nodes in wavegroups.
  • Scheduling updates
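The three duties listed above can be sketched as follows. This is a hedged toy model, not the cwfm implementation: the `FabricManager` class, the minimum-ID mastership rule, and the lock-per-wavegroup coherency scheme are all assumptions made for illustration.

```python
# Hypothetical sketch of the fabric manager duties: mastership,
# wavegroup allocation, and coherency during membership changes.

class FabricManager:
    def __init__(self, node_ids):
        # Mastership: one node's fabric manager is designated master.
        # A deterministic rule (lowest ID) stands in for real election logic.
        self.master = min(node_ids)
        self.allocation = {}   # node id -> its single transmit wavegroup
        self.locked = set()    # wavegroups locked while being updated

    def allocate(self, node_id):
        """Master assigns each node exactly one wavegroup for transmitting."""
        wavegroup = "wg-" + node_id
        self.allocation[node_id] = wavegroup
        return wavegroup

    def join(self, node_id):
        """Coherency: lock the wavegroup while recording a node's arrival,
        then release once the membership state is consistent again."""
        wavegroup = self.allocate(node_id)
        self.locked.add(wavegroup)      # readers blocked during the update
        self.locked.discard(wavegroup)  # released; state is now coherent
        return wavegroup

fm = FabricManager(["n1", "n2", "n3"])
print(fm.master)        # n1
print(fm.join("n2"))    # wg-n2
```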

Now you might remember some previous columns where I described the Data Distribution Service (DDS), which employs a publish/subscribe model for software applications. If you do, you'll notice the nice parallels between DDS and the Corowave software architecture described here. The software architecture not only supports the technical benefits of the hardware interconnect, but also dovetails nicely with application programming initiatives like DDS. So companies leveraging DDS for their new applications will find a natural distributed programming and hardware interconnect environment within this new laser-based broadcast interconnect.

Which markets will see the biggest impact

Chris Kruell, Director of Corporate Communications for Lightfleet Corporation, said that while the implications of this technology are far-reaching, the immediate initiatives lie within the government and financial services sectors. These sectors use algorithms that are very sensitive to congestion and latency, and that scale by adding more parallel components. Lightfleet's Corowave technology allows for broadcast with flat latency, so these applications can be scaled to whatever degree of parallelism the performance target requires.

Specific algorithms Chris mentioned involve pattern matching - for example, matching audio samples against a database of voice tracks, key phrases, or other properties. The more nodes involved in the algorithm, the faster the matching can be performed, which can sometimes be a matter of life and death. With the current interconnect environment, a distributed database with regular high-speed switching runs into congestion issues fairly quickly. The bandwidth expansion issues described earlier can also lead to network congestion, which limits the scalability of this kind of application.
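The scaling argument above can be sketched with a small example: the query sample is broadcast once, and each node searches its own shard of the database in parallel. The sample data, function names, and thread-pool stand-in for "nodes" are all illustrative assumptions, not anything from Lightfleet's deployments.

```python
# Illustrative sketch of broadcast-friendly pattern matching: one query
# goes to all nodes at once; each node scans only its slice of the database.

from concurrent.futures import ThreadPoolExecutor

def match_shard(shard, query):
    """Each 'node' scans its own shard for entries containing the query."""
    return [entry for entry in shard if query in entry]

def broadcast_match(database, query, n_nodes):
    # Stripe the database across nodes; a broadcast interconnect delivers
    # the query to every node in one transmission with flat latency.
    shards = [database[i::n_nodes] for i in range(n_nodes)]
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        results = pool.map(match_shard, shards, [query] * n_nodes)
    return sorted(hit for partial in results for hit in partial)

db = ["alpha call", "key phrase hit", "noise sample", "another key phrase"]
hits = broadcast_match(db, "key phrase", n_nodes=2)
print(hits)   # ['another key phrase', 'key phrase hit']
```

Adding nodes shrinks each shard, so matching time drops roughly linearly - provided the interconnect can deliver the query and collect the results without the congestion the column describes.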

Chris also spoke of other natural parallel processing applications like manufacturing and design simulations. "There are many types of simulations, such as computational fluid dynamics, that are natural parallel processing applications that have historically been limited by the interconnect," he said. "By leveraging the Lightfleet Corowave technology into systems used for such simulations, the technology can enable all-to-all computational capability with fewer nodes, or more nodes can be added in the same amount of physical space."

Financial services applications such as those that drive stock and commodity trading are very much publish/subscribe applications. These applications take in all kinds of data feeds, and the computation they perform on a specific node is quite complex. Financial trading applications are also extremely sensitive to latency. These applications need to distribute hundreds of thousands, if not millions, of messages per second while maintaining extremely low, predictable latency. The shared data interconnect of the Lightfleet Corowave technology and the flat latency property it affords are extremely attractive for these kinds of applications.


The industry has been moving toward using optics and optical communication for a while. Major system vendors have projects that use optics to communicate between processors. But these initiatives continue to look at point-to-point communications.

Yet the majority of high-volume network applications are broadcast by nature. The Lightfleet Corowave broadcast interconnect technology, coupled with a software architecture that takes advantage of it, is complementary to the publish/subscribe initiatives of many application developers and interoperates with legacy applications. It adds an important piece to the puzzle of advancing today's communication infrastructure to the next level of usefulness.

For more information, contact Curt at cschwaderer@opensystems-publishing.com.