
Optical Interconnects in Data Centers: What’s Next?

[Image: Marc Taubenblatt at FiO 2017.]

If there’s one motto for the current age of advanced computing and machine learning, it might be “There’s no such thing as too much data.” But the rapidly growing demand for ways to move that data around efficiently is creating some profound technical challenges—and spurring work on new optical frameworks and solutions. In a “Visionary Speakers” talk on Thursday at the Frontiers in Optics meeting in Washington, D.C., Marc Taubenblatt of IBM Corp. sketched out some of those frameworks, particularly for data centers.

An important piece of the puzzle, Taubenblatt said, lies in specific hardware solutions, such as moving the optics closer to the electronics on the chip and making greater use of next-gen optical circuit switching. But he also suggested that part of the solution could come from thinking differently about network hardware and architectures writ large. In particular, he speculated that tomorrow’s networks may move increasingly away from today’s general-purpose approach and toward special-purpose systems in which the network architecture is matched to the needs and workloads of the task at hand.

Hitting a ceiling

Taubenblatt, the senior manager of optical communications and high-speed test at IBM’s T.J. Watson Research Center, USA, offered some sobering thoughts to motivate his presentation. According to Cisco Systems, data-center workloads are growing at a 21 percent compound annual rate—and that number does not even include flows related to machine learning. Further, machine-to-machine (M2M) traffic is ballooning at what Taubenblatt called “an astounding rate” of 49 percent per year.
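To put those growth rates in perspective, a quick compound-growth calculation (an illustration, not a figure from the talk) shows how sharply they diverge over even a few years; the five-year horizon below is an arbitrary choice:

```python
# Back-of-the-envelope compound annual growth, using the Cisco rates
# cited in the talk. The 5-year horizon is an illustrative
# assumption, not a projection from the presentation.

def growth_multiplier(rate: float, years: int) -> float:
    """Return the traffic multiplier after `years` of compound annual growth."""
    return (1 + rate) ** years

for label, rate in [("workloads, 21%/yr", 0.21), ("M2M traffic, 49%/yr", 0.49)]:
    print(f"{label}: x{growth_multiplier(rate, 5):.1f} after 5 years")

# workloads, 21%/yr: x2.6 after 5 years
# M2M traffic, 49%/yr: x7.3 after 5 years
```

At 49 percent per year, in other words, M2M traffic more than septuples in five years, which is why interconnect efficiency, not just raw capacity, dominates the discussion.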

Some 77 percent of that M2M traffic, Taubenblatt noted, takes place within individual data centers, a statistic that underscores the need to solve interconnect issues above all in these power-hungry environments. Making the situation even more complex, he added, is the fact that these centers need to be set up to handle high peak loads during busy hours. “Data movement within the data center is becoming a critical feature,” Taubenblatt said. “We need to keep performance and efficiency growing.”

The problem, he continued, is that many of the approaches the community has relied on in the past, chiefly raising the data rate in each channel, are rapidly becoming unsustainable because of power consumption. In much the same way that power consumption previously put a lid on the growth of microprocessor clock speeds, Taubenblatt said, data networks are unlikely to be able to keep increasing the speed of off-chip connections as they have in the past while holding power requirements to reasonable levels.

Has optical switching’s time finally come?


[Image: iStock]

Taubenblatt sees the switches in the data center as a key to resolving the problem. An analysis from Microsoft, he pointed out, suggests that switch I/O alone accounts for some 16 percent of data-center network power consumption, with other switch functions accounting for another 36 percent. As traffic within the data center continues to burgeon, single-chip switches, the building blocks of a typical data-center network, will eventually consume so much power that cooling them efficiently becomes effectively impossible.
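Taken together (a simple composite of the two figures above, not a breakdown given in the talk), those numbers imply that switching accounts for roughly half of network power:

```python
# Composite of the Microsoft figures cited in the talk: switch I/O
# plus other switch functions, as a share of network power.
switch_io = 0.16      # switch I/O share of network power
switch_other = 0.36   # other switch functions
print(f"switching overall: {switch_io + switch_other:.0%} of network power")
# switching overall: 52% of network power
```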

In the near term, companies, including Taubenblatt’s operation at IBM, are focusing on easing that power crunch through more efficient packaging of the optical links—that is, by placing the optics closer to, or even on top of, the electronic processing package. Taubenblatt showed a number of on-package “optochip” prototypes that IBM has developed as steps toward tighter integration of optics and electronics.

But beyond that, he argued, “where we need to get to is getting rid of the electronic switch,” and replacing it with optical switching. This, he explained, eliminates a large number of power-eating conversions between the electrical and optical domains and back, which are required at every data “hop” in today’s conventional networks. “Optical circuit switching has been talked about for quite some time,” said Taubenblatt, “but I think that finally now we’re starting to see some light.”
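As a rough illustration of why removing those conversions matters, consider a toy per-hop energy model. All of the pJ/bit values below are hypothetical placeholders chosen only to show the structure of the argument; none come from the talk:

```python
# Toy model of energy per bit for a multi-hop path across a data center.
# In a conventional network, each hop requires an optical->electrical->
# optical (O-E-O) conversion plus electronic switching; an optical
# circuit switch instead passes the light straight through.
# All pJ/bit values are hypothetical placeholders.

E_OEO = 10.0       # O-E-O conversion per hop, pJ/bit (hypothetical)
E_ESWITCH = 5.0    # electronic switching per hop, pJ/bit (hypothetical)
E_OSWITCH = 0.5    # optical pass-through per hop, pJ/bit (hypothetical)

def path_energy(hops: int, optical: bool) -> float:
    """Energy in pJ/bit to traverse `hops` switch stages."""
    per_hop = E_OSWITCH if optical else (E_OEO + E_ESWITCH)
    return hops * per_hop

for hops in (1, 3, 5):
    print(f"{hops} hop(s): electronic {path_energy(hops, False):5.1f} pJ/bit, "
          f"optical {path_energy(hops, True):4.1f} pJ/bit")
```

Whatever the actual numbers, the point is structural: the electronic path’s energy cost grows with every hop’s conversions, while the optical path avoids those conversions entirely.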

Fitting the network to the task

Taubenblatt showed work that IBM is undertaking to build reconfigurable optical switches using silicon-photonic technology. Yet he also suggested that, ultimately, the network architectures of the future may need to take more careful account of the sorts of “cognitive workloads” they must support.

He pointed out, for example, that typical scientific-computing tasks, such as finite-element modeling, often put a premium on “nearest neighbor” communication. Web applications, by contrast, rely on highly parallelizable “east-west” traffic within the data center to handle peak data flows. And emerging high-load applications such as machine learning and graph analytics may require random access to very large memory sets.

“The communications needs of a data center are usually made very general-purpose,” Taubenblatt said. “But the truth is, these different workloads actually have very different communications needs.” And in light of that, he suggested, the networks of the future—which may increasingly embody both optical switches and fast control algorithms to handle them—could well evolve to match specific types of workloads to the hardware and architectures best equipped to handle them.

Publish Date: 21 September 2017
