
Quantum Computing: Will It Work, and When?

Artist’s rendering of trapped ions in a quantum simulator, from a 2017 experiment. [Image: E. Edwards/JQI]

After decades of patient scientific groundwork, the notion of “quantum computing” has, in the past several years, seen a surge in new activity and interest—not only in the lab, but at commercial firms like Google, Microsoft and IBM, and even among the public at large. Spurring that new interest have been successful lab demonstrations of systems and simulations involving multiple quantum bits (qubits) in trapped-ion systems, superconducting circuits and other platforms.

But it’s still a long way from these demos to the kind of universal, fault-tolerant quantum machine that optimists have envisioned—one that can outperform the best classical computers on a wide range of problems. And some have questioned whether such machines are even possible.

On Monday, 6 May, that question formed the backdrop of a workshop at the CLEO Conference in San Jose, California, USA, with the provocative title, “Will Quantum Computing Actually Work?” The session, organized by OSA Fellow Ben Eggleton of the University of Sydney, Australia, and by Tara Fortier and Andrew Wilson of the U.S. National Institute of Standards and Technology (NIST), brought together five experts working in various aspects of quantum computing and technology to talk about the issue. While no clear answer to the session’s title question emerged, the discussion highlighted some of the key milestones on a road ahead that seems likely to stretch across decades rather than years.

Holy grail or unicorn?

Wilson, who served as the session’s moderator, got things started with a nod to the increasing scientific and public interest in the topic. “Most of us have heard amazing claims about the likely impact of quantum computing, and some of those amazing claims have come from scientists themselves,” he noted. That has spurred a worldwide uptick in spending on quantum initiatives by governments, as well as interest among commercial firms and investors.

“We now boldly claim that the future is quantum,” Wilson said. Some, he continued, have likened the search for quantum computers to the search for a holy grail. “But sometimes it’s hard to work out whether you’re looking for something that’s just hard to find, like a holy grail, or are you looking for something that’s impossible to find—more like a unicorn.”

The NISQ stage

Jerry Chow, a senior manager of the quantum computing effort at IBM Corp., picked up on the holy-grail theme, suggesting that the grail is “a universal, fault-tolerant quantum computer—one that would have provable speedups” in processing compared with classical machines. Such a universal machine doesn’t exist at this point and is likely a long way off.

Workshop participants Jerry Chow of IBM Corp. (right) and Jungsang Kim of Duke University/IonQ. Chow focused on the current generation of “noisy intermediate-scale” quantum computers, which he said are already enabling interesting research and publications.

In the meantime, Chow said, researchers are working with “noisy intermediate-scale quantum computers,” or NISQ computers—one example of which, according to Chow, is his firm’s own IBM Q Experience (IQX), an open quantum platform, based on superconducting-circuit technology, that IBM has maintained on the cloud for three years. Chow noted that IQX has had more than 120,000 users running more than 12 million experiments, and has spawned more than 180 scientific papers. Those numbers, he suggested, point to the potential impact of even a “noisy” quantum machine in advancing the science of quantum computing.
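
For a concrete sense of what those millions of IQX experiments look like, here is a minimal sketch of a two-qubit Bell-state circuit written with Qiskit, IBM’s open-source SDK for the platform. The snippet uses the Qiskit interface roughly as it stood at the time of the workshop; backend names and API details vary across versions, and the simulator here stands in for a cloud-based quantum device.

```python
# A minimal Bell-state experiment of the kind users run on the IBM Q Experience.
# Uses the circa-2019 Qiskit interface; later versions changed some of these calls.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)     # two qubits, two classical bits
qc.h(0)                       # put qubit 0 into an equal superposition
qc.cx(0, 1)                   # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])    # read both qubits out

backend = Aer.get_backend('qasm_simulator')  # a cloud backend would target real hardware
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)  # ideally ~50/50 between '00' and '11'; a noisy device leaks into '01'/'10'
```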

The current public IQX platform includes 5-qubit and 16-qubit devices, and IBM is also making 20-qubit devices available for its commercial clients. But in thinking about how to benchmark and understand such systems, Chow stressed that it’s “not necessarily a numbers game, of just pushing numbers of qubits.” Instead, IBM is focusing on a different metric, “quantum volume,” that wraps in both the number of qubits and the error rates in the system. “We see this as a potential proxy for scaling these kinds of NISQ computers,” he said.
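
IBM’s published definition of quantum volume boils down to the largest “square” random circuit (width equal to depth) that a device can run while still passing a statistical test on its outputs. A toy sketch, with hypothetical pass/fail data standing in for the many random-circuit runs performed on real hardware:

```python
# Toy illustration of the quantum-volume metric: QV = 2**m, where m is the
# largest width for which random square circuits (width = depth = m) pass a
# "heavy output" statistical test. The pass/fail data below are hypothetical.
def quantum_volume(test_results):
    """test_results: dict mapping circuit width m -> True if the m-by-m
    random-circuit test passed (heavy-output probability above 2/3)."""
    passed = [m for m, ok in test_results.items() if ok]
    return 2 ** max(passed) if passed else 1

# Hypothetical device: passes up to width 4, fails at width 5 as gate errors
# accumulate -- even though it physically has more than 5 qubits.
print(quantum_volume({2: True, 3: True, 4: True, 5: False}))  # -> 16
```

The upshot is that a 20-qubit device with poor gates can post a lower quantum volume than a 5-qubit device with excellent ones, which is exactly the “not a numbers game” point.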

Chicken and egg

The next speaker, OSA Fellow Mikhail Lukin of Harvard University, USA, has worked in a variety of quantum systems, including trapped atoms, nitrogen-vacancy centers and other platforms. But in thinking about quantum computing’s practicality and potential utility, he focused in his remarks on the software as well as the hardware.

“Despite enormous progress in the field, we still do not know how to build truly large-scale quantum machines,” Lukin said. “And what’s more striking is that if I were to build a quantum computer and give it to you, you wouldn’t know what to use it for.” That presents the quantum community with something of a chicken-and-egg problem—as many of the participants stressed throughout the workshop, the only way to start to understand what quantum computers can do is actually to build them and see.

Panelist Mikhail Lukin, Harvard University, noted the need to flesh out quantum applications and algorithms, not just hardware: “If I were to build a quantum computer and give it to you, you wouldn’t know what to use it for.”

Indeed, Lukin said that one reason that the current environment is “such an exciting time” is that several quantum platforms are starting to promise implementations that are “big enough, quantum enough, coherent enough and programmable enough that we hope, within the next few years, we can try to figure out what we can do with these machines.” In the quest for “quantum supremacy”—demonstrations that quantum machines are clearly superior to classical ones—Lukin pointed out that some trapped-atom and other systems have already been able to simulate or solve quantum physics problems at levels beyond the reach of classical computers.
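
The classical side of that comparison hits a wall quickly: a brute-force state-vector simulation must store one complex amplitude for each of the 2^n basis states of an n-qubit system. A back-of-the-envelope calculation (assuming 16 bytes per double-precision complex amplitude) shows why systems of around 50 qubits are considered beyond classical brute force, even if cleverer classical algorithms can do better on some circuits:

```python
# Memory required for brute-force state-vector simulation of n qubits,
# assuming one 16-byte double-precision complex number per amplitude.
for n in (30, 40, 50):
    gib = (2 ** n) * 16 / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 30 qubits: 16 GiB            (fits on a laptop)
# 40 qubits: 16,384 GiB        (a large compute cluster)
# 50 qubits: 16,777,216 GiB    (~16 pebibytes -- beyond any existing machine)
```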

The need for control

OSA Fellow Jungsang Kim, a professor of electrical and computer engineering at Duke University, USA, and a co-founder (with Christopher Monroe of the University of Maryland) of the quantum computing start-up IonQ, noted that the question of whether a universal quantum computer will happen really hinges on “getting to a useful application you can’t do otherwise. And that’s a fairly tall order, because you’re competing against high-performance computing, which is very powerful today.”

Kim noted that the qubits that his firm, IonQ, is working with—optically trapped ytterbium-171 ions—are “really, really good,” and that if the system is properly isolated it’s “very easy” to get to relatively long coherence times and “make your qubit as good as it needs to be.” The problem, he continued, is that the technology for controlling the qubits is not as good—and it’s the deficiencies in those control systems that are the limiting factor now in ion-trap quantum computing.
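
A toy error budget, with illustrative numbers rather than measured ones, captures Kim’s point: decoherence during a gate contributes an error of roughly the gate time divided by the coherence time, which for trapped ions is tiny, so the classical control chain sets the error floor.

```python
# Toy per-gate error budget for a trapped-ion qubit (illustrative numbers only).
t_gate = 100e-6       # assumed two-qubit gate time: ~100 microseconds
T2 = 10.0             # assumed coherence time: seconds are achievable in trapped ions
control_error = 1e-3  # assumed error from the control chain (lasers, electronics)

decoherence_error = t_gate / T2          # first-order dephasing during the gate
print(f"decoherence per gate: {decoherence_error:.0e}")  # ~1e-05
print(f"control per gate:     {control_error:.0e}")      # ~1e-03, two orders larger
```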

Still, Kim is optimistic that this can be overcome, and that the company can get to a 100-qubit platform “in the next few years.” If that’s achieved, he suggested, scientists will be able to “see if quantum computers can start to do interesting and useful things.”

Integrating multiple systems

Another panelist, OSA Fellow Jelena Vuckovic, a professor of electrical engineering at Stanford University, USA, has done much of her work with on-chip photonic quantum systems and integrated-photonic platforms. “For most quantum technologies,” she said, including quantum repeaters and quantum networks in addition to quantum computers, “we need homogeneous, long-lived qubits with good optical interfaces, and we need good optical connections.”

Given the number of platforms out there—including superconducting circuits, which operate in the microwave rather than the optical domain—“a lot of technologies of this type will need to be heterogeneously integrated,” according to Vuckovic. And for scaling such systems, she said she “strongly believes” that improvements in classical photonics and nonlinear optics will be crucial, in addition to progress on the quantum side.

Birgitta Whaley of the University of California, Berkeley, said that, while there “still aren’t enough” quantum algorithms, “what’s exciting right now is that there are a lot of algorithms coming out for domain-specific scientific problems,” in areas such as quantum chemistry and quantum simulation, that are difficult for classical computers to solve.

The last panelist, Birgitta Whaley, a professor at the University of California, Berkeley, and the director of the Berkeley Quantum Information and Computation Center, turned to the theoretical side of the field. Her work started in the area of quantum error correction—a “relatively mature area in the field,” Whaley said—and has moved into continuous error correction, measurement theory and applications, quantum control and other areas.
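
The flavor of error correction is easiest to see in its classical ancestor, the three-bit repetition code, sketched below: encoding one bit in three and decoding by majority vote turns a physical error rate p into a logical rate of roughly 3p². (A genuinely quantum code must also handle phase errors, and must do so without directly measuring the data qubits—complications this toy ignores.)

```python
# Toy classical repetition code: one logical bit stored in three physical bits,
# each flipped independently with probability p, decoded by majority vote.
# The logical error rate is 3p^2 - 2p^3 (the chance of two or more flips).
import random

def logical_error_rate(p, trials=200_000):
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))
        failures += flips >= 2       # majority vote decodes to the wrong bit
    return failures / trials

p = 0.01
print(f"physical error rate: {p}, logical: ~{logical_error_rate(p):.5f}")  # ~0.0003
```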

Whaley’s group is also looking at quantum algorithms. “It used to be said that there aren’t enough quantum algorithms, and that’s still true,” she acknowledged. “But what’s exciting right now is that there are a lot of algorithms coming out for domain-specific scientific problems,” such as electronic structure and dynamics and simulations of quantum systems.

What’s it best for?

Such domain-specific problems, and the best near-term uses for quantum computers, were a big theme of the lively Q&A session that followed the panel presentations. Asked to think about “the major scientific applications and/or markets for quantum computers in the next 5 to 20 years,” several of the panelists highlighted difficult physical and quantum chemistry problems, which according to IBM’s Chow are tractable even for the NISQ computer systems likely to be most common in the next several decades. (Whaley, however, warned that a number of key chemistry problems, in areas such as catalysis, depend on levels of accuracy that will require “a fully coherent machine with complete error correction.”)

Duke’s Kim agreed that quantum chemistry is one area where quantum computing’s ability to speed up solutions is known, and “where we know that quantum computers can really help.” But he added that “maybe the problems that we are going to solve are problems that we don’t currently know about”—noting that, in the classical realm, the transistor was not initially used for computing.

As a result, he said, “it’s very important for the hardware people to push the performance of our systems,” to test out problems that can’t be simulated with classical computers. Harvard’s Lukin added that one class of algorithms, so-called heuristic algorithms for optimization problems, could offer a natural testing ground in the search for a quantum speedup. Whaley agreed that optimization algorithms constitute “a huge opportunity” for implementations on quantum machines, and one with a lot of business interest.
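
Such heuristic approaches typically take a hybrid form: a classical optimizer tunes the parameters of a quantum circuit, and the measured output of the circuit scores each candidate setting. The sketch below shows only that outer loop, with a simple classical function standing in for the quantum hardware call.

```python
# Skeleton of the hybrid loop behind heuristic quantum optimization algorithms:
# a classical optimizer adjusts circuit parameters to minimize a measured cost.
import numpy as np
from scipy.optimize import minimize

def expected_cost(params):
    # Stand-in for running a parameterized quantum circuit and estimating the
    # cost function from its measurement statistics.
    return np.sin(params[0]) * np.cos(params[1]) + 1.0

result = minimize(expected_cost, x0=[0.5, 0.5], method='COBYLA')
print(result.x, result.fun)  # the parameters the heuristic settled on, and the cost
```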

Taking the long view

All of the panelists seemed to agree that, in thinking about the feasibility of quantum computing, a long view is essential. Stanford’s Vuckovic noted the extended history of classical computing before it started to take off during the 1970s with advances in integrated circuits; quantum technology, she suggested, could experience a similar “phase transition” in the future. And Kim noted that, looking at the broader history of innovation, “you tend to overestimate what can be done in three years, and underestimate what can be done in a decade.”

“Sometimes, a technology infrastructure takes a long time to develop,” Kim noted. “But once it’s available, use cases quickly come about.”

Publish Date: 07 May 2019
