At FiO 2017, Scott McEldowney of Oculus sketched out a future vision in which augmented and virtual reality merge—in devices as “socially acceptable” as an ordinary pair of glasses.
Right now, virtual reality (VR) and augmented reality (AR) largely occupy different domains: VR seeks to immerse users in an entirely separate world, while AR overlays additional data, images and experiences onto the real one. But Scott McEldowney of Oculus Research, USA—who gave one of three “Visionary Speakers” keynotes that kicked off the Monday session of Frontiers in Optics in Washington, D.C.—sees a future in which those strands will come together.
Devices almost as lightweight as an ordinary pair of eyeglasses will someday “offer AR, and VR, and everything in-between,” said McEldowney. “And we’re going to wear them all day long.”
McEldowney, an optical engineer who has worked on both VR and AR imaging and display for Oculus and, earlier, for Microsoft on its Kinect and Hololens products, has no doubt that this “mixed reality” future is coming. But, he says, it’s likely to be many years away. And getting there, he told his audience, will require “many breakthroughs by the people in this room.”
Getting to a “socially acceptable” platform
McEldowney noted that VR is already making strides at capturing aspects of the real world and bringing them into the virtual one. That, he said, would “probably revolutionize a lot of the ways we work and play.” But according to McEldowney, VR, as it’s done today, can never produce a “go-anywhere, mixed-reality device” that is as “socially acceptable” as an ordinary, see-through pair of eyeglasses. And that level of social acceptability, he said, is “an absolute requirement for anything we’re going to wear in public.”
That means that the path to what McEldowney calls “full AR”—the mixed-reality future—will need to be via see-through AR glasses that are “light, comfortable, power efficient, stylish and socially acceptable.” And it’s a tall technical order. “It’s going to take five years, ten years, or even longer,” he said, “but eventually they will become a viable part of our everyday lives.”
Scott McEldowney at FiO 2017. [Photo: Alessia Kirkland]
That transition, however, will require some fundamental breakthroughs in optical technology—analogous, McEldowney suggested, to the long road that led from the first stirrings of a vision for “human-oriented computers” articulated by the psychologist and computer scientist J.C.R. Licklider in the late 1950s to the iPhone in your pocket today. The many years required to walk the path of that last technological revolution (and the many sub-revolutions wrapped up in it) are, McEldowney stressed, “worth keeping in mind as we think about the future of AR.”
Right now, said McEldowney, VR has already shown us that “it’s possible to put the right photons in our retina, the right pressure waves on our eardrums and the right vibrations on our skin” to create experiences that “genuinely feel real.” But a much bigger leap will come when we can “mix those virtual experiences seamlessly with the real world” in a lightweight AR package. That leap will require technology that doesn’t exist today, McEldowney acknowledged, but now-commonplace utilities such as Facebook and Uber, he pointed out, would once have seemed equally unimaginable. “It may seem far-fetched,” he said. “But remember, we’ve already seen it happen once.”
The right photons in the right place
McEldowney spent much of his presentation discussing what it would take to get to that lightweight, mixed-reality future. Many of the requirements, of course, hinge directly on optical technology—particularly display technologies that “deliver the right photons to the right places on our retinas.” He stressed that consumer applications of AR impose some stringent requirements in depth of field, image quality, field of view and the “eye box,” the volume of space within which the viewer’s pupil can move while the imaging system still delivers an acceptable image. Unfortunately, said McEldowney, “improving one of these often works against one of the others.”
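One physical reason these requirements work against each other is the conservation of étendue: a display engine of fixed étendue must split its budget between eye-box area and field-of-view solid angle, so widening one shrinks the other. The sketch below (not from the talk; a simplified model with illustrative numbers, ignoring pupil replication and waveguide losses) shows how the eye box collapses as the field of view grows:

```python
import math

def eyebox_diameter_mm(etendue_mm2_sr, fov_deg):
    """Rough eye-box diameter for a display engine of fixed etendue.

    Simplified model: etendue ~ (eye-box area) x (solid angle of the
    full field of view, taken as a spherical cap). With the etendue
    budget fixed, a wider FOV forces a smaller eye box.
    """
    half_angle = math.radians(fov_deg / 2)
    solid_angle_sr = 2 * math.pi * (1 - math.cos(half_angle))  # spherical cap
    eyebox_area_mm2 = etendue_mm2_sr / solid_angle_sr
    return 2 * math.sqrt(eyebox_area_mm2 / math.pi)  # circular eye box

# Hypothetical etendue budget for a small projection engine, in mm^2*sr.
budget = 20.0
for fov in (30, 60, 90):
    print(f"FOV {fov:3d} deg -> eye box ~ {eyebox_diameter_mm(budget, fov):.1f} mm")
```

Under these toy numbers, tripling the field of view from 30 to 90 degrees cuts the eye-box diameter by roughly a factor of three, which is the kind of zero-sum tradeoff McEldowney described.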
The need for a compact, comfortable and ergonomic design—“a cornerstone of consumer AR,” according to McEldowney—also imposes limitations. He noted that Microsoft’s HoloLens, the current state of the art, weighs in at around 685 grams, versus 20 to 25 grams for a typical pair of eyeglasses. “We need to make an order-of-magnitude improvement,” he said, “just to get in the ballpark” of an all-day wearable device. It’s a daunting problem, according to McEldowney—but also “a great place for research,” as cutting-edge optical techniques and tools including nanomaterials, metasurfaces, MEMS and computational optics could enable significant breakthroughs.
The technical challenges of the mixed-reality future don’t stop with display technology, or even with optical technology generally. Eye-tracking, for example, is a devilishly difficult problem, according to McEldowney, given the range of possible eye motions and the wide variety of pupil shapes, irregularities, surgical complications and other factors that designers need to deal with. And major advances in artificial intelligence, human-computer interaction, computer vision, and payload and packaging will all need to underpin the mixed-reality future that he envisions.
McEldowney believes, though, that the technical effort involved is worth it. Consumer AR, when fully realized, will allow users to “modify reality, and mix in the virtual world, in literally any way we want.” And addressing the intriguing scientific and engineering challenges of getting there, he concluded, “should keep many of us in this room busy, probably, for decades to come.”