
[Image: Getty Images]

On the fourth day of the Frontiers in Optics and Laser Science 2018 conference, Visionary Speaker Mark Bolas, Microsoft, USA, took the stage to delve into virtual reality (VR) and augmented vision, one of the four core themes of the conference. His talk, “Bending Light to Bend Reality,” came in two parts: the first focused on how to build reality using optics, while the second explored the different ways you can bend that reality once you’ve mapped it.

Building reality

A VR pioneer and researcher exploring perception, agency and intelligence, Bolas began his much-anticipated talk by passing out a small black piece of construction paper with a tiny hole in the center to everyone in the audience. Instructing everyone to hold the paper up toward the light, he said, “I see pinholes everywhere; that’s my reality.” He then explained that the human brain uses pinholes of light to create a mental map of our world. Our world is entirely something created in our minds out of bent light—“reality is not a thing.”
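The geometry behind Bolas’ pinhole metaphor can be made concrete. An ideal pinhole maps every 3-D point in a scene onto a 2-D image by perspective projection, which is one way light “encodes” the world our brains reconstruct. The following sketch is illustrative only (the function and its parameters are my own, not from the talk):

```python
def pinhole_project(point_3d, focal_length=1.0):
    """Project a 3-D point (x, y, z) through a pinhole at the origin
    onto an image plane at distance `focal_length` (requires z > 0)."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point must be in front of the pinhole (z > 0)")
    # Perspective projection: image coordinates shrink with depth.
    return (focal_length * x / z, focal_length * y / z)

# A point twice as far away lands half as far from the image centre:
print(pinhole_project((1.0, 2.0, 4.0)))  # (0.25, 0.5)
print(pinhole_project((1.0, 2.0, 8.0)))  # (0.125, 0.25)
```

The 1/z scaling is the same perspective cue a cinematographer exploits when moving a camera through a scene.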

Bolas illustrated this concept using cinematography as an example. Imagine, he said, watching a kung fu action sequence playing out on a spiral staircase: through different cuts and techniques, the cinematographers are choosing which pinholes to pick so that the viewer can’t help but get a feel for the spiral staircase. Cinematographers move the camera’s viewpoint to effectively carve a path through a figurative light field to create a mental map in the viewer’s mind—all on a 2-D screen.

Mapping in a mixed-reality world

Branching into the different techniques used to map these worlds in mixed reality (MR), Bolas explained that we live in a mixed world where optics plays a role in everything. Our physical world is already blended with the digital one: people are constantly on their phones, checking email or using an app.

To map reality, according to Bolas, you need a constellation of points or pinholes. But you also need to think about surfaces, which is where things get more complicated. Using the Xbox 360 Kinect as an example, Bolas explained that the Kinect’s sensor tracked a user’s body movements by triangulating a structured-light pattern, making the human body the game controller. The second-generation Kinect switched to a time-of-flight sensor, a range-imaging camera that resolves distance from the travel time of light. The techniques vary, but in each case reality is mapped out using optics and tracking.
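The time-of-flight principle reduces to one line of arithmetic: a light pulse travels out to the surface and back, so halving the round trip gives the range. A minimal sketch (the numbers and function names are illustrative, not from the talk):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_seconds):
    """Range in metres from a measured round-trip time of a light pulse."""
    # The pulse covers the distance twice (out and back), hence the /2.
    return C * round_trip_seconds / 2.0

# A ~20 ns round trip corresponds to a surface about 3 m from the sensor:
print(round(tof_distance(20e-9), 2))  # 3.0
```

The nanosecond scale of these round trips is why time-of-flight cameras need very fast timing electronics to resolve room-scale depth.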

[Image: Mark Bolas]

Bending reality

Mapping is how MR systems like Bolas’ current project, the Microsoft HoloLens, figure out the world. But the next step is mixing—bending the reality that you just created. One obvious thing to improve in AR/VR, according to Bolas, is field of view, which he said “is narrow now, but it’s getting wider all the time.”

The trick is not to ignore the periphery. When Sony’s Glasstron came on the scene, it actually stunted VR for a time because it had only a 40-degree field of view. People, said Bolas, weren’t that impressed; restricting the distal and medial FOV causes depth-judgment errors. Bolas has found that taking a 60-degree FOV and putting white light in the periphery yields results similar to a 150-degree FOV.
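For readers who want the underlying geometry: the field of view subtended by a flat display of width w at viewing distance d is 2·atan(w / 2d). The numbers below are my own illustration of how a small micro-display yields a Glasstron-like FOV, not specifications from the talk:

```python
import math

def fov_degrees(display_width_m, viewing_distance_m):
    """Angular field of view (degrees) of a flat display seen head-on."""
    # Half the display width subtends half the FOV at the eye.
    half_angle = math.atan(display_width_m / (2.0 * viewing_distance_m))
    return math.degrees(2.0 * half_angle)

# A 5 cm wide virtual image at an effective 6.9 cm distance subtends
# roughly 40 degrees, comparable to early head-mounted displays:
print(round(fov_degrees(0.05, 0.069), 1))  # 39.8
```

Widening the FOV means either a larger virtual image or optics that bring it effectively closer, which is why wide-field headsets need aggressive (and distortion-prone) lens designs.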

When it comes to bending reality, you also have to consider two different kinds of space—action space and personal space—which can be bent in very different ways. One thing Bolas has found in VR experiments is that people are entirely comfortable “losing” their mental map for a second as they perform an action, and then “reacquiring” it as they go. Personal space, on the other hand, is very “bendable” and can be warped by emotional intent.

He cited an experiment in which 70 people “walked” through a virtual environment—and only two noticed by the end that a door had moved during the experiment. Those two were first responders, a policeman and a fireman, who never lose their mental map and always know where the exit is.

Bundles of affordances

However, the map in MR is not just about space. It’s also about what can be done in that space. “We map our space with what we can do,” said Bolas. This is why people are so attached to their phones. Smartphones, according to Bolas, are just giant bundles of “affordances,” or things we can do—all in our palms, anywhere we go. Phones take over our reality, he said, because they are devices of agency.

Thus, the key to MR, in Bolas’ view, is to create a sense of satisfaction. “We have to map and sense the world to allow us to feel that as well,” he said, with space as the medium and satisfaction as the goal. And, he says, we’re starting to get to that point.

“Vision, light, is our primary driver to create in our minds what we call reality,” said Bolas. “MR is a playground for optics designers. You guys have a lot of fun bending light … this field needs you.”