Mark Zuckerberg unveils ultra-realistic VR display prototypes
Mark Zuckerberg, CEO of Meta, has been spending billions of dollars a quarter on the metaverse, which has moved very quickly from science fiction to reality in the eyes of big tech leaders like Zuckerberg. And now Zuckerberg is revealing some of the progress the company is making in the realm of high-end displays for virtual reality experiences.
At a press event, he revealed a high-end prototype called Half Dome 3. He also showed off headsets dubbed Butterscotch, Starburst, Holocake 2, and Mirror Lake to show just how deadly serious Meta is about delivering the metaverse to us — no matter what the cost.
While others scoff at Zuckerberg’s attempt to do the impossible, given the tradeoffs among research vectors such as high-quality VR, costs, battery life, and weight — Zuckerberg is shrugging off such challenges in the name of delivering the next generation of computing technology. And Meta is showing off this technology now, perhaps to prove that Zuckerberg isn’t a madman for spending so much on the metaverse. Pieces of this will be in Project Cambria, a high-end professional and consumer headset which debuts later this year, but other pieces are likely to be in headsets that come in the future.
A lot of this is admittedly pretty far off, Zuckerberg said. As for all this cool technology, he said, “So we’re working on it, we really want to get it into one of the upcoming headsets. I’m confident that we will at some point, but I’m not going to kind of pre-announce anything today.”
Today’s VR headsets deliver good 3D visual experiences, but the experience still differs in many ways from what we see in the real world, Zuckerberg said in a press briefing. To fulfill the promise of the metaverse that Zuckerberg shared last fall, Meta wants to build an unprecedented type of VR display system — a lightweight display that is so advanced, it can deliver visual experiences that are every bit as vivid and detailed as the physical world.
“Making 3D displays that are as vivid and realistic as the physical world is going to require solving some fundamental challenges,” Zuckerberg said. “There are things about how we physically perceive things, how our brains and our eyes process visual signals and how our brains interpret them to construct a model of the world. Some of the stuff gets pretty deep.”
Zuckerberg said this matters because displays that match the full capacity of human vision can create a realistic sense of presence, or the feeling that an animated experience is immersive enough to make you feel like you are physically there.
“You all can probably imagine what it would feel like if someone in your family who lives far away, or someone you’re collaborating with on a project, or even an artist that you like were right there physically together with you. And that’s really the sense of presence that I’m talking about,” Zuckerberg said.
Zuckerberg said that realistic displays should open up a new form of art and individual expression. You will be able to express yourself in a way that is as immersive and realistic as the physical world, and that will be very powerful, he said.
“We’re in the middle of a big step forward towards realism. I don’t think it’s going to be that long until we can create scenes with basically perfect fidelity,” Zuckerberg said. “Only instead of just looking at a scene, you’re going to be able to feel like you’re in it, experiencing things that you’d otherwise not get a chance to experience. That feeling, the richness of this experience, the type of expression and the type of culture around that: that’s one of the reasons why realism matters too. Current VR systems can only give you a sense that you’re in another place. It’s hard to really describe with words how profound that is. You need to experience it for yourself, and I imagine a lot of you have, but we still have a long way to go to get to this level of visual realism.”
3D displays are the way to get to realism for the metaverse, Zuckerberg believes. “Of course, you need stereoscopic displays. To create that sense of 3D images, you need to be able to render objects and focus your eyes at different distances, which is a very different thing from a traditional screen, where typically you put your computer screen at one distance and you focus there,” Zuckerberg said. “But in VR and AR, you’re focusing at different places. You need a display that can cover a much wider angle of your field of view than any traditional display that we have on screens.” He said that requires significantly more pixels than traditional displays have. You also need screens that can approximate the brightness and dynamic range of the physical world, which requires at least 10 times, and probably more, the brightness of the HDTVs we have today.
“You need realistic motion tracking with low latency so that when you turn your head, everything feels positionally correct,” he said. “To power all those pixels, you need to be able to build a new graphics pipeline that can get the best performance out of CPUs and GPUs, that are limited by what we can fit on a headset.”
Battery life will also limit the size of a device that will work on your head, as you can’t have heavy batteries or have the batteries generate so much heat that they get too hot and uncomfortable on your face.
The device also has to be comfortable enough for you to wear on your face for a long time. If any one of these vectors falls short, it degrades the feeling of immersion. That’s why we don’t have it in working products in the market today, and it’s probably why rivals like Apple, Sony, and Microsoft don’t have similar high-end display products in the market either. On top of these challenges is the technology spanning software, silicon, sensors, and art needed to make it all seamless.
The visual Turing test
Zuckerberg and Mike Abrash, the chief scientist at Meta’s Reality Labs division, want the display to pass the “visual Turing test,” where animated VR experiences will pass for the real thing. That’s the holy grail of VR display research, Abrash said.
It’s named after Alan Turing, the mathematician who led a team of cryptanalysts who broke the Germans’ notorious Enigma code, helping the British turn the tide of World War II. I just happened to watch the excellent 2014 film The Imitation Game, about the heroic and tragic Turing. The father of modern computing, Turing proposed the Turing test in 1950 as a way to judge whether a machine’s conversation could pass for a human’s.
“What’s important here is the human experience rather than technical measurements. And it’s a test that no VR technology can pass today,” Abrash said in the press briefing. “VR has already created this presence of being in virtual places in a genuinely convincing way. But it’s not yet at the level where anyone would wonder whether what they’re looking at is real or virtual.”
How far Meta has to go
One of the challenges is resolution. But other issues present challenges for 3D displays as well, with names like vergence-accommodation conflict, chromatic aberration, ocular parallax, and more, Abrash said.
“And before we even get to those, there’s the challenge that VR displays have to fit in compact, lightweight headsets and run for a long time on the batteries those headsets can carry,” Abrash said. “So right off the bat, this is very difficult. Now, one of the unique challenges of VR is that the lenses used in current VR displays often distort the virtual image. And that reduces realism unless the distortion is fully corrected in software.”
Fixing that is complex because the distortion varies as the eye moves to look in different directions, Abrash said. And while it’s not strictly a matter of realism, headsets can be hard to use for extended periods of time because that distortion, as well as the headset’s weight, can cause temporary discomfort and fatigue, he added.
Another key challenge involves the ability to focus properly at any distance.
Getting the eyes to focus properly is a big challenge, and Zuckerberg said the company has been focusing on improving resolution to help this. That is one dimension that matters, but others matter as well.
Abrash said the problem with resolution is the VR headsets have a much wider field of view than even the widest monitor. So whatever pixels are available are just spread across a much larger area than for a 2D display. And that results in lower resolution for a given number of pixels, he said.
“We estimate that getting to 20/20 vision across the full human field of view would take more than 8K resolution,” Zuckerberg said. “Because of some of the quirks of human vision, you don’t actually need all those pixels all the time, because our eyes don’t actually perceive things in high resolution across the entire field of view. But this is still way beyond what any display panel currently available can put out.”
On top of that, the quality of those pixels has to increase. Today’s VR headsets have substantially lower color range, brightness and contrast than laptops, TVs and mobile phones. So VR can’t yet reach that level of fine detail and accurate representation that we’ve become accustomed to with our 2D displays, Zuckerberg said.
To get to that retinal resolution with a headset means getting to 60 pixels per degree, which is about three times where we are today, Zuckerberg said.
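The arithmetic behind those figures is straightforward to sketch. As a rough approximation (assuming pixels are spread evenly across the field of view, which real lenses do not quite do, and using illustrative panel and field-of-view numbers rather than any specific headset’s spec):

```python
def pixels_per_degree(horizontal_pixels, fov_degrees):
    """Rough angular resolution: pixels spread evenly across the field of view."""
    return horizontal_pixels / fov_degrees

# A 4K-wide panel (3840 px) stretched over a ~110-degree VR field of view:
print(round(pixels_per_degree(3840, 110)))  # ~35 ppd, well short of retinal

# Hitting 60 ppd across the ~200-degree horizontal human field of view:
print(60 * 200)  # 12,000 px, more than an 8K panel's 7,680 horizontal pixels
```

The same panel that looks sharp as a monitor falls far short once its pixels are smeared across an immersive field of view, which is the gap the article describes.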
To pass this visual Turing test, the Display Systems Research team at Reality Labs Research is building a new stack of technology that it hopes will advance the science of the metaverse.
This includes “varifocal” technology that ensures the focus is correct and enables clear and comfortable vision within arm’s length for extended periods of time. The goal is to create resolution that approaches or exceeds 20/20 human vision.
It will also have high dynamic range (HDR) technology that expands the range of color, brightness, and contrast you can experience in VR. And it will have distortion correction to help address optical aberrations, like warping and color fringes, introduced by viewing optics.
Zuckerberg held out a prototype called Butterscotch, designed to demonstrate the experience of retinal resolution in VR, which is the gold standard for any product with a screen. Products like TVs and mobile phones long ago surpassed the 60-pixels-per-degree mark.
“It has a high enough resolution that you can read the 20/20 vision line on an eye chart in VR. And we basically modified a bunch of parts to do this,” Zuckerberg said. “This isn’t a consumer product, but it is working. And it’s pretty amazing to check out.”
VR lags behind because the immersive field of view spreads the available pixels out over a larger area, thereby lowering the resolution. This limits perceived realism and the ability to present fine text, which is critical to passing the visual Turing test.
“Butterscotch is the latest and the most advanced of our retinal resolution prototypes. And it creates the experience of near retinal resolution in VR at 55 pixels per degree, about 2.5 times the resolution of the Meta Quest 2,” Abrash said. “The Butterscotch team shrank the field of view to about half the Quest 2 and then developed a new hybrid lens that would fully resolve that higher resolution. And as you can see, and as Mark noted, that resulting prototype is nowhere near shippable. I mean, it’s not only bulky, it’s heavy. But it does a great job of showing how much of a difference higher resolution makes for the VR experience.”
Butterscotch testing showed that true realism demands this high level of resolution.
The depth of focus problem
“And we expect display panel technology is going to keep improving. And in the next few years, we think that there’s a good shot of getting there,” Zuckerberg said. “But the truth is that even if we had retinal resolution display panels right now, the rest of the stack wouldn’t be able to deliver truly realistic visuals. And that goes to some of the other challenges that are just as important here. The second major challenge that we have to solve is depth of focus.”
This became clear in 2015, when the Oculus Rift was debuting. At that time, Meta had also come up with its Touch controllers, which let you have a sense of using your hands in VR.
Human eyes can focus on our fingers no matter where they are, because their lenses change shape. But current VR optics use solid lenses that don’t move or change shape; their focus is fixed. If the focus is set around five or six feet in front of a person, we can see a lot of the scene clearly, but that breaks down when you shift to viewing your fingers up close.
“Our eyes are pretty remarkable, in that they can pick up all kinds of subtle cues when it comes to depth and location,” said Zuckerberg. “And when the distance between you and an object doesn’t match the focusing distance, it can throw you off. It feels weird, and your eyes try to focus but can’t quite get it right. And that can lead to blurring and be tiring.”
That means you need a retinal resolution display that also supports depth of focus to hit that 60 pixels per degree at all distances, from near to far in focus. So this is another example of how building 3D headsets is so different from existing 2D displays and quite a bit more challenging, Zuckerberg said.
To address this, the lab came up with a way to change the focal depth to match where you’re looking by moving the lenses around dynamically, kind of like how autofocus works on cameras, Zuckerberg said. This is known as varifocal technology.
So in 2017, the team built a prototype version of the Rift with mechanical varifocal displays that could deliver proper depth of focus. It used eye tracking to tell what you were looking at, along with real-time distortion correction to compensate for the changing magnification as the lenses moved, and rendered blur elsewhere. That way, only the things you were looking at were in focus, just like the physical world, Zuckerberg said.
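In spirit, that pipeline behaves like a closed loop: estimate where the eyes are converged, then drive the focal plane toward that distance each frame. Here is a toy sketch; the distances, smoothing scheme, and function names are illustrative assumptions, not Meta’s implementation:

```python
def update_focus(current_focus_m, gaze_distance_m, smoothing=0.5):
    """Move the focal plane a fraction of the way toward the gazed-at distance."""
    return current_focus_m + smoothing * (gaze_distance_m - current_focus_m)

focus = 1.5  # lens currently focused at 1.5 m
for _ in range(5):
    # eye tracking reports the user is now looking at a hand ~0.4 m away
    focus = update_focus(focus, 0.4)
print(round(focus, 2))  # converges toward 0.4 m over a few frames
```

The smoothing term stands in for the mechanical constraint that real lenses cannot jump instantly, which is part of why reliable, low-latency eye tracking matters so much here.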
To help with the user research, the team relied on vision scientist Marina Zannoli. She helped do the testing on the varifocal prototypes with 60 different research subjects.
“The vast majority of users preferred varifocal over fixed focus,” she said.
Test subjects found the varifocal prototypes more comfortable in every respect than fixed-focus ones, reporting less fatigue and blurry vision. They were able to identify small objects, had an easier time reading text, and reacted to their visual environments more quickly.
Half Dome series
The team took this feedback on the preference for varifocal lenses and focused on getting the size and weight down in a series of prototypes, dubbed Half Dome.
With the Half Dome series, DSR has continued to move closer to seamless varifocal operation in ever-more-compact form factors.
Half Dome Zero (far left) was used in the 2017 user study. With Half Dome 1 (second from left), the team expanded the field of view to 140 degrees. For Half Dome 2 (second from right), they focused on ergonomics and comfort by making the headset’s optics smaller, reducing the weight by 200 grams. And Half Dome 3 (far right) introduced electronic varifocal, which replaced all of Half Dome 2’s moving mechanical parts with liquid crystal lenses, further reducing the headset’s size and weight. The new Half Dome 3 prototype headset is lighter and thinner than anything that currently exists.
These later prototypes are fully electronic varifocal headsets based on liquid crystal lenses. Even with all the progress Meta has made, plenty of work remains to get the varifocal hardware’s performance production-ready while also ensuring that eye tracking is reliable enough to drive it. The focus feature needs to work all the time, for everyone, and that’s a high bar given how much human physiology varies from person to person. It isn’t easy to get this into a product, but Zuckerberg said he is optimistic it will happen soon.
For varifocal to work seamlessly, optical distortion, a common issue in VR, needs to be addressed further than it is in headsets today.
The correction in today’s headsets is static, but the distortion of the virtual image changes depending on where one is looking. This can make VR seem less real, because everything moves a bit as the eye moves.
The problem with studying distortion is that it takes a very long time; fabricating the lenses needed to study the problem can take weeks or months, and that’s just the beginning of the long process.
To address this, the team built a rapid prototyping solution that repurposed 3D TV technology and combined it with new lens emulation software to create a VR distortion simulator.
The simulator uses virtual optics to accurately replicate the distortions that would be seen in a headset and displays them in VR-like viewing conditions. This allows the team to study novel optical designs and distortion-correction algorithms in a repeatable, reliable manner, while also bypassing the need to experience distortion with physical headsets.
Motivated by the problem of VR lens distortion, and specifically varifocal, this system is now a general-purpose tool used by DSR to design lenses before constructing them.
Meta is addressing the distortion produced by VR optics. It is creating software to compensate for distortion. The distortion of a virtual image actually changes as your eye moves to look in different directions. What matters here is having accurate eye tracking so that the image can be corrected as you move. This is a hard problem to solve but one where we see some progress, Zuckerberg said. The team uses 3D TVs to study its designs for various prototypes.
“The problem with studying distortion is that it takes a really long time,” Abrash said. “Just fabricating the lenses needed to study the problem can take weeks or months. And that’s only the beginning of the long process of actually building a functional display system.”
Eye tracking is an underappreciated technology for virtual and augmented reality, Zuckerberg said.
“It’s how the system knows what to focus on, how to correct optical distortions, and what parts of the image should devote more resources to rendering in full detail or higher resolution,” Zuckerberg said.
Starburst and HDR
The most important challenge to solve is high dynamic range, or HDR. That’s where a “wildly impractical” prototype called Starburst comes in.
“That is when the lights are bright, colors pop, and you see that shadows are darker and feel more realistic. And that’s when scenes really feel alive,” Zuckerberg said. “But the vividness of screens that we have now, compared to what the eye is capable of seeing, and what is in the physical world, is off by an order of magnitude or more.”
The key metric for HDR is nits, a measure of how bright the display is. Research has shown that the preferred number for peak brightness on a TV is 10,000 nits. The TV industry has made progress in introducing HDR displays that move in that direction, going from a few hundred nits to a peak of a few thousand today. But in VR, the Quest 2 can do about 100 nits. Getting beyond that in a wearable form factor is a big challenge, Zuckerberg said.
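To put those numbers in perspective, the gap can be expressed as a ratio or in photographic stops (each stop is a doubling of light). A quick back-of-the-envelope calculation using the figures cited above:

```python
import math

quest2_nits = 100        # approximate peak brightness of Quest 2, per the figures above
tv_target_nits = 10_000  # research-preferred peak brightness for a TV

ratio = tv_target_nits / quest2_nits
print(ratio)                        # 100.0: two orders of magnitude
print(round(math.log2(ratio), 1))  # ~6.6 stops of additional headroom needed
```

Seen this way, “off by an order of magnitude or more” is, if anything, conservative for today’s VR hardware.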
To tackle HDR in VR, Meta created Starburst. It’s wildly impractical because of its size and weight, but it is a testbed for studies.
Starburst is DSR’s prototype HDR VR headset. High dynamic range (HDR) is the single technology most consistently linked to an increased sense of realism and depth; it enables both bright and dark imagery within the same image.
The Starburst prototype is bulky, heavy, and tethered; people hold it up like binoculars. But it produces the full range of brightness typically seen in indoor or nighttime environments. Starburst reaches 20,000 nits, making it one of the brightest HDR displays yet built, and one of the few 3D ones — an important step toward establishing user preferences for depicting realistic brightness in VR.
The Holocake 2 is thin and light. Building on the original holographic optics prototype, which looked like a pair of sunglasses but lacked key mechanical and electrical components and had significantly lower optical performance, Holocake 2 is a fully functional, PC-tethered headset capable of running any existing PC VR title.
To achieve the ultra-compact form factor, the Holocake 2 team needed to significantly shrink the size of the optics while making the most efficient use of space. The solution was twofold: first, use polarization-based optical folding (or pancake optics) to reduce the space between the display panel and the lens; second, reduce the thickness of the lens itself by replacing a conventional curved lens with a thin, flat holographic lens.
The creation of the holographic lens was a novel approach to reducing form factor that represented a notable step forward for VR display systems. This is our first attempt at a fully functional headset that leverages holographic optics, and we believe that further miniaturization of the headset is possible.
“It’s the thinnest and lightest VR headset that we’ve ever built. And it works: it can run any existing PC VR title or app. In most VR headsets, the lenses are thick, and they have to be positioned a few inches from the display so they can properly focus and direct light into the eye,” Zuckerberg said. “This is what gives a lot of headsets that kind of front-heavy look. We had to introduce two technologies to get around this.”
The first solution is that, instead of sending light through a lens, Meta sends it through a hologram of a lens. Holograms are basically just recordings of what happens when light hits something, and a hologram is much flatter than the thing it records, Zuckerberg said. Holographic optics are much lighter than the lenses that they model, but they affect the incoming light in the same way.
“So it’s a pretty good hack,” Zuckerberg said.
The second new technology is polarized reflection to reduce the effective distance between the display and the eye. Instead of going from the panel through a lens and then into the eye, light is polarized so it can bounce back and forth between reflective surfaces multiple times. That means it can travel the same total optical distance, but in a much thinner and more compact package, Zuckerberg said.
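The geometry of that folding trick can be sketched in a few lines. The millimeter figures below are made up purely for illustration; the point is only that crossing a smaller gap several times yields the same optical path length as crossing a larger gap once:

```python
def optical_path_length(gap_mm, passes):
    """Total distance light travels when it crosses the panel-to-lens gap `passes` times."""
    return gap_mm * passes

conventional = optical_path_length(45, 1)  # ~45 mm gap, light crosses it once
pancake = optical_path_length(15, 3)       # 15 mm gap, polarization bounces light across it 3 times
print(conventional, pancake)  # same 45 mm optical path in a third of the depth
```

That equivalence is what lets pancake optics keep the focal behavior of a deep headset inside a much slimmer one.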
“So the result is this thinner and lighter device, which actually works today and you can use,” he said. “But as with all of these technologies, there are trade-offs among the different paths, and the technologies that are available today don’t solve all the problems. That’s why we need to do a lot of research.”
Holocake requires specialized lasers rather than the LEDs that existing VR products use. And while lasers aren’t super exotic nowadays, they’re not really found in a lot of consumer products at the performance, size, and price we need, Abrash said.
“So we’ll need to do a lot of engineering to achieve a consumer viable laser that meets our specs, that is safe, low cost and efficient and that can fit in a slim VR headset,” Abrash said. “Honestly, as of today, the jury is still out on a suitable laser source. But if that does prove tractable, there will be a clear path to sunglasses-like VR display. What you’re holding is actually what we could build.”
Bringing it all together in Mirror Lake
Mirror Lake is a concept design with a ski-goggles-like form factor that will integrate nearly all of the advanced visual technologies DSR has been incubating over the past seven years, including varifocal and eye tracking, into a compact, lightweight, power-efficient form factor. It shows what a complete, next-gen display system could look like.
Ultimately, Meta’s objective is to bring all of these technologies together, integrating the visual elements needed to pass the visual Turing test into a lightweight, compact, power-efficient form factor — and Mirror Lake is one of several potential pathways to that goal.
Today’s VR headsets deliver incredible 3D visual experiences, but the experience still differs in many ways from what we see in the real world. They have lower resolution than laptops, TVs, and phones; the lenses distort the wearer’s view; and they cannot be used comfortably for extended periods of time. To get there, Meta says it needs to build an unprecedented type of VR display system: a lightweight display so advanced it can deliver what our eyes need to function naturally, so that we perceive we are looking at the real world in VR. This is known as the “visual Turing test,” and passing it is considered the holy grail of display research.
“The goal of all this work is to help us identify which technical paths are going to allow us to make meaningful enough improvements that we can start approaching visual realism,” Zuckerberg said. “If we can make enough progress on resolution, if we can build proper systems for focal depth, if we can reduce optical distortion and dramatically increase the vividness and the high dynamic range, then we will have a real shot at creating displays that do justice to the beauty and complexity of physical environments.”
The journey started in 2015 for the research team.
Douglas Lanman, director of Display Systems Research at Meta, said in the press event that the team is doing its research in a holistic manner.
“We explore how optics, displays, graphics, eye tracking, and all the other systems can work in concert to deliver better visual experiences,” Lanman said. “Foremost, we look at how every system competes for the same size, weight, power, and cost budget, while also needing to fit in a compact, wearable form factor. And it’s not just a matter of squeezing everything into a tight budget; each element of the system has to be compatible with all the others.”
The second thing to understand is that the team deeply believes in prototyping, and so it has a bunch of experimental research prototypes in a lab in Redmond, Washington. Each prototype tackles one aspect of the visual Turing test, and each bulky headset gives the team a glimpse at how things could be made less bulky in the future. It’s where engineering and science collide, Lanman said.
Lanman said that it will be a journey of many years, with numerous pitfalls lurking along the way, but a great deal to be learned and figured out.
“Our team is certain that passing the visual Turing test is our destination, and that nothing in physics appears to prevent us from getting there,” Lanman said. “Over the last seven years, we’ve glimpsed this future, at least through all these time machines. And we remain fully committed to finding a practical path to a truly visually realistic metaverse.”
Meta’s DSR worked to tackle these challenges with an extensive series of prototypes. Each prototype is designed to push the boundaries of VR technology and design, and is put to rigorous user studies to assess progress toward passing the visual Turing test.
DSR experienced its first major breakthrough with varifocal technology in 2017 with a research prototype called Half Dome Zero. They used this prototype to run a first-of-its-kind user study, which validated that varifocal would be mission critical to delivering more visual comfort in future VR.
Since this pivotal result, the team has gone on to apply this same rigorous prototyping process across the entire DSR portfolio, pushing the limits of retinal resolution, distortion, and high-dynamic range.
The big picture
Overall, Zuckerberg said he is optimistic. Abrash showed one more prototype that integrates everything needed to pass the visual Turing test in a lightweight, compact, power-efficient form factor.
“We’ve designed the Mirror Lake prototype right now to take a big step in that direction,” Abrash said.
This concept has been in the works for seven years, but there is no fully functional headset yet.
“The concept is very promising. But right now, it’s only a concept with no fully functional headset yet built to conclusively prove out this architecture. If it does pan out, though, it will be a game changer for the VR visual experience,” Abrash said.
Zuckerberg said it was exciting because it is genuinely new technology.
“We’re exploring new ground in how physical systems work and how we perceive the world,” Zuckerberg said. “I think that augmented, mixed, and virtual reality are important technologies, and we’re starting to see them come to life. And if we can make progress on the kinds of advances that we’ve been talking about here, then that’s going to lead to a future where computing is built and centered more around people and how we experience the world. And that’s going to be better than any of the computing platforms that we have today.”
I asked Zuckerberg whether a prediction I heard from Tim Sweeney, CEO of Epic Games, will come true. Sweeney predicted that if VR/AR makes enough progress to give us the equivalent of 120-inch screens in front of our eyes, we won’t need TVs or other displays in the future.
“I’ve talked a lot about how, in the future, a lot of the physical objects that we have won’t actually need to exist as physical objects anymore,” Zuckerberg said. “Screens are a good example. If you have a good mixed-reality headset, or augmented reality glasses, that screen or TV that is on your wall could just be a hologram in the future. There’s no reason it needs to actually be a physical thing that is way more expensive.”
He added, “It’s just an interesting thought experiment that I’d encourage you to do: go through your day and think about how many of the physical things around you actually need to be physical.”