Facebook Open-Sources Its AI Platform DeepFocus To Improve VR Visual Rendering

Facebook Reality Labs (FRL) has been experimenting with a number of ways to improve the realism of virtual reality (VR), through both hardware and software means. During the Facebook Developers Conference (F8) 2018 in May, the company unveiled Half Dome, a prototype headset with a varifocal mechanism. Now the lab has revealed DeepFocus, an AI-powered platform designed to render blur in real time at various focal distances.

What DeepFocus and Half Dome are both trying to achieve is something our eyes do naturally: a defocus effect. As the gif above demonstrates, when our eyes look at objects at different distances, whatever they are not focused on appears blurred. While this may seem simple, replicating the effect in VR isn't exactly easy, but doing so opens up a range of use cases for the technology.

The first is the goal of truly realistic experiences inside VR. “Our end goal is to deliver visual experiences that are indistinguishable from reality,” says Marina Zannoli, a vision scientist at FRL, via the Oculus Blog. “Our eyes are like tiny cameras: When they focus on a given object, the parts of the scene that are at a different depth look blurry. Those blurry regions help our visual system make sense of the three-dimensional structure of the world, and help us decide where to focus our eyes next. While varifocal VR headsets can deliver a crisp image anywhere the viewer looks, DeepFocus allows us to render the rest of the scene just the way it looks in the real world: naturally blurry.”
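To make the idea of depth-dependent defocus concrete, here is a minimal sketch of the effect being described. It is not FRL's DeepFocus, which uses a trained neural network; instead it assumes a classic thin-lens circle-of-confusion model, and all function names, parameter values, and the level-blending shortcut are illustrative assumptions.

```python
# Illustrative sketch only (not DeepFocus): approximate depth-of-field blur with a
# thin-lens circle-of-confusion model. Parameter values are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def circle_of_confusion(depth_m, focus_m, focal_len_m=0.017, aperture_m=0.004):
    """Per-pixel circle-of-confusion diameter for objects at depth_m
    when the eye/camera is focused at focus_m (thin-lens approximation)."""
    return (aperture_m * focal_len_m * np.abs(depth_m - focus_m)
            / (depth_m * np.maximum(focus_m - focal_len_m, 1e-6)))

def render_defocus(image, depth_m, focus_m, px_per_m=60000.0, levels=6):
    """Blend a few pre-blurred copies of an (H, W, 3) image, with each pixel
    picking the blur level that matches its circle of confusion."""
    coc_px = circle_of_confusion(depth_m, focus_m) * px_per_m   # blur size in pixels
    sigmas = np.linspace(0.0, max(coc_px.max(), 1e-6), levels)
    stack = np.stack([image if s == 0 else gaussian_filter(image, sigma=(s, s, 0))
                      for s in sigmas])
    idx = np.clip(np.searchsorted(sigmas, coc_px), 0, levels - 1)  # nearest blur level
    return np.take_along_axis(stack, idx[None, ..., None], axis=0)[0]
```

In this sketch, pixels at the focus distance get a near-zero circle of confusion and stay sharp, while pixels far from it are drawn from more heavily blurred copies of the frame, which is the "naturally blurry" behaviour the quote above describes.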
