Blog article: https://www.oculus.com/blog/introducing-deepfocus-the-ai-rendering-system-powering-half-dome/
Earlier this year, Facebook Reality Labs (FRL) unveiled Half Dome, an industry-first prototype headset whose eye-tracking cameras, wide-field-of-view optics, and independently focused displays demonstrated the next step in lifelike VR experiences. By adjusting its displays to match your eye movements, Half Dome's varifocal design brings every virtual object into sharp focus. This approach showed real progress toward a more comfortable, natural, and immersive visual experience in VR. But to reach its full potential, Half Dome's advanced hardware needed equally innovative software.

Today, we're sharing details about DeepFocus, a new AI-powered rendering system that works with Half Dome to create the defocus effect that mimics how we see the world in everyday life. DeepFocus is the first system able to generate this effect (which blurs the portions of the scene that the wearer isn't currently focusing on) in a way that is realistic, gaze-contingent, and runs in real time.

We presented our research paper at the SIGGRAPH Asia conference in Tokyo this month, and we are also open-sourcing DeepFocus, including the system's code and the data set we used to train it, to help the wider community of VR researchers incorporate defocus blur into their work.
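To give a concrete sense of what "gaze-contingent defocus" means, here is a minimal sketch of the classical approach the announcement contrasts with: use the depth under the wearer's gaze point as the focal depth, compute a thin-lens circle of confusion per pixel, and blur accordingly. This is not how DeepFocus works (DeepFocus replaces this kind of hand-built pipeline with a trained network); the function names, the aperture/focal-length constants, and the pixel-scaling factor below are all hypothetical illustration values, not anything from the paper.

```python
import numpy as np

def circle_of_confusion(depth, focal_depth, aperture=0.02, focal_length=0.05):
    """Thin-lens circle of confusion per pixel (depths in meters).

    aperture and focal_length are hypothetical example values.
    Pixels at exactly the focal depth get a radius of zero (perfectly sharp).
    """
    return np.abs(aperture * focal_length * (depth - focal_depth)
                  / (depth * (focal_depth - focal_length)))

def gaze_contingent_blur(image, depth_map, gaze_yx, max_radius=7):
    """Blur each pixel by its depth relative to the depth under the gaze point.

    A deliberately naive per-pixel box blur -- far too slow for real-time VR,
    which is exactly the gap a learned system like DeepFocus targets.
    """
    gy, gx = gaze_yx
    focal_depth = depth_map[gy, gx]          # wearer's fixation depth
    coc = circle_of_confusion(depth_map, focal_depth)
    # Map the circle of confusion to an integer blur radius in pixels
    # (the 2000 scale factor is an arbitrary illustration choice).
    radii = np.clip((coc * 2000).astype(int), 0, max_radius)
    h, w = image.shape[:2]
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            r = radii[y, x]
            patch = image[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = patch.mean(axis=(0, 1))  # box-average the neighborhood
    return out
```

Pixels at the fixation depth pass through unchanged, while pixels at other depths are averaged over a neighborhood that grows with their circle of confusion; the hard part, which motivates a learned renderer, is doing this convincingly (handling occlusion edges and partial transparency) at display rates.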