
Eyes-In With Flamera, Meta's Prototype Of Reprojection-Free Depth-Correct Passthrough

At SIGGRAPH 2023 I tried Flamera, Meta's research prototype of reprojection-free passthrough AR with correct depth.

Opaque headsets like Quest Pro, Apple Vision Pro, and Quest 3 use high resolution cameras on the front to show you the real world. But since those cameras sit in a different position than your eyes, image processing algorithms reproject the camera views to approximate what your eyes would actually be seeing.

This reprojection is an imperfect process that adds latency and results in image processing artefacts - though it is getting better with newer hardware as cameras and the chipsets to process them improve.
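To see why reprojection is so sensitive to depth, here's a minimal sketch of the underlying geometry - warping one camera pixel to a virtual eye position. This is an illustration, not Meta's actual pipeline; the intrinsics, the 4 cm offset, and the function names are all hypothetical.

```python
import numpy as np

def reproject_pixel(pixel_xy, depth_m, K, cam_to_eye):
    """Warp a camera pixel to the eye's viewpoint (toy example).

    pixel_xy   : (u, v) pixel coordinates in the camera image
    depth_m    : estimated depth at that pixel, in meters
    K          : 3x3 camera intrinsics (assumed shared by the virtual eye)
    cam_to_eye : 4x4 rigid transform from camera frame to eye frame
    """
    u, v = pixel_xy
    # Unproject: pixel + estimated depth -> 3D point in the camera frame.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    point_cam = ray * depth_m
    # Move the point into the eye's coordinate frame.
    point_eye = (cam_to_eye @ np.append(point_cam, 1.0))[:3]
    # Project back to pixel coordinates from the eye's viewpoint.
    proj = K @ point_eye
    return proj[:2] / proj[2]

# Hypothetical setup: the camera sits 4 cm in front of the eye.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
cam_to_eye = np.eye(4)
cam_to_eye[2, 3] = 0.04  # camera is 4 cm closer to the scene than the eye

# The pixel shifts from u=400 to roughly u=396.9 at 1 m depth; the shift
# depends on depth, so any error in the depth estimate warps the image.
print(reproject_pixel((400, 240), 1.0, K, cam_to_eye))
```

Because the correct shift varies per pixel with scene depth, errors in the estimated depth map show up as the wobbling and warping familiar from current passthrough.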

Flamera aims to bypass the reprojection approach entirely with a from-scratch hardware design built from the ground up with passthrough in mind.

It features 53 lenses in front of each eye, each with an aperture that admits only light arriving from a specific angle relative to your eye. Light from these lenses is focused onto a traditional image sensor, appearing as 53 circles. The software then "rearranges the pixels, estimating a coarse depth map to enable a depth-dependent reconstruction", a process Meta says is significantly less computationally expensive than reprojection, so it has lower latency and produces almost no visual artefacts.
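The pixel-rearrangement idea can be loosely sketched in code. This is a toy model under invented assumptions - the sector width, lenslet layout, baseline, and pixels-per-degree scale below are all hypothetical, not Flamera's real geometry - but it shows why reconstruction is mostly a cheap lookup: each output viewing angle maps to one lenslet's sub-image, with only a small depth-dependent correction.

```python
import numpy as np

N_LENSES = 53      # one row of lenslets, per the headset's 53-lens array
SECTOR_DEG = 6.0   # assumed angular width covered by each lenslet

def lens_for_angle(theta_deg):
    """Pick the lenslet whose aperture admits rays at this eye-relative angle."""
    centers = (np.arange(N_LENSES) - N_LENSES // 2) * SECTOR_DEG
    return int(np.argmin(np.abs(centers - theta_deg)))

def rearrange(sub_images, theta_deg, coarse_depth_m, baseline_m=0.002):
    """Fetch the output pixel for one viewing angle (toy reconstruction).

    sub_images     : dict mapping lens index -> that lenslet's 2D sub-image
    coarse_depth_m : coarse depth estimate, used for a small parallax shift
    baseline_m     : assumed offset between lenslet aperture and eye pupil
    """
    i = lens_for_angle(theta_deg)
    img = sub_images[i]
    center_deg = (i - N_LENSES // 2) * SECTOR_DEG
    # Residual angle within this lenslet's sector, plus a depth-dependent
    # parallax term (small-angle approximation: large for near objects,
    # vanishing at infinity) - this is where the coarse depth map enters.
    residual = theta_deg - center_deg
    parallax = np.degrees(baseline_m / coarse_depth_m)
    col = int(round(img.shape[1] / 2 + (residual + parallax) * 2))  # assumed 2 px/deg
    col = np.clip(col, 0, img.shape[1] - 1)
    return img[img.shape[0] // 2, col]

# Looking straight ahead (0 degrees) maps to the central lenslet.
print(lens_for_angle(0.0))  # index 26, the middle of the 53
```

The key contrast with reprojection: depth here only nudges a lookup by a few pixels, rather than driving a full per-pixel warp of the whole image.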

So How Was It?

The strange thing about Meta's Flamera headset is that the display system it uses is pretty awful. It's a 60Hz full-persistence display of the kind more typically found in transparent AR systems, with a field of view only around half that of a Quest. This is somewhat ironic given Flamera, like Butterscotch Varifocal, is presented by Meta's "Display Systems Research" team.

But Flamera's display system was chosen to achieve the minimal headset thickness with off-the-shelf components. And the demo isn't about the display system, it's about the novel capture system.

What I saw was a real-world passthrough view with no distortion or offset of any kind, where the geometry of objects I saw through the display was identical to what I saw when lifting it up. Flamera does deliver on its promise.

Still, the terrible display system was a persistent distraction from appreciating the true merits of what Flamera presents. The low refresh rate, for example, meant latency was inherently high, and the resolution and color reproduction produced a blurry, washed-out image.

In future years, I hope to see Meta pair Flamera's capture system with the also-thin "Holocake" lenses it teased last year to see what this capture approach can really look like paired with the right display system.

But Is It Practical?

Meta's Display Systems Research (DSR) team appears to be focused on two different tasks: researching the feeling of specific long-term XR display features, and occasionally figuring out specific technologies to achieve those features.

The other headset I tried at SIGGRAPH this year, Butterscotch Varifocal, was definitely the former kind. It's not presented as an actual practical way to achieve either varifocal or retinal resolution in products.

But Flamera seems to be both an investigation of the feeling of reprojection-free passthrough and a proposal for a specific approach. So that raises the question: Will we see this in products?

The cameras on the top are used as part of the demo to show what Quest 2 & Rift S passthrough is like, not for the actual Flamera system in-use.

I asked Grace Kuo, the Flamera project lead who also had the original idea. I learned that the only fundamentally expensive part of Flamera's approach is the extremely large image sensor needed to capture all the perspectives, which also generates a lot of heat - you can see the heatsinks on each side of the headset.

But there are potential solutions to this, such as using separate much smaller image sensors for each lens, or at least groups of lenses.

Another potential future approach would be a midpoint between reprojection and Flamera: using fewer lenses to reduce complexity and component cost, then applying gentler reprojection than today's headsets use. This would result in some visual artefacts, though far less severe than with the current traditional camera approach. This fusion of hardware and software is something Meta is interested in investigating.

Meta admits that the technologies in Flamera "may never make their way into a consumer-facing product". But even if that's the case, it still shows the spirit of novel innovation is alive & well in the company's research teams, and suggests there may be many non-obvious solutions to the problems of VR headsets discovered in the coming decades.
