Faceshift’s software captures facial expressions and animates them on an avatar in real time, showing immense potential for AR and VR. Last year, the technology was demonstrated with a PrimeSense sensor that looked almost small enough to fit into a tablet. PrimeSense, the company behind the sensor in the original Kinect, was later acquired by Apple.
By earlier this year the software had improved considerably and appeared much friendlier, as shown in the video at the top of this post. By animating easily customizable avatars with real facial expressions, the technology showed incredible potential for a wide range of uses. Imagine animating all the characters in a cartoon without any other actors or specialized hardware, just by sitting in front of your computer. Or imagine popping those same believable performances right into a virtual world for a VR experience.
Faceshift’s public demos required a full view of the head, so they wouldn’t work well with an HMD on, but there’s no telling what the company was working on behind the scenes. Researchers have been hard at work figuring out how to capture eye and upper-face movements while a user is wearing a VR headset. The Oculus Social app on Gear VR, for example, simulates eye movements believably without any extra hardware. Meanwhile, research from USC and Oculus earlier this year used a depth sensor mounted in front of the face to capture lower facial expressions, paired with strain gauges placed where the face touches the headset to measure upper facial movements.
I’d be surprised if it were a coincidence that Intel’s depth-sensing platform is called RealSense; to me it suggests the chip giant wants to give manufacturers technology to compete with future Apple devices armed with PrimeSense depth sensors. We’ll have to wait and see what exactly Apple has planned for this technology.