Kite & Lightning Showcasing iPhone Motion Capture At SIGGRAPH

VR development studio Kite & Lightning is taking its impressive iPhone-based motion capture pipeline to SIGGRAPH.

We’ve written a couple of stories about the system, which Kite & Lightning co-founder Cory Strassburger has been putting together mostly on his weekends. Strassburger uses a helmet-mounted iPhone X combined with an Xsens suit for completely wireless full-body and facial motion capture. His latest additions smooth out the process even further and, with the IKINEMA LiveAction tool, a performance can be brought directly into Unreal Engine in real time.

“With the facial capture data being only 150kbps, you could easily have a mobile companion app for your game that allows an MC or Commentator to stream their audio and facial capture performance right into a live VR match,” Strassburger explained in an email.
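That 150kbps figure is plausible on the back of an envelope. As a rough, unofficial sketch (the article does not describe the actual payload format), assume the iPhone streams ARKit-style blendshape coefficients, roughly 52 floats per frame at 60fps:

```python
# Back-of-envelope estimate of a facial-capture stream's bandwidth.
# Assumptions (not confirmed in the article): 52 blendshape
# coefficients per frame, 60 frames per second, each coefficient
# sent as a 32-bit float. Head pose, audio, and framing overhead
# would be added on top of this raw payload.

BLENDSHAPES = 52       # ARKit-style face blendshape count
BYTES_PER_COEFF = 4    # 32-bit float
FPS = 60               # capture frame rate

payload_bps = BLENDSHAPES * BYTES_PER_COEFF * FPS * 8  # bits/second
print(round(payload_bps / 1000, 1))  # ~99.8 kbps of raw coefficients
```

The raw coefficients alone come to roughly 100kbps, so a quoted ~150kbps total leaves sensible headroom for pose data and protocol overhead.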

Kite & Lightning is a small studio that developed VR experiences like Senza Peso as early as 2014, giving some very early adopters their first true sense of immersion. Strassburger and co-founder Ikrima Elhassan also sharpened their skills on work-for-hire VR projects before raising $2.5 million in 2016 with plans to build a game with an unusual premise. In Bebylon, immortal “bebies” regularly do battle in a character-driven VR spectacle.

Strassburger’s iPhone-based capture pipeline relies on the phone’s face-sensing capabilities to map his expressions onto one of these bebies in real time. Overall, it shows the potential of using one of Apple’s newest gadgets for decent motion capture at a relatively low all-in price. Strassburger explained in an email earlier this year how this possibility affected the studio’s roadmap.

“Having any meaningful amounts of character animation on our game’s early roadmap was a total pipe dream. I knew full well how slow and expensive it was to capture decent facial performances let alone the cost and time involved in simply building the facial rig for a character,” Strassburger wrote. “If Ikrima and I actually had a conversation early on, we would have both logically agreed that a handful of facial expressions would be all we need or could even entertain given the scope of our game. Luckily that conversation never happened because it went without saying! And as the concept for the game started to take shape and the Bebylon world was being born, so was this underlying, powerful desire to see these characters walk and smack talk and tell their stories to the public! The more I started to create and write about these characters, the more their existence became pivotal to the game’s concept of inspiring players to unleash their inner wild child within this crazy virtual game world.”

Capturing performances this way also opens the door to more easily producing vignettes, TV shows, movies, and other types of productions with the same core content and tools. These tools might not reach the quality some creators need for their projects, but for those who find them a good enough solution, “it would definitely change the scape of creating content for those mediums because it is insanely easy to capture lots of motion capture content.” Strassburger thinks indie developers might approach their projects differently with this type of capture system at their disposal.

“I think it still takes a good level of artistry to make it sing and it has to sing for people to resonate with it. You still need a great story, you still need a great performer driving it,” Strassburger wrote. “You need a great virtual character to embody it, you need a good artist to fill in all the gaps and solve all the visual problems that might try to break the illusion and you still need an animator to expand upon the data for all the money shots. However the true magic to this pipeline is it gets you very far, extremely fast and when you see your characters come to life that much with so little effort, it really really fuels you to push further and put in the manual work to get it over the finish line. Most studios and projects don’t have the time it takes to climb the mountain of good facial and body capture but when a cheap tech gets you near the top of the mountain, you start thinking differently about how you can get the rest of the way.”
