These 3 Companies Are Working To Give Us Better Bodies In VR

Featured image showing avatars from startup Loom.ai.

VR development is moving at breakneck speed on all fronts, and 2017 is set to see the launch of several long-awaited platforms, such as Sansar, that are betting big on social VR and user-generated content. Most of the focus so far has been on building those worlds and experiences, but if we’re ever to achieve a true sense of presence in these virtual worlds, we also need to be able to create convincing avatars. Full immersion and meaningful social interaction can only happen if we feel comfortable in our virtual skins.

Huge advances in facial tracking technology have made it much easier and cheaper to render realistic facial expressions on an avatar, which may push realistic avatars ahead of the cartoonish ones Facebook showcased earlier this year. Social presence – the sense of really being with another person – is associated with more rewarding communication experiences and higher levels of empathy toward others. It follows that if people are to engage with each other in virtual environments, we need to foster that sense of social presence, which means getting avatars right.

Research has found that something as simple as holding an avatar’s gaze for longer – four seconds as opposed to two – makes people feel more positively toward the person whose avatar they are interacting with. Professor Jeremy Bailenson, Director of the Virtual Human Interaction Lab at Stanford, also recently conducted a study showing just how sensitive we are to subtle variations in facial expressions, and how small changes can have a big impact on the quality of our virtual social interactions. His study included 158 participants and looked at whether enhancing an avatar’s smile during a virtual conversation would affect users’ perception of the person with whom they were interacting.

“Our research has demonstrated for almost two decades that small changes in the appearance and behavior of avatars can affect a social interaction,” explains Bailenson. “In the current study, simply putting a gain factor on smiles–such that a pair of people saw slightly bigger smiles on one another–caused people to speak more positively and feel happier.”
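
To make the “gain factor” concrete: if an avatar’s expression is driven by normalized blendshape weights, enhancing a smile can be as simple as multiplying one channel and clamping it. Here is a minimal TypeScript sketch; the channel name and gain value are illustrative, not the study’s actual parameters.

```typescript
// Minimal sketch of the "smile gain" idea from Bailenson's study.
// Assumes an avatar's expression is a set of blendshape weights in [0, 1];
// the channel name and gain value here are illustrative.

type ExpressionWeights = Record<string, number>;

const SMILE_GAIN = 1.2; // hypothetical gain factor

function enhanceSmile(weights: ExpressionWeights): ExpressionWeights {
  const enhanced = { ...weights };
  // Scale only the smile channel, clamped so the rig stays in range.
  enhanced["smile"] = Math.min(1, (enhanced["smile"] ?? 0) * SMILE_GAIN);
  return enhanced;
}

// Example: a tracked smile of 0.5 is rendered at 0.6 on the other avatar.
console.log(enhanceSmile({ smile: 0.5, browRaise: 0.2 }));
```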

There are a lot of people working on ways for users to generate, personalize and “own” their avatars – in the sense that they can use them across any platform or device, and for any purpose. Companies such as Itsme, Morph3D and Loom.ai are bringing technology to market that aims to enable exactly that. And although they’re all approaching the infamous Uncanny Valley from different directions, there’s a general consensus that avatar-generating tools need to be agnostic: that means Dolby-like licensing models and open APIs, so users can port their avatars into whatever platform they want, whether that’s Sansar, Steam or Facebook. We talked to these startups about how they see the future of avatars shaping up.

Body Scanning: Itsme

https://vimeo.com/181236162

“We are the Identity-Makers of the New World,” says Itsme CEO Pete Forde, who wants people to use their technology to create expressive and persistent identities. “What we have done is to create a scalable process that is capable – with careful execution – of being the method by which the world gets turned into avatars.”

His Toronto-based company has been working on their body-scanning tech for the past three years, and claims to have invented a method that is free for the end user and fully automated, allowing a person to see a 3D render of their avatar within one minute of being scanned.

They have already captured about 8,000 people during a five-city tour of Canada with Samsung. Their first product, a personalized avatar keyboard called Itsmoji, is due to launch in January and will let you use animated versions of your personalized avatar as emojis.

Forde believes the realism afforded by this technique – which captures both your body shape and movements – will be key to developing avatars for social platforms:

“Normal people have absolutely zero interest in meeting complete strangers online, and what Facebook figured out early on is that people will not engage unless they are connected with at least 10 real friends within 14 days. Attempting to learn from history, our growth model is based on creating relationships based on tight pockets of friends that actually know each other.”

As people invest more time and project more of their own identity into those avatars, it is important to also ensure there are policies and procedures in place to lock them down in terms of privacy and security: “think FB-style granular permissions,” says Forde.

Itsme is currently closing their seed investment round and preparing SDKs for Unity and JavaScript to let developers use avatars directly in their projects and products. They’re also working on apps that let you do all sorts of interesting things with your avatar, such as applying Snapchat-like filters, getting a 3D figurine printed, or going to the Shopify store (one of their current partners) to try on clothes and see how they’d look on you. Future social use cases could include buying tickets to sold-out sports events and watching them in 360, with your friends sitting next to you in the front row.
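
Since those SDKs aren’t public yet, here is a purely hypothetical TypeScript sketch of what loading a portable, scanned avatar might look like; every name in it is invented for illustration.

```typescript
// Purely hypothetical sketch: Itsme's SDKs are not yet public, so every
// name below (Avatar, loadAvatar, the URL) is invented to illustrate the
// idea of one persistent, portable avatar used across apps.

interface Avatar {
  id: string;
  meshUrl: string;        // URL of the scanned body mesh
  permissions: string[];  // "FB-style granular permissions," per Forde
}

// Stand-in for an SDK call that fetches a user's scanned avatar.
async function loadAvatar(userId: string): Promise<Avatar> {
  return {
    id: userId,
    meshUrl: `https://avatars.example.com/${userId}.glb`, // placeholder
    permissions: ["friends-only"],
  };
}

// Any app (a game, a Shopify fitting room, a 360 sports viewer) would load
// the same avatar and render it with its own engine.
loadAvatar("user-123").then((avatar) => {
  console.log(`Rendering ${avatar.id} from ${avatar.meshUrl}`);
});
```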

“I don’t have a problem with artistic/cartoon avatars per se,” Forde concludes. “There’s a place for them, and if people want to play as a robot or a bunny, knock yourselves out. I don’t see it as an either/or proposition. But if I put you in the dance club and you’re controlling one of these avatars, I want someone to be able to join the party with you and have them essentially play ‘who is the real person’ Turing style. What excites me is that we can now start to experiment with all of this stuff instead of wondering if someday it might be possible.”

Build-a-Self: Morph3D

https://www.youtube.com/watch?v=XiAw19Wo-ow

With Morph3D’s Ready Room tool, you can create custom persistent avatars that can be used across any number of VR platforms, and Philip Rosedale’s High Fidelity social VR platform is among its first customers.

It relies on more traditional methods of creating avatars, tapping into an enormous crowdsourced database of assets created over the past 15 years by the members of Daz 3D (a separate company run by the same management team as Morph3D).

For game and app developers this is a huge time saver, as it plugs into the Unity engine and lets players craft their own characters. There are currently over 400 3D characters available to use in VR applications on the platform, and each can be morphed using sliders, providing an incredibly broad range of customization possibilities.
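
Under the hood, slider-based customization like this is typically implemented as morph targets (blend shapes): each slider scales a set of per-vertex offsets that get added to a base mesh. A minimal TypeScript sketch of the general technique, not Morph3D’s actual implementation:

```typescript
// Minimal sketch of slider-driven morphing as morph targets (blend shapes).
// Each slider weight scales per-vertex offsets added to a base mesh.
// This illustrates the general technique, not Morph3D's implementation.

type Vec3 = [number, number, number];

interface MorphTarget {
  name: string;    // e.g. "height", "muscle" (illustrative names)
  deltas: Vec3[];  // per-vertex offsets from the base mesh
}

function applyMorphs(
  base: Vec3[],
  targets: MorphTarget[],
  weights: Record<string, number>
): Vec3[] {
  return base.map((v, i): Vec3 => {
    let [x, y, z] = v;
    for (const t of targets) {
      const w = weights[t.name] ?? 0; // slider position, typically 0..1
      x += w * t.deltas[i][0];
      y += w * t.deltas[i][1];
      z += w * t.deltas[i][2];
    }
    return [x, y, z];
  });
}
```

Clothing can follow the body automatically when its meshes carry matching morph targets, which is presumably how the auto-fitting described below works.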

When you customize your character, your clothes automatically morph to fit your chosen shape – be it that of a little girl or a bulky man – and the final result can be imported into various social VR applications. Morph3D’s Director of AR/VR Chris Madsen says the objective is to make the process user-friendly and intuitive enough “so my mom can make a character for virtual reality.”

Over time it will be interesting to see how Morph3D can be used in a broader variety of non-gaming applications, and how it applies to AR/MR as well as VR hardware. They have already started developing an interface that works with HoloLens voice control, for example, letting you modify the size of your character with spoken commands such as “small” or “big”.
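
The mapping from recognized keywords to a model change is straightforward; here is a hedged TypeScript sketch, with an illustrative step size and keyword list rather than Morph3D’s actual values.

```typescript
// Sketch of mapping recognized voice keywords to an avatar scale factor,
// in the spirit of Morph3D's HoloLens "small"/"big" commands. The keyword
// list and step size are illustrative; the real interface is theirs.

const SCALE_STEP = 1.25;
const MIN_SCALE = 0.25;
const MAX_SCALE = 4.0;

function applyVoiceCommand(currentScale: number, keyword: string): number {
  switch (keyword.toLowerCase()) {
    case "big":
      return Math.min(MAX_SCALE, currentScale * SCALE_STEP);
    case "small":
      return Math.max(MIN_SCALE, currentScale / SCALE_STEP);
    default:
      return currentScale; // unrecognized keywords leave the avatar as-is
  }
}

// A platform speech recognizer would feed recognized words into this
// function and apply the result to the character's transform.
let scale = 1.0;
scale = applyVoiceCommand(scale, "big");   // 1.25
scale = applyVoiceCommand(scale, "small"); // back to 1.0
```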

Selfie Avatars: Loom.ai

https://www.youtube.com/watch?v=uXaWgKLlSj8

Loom’s technology turns selfies into personalized 3D avatars by applying machine learning to automate human face visualization. It uses public APIs and VFX techniques to create lifelike visualizations that can then be animated and used for a range of applications. The video embedded above shows how avatars generated from a single inset image (in this case of celebrities such as Will Smith and Angelina Jolie) can look remarkably lifelike and expressive.

These are the same types of techniques used in films such as The Avengers to transpose Mark Ruffalo’s perceptually salient features onto his version of the Hulk, but machine learning has allowed the company to turn what has traditionally been an extremely long, complicated and expensive process into something that’s now available to everyone.
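
The core retargeting step behind that kind of performance transfer can be sketched as mapping expression weights measured on the actor’s face onto the corresponding channels of a different character’s rig, with per-channel calibration. A toy TypeScript version follows; the channel names and gains are illustrative.

```typescript
// Toy sketch of facial performance retargeting: expression weights measured
// on an actor's face drive the corresponding channels of a different
// character's rig. Channel names and gains are illustrative only.

type Weights = Record<string, number>;

// Per-channel calibration from actor to character (e.g. the Hulk's brow
// might move further than Ruffalo's for the same tracked motion).
const calibration: Weights = { browLower: 1.4, jawOpen: 1.1, smile: 0.9 };

function retarget(actor: Weights): Weights {
  const character: Weights = {};
  for (const [channel, value] of Object.entries(actor)) {
    const gain = calibration[channel] ?? 1.0;
    character[channel] = Math.max(0, Math.min(1, value * gain));
  }
  return character;
}

console.log(retarget({ browLower: 0.5, jawOpen: 0.3, smile: 0.7 }));
```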

“The magic is in bringing the avatars to life and making an emotional connection,” explains Loom CEO Mahesh Ramasubramanian. “Using facial musculature rigs powered by robust image analysis software, our partners can create personalized 3D animated experiences with the same visual fidelity seen in feature films, all from a single image.”

Loom counts Jeremy Bailenson – plus Halo creator Alex Seropian – among its advisors, and has an otherwise impressive pedigree, with a founding team that includes visual effects and animation veterans from Lucasfilm and DreamWorks. Ramasubramanian worked on films such as Shrek, while CTO Kiran Bhat was R&D facial lead on The Avengers and Pirates of the Caribbean.

Loom.ai announced today that it has raised a $1.35M seed round from a range of investors including Y Combinator and Greg Castle of Anorak Ventures, who was a seed investor in Oculus.

“Easily getting your likeness into the digital world has widespread applications,” says Castle. “The impact of experiences is significantly increased when you can visualize yourself in a game, simulation, communication environment or advertisement.”

Bailenson believes this approach will revolutionize how avatars are made, bringing a greater sense of copresence to virtual and augmented reality by giving us avatars that are lifelike and can be both animated and stylized.

“This is important because social VR is likely to be the home run application in VR,” he explains. “And that all starts with building avatars that look and behave like their owners.”
