
From Cave Paintings To Hand-Drawn VR: Early Adopters

Welcome back VRgonauts!

In this episode you’ll follow along as the team forges its way through a tangle of technologies to arrive at our particular path to success.

Having agreed that prehistoric cave painting would be the subject of the piece, TED Ed educator Iseult Gillespie began to prepare the script. Meanwhile I had two creative problems to solve: What was the visual direction for the piece? And what was the technical path for getting there? Having been a Flash animator for the past 15 years, I was confident about creating all of the 2D animation. The bigger hurdle in this project was going to be the scary tech salad of having to mix and match different software and plug-ins to take us over the finish line. Because the budgets were tight, I'd have to be MacGyver-esque in my methods. This was some nervous territory.

My first call was to Goose Manriquez. Goose is as solid a 3D production man as they come. He does it all, from conceptual design to final animated frame. I was initially convinced that exporting my flat 2D assets to a 3D app was the only way to go. Goose explained that since the viewable cave world existed inside a sphere, we should create a sphere in 3D. Then we could just project layers of animation at different depths inside it.

At this point I thought I was building a game world and would be relying on a game engine. I had set up a creative "rule" that there'd be no ability to walk around within the cave environment because I wanted the final output to be hosted on YouTube. Therefore I needed "precooked" video which could not feature any locomotion within the world. Rather, the viewer would be confined to an immersive 360-degree viewpoint from a static spot in the center of the sphere. But I sensed that a game engine could be useful in many other winning ways, like tracking viewer gaze, for instance. Knowing the viewer's sightline could be helpful in triggering associated audio events.

Initial sketchbook thoughts about how to set up the 360-degree view.
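Gaze-based audio triggering of this sort is conceptually simple. The sketch below is a minimal, engine-agnostic illustration (not code from this production): it assumes you can read the viewer's forward gaze vector each frame and compare it against the direction of a point of interest on the sphere, firing an audio cue when the two fall within a chosen angular threshold.

```python
import math

def angle_between(gaze, target):
    """Angle in degrees between the viewer's gaze vector and the direction to a point of interest."""
    dot = sum(g * t for g, t in zip(gaze, target))
    mag = math.sqrt(sum(g * g for g in gaze)) * math.sqrt(sum(t * t for t in target))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))

def should_trigger_audio(gaze, target, threshold_deg=15.0):
    """Fire the associated audio cue once the target enters a cone around the sightline.
    The 15-degree threshold is an arbitrary illustrative value."""
    return angle_between(gaze, target) <= threshold_deg

# Example: viewer looking straight down the +Z axis, cave painting slightly to the right.
print(should_trigger_audio((0.0, 0.0, 1.0), (0.2, 0.0, 0.98)))  # True (~12 degrees off-axis)
```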

We experienced a host of small battles with basic I/O. Ultimately we were having trouble reliably exporting video from our 3D animation software and getting a usable output for presentation in the Cardboard viewer. There were many such hurdles that we went "around" rather than "over," given the project's budget. Since we were hitting so many of these problems so early in our production investigations, we knew we'd have to find a different path. Unfortunately this exploration ate up a lot of production time, and at this point I lost Goose to his next big gig.

At one point we were experimenting with anaglyph output for the stereoscopic 3D.

Taking stock of what I'd just learned, I thought that going straight to the source and creating the entire piece in the game engine was our best path forward. But that, too, proved to be costly and time-consuming because the engine had its own finicky relationship with outputting 360 video.

Ugh! This was becoming a problem. A project-threatening problem. I called up some different colleagues for a crisis conference and, over the course of some heated back-and-forth discussions, a solution became clear: if I was producing a flat 2D video, why wasn't I creating the piece in a video app? This makes obvious sense in retrospect, but at the time the idea struck me as a revelation. That's a good example of getting too married to a production idea instead of stepping back to look at the bigger picture. Adobe After Effects was my first choice for video production, and confirming that there was indeed an AE plug-in that rendered in stereoscopic output sold me on the method.

I then got in touch with Jon Francis and Rich Prahm at Idle Hands Studio, an After Effects house that produces top-tier motion graphics and credit sequences. We had worked together on a Flash + AE combo production a few years earlier, and I was always looking for another excuse to work with these guys. I remember the first conference call the three of us had. I was carefully explaining what I was after visually and how solid the piece had to be technically, when Rich interrupted, "Oh, so you want us to do some GOOD work, not just crap it out?" We all burst into laughter and that broke the ice. It was obvious we were all on the same page and had bought into the project to the same degree.


Since this particular production “recipe” would be completely new to all of us, I proposed to TED Ed that we create a 20-second “proof of production” test which would allow me to better estimate time and cost numbers. I was thankful for their patience because that testing period went on for weeks as the team ran into roadblock after roadblock in our efforts to define exactly what the final piece should look like and how it should work.

There are no books (yet) about how to create hand-drawn animation in 360 degrees with stereoscopic depth. We were on our own to figure out solutions to a host of technical challenges. Some of these required rediscovering the basics of filmmaking: how to define a horizon, how wide the peripheral view should be, and how large the characters should be within a "frame" that spans endlessly in every direction. Other challenges were more specific to current VR technology: How many layers of depth should we design for? How would 3D audio function? And what about the possibility of dynamic lighting?

Production layout screen showing our peripheral viewing field.
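One way to reason about character size in a frame with no edges is simple arithmetic on the layout itself. As a rough sketch, assume an equirectangular 360-degree layout (an assumption on my part; the production's exact projection isn't specified here) where horizontal degrees map linearly across the frame width at the horizon; a 4,096-pixel-wide frame then gives roughly 11 pixels per degree:

```python
# Rough sizing arithmetic under an assumed equirectangular 360-degree layout,
# where horizontal degrees map linearly across the frame width at the horizon.
# The 4096-pixel width is an assumption for "4K"; substitute the actual frame size.

FRAME_WIDTH_PX = 4096
PIXELS_PER_DEGREE = FRAME_WIDTH_PX / 360.0   # ~11.4 px per horizontal degree

def character_width_px(angular_size_deg):
    """Approximate on-layout width for a character of a given angular size near the horizon."""
    return angular_size_deg * PIXELS_PER_DEGREE

print(round(character_width_px(30)))   # ~341 px for a character filling ~30 degrees of view
print(round(character_width_px(60)))   # ~683 px for a comfortable ~60-degree central cone
```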

In addition to the issues listed above, we were hampered by the fact that the central character in the test was carrying a lantern deep in a pitch-black cave. This meant that AE was creating the dynamic lighting on the walls to match the moves of the swinging lantern. Since the lighting was not being created inside Animate (Flash), I had to wait painfully long stretches between formal renders coming out of Idle Hands before I could iterate a new version using what I'd learned. Thankfully this hang-up disappeared in later environments where we didn't need dynamic lighting. We finally arrived at a stable production flowchart after about two months of trial and error. At this point we had 20 seconds of full-color animation running at 30fps in 4K resolution with (at least) 3 layers of depth in the environment.

After Effects lighting test showing dynamic lantern tracking.
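The sheer volume of frames involved gives a sense of why each formal render took a while. A back-of-the-envelope count, using the figures above (20 seconds, 30fps, stereoscopic output, at least 3 layers of depth):

```python
# Back-of-the-envelope frame count for the 20-second test,
# using the figures mentioned above (30 fps, stereo output, 3+ depth layers).
# How the stereo views and layers were actually batched is an assumption here.

seconds = 20
fps = 30
eyes = 2           # stereoscopic output: one view per eye
depth_layers = 3   # "at least" 3 layers of animation in the environment

frames_per_eye = seconds * fps               # 600 frames
stereo_frames = frames_per_eye * eyes        # 1,200 rendered views
layer_frames = stereo_frames * depth_layers  # 3,600+ composited layer-frames per pass

print(frames_per_eye, stereo_frames, layer_frames)  # 600 1200 3600
```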

 

We were confident enough in our process that we presented the test to TED Ed not only as proof that we could pull the piece off technically, but also as evidence that those 20 seconds were polished enough to be placed into the final piece "as is". We all celebrated with a tequila dinner and mapped out the final production schedule.

Now the grand design of the entire piece opened up before me, and I shuddered at the total girth of design and animation work that I had committed to. I had a few more "private dinners" with some gourmet tequila, then dove headfirst into production!

It was now exactly one year since I first received that Google Cardboard viewer with my Sunday paper.

Don't miss our next installment: "One step forward, three steps back."

Michael "Lippy" Lipman is a classically trained 2D animation director who first found success as a feature film animator in Hollywood. With the introduction of interactive media he transitioned to producing major entertainment pieces for CD-ROMs, console gaming, online advertising, and internet-based animation. Currently his company 360360VR is enlisting VR/AR technologies to further the immersive storytelling possibilities for clients such as TED Ed. He lives in the San Francisco Bay Area.
