[Embedded video: "project1" by Ben Gillespie (CU TAM) on Vimeo]


Volumetric lighting test in Blender

For this project, I wanted to integrate 3D renders into a physical live-action space as seamlessly as I could. I wanted to cut to the music and provide a fun viewing experience for 30 seconds to a minute. I treated it as a prelude to the final project.

I was pointed toward Mixamo’s auto-rigging and auto-animation capabilities after looking through a couple of posts on reddit.com/r/blender, and I had no shame in taking pre-made models and “canned” animations, applying textures, and integrating them into footage I shot (especially after Ezra Cohen’s tutorials). Mixamo was a life-saver, providing customizable animations and humanoid models extremely quickly.

I wanted to make the humanoids look weird, metallic, and creepy, like you’re caught in a bad trip or something. The creepiest thing I found to do with these models was to select the skin separately from the armature and manipulate its position, scale, or rotation. The body would quickly disjoint and contort in weird ways, instantly creating random, freaky, Stranger Things-looking monsters with human faces. I definitely want to use this technique in a future music video. See below for an example of the messed-up figure with a floating gas mask.

[Embedded video: "weird" by Ben Gillespie (CU TAM) on Vimeo]
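
If you want to script that distortion instead of nudging it by hand, here’s a rough bpy sketch of the idea. It assumes a Mixamo import where the skinned mesh sits under the armature; the object name is a placeholder, so swap in whatever the Outliner shows for your character.

```python
import bpy
import random

# Hypothetical object name for the skinned Mixamo mesh; check the Outliner
# for the actual name in your scene.
body = bpy.data.objects["Ch36_Body"]

# Keyframe random offsets on the mesh object itself (not the armature) so the
# skin drifts and contorts away from the rig while the canned animation keeps
# playing underneath it.
for frame in range(1, 121, 10):
    bpy.context.scene.frame_set(frame)
    body.location = (random.uniform(-0.3, 0.3),
                     random.uniform(-0.3, 0.3),
                     random.uniform(0.0, 0.5))
    body.rotation_euler = (random.uniform(-0.5, 0.5), 0.0, 0.0)
    body.scale = (1.0, random.uniform(0.7, 1.4), 1.0)
    body.keyframe_insert(data_path="location")
    body.keyframe_insert(data_path="rotation_euler")
    body.keyframe_insert(data_path="scale")
```

Because only the mesh object gets transformed, the armature keeps driving the canned animation underneath, which is what makes the contortion feel alive instead of frozen.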

For the motion tracking, some of it used After Effects’ built-in motion tracking, and some of it was eyeballed. I never used Blender’s 3D camera tracker for a few reasons. First, the feet on the models are moving anyway, so it didn’t make sense to spend 30 minutes getting a rock-solid track that’s going to look goofy regardless. Second, there’s not enough 3D parallax in the camera movement to make a full 3D track worth it. I rendered the images out of Blender as transparent PNG sequences and placed them as 2D videos in 3D space within After Effects. I saved a bunch of time on rendering and tracking this way, so I was able to make more scenes and rapidly “prototype” different scenarios. I only used around 30% of the scenes I actually rendered out, and in the second iteration I’ll probably only use 30% of what’s in the video now.
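
For reference, the transparent PNG-sequence setup is just a few render settings. Here’s a rough bpy sketch of what I mean (the output path is a placeholder, and this assumes a recent Blender); you can of course set the same things in the Render and Output panels instead.

```python
import bpy

scene = bpy.context.scene
scene.render.film_transparent = True                # drop the world background
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'RGBA'     # keep the alpha channel
scene.render.filepath = "//renders/scene01/frame_"  # placeholder output path
bpy.ops.render.render(animation=True)               # write out the PNG sequence
```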

What I learned:
I thought it would take a lot longer to get to the point where I felt comfortable sending out multiple renders per hour. I was also impressed by how quickly I could move from Blender to After Effects to Premiere. I guess I just had a mental block that 3D and VFX were a lot harder than they are. I still have a lot to learn, but I feel a lot more comfortable with Blender’s UI, sending out renders, render settings, materials, and all that good stuff.

What I want to improve in future projects/the next iteration:
The gas mask and black metallic vibe of the humanoid is a little gimpy.
