So here’s the story of how I made my first animation project in Blender. It’s not your usual kind of animation, so if you haven’t seen it yet, check out the video of Barney Stinson recommending me for a job in my previous post.
It uses a scene from the How I Met Your Mother series. As you can tell from this blog, I’m very much new to Blender, so if you have more experience (or just a different experience) you’ll probably come up with different, and most likely better, solutions to the problems I ran into. I’d love to hear those, so please do share them in the comments! Here we go.
The first thing I did was look for a high-quality (HD) video of the original scene I was about to disgrace. I found it (don’t ask how :P), and the format was Matroska. For a moment I wondered if Blender could use Matroska files, and there’s no better way of finding out than just trying. I fired up the video sequence editor (VSE) in Blender, went Add > Movie, browsed to the location and selected the .mkv file. It showed up in the sequencer, and it played when I clicked the play button.
Almost awesome: the sound and image weren’t in sync. I played with it a bit until I found the Sync Mode selection box, which was set to No Sync (duh). Setting it to Frame Dropping or AV-sync solved the problem. And no worries: even though the preview will look choppy, the final render will be as fluid as the original video (provided you keep a decent frame rate, of course).
That's the little thing that took me a while to find out about.
Nearly awesome now, but I still noticed a problem with the sound track: the music in the scene was playing pretty loud, but the voices and other sounds were nearly muted. I couldn’t find a way to solve this in Blender, so I decided to try something else. I downloaded and fired up Audacity (a free and open source tool for sound recording and editing) and imported the .mkv file as audio. Fantastically, Audacity recognized several sound tracks in the file, which I figure is pretty much why Blender failed to accurately reproduce the sound: it was loading only the first track. I got rid of the stuff I didn’t need, then mixed down and exported all the tracks into a single .wav file. Back in Blender, I got rid of the .mkv sound track, added the new one in sync with the video and ta-daaaa! It’s awesome now. I then trimmed the video scene and was good to go.
Cutting between the video and the 3D scene
So then I had to figure out how to work with the VSE, namely how to cut between the video and my Blender scene, which would replace the image of Barney’s computer screen. I tried a bunch of different ways that didn’t work, but before I got frustrated I found the Multicam Selector, in the Add menu under Effect Strip. So here’s how that works:
1. After you’ve added the Movie strip and the Scene strip (which represents your Blender scene) on top of each other, in different channels of the sequencer, add a Multicam Selector strip on a channel above them.
2. Line it up with the first frame of your strips, and select the channel that should be visible (movie or scene) in that time frame using the Multicam Source Channel control in the Effect Strip group. In my case I started with the video, which was channel 1.
3. Stretch the Multicam Selector strip all the way to the last frame where the channel you chose should be visible, right before the first cut to another channel.
4. Right below the Multicam Source Channel control you’ll find a set of numbered buttons preceded by “Cut to”: press the corresponding button to cut to the desired channel (in my case channel 3, the Blender scene).
5. This creates a new Multicam Selector strip, with the Source Channel automatically set to the channel you’ve cut to. Now all you have to do is go back to step 3 and repeat for all the camera cuts you need until the end of your video.
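For the curious, the logic behind those strips can be sketched as plain data. This is just an illustration with made-up frame numbers and channels, not a script from my project:

```python
# Each (cut_frame, source_channel) pair marks where a new Multicam Selector
# strip starts; every strip runs until the frame right before the next cut.

def multicam_spans(cuts, last_frame):
    """Expand (cut_frame, source_channel) pairs into (start, end, source)
    spans, one span per Multicam Selector strip."""
    spans = []
    for i, (start, source) in enumerate(cuts):
        end = cuts[i + 1][0] - 1 if i + 1 < len(cuts) else last_frame
        spans.append((start, end, source))
    return spans

# e.g. start on the video (channel 1), cut to the Blender scene (channel 3)
# at frame 120, back to the video at frame 180, edit ending at frame 300:
plan = multicam_spans([(1, 1), (120, 3), (180, 1)], last_frame=300)
```

Each resulting span corresponds to one Multicam Selector strip with its Multicam Source Channel set to the span’s channel.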
And that's how it looks in the end
After all this, I clicked play and every time the camera turned from Barney to his screen, it would now show Blender’s default cube :) Somewhat awesome!
Replacing the computer screen with my own
Time to turn that default cube into a computer screen that mimics Barney’s: blue background and white text. Easy enough. I placed the camera facing the cube’s side, scaled the cube so it could fit all the text I wanted, and changed its material color to a nicely saturated blue. Then I added a Text object and wrote out the whole text that would replace Barney’s. Obviously I placed it between the camera and the cube (which wasn’t much of a cube by now, but I’ll keep calling it that), very very close to it. I set the text’s material to emit some white light as well.
Then it was time to place the camera in a similar position to the original video. For this I used the Opacity control in the Scene strip on the VSE, making it transparent so I could overlay it with the original video and work out a good starting position for the camera.
There it is: opacity at 0.5, letting me compare my scene with the video
When I was happy with that, I added a keyframe… because next I’d have to animate the camera to follow Barney’s typing. I figured out, again with the help of the overlaid channels, what the starting and ending positions of the camera were for each movement, added keyframes for both, and let Blender do the nice interpolating (Bezier, by the way). At the end of this process, and after tweaking the interpolation curves Blender came up with, I had a camera animation that closely followed that of the original video. Awesome!
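In case keyframing is new to you: Blender fills in all the frames between two keyframes by interpolation (Bezier by default). A linear stand-in, with made-up keyframe values, shows the basic idea:

```python
# A keyframe here is a (frame, (x, y, z)) pair. Blender's default Bezier
# interpolation eases in and out; the linear version below is just the
# simplest illustration of "Blender does the in-betweens for you".

def lerp_location(key_a, key_b, frame):
    """Camera location at `frame`, linearly interpolated between keyframes."""
    fa, loc_a = key_a
    fb, loc_b = key_b
    t = (frame - fa) / (fb - fa)
    return tuple(a + t * (b - a) for a, b in zip(loc_a, loc_b))

# halfway between a keyframe at frame 0 and one at frame 10:
mid = lerp_location((0, (0.0, 0.0, 0.0)), (10, (2.0, 0.0, 4.0)), 5)
```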
Creating a believable screen
If you look at the original video, there’s actually a lot of different kinds of noise going on on Barney’s screen.
Vignette, stripes, noise...
So far, my screen was very flat – just plain blue with white letters. So I set out to make my screen more believable by adding some effects.
Actually, I only learned what vignetting was after I had worked this scene out, and by coincidence. I also learned later, from Mike Pan’s blog, that I could do it in the compositor. But back then, the effect was described like this in my head: “Hmn, the picture seems to get darker towards the corners. It’s like there’s a spot lamp pointing at it that can’t properly light the whole image.” So yeah, you guessed it, I used spot lamps to simulate the effect: two of them (one an instance of the other, so they’d share their settings), overlapping at the center of the picture, with the Blend setting at 1.0 for a smooth transition from the center to the outer edge. I parented the spot lamps to the camera so they’d follow the movement I’d already set up.
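If you want a feel for the falloff, here’s a toy model of it. This is not Blender’s actual lamp math, just the rough shape of the effect (full brightness at the center, fading towards the corners, with blend=1.0 standing in for the fully soft spot edge):

```python
import math

def vignette(x, y, width, height, blend=1.0):
    """Toy brightness factor at pixel (x, y): 1.0 at the picture's center,
    falling off linearly with distance, reaching 0.0 at the corners."""
    cx, cy = width / 2, height / 2
    # normalized distance from the center: 0.0 at the center, 1.0 at a corner
    d = math.hypot(x - cx, y - cy) / math.hypot(cx, cy)
    return max(0.0, 1.0 - blend * d)
```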
(Not really) horizontal lines
For some reason Barney’s screen seems to be textured, with (not really) horizontal lines. Back then I was still hell-bent on making my screen look exactly like his, so I set out to find a texture to mimic this. A denim texture is what I used (no kidding), and it looked close enough for the time being. I tilted the cube so that the lines would not be quite horizontal.
“Hmn, that’s ok, but it’s too clean. This needs noise.” It is a TV series – even if it’s in HD, it’s gotta have some noise. So I added a procedural noise texture, tweaked the settings and got to a muuuuuch better screen, in my opinion.
V-sync artifact (that’s what it’s called, right?)
If you watch the original, you’ll notice a subtle (not really) horizontal band running through the screen. Of course I had to simulate that :) I used a Blend type procedural texture and animated its offset to make it travel across the cube throughout the video. In the corresponding image (coming soon) I made the band more obvious on the left side, while on the right side is the final render: it’s really, really subtle in a still, but you notice it when it’s animated.
Glare and color balance
Like I said, in the beginning I was determined to make my screen as similar as possible to the original. After a while I realized I could come up with my own screen, as long as it was believable and, more importantly, I was happy with it. The glare is a perfect example: I didn’t find it in the original video but I thought it would make it more believable and look cool at the same time. I also used a color balance node to give the blue a power-up (it was pretty pale). Oh yeah, I added motion blur for the quick camera movements too. This was my first chance to play with the compositor. So here are the images from each of those steps:
Click to enlarge! (better resolution)
Animating the text
Now all that was left was the typing animation. I thought I could just get into my text box and set keyframes on each letter, but it turns out it can’t be done. Or rather, I didn’t find a way to do it, and I’m not sure I want you to tell me how in the comments, because my solution involved quite some time of boring and repetitive work. Just kidding – if you know a better way please do let me know. But what I did was create 79 text boxes, one for each character that had to be animated. Yeah, one by one I created them, moved each into place, made it invisible and set a keyframe for when it should become visible, in sync with the video’s keystroke sounds (well, more or less – I cheated: I only synced the first and last strokes of each typing sequence and then randomly placed the ones in between).
... next letter, make it invisible, jump 3 or 4 frames, add a keyframe on the (in)visibility, next frame, make it visible, add a keyframe on the visibility... next letter...
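The cheat for those in-between strokes can be sketched like this. The function and frame numbers are made up for illustration, not anything from Blender’s API:

```python
import random

def keystroke_frames(first, last, count, seed=None):
    """Frames at which successive characters become visible: the first and
    last keystrokes are synced to the audio, the rest are scattered randomly
    (but in order) in between."""
    rng = random.Random(seed)
    if count <= 1:
        return [first]
    inner = sorted(rng.sample(range(first + 1, last), count - 2))
    return [first] + inner + [last]

# a 5-character typing burst heard between frames 10 and 100:
frames = keystroke_frames(10, 100, 5, seed=42)
```

Each frame then becomes a visibility keyframe on the corresponding text box.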
Oh yeah, by the way: before all that keyframing madness I had a BIG “uh-oh” moment. As if the repetitive work weren’t enough, once I had all the characters set up and ready for keyframing I hit a problem: I had duplicated all the text boxes from the original one, to keep the material, rotation and scaling the same for all characters, but what I didn’t know was that they also shared animation data. I found it strange, since according to Roland’s book:
These duplicated objects are somewhat dependent on one another. (…) Animation is also duplicated, but the objects don’t share animation data. This means that if you change the animation of one of the objects, that change will not be reflected in the other object.
But the fact is, when I set one of the characters to visible and added the keyframe, if I moved back or forward one frame then ALL the characters were visible. Before I decided to animate the visibility I was trying to animate location, and the same thing happened: all the characters would fly to the same spot. Maybe I didn’t actually get what Roland was trying to say, maybe there’s more to it, or maybe it’s just a bug. I didn’t really try to figure it out back then (good point, I’ll try to figure it out now and let you know what the conclusions are); I just tried to find out “HOW THE HECK CAN I ANIMATE THESE CHARACTERS INDIVIDUALLY, COZ IF I CAN’T THE WHOLE THING IS RUINED!” Sorry about the animosity :) I did find the answer in a Blenderartists topic. You can remove links between objects using Make Single User, so to make the animations independent: select a duplicated object, press U (or Object > Make Single User) and click Object Animation. After that the behavior was exactly as expected, and I was able to put the whole typing animation together. Which was awesome :)
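If you know some Python, the gotcha is easy to picture as plain shared mutable data. This is only an analogy, not Blender’s actual data model:

```python
class Action:
    """Stand-in for an animation data-block (a set of keyframes)."""
    def __init__(self):
        self.keyframes = {}

shared = Action()
letters = [shared, shared, shared]          # duplicates sharing one action

letters[0].keyframes[12] = "visible"        # keyframe one letter...
# ...and every other letter now has the same keyframe, since they all
# point at the same action

independent = [Action() for _ in range(3)]  # "Make Single User": own copies
independent[0].keyframes[12] = "visible"    # now only this letter is keyed
```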
[Update: I tried to replicate that problem before I asked for help, and… I couldn’t :| Something strange must have happened, but if it ever happens again I’ll make sure I find out what’s causing it.]
Not much more to say here! I just got the settings right: resolution, frame rate, (low) anti-aliasing, no shadows, output format and encoding… and Render! Then about an hour later I was looking at my first finished project. The whole thing is awesome, but if I get that job, it’s going to be legen… wait for it…