Mocap II – Capture to Final Render

I promise I really meant to update this while I was working on it, but, as usual, life got away from me, so here I am updating at the last minute.

After capturing the live data in the mocap room at AUT, you import it into Cortex and do something called ‘cleaning the data’. To be honest, even though I remember doing this last year, I had forgotten every last thing I learned, so it’s a good thing we had some very useful PDFs to guide me.

I think I did an okay job of cleaning it up and making sure the markers were not muddled up, although there were a few issues on the chest; I think that was more to do with the way the markers sat on the suit.

After this, we import the cleaned data into MotionBuilder and retarget it onto one of the characters there. This involved lining up the shoulder, head, hand markers, etc. with the correct places on the character, then saving the new retargeted character.

Then you can send your rigged character, with its control rig, from Maya to MotionBuilder and merge your retargeted character with the new rig. Once you have resized the characters and made sure they line up well, you can hide the source character, leaving only your rig visible, and bake the motion to the control rig.

Once the motion has been baked to the control rig, you can create animation layers and fix any limbs that intersect another part of the body or the floor. This is especially important if you are planning on pinning nCloth to your character, as intersections really mess up the simulation (I found that out the hard way).

It took me quite some time to figure out how to use the feature in MotionBuilder that prevents the character’s toes from going through the ground plane. I knew it was there, as Greg, my lecturer, had shown me in class, but the 2017 version had moved it to a slightly different place, so it took about 40 minutes of searching to find it, because Autodesk seemingly hasn’t updated the new location in any of its tutorial pages.
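MotionBuilder handles floor contact through its UI, but the underlying idea is simple: never let the foot joint’s height dip below the floor plane. A minimal plain-Python sketch of that idea, using made-up per-frame toe heights (the function and data are illustrative, not MotionBuilder’s API):

```python
FLOOR_Y = 0.0  # height of the ground plane in scene units

def clamp_to_floor(toe_heights, floor=FLOOR_Y):
    """Return per-frame toe heights with below-floor values clamped,
    plus the frame numbers that needed fixing."""
    fixed = [max(y, floor) for y in toe_heights]
    offenders = [f for f, y in enumerate(toe_heights) if y < floor]
    return fixed, offenders

# Hypothetical toe heights sampled each frame during a footstep
heights = [0.8, 0.3, 0.05, -0.02, -0.01, 0.1, 0.6]
fixed, offenders = clamp_to_floor(heights)
print(offenders)  # → [3, 4]: the frames where the toe dipped under the floor
```

In practice you would let the floor-contact solver do this (a hard clamp flattens the foot unnaturally), but listing the offending frames is a handy way to know where to look.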

Once I figured out how to effectively skin my word mesh to the rig and control rig, it was easy enough to edit the limbs and make sure the figure moved well. It was mostly just time-consuming, and it’s easy to miss jerky or unusual motion if you are only scrubbing through the timeline. This is why I did many playblasts from Maya, which let me see far more easily whether the motion was smooth or not.
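Playblasts are the reliable check, but jerky motion can also be flagged numerically: a pop in an animation channel shows up as a spike in its second difference (a rough per-frame acceleration). A small illustrative sketch, with hypothetical channel values rather than real Maya data:

```python
def jerky_frames(values, threshold):
    """Flag frames where the second difference (a rough acceleration)
    of an animation channel exceeds a threshold."""
    flagged = []
    for f in range(1, len(values) - 1):
        accel = values[f + 1] - 2 * values[f] + values[f - 1]
        if abs(accel) > threshold:
            flagged.append(f)
    return flagged

# Hypothetical wrist-rotation samples: smooth, with a pop around frame 4
channel = [0, 1, 2, 3, 10, 5, 6]
print(jerky_frames(channel, threshold=3))  # → [3, 4, 5]
```

The flagged frames cluster around the pop, which tells you where on the timeline to add an animation layer and smooth things out.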

Sadly, there were a few motions that I couldn’t smooth out, though I suspect I’m the only one they bother. The reason is the way the original actor moved her arms: as she rotated them, her hands looked natural upside down, but when you flip a word the same way it looks odd. Or at least it appeared that way to me.

After pondering how to handle the transition, I decided to just do a straight swap between rigs. I made these swaps either on a camera change or when my dancer did something like a huge leap, and it is actually rather effective.

For my audio, I removed the original background music and kept just Dr Maya Angelou’s voice-over, then added a piece of music from Bensound that I know is royalty-free. I re-cut the voice-over so it flowed better with my chosen music, and since the piece I chose was too long for my sequence, I shortened it as well.

I spent two days working out camera angles for my sequence. I know this is an area I am weak in, so I wanted to devote a good amount of time to it to ensure the piece flowed well visually.

I had an idea of how I wanted the lighting to look, but after spending so much time placing the poetry on the walls and the dedication on the back wall, I realised I needed a well-lit scene. The only drawback is that it takes so much longer to render with all that geometry.

In total it took over 60 hours to render this sequence, at approximately 2 minutes per frame at HD720.
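That figure is easy to sanity-check: at roughly 2 minutes per frame, 60 hours buys about 1800 frames (assuming a 24 fps timeline, which the post doesn’t state, that’s about 75 seconds of footage):

```python
MINUTES_PER_FRAME = 2   # approximate render time per frame
TOTAL_HOURS = 60        # total render time

frames = TOTAL_HOURS * 60 // MINUTES_PER_FRAME  # → 1800 frames
seconds_at_24fps = frames / 24                  # → 75.0 seconds
print(frames, seconds_at_24fps)
```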

Photo gallery of the process; please click on the images to view them in full.
