The question I was trying to answer for my Motion Capture project was: Can I use motion capture to successfully animate a non-human character?
The answer, much to my surprise, is YES!
It's safe to say this motion capture final render won't work its way into any award shows; however, I realise that I shouldn't be so hard on myself. I took a risk and tried to achieve something outside my comfort zone, so it's also safe to say I managed to pull it off, if somewhat awkwardly.
This is one project I will be spending more time on though, so check back later and hopefully Vardo will be doing some pretty inspiring things.
What I have learned throughout this experience is that, while I have a solution that will work for previs, I have a way to go before I can call this avenue of research finessed enough to satisfy me. I need a solution that keeps the wheels reliably on the ground while letting them move more easily with the body, and I need to figure out how to add doors and windows to Vardo that can be keyframed and still move seamlessly with Vardo's body.
I have concluded that the best, and somewhat easiest, way to achieve a good result is to have the motion capture actor stand with their feet together. This way, when weighting the skin to the foot controls, it is easier to maintain an equal distance from the wheelbase.
While running my tests I compared both methods, and although I do love the way Vardo jumps all about in the version where the entire mesh is parented to the rig, I much prefer the usability of having the wheels and base separate. Going forward, I will consider trialling a blendshape or nCloth simulation.
Overall, I am happy that I made it work to a satisfactory level, but I would love to see this through to a more usable and versatile solution.
Below is a comparison of the two different models I trialed.
I have a tendency to overdo things, so after my first few trials, I realized I had to pull it back to be able to test my mocap without taking HOURS repainting skin weights. Below is a super sped-up video of the modeling process.
I do have footage of how terribly this model worked out and will include it in my making-of video. One thing I soon realised was that I wasn't going to be able to make the wheels, doors, and window shutters move independently of the body while ensuring they travel with it in a seamless manner.
Below is a schematic of how I finally managed to put Vardo together. By parenting the wheelbase to the foot controls, I could move the entire vehicle with the main character controller, and by parenting the wheels to the base, I could place keyframes on them.
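The parenting idea can be sketched as a toy transform tree in plain Python. To be clear, this is just an illustration of the hierarchy, not Maya code, and node names like `wheelBase` are my own stand-ins:

```python
# Toy sketch of Vardo's parenting hierarchy (plain Python, not Maya code).
# In Maya the same idea is ordinary parenting: wheelbase under the foot
# controls, wheels under the wheelbase.

class Node:
    def __init__(self, name, parent=None, local=(0.0, 0.0, 0.0)):
        self.name = name
        self.parent = parent
        self.local = list(local)  # translation relative to the parent

    def world(self):
        # A child's world position is its parent's world position
        # plus its own local offset, so children follow their parent.
        if self.parent is None:
            return tuple(self.local)
        px, py, pz = self.parent.world()
        return (px + self.local[0], py + self.local[1], pz + self.local[2])

# The main character controller drives everything.
main_ctrl   = Node("mainCtrl")
foot_ctrl   = Node("footCtrl",   parent=main_ctrl)
wheel_base  = Node("wheelBase",  parent=foot_ctrl)   # base follows the feet
front_wheel = Node("frontWheel", parent=wheel_base, local=(1.0, 0.0, 0.0))

# Moving the main controller moves the whole vehicle...
main_ctrl.local = [10.0, 0.0, 0.0]
print(front_wheel.world())   # the wheel travels with the body

# ...while the wheel keeps its own local channel free for keyframes.
front_wheel.local[0] += 0.5  # e.g. a keyframed wheel adjustment
print(front_wheel.world())
```

The point is that the wheels inherit the foot controls' motion for free, while their own channels stay keyable.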
UPDATE: I just had a brainwave about how I could make this better!
I am quite fortunate to have a friend named Jenny who was happy to put on the mocap suit not once but twice. For some reason we had technical difficulties with the first capture session, so we had to go back. In preparation for my trials, I asked her to do a series of basic moves: happy, sad, angry, etc., as well as a hip bend and turn.
Here is a photo showing what a good sport she is.
Then, after collecting the data, I take it into the Cortex suite. Fixing the data is something that actually gets easier with time, so I managed to clean all my data in a few hours. If anything can be said about it, it's tedious and requires high attention to detail. I can't record the screen, so you will have to settle for this low-budget cellphone footage; however, it does the job.
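For anyone curious what the fixing actually involves: it mostly means relabelling swapped markers and filling the gaps where a marker dropped out of view. Here is a minimal, made-up sketch of the gap-filling part in plain Python (Cortex does this far more robustly):

```python
# Minimal gap-fill for a dropped marker track (one axis, made-up data).
# None marks frames where the camera system lost the marker.

def fill_gaps(track):
    """Linearly interpolate over runs of None between two known frames."""
    track = list(track)
    i = 0
    while i < len(track):
        if track[i] is None:
            start = i - 1                       # last known frame before the gap
            end = i
            while end < len(track) and track[end] is None:
                end += 1                        # first known frame after the gap
            if start >= 0 and end < len(track):
                a, b = track[start], track[end]
                span = end - start
                for j in range(i, end):
                    t = (j - start) / span
                    track[j] = a + (b - a) * t  # straight line across the gap
            i = end
        else:
            i += 1
    return track

raw = [0.0, 1.0, None, None, 4.0, 5.0]
print(fill_gaps(raw))  # the two missing frames become 2.0 and 3.0
```

Real cleaning is fiddlier than this, which is exactly why it needs that high attention to detail.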
Then it's off to MotionBuilder, where you retarget all the motion points to a character. This led to some interesting results to start with.
Somewhere between my last post and this one my experimental motion capture project took a bit of a left turn. In part because of time constraints, but also because another avenue took my interest.
I decided to combine my Animation Capstone Vardo with my Motion Capture in an effort to find a way to use motion capture technology to create a previs in preparation for the keyframed animation.
I wasn't entirely sure I could achieve this, which worried me. I don't take failure well, especially when I am sure I can figure something out, so I doggedly started my research into how I could proceed.
After casting a wide Google net, I realised that there isn't much research into this practice yet. Although plenty of people are making arms longer to create an ape, or applying the data to a monster character, there hasn't been much experimentation using it with vehicles.
The anthropomorphized Vardo of my Capstone is inspired in part by the Pixar animation Cars, but you can learn more about my project by watching my video essay on the subject of using an anthropomorphized non-human character.
Below is the video that inspired my own search for a solution to my problem, and although they are using methods beyond my current knowledge, with time and practice I believe I will be able to come up with an appropriate solution.
I have really enjoyed this semester of MoCap. Last year I don't think I truly got it, but having the freedom to create a piece of art that means something really captivated my imagination.
As an artist, I never know when my next bit of inspiration will strike, so it was real serendipity when I glanced at the poster on the bus stop. One thought really changed the direction of my idea, and I am glad it did, as I am proud of the result.
I have learned a lot this semester that will not only aid me with my future MoCap endeavours but has also supplemented my knowledge and skills for my Animation major. I found the tutorials especially helpful.
I still feel like I need to improve upon so much, and wish that my skill level matched my visions. If I could improve this sequence, I would do something more creative when swapping between rigs. I did experiment with nCloth, but in the end decided to keep it simple so the message was preserved.
I would like to explore particle effects more in the future; however, they just did not fit with my vision for this project.
Below is the final version.
And this is the 'making of' video. I quite like watching it alongside the video of Mel's live dance.
I really promise I meant to update this while I was working on it, but as usual, life has kind of gotten away from me, so here I am, updating at the last minute.
After capturing the live data in the MoCap room at AUT, you import it into Cortex and do something called 'cleaning the data'. To be honest, even though I remember doing it last year, I had forgotten every last thing I learned, so it's a good thing we had some very useful PDFs to guide me.
I think I did an okay job of cleaning it up and making sure all the markers were not muddled up, although there were a few issues on the chest; I think that was more to do with the way the markers were placed on the suit.
After this, we import the cleaned data into MotionBuilder and retarget it to one of the characters there. This involves lining up the shoulder, head, and hand markers, etc., with the correct places on the character, then saving your new retargeted character.
Then you can send your rigged character with motion controls from Maya to MotionBuilder and merge your retargeted character with your new rig. Once you have resized the characters and made sure they line up well, you can hide the character, showing only your rig, and bake the motion to the control rig.
Once the motion has been baked to the control rig, you can create animation layers and fix any limbs that intersect another part of the body or the floor. This is especially important if you are planning on pinning nCloth to your character, as intersections really mess up the simulation (I found that out the hard way).
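Conceptually the fix layer is additive: the baked base curve stays untouched, and the layer keys an offset on top of it. A toy sketch with invented numbers (this shows the idea, not Maya's actual API):

```python
# Toy sketch of an additive animation layer (invented numbers, not Maya's API).
# The baked base curve is never edited; the fix layer adds offsets on top,
# here lifting a foot that was baked slightly below the floor (y = 0).

base_foot_y = [0.00, -0.03, -0.05, -0.02, 0.00]   # baked mocap, dips under the floor
fix_layer   = [0.00,  0.03,  0.05,  0.02, 0.00]   # keyed only where it dips

# What plays back is the per-frame sum of the two curves.
result = [b + f for b, f in zip(base_foot_y, fix_layer)]
print(result)                # the foot now stays on or above the floor
assert min(result) >= 0.0
```

Keeping the fix on its own layer means you can tweak or mute it without ever touching the baked data.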
It took me quite some time to figure out how to use the feature in MotionBuilder that prevents the character's toes from going through the ground plane. I knew it was there, as Greg, my lecturer, had shown me in class, but the 2017 version had put it in a slightly different place, so it took about 40 minutes of searching to find it, because Autodesk seemingly hasn't updated the new location in any of their tutorial pages.
Once I figured out how to effectively skin my word mesh to the rig and control rig, it was easy enough to edit the limbs and make sure the figure had good motion. It was mostly time-consuming, and it's easy to miss jerky or unusual motion if you are only scrubbing through the timeline. This is why I did many playblasts from Maya, which allowed me to more easily see whether or not the motion was smooth.
Sadly, there were a few motions that I couldn't smooth, though I suspect I am the only one they bother. The reason for this is the way the original actor moved her arms: as she rotated her arms, her hands looked natural upside down, but when you flip a word that way it looks weird. Or at least it appeared that way to me.
After pondering how to do the transition, I decided to just do a straight swap between rigs. I timed these swaps to either a camera change or a big move from my dancer, such as a huge leap. It is actually rather effective.
For my audio, I removed the original background music and kept just Dr Maya Angelou's voiceover, then I used a piece of music from Bensound that I know is royalty-free. I also adjusted the audio so it flowed better with my chosen music, and because the music piece I chose was too long for my sequence, I shortened it.
I spent two days working out camera angles for my sequence. I know this is an area I am weak in, so I wanted to devote a good amount of time to it to ensure the sequence flowed well visually.
I had an idea of how I wanted the lighting to look, but after spending so much time placing the poetry on the walls, and the dedication on the back wall, I realised that I needed a well-lit scene. The only drawback is that it takes much longer to render due to all the geometry.
In total it took over 60 hours to render this sequence, at approximately 2 minutes per frame at HD720.
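A quick back-of-envelope check on those figures, assuming the sequence plays at 24 fps:

```python
# Back-of-envelope render budget using the approximate figures from this post.
minutes_per_frame = 2
render_hours = 60

frames = render_hours * 60 // minutes_per_frame
print(frames)              # roughly 1800 frames fit in 60 hours

seconds_at_24fps = frames / 24
print(seconds_at_24fps)    # roughly 75 seconds of footage at 24 fps
```

Since the render ran over 60 hours, the actual frame count would be a bit higher than this.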
Photo gallery of the process; please click on the images to view them in full.
I just fell down an internet rabbit hole, but in the process discovered this video with a quick overview of MASH inside Maya, and it's so good it makes my brain want to explode. I can really see the vector graphic approach working well for my character!
Anyhow, I wanted to share the video here while it was still in my mind so I can come back to it later. I’ll add a few more here as I go.
I had to test-drive the SVG vector tool immediately! What I can say is that I don't think I'll be doing my character quite like the ones in the image below.
Here is a video about mesh scattering.
Here is the webinar on MASH that I have to watch. I have a lot of watching and learning to do.
I have no idea what I should be calling these titles, so this one has WHP as shorthand for Words Have Power, which is what I think my sequence is starting to be called. It's as if my sequence is taking on a life of its own; it's one of my art babies, one I am hoping gets cultivated and grown, not the kind you have to retire to the place where all your old files go to collect dust.
I have scoured the internet in search of inspiration for what my character is going to look like, and although I still haven't got it nutted out entirely, here are some images that have inspired me.
So many images here are leaving me feeling inspired, in particular, the one on the top right. I like the combination of lines of the face, hair, arms and legs, and the text making up the dress. Although the one on the bottom right appeals to me too. These are both options I intend to explore while creating my sequence.
The second set was supposed to focus more on what the area my character will dance in might look like, but I was also drawn to the mixture of ink lines and font used for clothing. I rather like the second-from-top image on the left, with the 3D words jutting out of the walls. In the video, Dr Angelou talks about how words get into your things, into your walls, your carpet, your floor, so I intend on incorporating words either in this manner or by using a texture map.
This is just a small collection of the images I have gathered during my research. Stay tuned for my next post, which will show some of the concept artwork I have been working on.