Role of 3D Animators in Motion Capture: Film Studies Essay

In a world of emerging technologies and innovations, it has become hard for traditional techniques to survive. This is the case with 3D animation, which has been an integral part of the film industry for a long time, and motion capture, which is newer and is here to stay. Motion capture, a favourite of live-action movie directors, is gaining attention across the film industry.

In producing entire feature films with computer animation, the industry is currently split between studios that use motion capture and studios that do not. Of the three nominees for the 2006 Academy Award for Best Animated Feature, two ("Monster House" and the winner, "Happy Feet") used motion capture, and only Pixar's "Cars" was animated without it. In the ending credits of Pixar's 2007 film "Ratatouille," a stamp appears labelling the film as "100% Pure Animation — No Motion Capture!"

For 3D animation, objects are built on the computer monitor and 3D figures are rigged with a virtual skeleton. The animator then poses the figure's limbs, eyes, mouth, clothes, etc. on key frames, and the differences in appearance between key frames are automatically calculated by the computer (interpolation). To gain more control over the interpolation, most 3D animation packages provide a parameter curve editor, which shows a graphical representation of a parameter's value over time (the animation curve). Altering the shape of the curve changes the interpolation and therefore the speed of the motion. By adjusting the interpolation it is possible to avoid surface interpenetration (such as fingers intersecting each other) when transitioning from one hand shape to the next. The realism of a keyframed animation depends largely on the animator's ability to set believable key frames (realistic hand shapes, for example) and to control the interpolation between them, i.e., the speed and fluidity of motion. Finally, the animation is rendered.
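
To make the interpolation concrete, here is a minimal sketch in Python, assuming simple float-valued keyframes and a cubic ease curve standing in for the animation curve; the function names are illustrative, not taken from any particular animation package.

```python
# Minimal sketch of keyframe interpolation with a cubic ease curve.

def smoothstep(t):
    """Cubic ease curve: slow at both ends, fast in the middle."""
    return t * t * (3.0 - 2.0 * t)

def interpolate(key_a, key_b, frame, frame_a, frame_b, ease=smoothstep):
    """Evaluate a parameter between two keyframes at a given frame."""
    t = (frame - frame_a) / float(frame_b - frame_a)  # normalized time, 0..1
    return key_a + (key_b - key_a) * ease(t)

# Example: a wrist rotation keyed at 0 degrees on frame 1 and 90 degrees
# on frame 24; the ease curve controls the speed of the in-between motion.
for frame in range(1, 25):
    print(frame, round(interpolate(0.0, 90.0, frame, 1, 24), 2))
```

Editing the ease curve here plays the role of reshaping the animation curve in a curve editor: the keyed values stay fixed while the speed of motion between them changes.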

History of 3D animation

In 1824 Peter Roget presented his paper 'The persistence of vision with regard to moving objects' to the British Royal Society. In 1831 Dr. Joseph Antoine Plateau (a Belgian scientist) and Dr. Simon Rittrer constructed a machine called a phenakistoscope. This machine produced an illusion of movement by allowing a viewer to gaze at a rotating disk containing small windows; behind the windows was another disk containing a sequence of images. When the disks were rotated at the correct speed, the synchronization of the windows with the images created an animated effect. Eadweard Muybridge later began his photographic studies of animals in motion. The zoetrope, a revolving drum containing a series of sequential images, creates the illusion of motion when the spinning drum is viewed through its slits, foreshadowing the movies; film creates the same illusion by showing one image, then black, then the next image, then black again. The thaumatrope is a two-frame animation: when it is twirled, its two images superimpose on each other.

In 1887 Thomas Edison started his research work into motion pictures, and he announced his creation of the kinetoscope, which projected a 50ft length of film in approximately 13 seconds. In 1892 Émile Reynaud, combining his earlier invention of the praxinoscope with a projector, opened the Théâtre Optique in the Musée Grévin, displaying animation of images painted on long strips of celluloid. Louis and Auguste Lumière were issued a patent for a device called the cinematograph, capable of projecting moving pictures. Thomas Armat designed the Vitascope, which projected the films of Thomas Edison; this machine had a major influence on all subsequent projectors. J. Stuart Blackton made the first animated film, "Humorous Phases of Funny Faces," in 1906. His method was to draw comical faces on a blackboard and film them; he would stop the film, erase one face to draw another, and then film the newly drawn face. The stop motion provided a startling effect as the facial expressions changed before the viewer's eyes. Emile Cohl made "En Route," the first cut-out animation, a technique that saves time by repositioning paper cut-outs rather than redrawing each new cel. Winsor McCay produced an animation sequence using his comic strip character "Little Nemo." John R. Bray applied for patents on numerous animation techniques, one of the most revolutionary being the process of printing the backgrounds of the animation. In 1914 Winsor McCay produced a cartoon called "Gertie the Trained Dinosaur," which amazingly consisted of 10,000 drawings.

In 1914 Earl Hurd applied for a patent on the technique of drawing the animated portion of an animation on a clear celluloid sheet and later photographing it with its matching background (cel animation).

Cel and Paper Animation Technique:

By the mid-1910s, animation production in the US was already dominated by cel and paper techniques. Cel animation became more popular in America than in Europe because of the assembly-line Taylorism that had taken America by storm. Cel animation was well suited to the assembly-line style of manufacturing because it employed a whole line of people working on very specific, simple, repetitive duties. In Europe, by contrast, where the assembly-line style of work was less encouraged, clay animation and other forms of animation that required only a few individuals working on a set at a time were more popular; because the actual set could only accommodate a limited number of people working together, these alternative forms of animation became more widely accepted there. Disney cel animation drew each image one at a time using the onion-skinning technique.

Traditional cel animation: drawings are created one by one, with animators creating the key frames and assistants creating the in-betweens; the onion-skinning process is used to ease the drawing of each additional image against its reference.

The International Feature Syndicate released many titles including "Silk Hat Harry," "Bringing Up Father" and "Krazy Kat." In 1917 the first feature-length animated film, "El Apóstol," was created in Argentina. 1923 saw the founding of the Disney Brothers Cartoon Studio by Walt and Roy Disney. Walt Disney extended Max Fleischer's technique of combining live action with cartoon characters in the film "Alice's Wonderland." Warner Brothers released "The Jazz Singer," which introduced combined sound and images. Ken Knowlton, working at Bell Laboratories, started developing computer techniques for producing animated movies. At the University of Utah, Ed Catmull developed an animation scripting language and created an animation of a smooth shaded hand. Ref: E. Catmull, "A System for Computer Generated Movies," Proceedings of the ACM National Conference, 1972. Beier and Neely, at SGI and PDI respectively, published an algorithm in which line correspondences guide morphing between 2D images; the "demo" was Michael Jackson's video "Black or White." Ref: T. Beier and S. Neely, "Feature-Based Image Metamorphosis," Computer Graphics, July 1992.

Chen and Williams at Apple published a paper on view interpolation for 3D walkthroughs. Ref: S. E. Chen and L. Williams, "View Interpolation for Image Synthesis," Computer Graphics Proceedings, Annual Conference Series, 1993. Jurassic Park used CG for realistic living creatures; the stars of this movie, directed by Steven Spielberg, were the realistic-looking and realistically moving 3D dinosaurs created by Industrial Light and Magic. Each new step into the next generation of computer graphics brings new and more believable CGI characters, such as those found in Dinosaur, which included the creation and implementation of realistic digital hair on the lemurs. After seeing Jurassic Park, George Lucas, director of the Star Wars series, concluded the time had come to start working on his new Star Wars movies; in his opinion, 3D animation was now advanced enough to believably create the alien worlds and characters he had wanted to make since the late seventies.

In 1995 Toy Story became the first full-length 3D CG feature film: the first feature-length CGI animation and Pixar's first feature film. The primary characters are toys in the room of a six-year-old boy, Andy, and the story is mostly told from their point of view. With the arrival of computers and 3D software, feature-length films of high polish can be created virtually in 3D; Toy Story is considered the first animated feature ever generated completely on computers, and Disney and Pixar partnered to create it. In the new Star Wars films, almost every shot is enhanced with 3D animation, featuring very realistic 3D aliens and environments. The Lord of the Rings: The Two Towers featured the first photorealistic motion-captured character in a film; Gollum was also the first digital actor to win an award (a BFCA award, in a category created for Best Digital Acting Performance).

MOTION CAPTURE

Motion capture, motion tracking, or mocap are terms used to describe the process of recording movement and translating that movement onto a digital model. It is used in medical applications, in the validation of computer vision and robotics, and in the military, entertainment and sports. In film making it refers to recording the actions of human actors and using that information to animate digital character models in 2D and 3D computer animation. When it includes the face and fingers and captures subtle expressions, it is referred to as performance capture. In a motion capture session the movements of one or more actors are sampled many times per second, although with most techniques motion capture records only the movements of the actor, not his or her visual appearance. This animation data is mapped to a 3D model so that the model performs the same actions as the actor.

Although there are many different systems for capturing motion capture data, they tend to fall broadly into two different categories:

One category contains optical systems, which employ photogrammetry to establish the position of an object in 3D space based on its observed location within the 2D fields of view of a number of cameras. These systems produce data with three degrees of freedom for each marker, and rotational information must be inferred from the relative orientation of markers. The other category contains non-optical systems, such as magnetic systems, in which sensor orientation is measured with respect to a transmitter. Collecting motion data from an image without using photogrammetry or magnetic equipment is referred to as motion tracking.
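
As a rough illustration of the photogrammetry step, the sketch below recovers one marker's 3D position from its observed 2D positions in two cameras using classic linear (DLT) triangulation; the projection matrices and marker position are toy values, a real system calibrates many more cameras, and the numpy dependency is assumed.

```python
import numpy as np

# Minimal sketch: linear (DLT) triangulation of a single marker from two
# calibrated cameras. P1 and P2 are placeholder 3x4 projection matrices.

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from its 2D image positions in two cameras."""
    A = np.array([
        uv1[0] * P1[2] - P1[0],   # u * (row 3) - (row 1) = 0
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The homogeneous point is the null vector of A (least squares via SVD).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two toy cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.3, 0.2, 2.0, 1.0])          # known 3D marker position
uv1 = P1 @ point; uv1 = uv1[:2] / uv1[2]        # its projection in camera 1
uv2 = P2 @ point; uv2 = uv2[:2] / uv2[2]        # and in camera 2
print(triangulate(P1, P2, uv1, uv2))            # recovers ~[0.3, 0.2, 2.0]
```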

In the 1978 animated film The Lord of the Rings, the visual appearance of an actor in motion was filmed, and the footage was then used as a guide for the frame-by-frame motion of a hand-drawn animated character; the technique is comparable to the older technique of rotoscoping. Camera movements can also be motion captured, so that a virtual camera in the scene pans, tilts, or dollies around the stage driven by a camera operator while the actor is performing, with the motion capture system recording the camera and props as well as the actor's performance. This allows the computer-generated characters, images and sets to have the same perspective as the video images from the camera. Playing the actor's movements back through the computer provides the desired camera positions in terms of the objects in the set. Retroactively obtaining camera movement data from the captured footage is known as match moving or camera tracking.


History of Mocap:

Modern mocap technology was developed through work led in medical science, the military, and computer-generated imagery (CGI), where it is used for a wide variety of purposes. There were successful attempts at mocap long before computer technology became available.

Early attempts:

The zoopraxiscope was invented because of a $25,000 bet on whether all four feet of a horse leave the ground simultaneously. Eadweard Muybridge (1830-1904), who invented the zoopraxiscope, was born in England and became a popular landscape photographer in San Francisco. Muybridge proved that all four feet of a trotting horse do leave the ground simultaneously. He did so by capturing a horse's movement in a sequence of photographs taken with a set of a dozen cameras triggered by the horse's feet. The zoopraxiscope is considered one of the earliest motion capture devices, and Muybridge perfected the technology himself. His books, Animals in Motion (1899) and The Human Figure in Motion (1901), are still used as valuable references by many artists, such as animators, cartoonists, illustrators and painters. Muybridge is a pioneer of both mocap and motion pictures.

In the same year a physiologist named Étienne-Jules Marey was born in France; he was the inventor of the portable sphygmograph, an instrument that records the pulse and blood pressure graphically. Modified versions of his instruments are still used today.

Marey met Muybridge in Paris in 1882, and in the following year, inspired by Muybridge's work, he invented the chronophotographic gun to record animal locomotion, though he quickly abandoned it. In the same year he invented a chronophotographic fixed-plate camera with a timed shutter that allowed him to expose multiple images on one plate. The camera initially captured images on a glass plate, but he later replaced glass plates with paper film; in this way film strips were introduced to the motion picture. Photographs of Marey's subjects wearing his mocap suit show a striking resemblance to skeletal mocap data. Marey's research subjects included cardiology, experimental physiology, physiological instruments, and the locomotion of humans, animals, birds, and insects. Unlike Muybridge, who used multiple cameras, Marey used a single camera for motion capture.

Around the time Muybridge and Marey passed away, Harold Edgerton was born in Nebraska. In the early 1920s Edgerton developed his photographic skills as a student at the University of Nebraska. While working on his master's degree in electrical engineering at the Massachusetts Institute of Technology (MIT) in 1926, he realized that by matching the frequency of a strobe's flashes to the speed of a motor's rotation, he could observe the rotating part of the motor as if it were standing still. Edgerton developed the stroboscope to freeze fast-moving objects and capture them on film, and he became a pioneer of high-speed photography.

Edgerton designed the first successful underwater camera in 1937 and made many trips aboard the research vessel Calypso with the French oceanographer Jacques Cousteau. In 1954 he designed and built deep-sea electronic flash equipment. Edgerton passed away in 1990, ending a long career as an educator and researcher at MIT.

Rotoscoping:

Max Fleischer, an art editor for Popular Science Monthly who was born in Vienna in 1883 and moved to the U.S. with his family, came up with the idea of producing animation by tracing live-action film frame by frame. In 1915 Fleischer filmed his brother David in a clown costume, and they spent almost a year making their first animation using the rotoscope. He obtained a patent for the rotoscope in 1917. In 1918, when World War I ended, he produced the first animation in the "Out of the Inkwell" series, and he established Out of the Inkwell, Inc., which was later renamed Fleischer Studios. The series mixed animation with live action, and Fleischer himself interacted with the animated characters, Koko the Clown and Fitz the dog. In 1924, four years before Disney's "Steamboat Willie," he produced animation with a synchronized soundtrack. Characters such as Popeye and Superman were all animated at Fleischer's studio. Betty Boop first appeared in Fleischer's animation and later became a comic strip character. In the early '30s, animations were filled with sexual humour, ethnic jokes, and gags. When the Hays Production Code (censorship) became effective in 1934, it affected the Fleischer studio more than other studios; Betty Boop lost her garters and her sex appeal as a result.

After almost four years of production, Walt Disney presented the first feature-length animation, "Snow White and the Seven Dwarfs," which was a huge success. Paramount, the distributor of Fleischer's animations, pressured Max and David Fleischer to produce feature-length animations. Two feature films, "Gulliver's Travels" (1939) and "Mr. Bug Goes to Town" (1941), were produced with money borrowed from Paramount; both were box-office disasters. After the failure of "Mr. Bug," Paramount fired the Fleischer brothers and changed the studio's name to Famous Studios. Max Fleischer sued Paramount over the distribution of his animations. Before he died in 1972, he signed a Betty Boop merchandising deal with King Features, a unit of the Hearst Corporation.

The use of rotoscoping can be seen in Disney animations, starting with "Snow White." Later Disney characters were highly stylized, and rotoscoping became a method for studying human and animal motion. Comparison between film footage and the corresponding scenes in the animations reveals skilful and selective use of rotoscoping by Disney animators; they went above and beyond rotoscoping. The success of "Snow White" can be attributed to Walt Disney's detailed attention to plot, character development and artistry.

Both Max Fleischer and Walt Disney were highly innovative individuals; however, it has been said that "Disney's memory belongs to the public; Max's to those who remember him by choice" (Heraldson, 1975).

Beginning of Digital Mocap:

In the 1970s, research and development of digital mocap technology started in pursuit of medical and military applications, and in the 1980s the CGI industry discovered the technology's potential. In the 1980s, floppy disks were actually floppy and most computers were equipped with monochrome monitors, some with calligraphic displays. To view colour images, for example rendered animation frames, images had to be sent to a "frame buffer," which was often shared by multiple users due to its cost. Large computers were housed in ice-cold server rooms, and offices were filled with the noise of dot-matrix printers. Ray-tracing and radiosity algorithms were published in the 1980s, and renderers based on these algorithms required a supercomputer or workstations to render animation frames in a reasonable amount of time; personal computers weren't powerful enough. CPUs, memory, storage devices, and applications were all more expensive than today. Wavefront Technologies developed and marketed the first commercial off-the-shelf 3D computer animation software in 1985. At that time only a handful of animation production companies existed, and most of the animations they produced were "flying logos" for TV commercials or the opening sequences of TV programmes, pieces 15 to 30 seconds long. Readers who saw "Brilliance" in the 1980s probably still remember the astonishment of seeing a computer-generated character, a shiny female robot, moving like a real human being.

"Brilliance" was the first successful application of mocap technology in CGI; "Total Recall" was the first failed attempt to use mocap in a feature film. Metrolight Studios was one of the post-production companies contracted to produce effects for the 1990 science-fiction film starring Arnold Schwarzenegger and Sharon Stone. Metrolight decided to use mocap to create an animation sequence of moving skeletons for the scene in which Schwarzenegger's character walks through a large airport-security X-ray machine, along with other people and a dog. An operator from an optical mocap equipment company was sent to the location with a mocap system, and a team from Metrolight followed the operator's instructions while capturing performances by Schwarzenegger and other performers. They went home believing that the capture session had gone well and that the mocap company would deliver the data after cleaning and processing it. However, Metrolight never received usable data and had to give up using mocap for the scene.

Metrolight's unfortunate experience teaches one lesson: hire only a service provider with a good track record and references.

In 1995 FX Fighter was released, the first real-time fighting game with 3D characters in 3D environments. It is also one of the first video games that used mocap technology to give realism to the 3D characters' movements. Game characters are animated in real time from a set of motion-captured actions driven by the player's input. The pieces of action are played in such a way that the player does not notice the transition from one action to another, giving the impression that the player is fully in control of the game character's movement. Seeing the success of the game, other game companies were encouraged to use mocap in their own titles.

These pioneering efforts of the 1980s and 1990s led to remarkable development and achievement in digital mocap. In recent years, in addition to medicine and entertainment, mocap applications have been found in many other fields. Various sports use mocap to analyze and enhance athletes' performances and prevent injuries. Designers use mocap to understand users' movements, constraints, and interactions with environments, and to design better products. Engineers use mocap to analyze human movements and to design robots that walk like us. Art historians and educators use mocap to archive and study performances by dancers and actors; for instance, in 1991 an intricate performance by the legendary French mime Marcel Marceau (1923-2007) was captured at the Ohio State University to preserve his art for future generations.

3D ANIMATION PRODUCTION PIPELINE

Sales pitch

Convincing the studio's decision-makers to back the story.

Solid summary of the story plot

What the film is about, what happens in it, and extra variations that may or may not appear in the final product.

Storyboards

Basic sketches of the scenes.

(Time usually taken = 6 months)

Voice recording

At first the artists themselves do the voice acting to connect the storyboard to the script and give an idea of the film; later on, celebrities are paid to voice the characters.

Storyboard reel


Pictures placed on a timeline with the voice recordings playing in conjunction: basically, a very rough version of the film.

Concept art

Artists try to create the look and feel of the scenery and the characters from the scripts, voice talent and the basic drawings; the artists also get first crack at how lighting sets the mood.

Modelling

The characters, props and landscape start to be created in 3D; hinges (joints) are added to give them movement. Everything is still in wireframe form; no textures have been added yet (think skeletons).

Dressing

The models and props are skinned according to the mood and feel the team wants the film to portray.

Shot layout

The roughly skinned objects and characters are set into position to work out camera angles and movement; nothing is truly animated or finally skinned yet. The recordings of these cuts are passed on to the animation team.

(Time Usually taken = 4 weeks)

Animation

The models are animated. Everything such as the skeleton is already there, so the animators are basically choreographers (think puppeteers); they move the mouth and limbs according to the sounds and the scripts.

(Time usually taken = 4 weeks)

Shading

Shading changes a surface's appearance according to the lighting on it; it affects the model's colour depending on the lighting situation, e.g. light bouncing off a shiny metal surface is successfully reproduced thanks to a shader. Shaders are added to the landscapes, models and props.
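
As a rough illustration of what a shader computes, here is a minimal sketch of Lambert (diffuse) shading, one of the simplest shading models; production shaders are far more elaborate, and the colours and vectors below are invented example values.

```python
# Minimal sketch of Lambert (diffuse) shading: the surface colour is scaled
# by the cosine of the angle between the surface normal and the light
# direction. Example vectors and colours are made up.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def lambert(base_color, normal, light_dir, light_intensity=1.0):
    n = normalize(normal)
    l = normalize(light_dir)
    # Cosine term, clamped so surfaces facing away from the light stay black.
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * cos_theta * light_intensity for c in base_color)

red_surface = (0.9, 0.1, 0.1)
print(lambert(red_surface, normal=(0, 1, 0), light_dir=(0, 1, 0)))  # fully lit
print(lambert(red_surface, normal=(0, 1, 0), light_dir=(1, 1, 0)))  # grazing light
```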

Lighting

Lighting is added to the scenes. Lighting is what actually makes everything look great, and it is based on the mood scripts.

(Time usually taken = 8 weeks)

Rendering

The final product is rendered; rendering a single frame can take a very long time, depending on the quality of the graphics put into it.

Touch-ups

Things such as music scores, special effects and sound effects are added, and the film is recorded to an appropriate format.

MOTION CAPTURE PRODUCTION PIPELINE

Pre-Production

Storyboard development & Shot analysis

It is important at this stage to work out exactly what action is needed, plus any restrictions which may impede the actor. Several factors need to be addressed:

Does the actor's size correspond to that of the character?

Should the actor have any props or costume? (For example, having the actor wear horns for your demon character in the mocap session will prevent the arms from going through the horns at the implementation stage.) The spatial surroundings should also be a factor.

Will the motion need to be blended? (E.g. a running motion, as the motion capture studio will only capture a fragment of the run.) A sketch of such blending follows this list.
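
As a rough sketch of blending, the example below crossfades the tail of one captured clip into the head of the next over a few frames; the one-number-per-frame clips are toy data, whereas real clips carry many joint channels per frame.

```python
# Minimal sketch of motion blending: crossfade the last `overlap` frames of
# clip_a into the first `overlap` frames of clip_b.

def blend_clips(clip_a, clip_b, overlap):
    blended = list(clip_a[:-overlap])
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)          # weight ramps from clip_a to clip_b
        a = clip_a[len(clip_a) - overlap + i]
        b = clip_b[i]
        blended.append((1.0 - w) * a + w * b)
    blended.extend(clip_b[overlap:])
    return blended

run_fragment = [0.0, 1.0, 2.0, 3.0, 4.0]     # captured fragment of a run
next_fragment = [3.5, 4.5, 5.5, 6.5, 7.5]    # next fragment to continue into
print(blend_clips(run_fragment, next_fragment, overlap=2))
```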

Character Rigging

Develop a character rig, which involves the following:

Matching the actor’s size as much as possible.

Constraining the joints.

Problems may include exporting out of your animation package into the correct format (e.g. .xsi into .fbx). Several different export formats should be tested to see which suits the character rig best (e.g. .bvh, .fbx, etc.).

Actual Motion Capture

This can be viewed on a rig in real time. There are several different forms of motion capture devices; the most commonly used are:

Mechanical, Optical, and Electromagnetic (magnetic)

Cleaning Data

This involves several data manipulations being applied to the motion capture data. In optical motion capture systems, for example, after you capture the movements of your actors, the data is stored as raw 2D data. A 'reconstruction' process converts it into continuous 3D trajectories, a 'labelling' process labels all the trajectories, and so on. Additional processing may be needed where there are data gaps, jitter and other noise.
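
As a rough illustration of one such clean-up step, the sketch below fills short gaps in a marker trajectory by linear interpolation between the surrounding valid samples; None stands in for occluded frames, and real tools use more sophisticated curve fitting.

```python
# Minimal sketch of gap filling: interpolate linearly across frames where
# the marker was occluded (marked with None).

def fill_gaps(trajectory):
    filled = list(trajectory)
    for i, sample in enumerate(filled):
        if sample is None:
            prev_i = max(j for j in range(i) if filled[j] is not None)
            next_i = min(j for j in range(i + 1, len(filled))
                         if trajectory[j] is not None)
            t = (i - prev_i) / (next_i - prev_i)
            filled[i] = filled[prev_i] + t * (trajectory[next_i] - filled[prev_i])
    return filled

# One coordinate of a marker with a two-frame occlusion gap.
x = [0.0, 0.5, None, None, 2.0, 2.4]
print(fill_gaps(x))   # [0.0, 0.5, 1.0, 1.5, 2.0, 2.4]
```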

Implementing data

This is simply the process of applying your data to the skeleton rig provided at the initial stages. There can be several problems at this stage depending on the formats and animation package chosen; for example, there can be issues with UVs, materials, scaling, etc. It is suggested you follow each package's pipeline to minimize these issues.
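
As a rough sketch of this step, the example below copies captured joint rotations onto rig bones through a name map and scales the root translation to compensate for an actor/character size mismatch; all names and the data layout are invented for illustration.

```python
# Minimal sketch of applying mocap data to a rig: copy each captured
# joint's rotation onto the matching rig bone via a name map, and scale
# the root translation for an actor/character size mismatch.

NAME_MAP = {"Hips": "pelvis", "LeftUpLeg": "thigh_L", "RightUpLeg": "thigh_R"}
ACTOR_TO_CHARACTER_SCALE = 1.15   # character is 15% taller than the actor

def apply_frame(rig, frame):
    for mocap_joint, rotation in frame["rotations"].items():
        bone = NAME_MAP.get(mocap_joint)
        if bone is not None:              # ignore joints the rig lacks
            rig[bone]["rotation"] = rotation
    rig["pelvis"]["translation"] = tuple(
        c * ACTOR_TO_CHARACTER_SCALE for c in frame["root_translation"])

rig = {"pelvis": {}, "thigh_L": {}, "thigh_R": {}}
frame = {"rotations": {"Hips": (0, 5, 0), "LeftUpLeg": (12, 0, 3)},
         "root_translation": (0.0, 0.92, 0.1)}
apply_frame(rig, frame)
print(rig)
```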

APPLICATIONS OF MOTION CAPTURE

The process of recording movement and translating that movement onto a digital model is called motion capture, motion tracking or mocap. It is applied in the military, entertainment, sports, medical applications, and the validation of computer vision and robotics, among other fields.

Games

The largest market for motion capture is game development. Games draw as much revenue as movies, so it is easy to see why game development often calls for enormous quantities of motion capture. There are basically two types of 3D character animation used in games: real-time playback and cinematics. Real-time playback allows the game player to choose from pre-created moves, controlling the character's moves in real time. Cinematics are the fully rendered 'movies' used for intros and 'cut-scenes'. Often the last part of game production, or a process sub-contracted to a separate studio, cinematics are generally not essential to game-play, but they add a lot of appeal to the game and help immensely with story development and mood generation.

Video and TV

Performance animation

Real-time motion capture is becoming popular in live television broadcasts. Using motion capture we can place a virtual character within a real scene, live actors within a virtual scene with virtual actors, or virtual characters within a virtual scene.

For real-time broadcasting, mocap requires mapping of any non-standard physiology to keep the performer's motion from causing the character's limbs to interpenetrate its body. Joint limits on the shoulders and knees also help maintain the believability of the character. A real-time adaptation feature, such as MotionBuilder's real-time motion mapping, is essential when the character's body is very different from the actor's body. When combining live elements with virtual elements, the real and virtual cameras must share the same properties, otherwise the illusion looks strange.
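
As a rough illustration of joint limits, the sketch below clamps captured joint angles into allowed ranges before they are applied to the character; the limit values are invented examples, not anatomical data.

```python
# Minimal sketch of enforcing joint limits: clamp each captured Euler
# angle into the range the character's joint allows before applying it.

JOINT_LIMITS = {                         # (min_degrees, max_degrees)
    "knee_flex":       (0.0, 150.0),
    "shoulder_abduct": (-30.0, 170.0),
}

def clamp_joint(joint, angle):
    lo, hi = JOINT_LIMITS[joint]
    return max(lo, min(hi, angle))

# A noisy capture frame bends the knee slightly backwards (-4 degrees);
# clamping keeps the character's leg from hyperextending on screen.
print(clamp_joint("knee_flex", -4.0))        # -> 0.0
print(clamp_joint("shoulder_abduct", 95.0))  # -> 95.0 (within limits)
```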

Daily features

Producing daily 3D animated features becomes easy with the PhaseSpace optical motion capture system combined with MotionBuilder, allowing TV stations to keep their content fresh and exciting, and giving viewers yet another reason not to touch that dial.

Post-Production for ongoing series

Using motion capture for ongoing series is gaining popularity. Attempting to create a weekly show without motion capture invariably causes shows to be late or production studios to go bankrupt. Having an efficient motion capture pipeline is essential to the success of an ongoing character-animation-based series.

Film

The use of motion capture in films is increasing day by day. Motion capture is essential for creating character-based animation that moves realistically in situations that would be impractical or too dangerous for real actors, e.g. the characters falling off the ship in Titanic. Motion capture was used extensively in Titanic for filler characters; many of these shots would have been difficult or impossible to do with real cameras and a real ship, or real models, so virtual models, actors and cameras were used. Some film characters require the use of motion capture, otherwise their animation seems fake. More and more independent companies are starting to put together desktop studios; the idea of two or three people creating an entire movie is not far off if motion capture is used correctly. Motion capture animation can then be done very quickly and inexpensively, without scheduling expensive motion capture sessions in an outside studio.

Web

Motion capture is ideal for the web, whether used to create virtual hosts or greeting cards. As the web becomes more sophisticated and bandwidth increases, motion capture brings a human element to the web, in the form of characters that viewers can relate to and interact with.

Live events

Motion-capture-generated performance animation can be thought of as 'improvisation meets computer graphics (CG)'. A good improviser acting through a CG character in real time can create a very intriguing, lasting experience for the viewer at trade shows, meetings or press conferences. Integrating with live actors further helps create a fascinating experience.

Scientific research

Motion capture is useful in perceptual research. By presenting test subjects with abstract movements distilled from motion capture data, repeatable experiments can be developed that provide insights into human perception.

Biomechanical analysis

Biomechanical analysis for rehabilitation purposes relies on motion capture. Motion capture can be used to measure the extent of a client's disability as well as a client's progress in rehabilitation. Motion capture can also help in the effective design of prosthetic devices.

Engineering

Motion capture is essential for producing ergonomically practical product designs, as well as designs for physical products that are comfortable and appealing. When it comes to working in an enclosed space, such as a car interior or an aircraft cockpit, the Gypsy mechanical capture suit has tremendous advantages over optical or magnetic systems: optical systems require a large distance between the subject and the cameras and are easily occluded, and magnetic systems have a major problem with metal in the capture space.

Education

Access to motion capture training can make a huge difference in an animator's education. While access to motion capture is not a substitute for developing good art skills and good traditional character animation abilities, it can go a long way towards making someone more employable.

VR (virtual reality)

Motion capture is indispensable for VR training applications; it makes for much better immersion than using a joystick or a positional handle.

How does motion capture differ from key frame animation?

Motion capture is not based on traditional animation technique; it is a technology for capturing the real motion of a moving character or object. That motion data can then be used to animate any 3D character or object. Using mocap, the animator does not need to keyframe the character; instead, they only need to manipulate the mocap data to get the desired motion. This means realistic animation can be achieved with less effort and time.

Key frame animation is more time-consuming and needs more talent to put life into the character, compared to mocap. In key frame animation you need to create every pose and motion using software tools; there is no ready-to-use data, and the animator has to create each and every movement on their own.

What are the challenges and opportunities for a key frame animator in the motion capture era?

Studios need animators to modify mocap data and to combine motions, because mocap is captured in short sequences which must be imported and managed according to the production's needs; for that, a studio will always need an animator.

Here are a few questions which illustrate the need for an animator in motion capture work:

Who will map all the mocap data onto the character?

What if a scene requires some particularly non-realistic animation?

What if the film's budget is not high enough to afford mocap?

What if a scene needs multiple mocap data sets in one shot?

Case Study 1


Movie: The Incredibles (2004). Director: Brad Bird. Writer: Brad Bird.

Movie: The Polar Express (2004). Director: Robert Zemeckis.

These two animated features achieve remarkable creative results using different styles of human character animation. The Incredibles is an example of first-class keyframed cartoon character animation that integrates 2D traditional styles with the 3D computer style we have come to expect from Pixar projects. The Polar Express offers an innovative approach that animates computer-generated virtual characters with real-time human performances and keyframe touch-ups.

Comedy and Action through Squash-and-stretch

The Incredibles:

Two aspects represent significant departures from the topics and style of earlier Pixar movies: the human characters are central to the storyline, and they are animated with considerable squash-and-stretch. To make the latter possible, the technical character team and the character setup rigs were driven by the animators' needs, while keeping the overall look-and-feel similar to earlier versions of Pixar's animation software remained an important consideration.

The main software tool used to animate The Incredibles layered two key stages of the animation process: the bone and muscle calculations, and the squash-and-stretch system. Without this layering it would have been difficult to provide the animators with real-time feedback. The first stage ran the bone and muscle calculations through all of a character's positions in a shot. Using statistical analysis, the software determined the most significant changes in the character's skin throughout the shot and "baked" those deformations into the model so that they could effectively be used as blend shapes. After this first-pass animation, the animators did not have to deal with the bone and muscle system again. The second stage applied the squash-and-stretch to the baked geometry as a post-process, and animators were able to visualize this in real time.

Another technique used to keep playback as close as possible to 24 frames per second was geometry decimation. Animators were involved in this hand-crafted decimation, which brought some meshes down to about 25% of the full geometry while keeping full detail only where it was needed, for example in the face rather than in the body geometry. Decimated models did everything a full model did, and the deformation hierarchies remained the same but with much less overhead. Shots were finalized during the animation stage using the decimated version, but the final rendering used the full geometry.

The approach to facial animation in The Incredibles followed the Pixar tradition of allowing animators direct control of all parts of the face. Facial animation was done without blend shapes, using a multitude of deformers tied to macro-controllers. Compared to the facial animation tools used in Finding Nemo, these had better features: nothing groundbreaking, but many incremental advances, e.g. a greater number of controls to allow for squash-and-stretch, a new eyebrow package with spline-based controls, and the ability to do wild cartoony distortions of the face, including the eyeballs, something Pixar animators traditionally did not distort.

The implementation of referencing across models in The Incredibles was another animation tool that broke with past practice. The standardization of a basic common rig structure for all characters made it easier for animators and TDs to share poses and facial expressions between characters. However, some characters had multiple rigs. The standard rigs were limited to normal squash-and-stretch articulation, but a few special-case rigs were developed for complex distortions, e.g. Helen (Elastigirl) in her stretchy suit, or the transformation of baby Jack-Jack into a monster. Bob, the dad, had at least two rigs: one for the fit superhero version, and another for his fat version with a gut and shorter legs. All models had internal switches to preserve old behaviours by linking the different rig versions for each character. New employees at Pixar did a lot of the character articulation work in The Incredibles, and their passion and talent are a testament to what the computer animation industry is all about.

Performance capture and Emotion:

From the animation point of view, the initial intent in Robert Zemeckis' The Polar Express was to create computer-generated human characters that were not keyframe-animated cartoons. The scope and scale of the technology assembled and developed at Sony Imageworks (called ImageMotion) to make this possible is impressive. The Polar Express production team used motion capture technology in an innovative way and developed a unique production pipeline.

Keyframing could not serve as the primary animation technique in The Polar Express; motion capture was the obvious choice for animating the somewhat realistic-looking human characters in the movie. There is a big difference between plain motion capture and performance capture: while motion capture seeks to record a cold sequence of moves, performance capture seeks to record the emotion and intention contained in the way an actor moves and pauses.

The Polar Express used the latter approach, and for that the team assembled one of the most complex capture systems ever: four Vicon systems linked together, with 72 cameras covering an area measuring 10 feet square. This configuration allowed real-time body and face capture with markers about two millimetres in diameter. The facial rigging was driven by the muscle compression of each muscle represented in the system, and the data obtained from the facial markers was converted to a muscle system custom-designed for this production. (The Facial Action Coding System (FACS) developed by Paul Ekman and used in Gollum's facial animation system was not employed in The Polar Express.) Despite the impressive performance capture setup, it was sometimes difficult to capture reliable data for eyelids, eyeballs, mouths and fingers.

In addition to its performance capture innovations, the Polar Express production team also came up with a new production pipeline to integrate captured data with cinematography and animation. Large scenes with captured performances, for example, were initially created without a specific camera. This was very different from traditional animation, where scene layout and staging are always storyboarded and laid out from a very specific point of view before the animation stage. The initial scene, called the "rough integration," contained only body motion, and it could be played back in real time from any angle by a director of photography (DP). This allowed the DP to establish shots using a "wheel" interface for positioning and moving the camera in the scene while the rough capture was replayed in real time, in a mode similar to live action. This innovation represents a twist in the traditional animation pipeline.

After approval by the director and the editorial team, the shots continued on to "full integration" of body and facial capture. Once this stage was finished, the shots moved on to the animation department, where the original performances were fine-tuned in different ways. Knowing the capture limitations mentioned earlier, one would imagine that only eyes, mouths and fingers were keyframed during the animation stage, but looking at the finished movie one can find glitches in these areas and in the overall facial and body motion. This shows that some of the captured performances were clearly edited, possibly changed altogether. It is actually difficult to know exactly how much keyframe enhancement took place in The Polar Express, but knowing and understanding these facts will certainly help future users of this performance capture system optimize their work.

The style of many of the performances is a bit too straight-ahead for the stylized look of The Polar Express models. A few more key poses, motion holds and clearer silhouettes might have made some of the action read better. Perhaps this is a matter of opinion and stylistic preference, but the lack of consistent emotion in most of the characters' faces is not. While the overall quality of the body performance capture seems consistent and believable, the same cannot be said of the eye animation in particular. The eyelid, eyeball and mouth animation, all crucial components of facial expression, were keyframed during the animation stage and not captured from the actors' performances. Throughout the movie the motion of the eyelids is minimal, giving some characters a flat look, and too many eyeballs seem focused at infinity. These minor but persistent animation inconsistencies end up distracting, and they represent the weak link in The Polar Express. The odd facial animation style is as if the actors in a live-action movie had a facial twitch every few minutes: no matter how good their performances were, the twitching would surely confuse or dilute some of the emotional intent and take away from the believability of their characters. Some of the production techniques and pipeline developed for The Polar Express are remarkable, and while the movie was crafted with first-class storytelling and rendering, I wish there had been more blending of performance capture with keyframe animation, and I can't help but wonder how this movie would have looked had it been produced as a live-action movie with human actors and digital visual effects.

EVALUATION OF EMOTION FROM 3D ANIMATION AND MOTION CAPTURE MOVIES

In the field of psychology there is no consensus on the definition of emotion, because there are many competing definitions. In recent years, however, a general consensus has emerged that emotions can be differentiated and measured, and there are 3D models to represent and measure them.

To evaluate the three main levels of response to facial expression (sympathy, narrative involvement and narrative realism), the viewer answers the following questions to measure them:

1. Narrative Realism:

a. Evaluate the "visual look" of the scene and the character (from 1, non-realistic, to 9, very realistic).

b. Evaluate the "animation" of the character (from 1, non-human movement, to 9, very realistic human movement).

c. Evaluate the "facial expression" of the character (from 1, non-realistic, to 9, very realistic).

d. Evaluate the technical quality of the scene (modelling, texture, lighting, rendering) (from 1, low quality, to 9, high quality).

2. Sympathy:

a. Evaluate the empathy or identification level with the character. In other words, are you happy or unhappy watching this type of character? (From 1, unhappy, to 9, very happy.)

b. Evaluate the level of arousal, i.e. the excitement level that the character transmits to us (from 1, low, to 9, high).

3. Narrative Involvement:

a. How much did you enjoy watching this scene? (From 1, not at all, to 9, a lot.)

b. Was the character believable in the scene? (From 1, not at all, to 9, very.)
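
As a rough illustration of how such ratings might be aggregated, the sketch below averages each dimension's 1-9 item scores per viewer and then across viewers; the rating values are invented examples.

```python
# Minimal sketch of aggregating the 1-9 questionnaire ratings: average the
# items within each dimension per viewer, then average across viewers.

from statistics import mean

responses = [   # one dict per viewer: dimension -> list of item scores
    {"narrative_realism": [7, 6, 5, 8], "sympathy": [6, 7], "involvement": [8, 7]},
    {"narrative_realism": [5, 5, 4, 6], "sympathy": [4, 5], "involvement": [6, 6]},
]

def dimension_scores(responses):
    dims = responses[0].keys()
    return {d: mean(mean(viewer[d]) for viewer in responses) for d in dims}

print(dimension_scores(responses))
# e.g. {'narrative_realism': 5.75, 'sympathy': 5.5, 'involvement': 6.75}
```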
