Examining the Concept of Motion Capture
Motion capture (MoCap) technology has advanced rapidly, especially in the movie and games industries, where it now plays a critical role in creating animation that moves as smoothly as real life. This report begins by introducing MoCap technology: what it is, how it is useful, and who uses it. It then focuses on Weta Digital, one of the world's leading visual effects studios, which has continued to develop MoCap technology to a highly successful level. MoCap has been used in many famous movies: for many characters in Beowulf, the alien characters in District 9, Gollum in The Lord of the Rings trilogy, the giant gorilla Kong in King Kong, and more.
To explore MoCap further, Avatar, an epic movie with some of the most gorgeous computer graphics (CG) detail yet produced, was chosen as an example of the latest evolution of MoCap. The movie was also hugely well known and has clearly influenced the direction of the film industry.
WHAT IS MOCAP?
MoCap is an abbreviation of Motion Capture, a process by which movement is digitally recorded. The technique was originally used for military tracking purposes and in sports as a tool for biomechanics research, which focuses on the mechanical functioning of the body, such as how the heart and muscles work and move. In the last twenty-five years, motion capture has become an essential tool in the entertainment business, giving computer animators the ability to make non-human characters more life-like. It is used in animated films and television as well as in video games.
Historically, MoCap in animated movies grew out of the rotoscoping technique. An actor is filmed making certain movements or gestures while wearing markers on specific points of his or her body. Each marker in each frame of film is then manually encoded into the computer. As animation software improved, it became possible to apply an algorithm that attaches the markers to a 3D object, creating what is now called motion capture.
MoCap systems can be categorized by their four primary input methods: prosthetic, acoustic, magnetic, and optical.
1. Prosthetic (or mechanical) method: This is one of the early methods for capturing motion from various parts of the human anatomy. These systems range from simple "on/off" motion detectors to complex motion tracking rigs. The latter could be an ideal approach were it not for the complex mechanical requirements and the performance-inhibiting qualities generally associated with such designs. In return, the data they provide can be clean rotational data collected in real time, without any occlusion problems. The method is based on a set of armatures attached all over the performer's body and connected to each other by a series of rotational and linear encoders. The encoders are wired to an interface that reads them all simultaneously, to prevent data skewing. Finally, through a set of trigonometry functions, the performer's motion can be analyzed. These design restrictions are difficult to overcome and will probably limit the use of this type of device for character animation.
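The "set of trigonometry functions" mentioned above amounts to forward kinematics: accumulating each encoder's angle down the chain of armature segments. A minimal 2D sketch with invented segment lengths and angles (real systems work in 3D with many more joints):

```python
import math

def forward_kinematics(segment_lengths, joint_angles):
    """Accumulate each encoder's relative joint angle down a 2D chain
    of armature segments and return every joint position."""
    x, y, heading = 0.0, 0.0, 0.0
    positions = [(x, y)]
    for length, angle in zip(segment_lengths, joint_angles):
        heading += angle                     # encoders report relative angles
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((x, y))
    return positions

# Hypothetical arm: two 0.3 m segments, first joint at +90 deg, second bent -90 deg
joints = forward_kinematics([0.3, 0.3], [math.pi / 2, -math.pi / 2])
```

Because each angle is measured relative to the previous segment, one noisy encoder shifts every joint after it, which is one reason these rigs demand careful calibration.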
2. Acoustic method: Acoustic capture is another method currently used for performance capture. It involves a triad of audio receivers, with an array of audio transmitters strapped to various parts of the performer's body. The transmitters are sequentially triggered to output a "click", and each receiver measures the time it takes for the sound to arrive from each transmitter. The distances calculated at the three receivers are triangulated to give a point in 3D space. An inherent issue with this approach is the sequential nature of the position data it creates: in general, we would like a "snapshot" of the performer's skeletal position rather than a time-skewed data stream. The position data is typically applied to an inverse kinematics system(1), which in turn drives an animated skeleton.
One of the big advantages of this method is the lack of occlusion problems normally associated with optical systems. However, there seem to be several negative factors that may impede its use. First, the cables can be a hindrance to various types of performance. Second, current systems do not support enough transmitters to accurately capture the personality of a performance. Third, the size of the capture area is limited by the speed of sound in air and the number of transmitters. In addition, accuracy can sometimes be affected by spurious sound reflections.
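The triangulation step described above can be sketched from the geometry alone. Assuming a hypothetical layout with all three receivers in one plane, and with each distance already derived from time-of-flight (distance = speed of sound × measured delay), the transmitter position falls out of three sphere equations; note the mirror ambiguity the method inherits from using only three receivers:

```python
import math

# Hypothetical receiver layout, all in the z = 0 plane (metres):
# receiver 1 at the origin, receiver 2 at (D, 0, 0), receiver 3 at (I, J, 0)
D = 2.0
I, J = 1.0, 2.0

def trilaterate(r1, r2, r3):
    """Recover the transmitter position from its distances to the three
    receivers.  Three spheres intersect in two mirror-image points;
    this returns the one with z >= 0 (in front of the receiver plane)."""
    x = (r1**2 - r2**2 + D**2) / (2 * D)
    y = (r1**2 - r3**2 + I**2 + J**2 - 2 * I * x) / (2 * J)
    z = math.sqrt(max(r1**2 - x**2 - y**2, 0.0))
    return (x, y, z)

# Simulate a transmitter at a known point, then recover it from distances
tx = (0.5, 0.5, 1.0)
dists = [math.dist(tx, p) for p in [(0.0, 0.0, 0.0), (D, 0.0, 0.0), (I, J, 0.0)]]
recovered = trilaterate(*dists)
```

A real system would repeat this for every transmitter in turn, which is exactly where the time-skew criticized above comes from: each transmitter's "click" is measured at a slightly different moment.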
3. Magnetic method: This is a popular method for performance capture. Magnetic capture involves a centrally located transmitter and a set of receivers strapped to various parts of the performer's body. The receivers measure their spatial relationship to the transmitter, and each is connected to an interface that can be synchronized to prevent data skew. The resulting data stream consists of a 3D position and orientation for each receiver, typically applied to an inverse kinematics system to drive an animated skeleton. The magnetic approach shares the audio method's freedom from occlusion problems, but also its negative factors: the hindrance of cables, a lack of sufficient receivers, and a limited capture area. In addition, being magnetic, the system is affected by any sizable areas of metal near the capture area, such as girders and posts.
4. Optical method: Optical systems have become quite popular in recent years. They offer the performer the most freedom of movement, since they require no cabling. Optical systems use retro-reflective balls, referred to as markers, attached to the performer. They require at least three video cameras, each equipped with a light source aligned to illuminate that camera's field of view, and each connected to a synchronized frame buffer(2). The computer combines the camera views to calculate a 3D position for each marker, so the resulting data stream consists of 3D position data per marker. This data is typically applied to an inverse kinematics system to animate a skeleton.
One typical problem with optical systems is that it is quite easy for the performer to occlude, or hide, one or more markers, creating "holes" in the data stream. Adding more cameras and/or markers can minimize this occlusion problem, but more cameras make tracking each marker more complex, increasing CPU time, and more markers can exponentially increase the "confusion factor" of keeping track of which marker is which. Optical systems are also limited by the resolution of the cameras and the sophistication of their tracking software.
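At the core of the 3D-position calculation above is ray triangulation: each camera that sees a marker defines a ray, and the marker sits where the rays (nearly) meet. A minimal sketch for two cameras, with invented positions and no lens calibration, taking the midpoint of the rays' closest approach:

```python
import math

def _dot(u, v): return sum(a * b for a, b in zip(u, v))
def _sub(u, v): return tuple(a - b for a, b in zip(u, v))
def _axpy(p, d, t): return tuple(a + t * b for a, b in zip(p, d))

def triangulate(c1, d1, c2, d2):
    """Midpoint of closest approach between two camera rays.
    c1, c2 are camera centres; d1, d2 are unit direction vectors
    toward the marker as seen by each camera."""
    w = _sub(c1, c2)
    b = _dot(d1, d2)
    dd, e = _dot(d1, w), _dot(d2, w)
    denom = 1.0 - b * b               # zero only for parallel rays
    t1 = (b * e - dd) / denom
    t2 = (e - b * dd) / denom
    p1 = _axpy(c1, d1, t1)            # closest point on ray 1
    p2 = _axpy(c2, d2, t2)            # closest point on ray 2
    return tuple((a + b) / 2 for a, b in zip(p1, p2))

# Two invented cameras 2 m apart, both sighting a marker at (0, 0, 5)
n = math.sqrt(26.0)
marker = triangulate((-1.0, 0.0, 0.0), (1 / n, 0.0, 5 / n),
                     (1.0, 0.0, 0.0), (-1 / n, 0.0, 5 / n))
```

Production systems solve this per marker, per frame, across many cameras with calibrated lenses; the geometry, though, is just this closest-approach calculation repeated at scale.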
>> http://www.wisegeek.com/what-is-motion-capture-technology.htm
>> http://vizproto.prism.asu.edu/classes/sp03/motioncapture.htm
WHO USES MOCAP?
Films:
MoCap is being used more and more in films. MoCap-based animation is essential for creating characters that move realistically in situations that would be impractical or too dangerous for real actors (such as the characters falling off the ship in Titanic). MoCap was also used extensively in Titanic for 'filler' characters (fitted in between real actors) and for virtual camera fly-bys over a virtual ship; many of these shots would have been difficult or impossible with real cameras and a real ship or real models, so virtual models, actors, and cameras were used. Some film characters simply require MoCap; otherwise their animation looks unrealistic. More and more independent companies are putting together desktop studios, and the idea of two or three people creating an entire movie is not far off if MoCap is used correctly. The Gypsy(3) is ideal for small and large shops: MoCap animation can be done very quickly and inexpensively, without scheduling expensive motion capture sessions in a studio.
Games:
Game development is the largest market for MoCap. With games drawing as much revenue as movies, it is easy to see why game development often calls for enormous quantities of MoCap. The immense competition to produce the 'coolest game possible' (and thus, hopefully, a top-seller) means that greater production capability translates into higher quality: more time is left for aesthetic finishing touches and the fine-tuning of gameplay.
Generally there are two main types of 3D character animation used in games: real-time playback and cinematics. Real-time playback lets the player choose from pre-created moves, controlling the character in real time. Cinematics are the fully rendered 'movies' used for intros and 'cut-scenes'. Often the last part of game production, or sub-contracted to a separate studio, cinematics are generally not essential to gameplay, but they add a lot of appeal and help immensely with story development and mood.
Video and TV Performance Animation:
Real-time motion is becoming popular for live television broadcasts. MoCap can be used to place a virtual character within a real scene, or to place live actors within a virtual scene with virtual actors, or virtual characters within a virtual scene.
MoCap for real-time broadcast requires mock-ups of any non-standard physiology (big stomachs, tails, etc.) to keep the performer's motions from causing the character's limbs to interpenetrate its body, along with joint limits on the shoulders and knees. A real-time adaptation feature such as FilmBOX Animation's real-time motion mapping (from the performer's skeleton to a differently proportioned character's skeleton) is essential when the character's body is very different from the actor's.
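One simple form of such motion mapping is to copy joint rotations across unchanged and rescale the captured root translation by the ratio of the two skeletons' proportions, so the smaller character's feet still reach the floor. This is a common simplification for illustration, not FilmBOX's actual algorithm, and all values are invented:

```python
def retarget_root(root_positions, performer_hip_height, character_hip_height):
    """Rescale captured root translations by the hip-height ratio so a
    differently proportioned character's feet still land on the floor.
    Joint rotations would be copied across unchanged."""
    s = character_hip_height / performer_hip_height
    return [(x * s, y * s, z * s) for x, y, z in root_positions]

# Performer hips at 1.0 m, character hips at 0.5 m (invented values)
scaled = retarget_root([(0.0, 1.0, 0.0), (0.2, 1.0, 0.1)], 1.0, 0.5)
```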
When combining live elements with virtual elements, the real and virtual cameras must share the same properties (perspective, focal length, depth of field, etc.), otherwise the illusion looks strange.
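Matching perspective in practice means giving the virtual camera the same field of view as the real lens. The standard relation between sensor width and focal length gives that angle; the sensor and lens values below are illustrative:

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view of a real lens; the virtual camera must
    be set to the same angle for the composite to line up."""
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

fov = horizontal_fov_deg(36.0, 50.0)   # e.g. a full-frame sensor with a 50 mm lens
```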
Post-Production for Ongoing Series:
MoCap for ongoing series is becoming very popular. Creating a weekly show without MoCap invariably causes shows to run late or production studios to go bankrupt. An efficient MoCap pipeline is essential to the success of an ongoing character-animation-based series.
Web:
MoCap is ideal for the web, whether used to create virtual hosts or greeting cards. As the web becomes more sophisticated and bandwidth(4) increases, MoCap will help bring a ‘human element’ to the web, in the form of characters that viewers can relate to and interact with.
Live Events:
MoCap-generated performance animation can be thought of as improvisation meets computer graphics (CG). At trade shows, meetings, or press conferences, a good improviser acting through a CG character in real time can create an intriguing, lasting experience for the viewer. Integration with live actors further helps create a fascinating experience.
Scientific Research:
MoCap is useful for perceptual research. By presenting test subjects with abstract movements, distilled from motion capture data, repeatable experiments can be developed that provide insights into human perception.
Biomechanical Analysis:
Biomechanical analysis for rehabilitation purposes relies extensively on MoCap, for its ability to produce repeatable results. MoCap can be used to measure the extent of a client’s disability as well as a client’s progress with rehabilitation. MoCap can also help in effective design of prosthetic devices.
Engineering:
MoCap is essential for producing product designs that are ergonomically practical, as well as designs for physical products that are comfortable and appealing. These systems do have restrictions, though: optical systems are easily occluded and require a large distance between the subject and the cameras, while magnetic systems have major problems with metal in the capture space.
Education:
MoCap training can make a huge difference in an animator's education. While access to MoCap is not a substitute for developing good art skills and good traditional character animation abilities, it can go a long way towards making someone more employable.
Virtual Reality (VR):
MoCap is indispensable for VR training applications. It makes for much better immersion than using a joystick or a positional handle.
>> http://vizproto.prism.asu.edu/classes/sp03/motioncapture.htm
TECHNOLOGIES IN FILM LEADING UP TO MOCAP
1971: Metadata
An experimental 2D animated short by Peter Foldes, drawn on a data tablet using the world's first key-frame animation software, invented by Nestor Burtnyk and Marceli Wein.
1973: Westworld
First use of 2D computer animation in a significant entertainment feature film. The point of view of Yul Brynner’s gunslinger was achieved with raster graphics.
1976: Futureworld
First use of 3D computer graphics for animated hand and face. Used 2D digital compositing to materialize characters over a background.
1977: Star Wars
First use of an animated 3D wire-frame graphic for the trench run briefing sequence.
1981: Looker
First CGI human character, Cindy. First use of shaded 3D CGI as we think of it today.
1981: Wolfen
First use of in-camera effect for thermal vision sequence (see Predator).
1982: Tron
Extensive use (15 min. fully computer generated) of 3D CGI including the famous Light Cycle sequence. Also includes very early facial animation (for the Master Control Program).
1983: Rock & Rule
First animated film to use computer graphics.
1985: Tony de Peltrie
First CGI-animated human character to express emotion through his face and body language.
1985: Young Sherlock Holmes
Lucasfilm creates the first photorealistic CGI character, the 'stained glass knight', with 10 seconds of screen time.
1986: Labyrinth
First realistic CGI animal.
1987: Captain Power and the Soldiers of the Future
First TV series to include characters modeled entirely with computers.
1990: Total Recall
Use of motion capture for CGI characters.
1990: RoboCop 2
First use of real-time computer graphics or “digital puppetry” to create a character in a motion picture.
1991: Terminator 2: Judgment Day
First realistic human movements on a CGI character. First use of a personal computer to create major movie 3D effects.
1993: Quarxs
First broadcast series of animated CGI shorts.
1993: Jurassic Park
First photorealistic CG creatures.
1993: Insektors
First computer animated TV series. First use of character animation in a computer animated television series.
1994: ReBoot
First full-length computer animated TV series.
1994: Radioland Murders
First use of virtual CGI sets with live actors.
1995: Casper
First CGI lead character in feature-length film (preceded Toy Story by six months). First CGI characters to interact realistically with live actors.
1995: Toy Story
First CGI feature-length animation.
1997: Marvin the Martian in 3D
First computer animated movie viewed with 3D glasses.
1999: Fight Club
First realistic close-up of detailed facial deformation on a synthetic human.
2001: Final Fantasy: The Spirits Within
First feature-length digital film to be made based on photorealism and live action principles.
2001: The Lord of the Rings: The Fellowship of the Ring
First use of AI for digital actors (using the Massive software developed by Weta Digital).
2003: The Matrix Reloaded
The Burly Brawl – the first use of “Universal Capture”, the combination of dense (rather than point-based) motion capture and per-frame texture capture.
2003: Gollum from the Lord of the Rings trilogy
First photorealistic motion-captured character for a film; Gollum was also the first digital actor to win an award (from the BFCA, in a category created for Best Digital Acting Performance).
2004: The Polar Express
First CGI movie that used motion capture for all actors.
2009: Avatar
First full-length movie made using performance capture to create photo-realistic 3D characters, and the first to feature a fully CG, photo-realistic 3D world.
>> http://en.wikipedia.org/wiki/Timeline_of_CGI_in_film_and_television
WETA DIGITAL: A WORLD LEADER AMONG CG STUDIOS
Weta Digital is a world-leading visual effects company based in Wellington, New Zealand. It provides a full suite of digital production services for feature films and high-end commercials, from concept design to cutting-edge 3D animation.
Weta was formed in 1993 by a group of young New Zealand filmmakers including Peter Jackson, Richard Taylor, and Jamie Selkirk. It later split into two specialized halves: Weta Digital (digital effects) and Weta Workshop (physical effects).
One of Weta's first projects was to provide visual effects for Peter Jackson's film Heavenly Creatures. The company went on to work digital magic on his blockbusters The Lord of the Rings trilogy and King Kong, and has also worked with other Hollywood directors, providing digital effects for box office hits like I, Robot, X-Men: The Last Stand, Eragon, Bridge to Terabithia, Fantastic Four: Rise of the Silver Surfer, The Water Horse, Jumper, The Day the Earth Stood Still, District 9, and The Lovely Bones. Its teams of digital artists are world leaders in all areas of visual effects production, including animation, motion capture (MoCap), crowd generation, modeling, compositing, and film scanning and recording.
In August 2006, Weta signed on to help James Cameron, the world-famous director, produce Avatar. Production design for the film took several years. The film had two different production designers and two separate art departments, one focused on the flora and fauna of Pandora, the other on human machines and human factors.
In September 2006, Weta began developing and combining its technologies with Cameron's to reach a new level of creative and technological excellence and deliver the film in 3D. This combination gave both Weta and Cameron great CGI(5) power throughout the process of making an epic movie like Avatar.
Most recently, James Cameron's Avatar proved Weta's CGI professionalism and won the studio an Academy Award for Best Visual Effects; its work on the film involved a new camera system and shooting on a virtual stage. Weta's reputation for creativity and delivery has spread throughout the world, keeping the company in high demand with some of the world's leading film studios, and to this day very few visual effects companies can match Weta's CGI capability.
>> http://www.wetafx.co.nz/about/
>> http://en.wikipedia.org/wiki/Avatar_(2009_film)
MOCAP & AVATAR
Each of Cameron's films introduces new technologies that change the way people make films. Avatar is the end result of a successful combination: a greatly talented director working with a world-class visual effects creator like Weta. The making of Avatar stepped beyond the limits of contemporary film-making, supported by two innovative new camera systems, the 3D Fusion camera and the Simul-Cam virtual camera, both of which serve as part of the MoCap pipeline.
3D Fusion camera: This camera is used for match-moving, where motion-captured CG characters must have their movements matched so they can be composited into an established shot. Micro-motors on the Fusion rig adjust the separation and convergence angle of the stereo Sony F950 cameras. The three standard lens functions of zoom, focus, and iris, plus interocular distance(6) and convergence, are all under software control. Beam splitters make an interocular of half an inch possible even though the cameras themselves are four inches wide; the cameras are mounted in the Fusion rig at a 90-degree angle.
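The convergence adjustment is simple geometry: for the two lens axes to cross at the subject, each camera must toe in by half of the total convergence angle. A sketch with illustrative numbers (this is only the geometry, not the Fusion rig's actual control law):

```python
import math

def convergence_angle_deg(interocular_m, subject_distance_m):
    """Total toe-in angle needed for both lens axes to cross at the
    subject distance: each camera turns inward by half of this."""
    return 2 * math.degrees(math.atan((interocular_m / 2) / subject_distance_m))

# Half-inch interocular (0.0127 m), subject 3 m away (illustrative numbers)
angle = convergence_angle_deg(0.0127, 3.0)
```

The angles involved are tiny, well under a degree at normal subject distances, which is why the rig needs micro-motors rather than manual adjustment.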
Simul-Cam virtual camera: A virtual camera paired with a series of super-high-intensity LED lights that fire in sync with the motion capture cameras, which run at 20-microsecond exposures, illuminating active LED markers on the Pace rigs. The virtual camera therefore sees the markers but not the bright live-action stage lighting, or even the sun; this eliminates a great weakness of conventional motion capture cameras.
>> http://hdusermagazine.com/wordpress/?p=12
The great benefit of using MoCap in Avatar is obvious: its capacity to capture the realistic movement of many types of subject, whether human, animal, or otherwise. Because all of the creatures, and humanoid characters like the Na'vi, are based on James Cameron's imagination, filming them in real life, as most movies film their subjects, was impossible, and it would have been almost as impossible for animators to hand-create every movement of every Na'vi character and creature using only high-speed computers and a mouse. Moreover, the realistic movement recorded by MoCap persuades the audience that, if such living things existed in the real world, they would move just as they do in the film. Finally, MoCap is flexible enough to match many different production purposes, which is why it was the best answer for making Avatar.
WHERE IS MOCAP DEVELOPMENT HEADED?
The choice of MoCap type usually depends on the motion data needed in each industry, so the development trends of each type also differ. For example, in the film industry a great deal of research goes into optical MoCap because of its flexibility and acceptable data quality, while other industries invest in other MoCap types that deliver higher-quality data but come with other restrictions. However, the future development of all MoCap types shares some similar trends:
- Users in every area of MoCap expect the technology to deliver results with great accuracy (or quality), including improved physical interaction, so that characters can touch each other and feet meet the ground solidly. This expectation directly drives the evolution of MoCap and its related technologies.
- When groups of performers are captured simultaneously, the number of polygons available to be digitized for each performer decreases, so image quality drops. Many MoCap manufacturers are trying to solve this problem and grow MoCap's ability to capture data from multiple characters.
- Capturing finer detail tends to make MoCap preview speeds drop rapidly. Improving the speed of the technology will make it more usable for consumers.
- The capture space, or 'volume', of MoCap is too narrow for big projects such as capturing a large group of performers, so increasing the volume will increase the value of the technology.
- MoCap manufacturing costs are still so high that market prices remain very expensive. If manufacturers can lower the cost, consumers and independent artists will be able to access, experiment with, and even extend the technology much faster.
>> http://web.mit.edu/comm-forum/papers/furniss.html
INDEX
(1) Inverse kinematics: the process of determining the parameters of a jointed, flexible object (such as the joints of a creature model, including humanoids) needed to achieve a desired pose. Inverse kinematics is a type of motion planning. It is also relevant to game programming and 3D animation, where a common use is making sure 3D characters connect physically to the world, such as feet landing firmly on top of terrain.
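For a planar two-segment limb (say, thigh plus shin), inverse kinematics even has a closed-form solution, which makes the idea concrete; production IK solvers handle full 3D skeletons with many joints and constraints. A sketch with hypothetical segment lengths, verified by running the resulting angles forward again:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar two-segment limb:
    return (hip, knee) angles placing the end of the chain at (x, y).
    This is one of the two mirror-image solutions."""
    cos_knee = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    cos_knee = max(-1.0, min(1.0, cos_knee))   # clamp if target is unreachable
    knee = math.acos(cos_knee)
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee

# Hypothetical 0.4 m segments; check by forward kinematics
hip, knee = two_link_ik(0.5, 0.3, 0.4, 0.4)
fx = 0.4 * math.cos(hip) + 0.4 * math.cos(hip + knee)
fy = 0.4 * math.sin(hip) + 0.4 * math.sin(hip + knee)
```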
>> http://en.wikipedia.org/wiki/Inverse_kinematics
(2) Frame buffer: It is a video output device that drives a video display from a memory buffer containing a complete frame of data.
>> http://en.wikipedia.org/wiki/Framebuffer
(3) The Gypsy: the world's first inertial gyroscopic motion capture system. The Gypsy captures movement using inertial sensors, or gyros, attached to a lycra suit, recording the simultaneous action and reaction of a performance. It uses 19 customized InertiaCube(TM) gyroscopic sensors to detect nuances of movement and optimize data output, while a global translation system improves the precision of actor positioning and can be enhanced with optional ultrasonic tracking technology. The system also allows actors to touch or hug without occlusion.
>> http://news.thomasnet.com/fullstory/528380
(4) Bandwidth: often used as a synonym for data transfer rate, the amount of data that can be carried from one point to another in a given time period (usually a second).
>> http://searchenterprisewan.techtarget.com/sDefinition/0,,sid200_gci211634,00.html
(5) CGI: short for Computer-Generated Imagery, the application of computer graphics (more specifically, 3D computer graphics) to special effects in films, television programs, commercials, simulators and simulation generally, and printed media. Video games usually use real-time computer graphics (rarely referred to as CGI), but may also include pre-rendered "cut scenes" and intro movies (or full motion videos) that are typical CGI applications.
>> http://en.wikipedia.org/wiki/Computer-generated_imagery
(6) Interocular distance: The distance between the centers of rotation of the eyeballs of an individual or between the oculars of optical instruments.
>> http://www.thefreedictionary.com/interocular+distance