The Application of Sound in Movies and Games
Approaches to Game Audio
Modern computer games are undeniably becoming more realistic and cinematic. As graphics improve and game consoles become more powerful, the distinctive line between films and computer games that was drawn in the 1970s now hardly exists. Graphics play a big part in films and modern computer games, but an even bigger part of film, in my opinion, is its music and sound in general. The music helps create tension, emphasises dramatic moments in the plot and gives the film a dimension beyond what is shown on screen. These techniques of music, sound and sound effects in film have been taken up and used in the production of the modern computer game.
Although the music and sound in film and in the modern computer game may follow the same cinematic practice, the way they are implemented in computer games is very different. This is because of the interactive nature of the gaming environment. Throughout this assignment I am going to look at the kinds of strategies game audio developers have created, and adopted from film, to produce effective music, dialogue and effects in computer games. We will first look at how film music and sound are made and the strategies used to produce them, then at the strategies and techniques used for music in modern games, before discussing and evaluating the different and similar techniques that computer games have adopted.
Sound in film
Music and sound have always been a big part of film. Even before technology was advanced enough to play back dialogue or music, silent films would normally be accompanied by a pianist or even a whole orchestra. But why is sound so important to film? Composer Aaron Copland describes five purposes of film music, which we will look at in detail:
Creating a convincing atmosphere of time and place;
Underlining psychological refinements – the unspoken thoughts of the character;
Serving as a kind of neutral background filler;
Building a sense of continuity;
Underpinning the theatrical build-up of a scene, and rounding it off with a sense of finality. (Copland 1975: 154-5)
Creating a convincing atmosphere of time and place
Composers can evoke a time and place by capturing the 'musical colour' of a setting, creating the feeling that the music belongs to that location or period. They can achieve this by using instruments of that era or location: bagpipes would suggest the scene is set in Scotland, while a string quartet could place a scene in the 1800s. Although this is an effective way of locating the film musically, sometimes the composer or director may not want to 'use "authentic" Chinese music but just want to achieve a Chinese "flavour" or "colour" by using a pentatonic scale with Western instruments' (Source 1). This can create a better effect overall, hinting at where the scene is set musically without using unwanted instruments.
Underlining psychological refinements – the unspoken thoughts of the character
In some instances, music can portray a psychological element much better than dialogue can. This type of film music is more effective if the composer reads over the script at the creation stage, making suggestions on where he would like the music to be. 'Far too often, however, this possibility is passed over and music is not allowed to speak' (B source 1). Music can also give the viewer a better understanding of what is going on inside a character's head: not specific detail, but an overview. Leonard Rosenman argues that 'The musical contribution to the film should be ideally to create a supra-reality, a condition wherein the elements of literary naturalism are perceptually altered. In this way the audience can have the insight into different aspects of behaviour and motivation not possible under the aegis of naturalism.' This shows that music can give a film another dimension, bringing out emotions and insights into characters where a film without sound cannot.
Serving as a kind of neutral background filler
Many composers feel this is one of the hardest things to do when composing a piece of film music. Roy A. Prendergast points out that creating 'background' music "calls for him to be at his most subordinated. At times one of the functions of film music is to do nothing more than be there". This shows that sometimes music is simply needed to blend into the background, "as though it would exist as sound rather than as 'constructed' music".
Building a sense of continuity
What is meant by continuity in film is that music can hold a film together: "Music can tie together a visual medium that is, by its very nature, continually in danger of falling apart" (B source 1). The editor of the film is probably the most aware of this particular attribute of music in films. Music can also bring a film together as a whole by using a "unifying musical idea".
Underpinning the theatrical build-up of a scene, and rounding it off with a sense of finality
When used correctly, music can add a huge amount of intensity and intimacy to a scene; it can "evoke a gut reaction unobtainable in any other way" (B source 1). But music is not a miracle worker. Many good composers have been asked to create music for a weak scene in the hope that it will make the scene stronger, yet if the scene is weak it is near impossible to strengthen it, and sadly it is normally the composer whom the critics blame.
Three forms of music
The sound in cinema takes on three different forms: speech, music and noise.
Speech
The dialogue in films is "spoken by the actors or narration heard as a voice over" (Source 2). A major component of speech in films is automated dialogue replacement (ADR). When recording on set there is normally background noise from traffic or planes flying overhead, which drowns out the actors' lines. Because of this, the dialogue is re-recorded in a studio. Recording dialogue in this way also makes it possible for actors to deliver their lines more dramatically than when recording on location. An example of this is that "the screaming by the actor in Jurassic Park actually was recorded in the sound studios in New York and Los Angeles" (Source 2). In the studio an ADR expert matches the new recording to the picture, making sure the two are properly in sync so that the audience is not aware the dialogue was recorded in a studio.
Music
Music is a very powerful component of a film in many different ways. For example, in musicals such as 'The Sound of Music' (1965) the music tries to captivate the audience, creating an "emotional response when music and words are linked in a scene" (B source 2). Music also helps explain and move the plot along instrumentally. Themes are very commonly used for individual characters; in 'Star Wars', for example, Darth Vader's theme is played whenever he appears on screen.
There are two different types of film music. First there is source music, which is a realistic part of the scene: street musicians, a rock band playing in the background and so on. The second type is underscoring, "music motivated by dramatic considerations" (B source 3). Composers are normally not needed until a rough cut of the film is made, or even later. For films that have to be shot in time with music that does not exist yet, "a temporary music track may be played on set" (B source 3).
Many films without music feel very empty; this is why films are normally 'pretracked'. The editor cuts scenes to stock or classical music so that "the tempo and phrasing lend structure to the footage". Composers then have the task of writing music that is similar but different. In scoring, one of the first steps is deciding which scenes will have music; this is called 'spotting'. The composer has to time the start and end of each scene with a stopwatch. The music editor then creates a timing sheet or breakdown, which converts the footage of the film into seconds, or an even smaller time scale. He then details the action in each cue, making it easier for the composer to write any way he wants. "The closest synchronisation of music to action is called 'Mickey-mousing'" (B Source 3).
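As an illustration of how such a breakdown converts footage into time, here is a minimal sketch. It assumes standard 35mm film, which has 16 frames per foot and is projected at 24 frames per second, so one foot of film lasts two-thirds of a second:

    # 35mm film: 16 frames per foot, projected at 24 frames per second.
    FRAMES_PER_FOOT = 16
    FRAMES_PER_SECOND = 24

    def footage_to_seconds(feet, frames=0):
        """Convert a footage count (feet + frames) into seconds of screen time."""
        total_frames = feet * FRAMES_PER_FOOT + frames
        return total_frames / FRAMES_PER_SECOND

    # A cue starting at 90 feet 8 frames begins about 60.33 seconds into the reel.
    print(round(footage_to_seconds(90, 8), 2))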
There are two main types of cue in films. Long cues are lengthy, normally classical, pieces of composition; an example is the scene from 'Goldfinger' in which planes drop sleeping gas onto a military base. The scene lasts over two minutes, and the same composition is played throughout.
Short cues are normally just a few bars of music and act as punctuation. For example, in many horror films the moment when someone is about to be stabbed with a knife is usually accompanied by strings playing a high sustained chord, giving a sense of anticipation and making the audience uncomfortable.
At the recording stage of the music, the conductor of the piece is alerted by a 'streamer', "a three-foot scratch in the film at the end of which a punched hole producing a light on the screen that acts as the starting cue". For the additional short cues, similar streamers alert the conductor. Throughout the recording, the conductor will normally wear a set of headphones through which he can listen to the dialogue or a click track.
Noise
Noise can be any other sound heard within the film, for example footsteps or birds. Usually the quality of sound recorded on set is poor, which means that special sound effects (SFX) have to be produced in a studio. To produce these, sound technicians called Foley artists record different sounds using props in a studio. For example, to create a horse walking, a Foley artist can tap two halves of a coconut on a table or the floor, or use a metal sheet to create the sound of thunder. Many of these sound effects can be bought digitally from companies with sound libraries, at a lower cost than hiring Foley artists, but many producers of major motion pictures will spend the extra money on Foley artists to create custom sounds for the film.
Timing
There are two different timings within films: viewing time and story time. Viewing time is the physical length of the film. This normally differs from story time, which is the time that passes within the film itself. A film may cover several years of its characters' lives, yet most films only take around a couple of hours to watch. "Viewing time becomes an instrument of the plot's manipulation of story time." (B Source 3)
Matching a sound or dialogue to the film's images is called synchronisation. When dialogue or sound is synchronised properly, the audience will not normally be able to tell that everything has been overdubbed or that sound effects have been added.
Sound in games
The process of producing game audio resembles the production of film audio in many ways. Game audio follows the same 'five purposes of film music' set out by Aaron Copland, uses similar recording techniques for live sound and Foley, and uses much of the same software and recording equipment. "A lot of game play, i.e. the battles or the big set pieces, essentially has to be scored in some form or fashion. So that's all similar to a film." (B source 4)
Although these traits are very similar between game audio and film audio, there are distinctive differences in the processes. Most film audio work is done at the post-production stage, which takes place "after the film has been edited and the visuals have been locked (the final version set)" (B source 4). A large amount of time is spent mixing and balancing the sounds at this stage, and this is one of the most significant differences between film and game sound. Post-production as it exists in film does not generally exist in game audio, because the timings are variable and the music needs to adapt to the gameplay: players may vary greatly in the length of time they need to complete a level, and they can often complete tasks in different ways. Developers have overcome this hurdle by creating what some call adaptive audio and others interactive audio.
To find out how game audio achieves this adaptive or interactive behaviour, we are going to look at the three production stages. The three stems of audio (music, dialogue and sound effects) can be kept in the same file or in separate files, depending on the individual's or company's choice.
Pre-production
The first step in pre-production for game audio is "the creation of an audio design document". An audio design document contains the details of the design and implementation of the game's audio. At this early stage the audio team does not have much information; they may only have storyboards and character sketches from which to develop the document. But with this information they can start at an early point in production, making sure that the audio plays a significant role in the game. The first task is to determine what type of game it is, establishing the genre and theme for the audio. One technique for finding the right genre is to create a temp track, which means placing pre-existing music where the final composition should be. This gives the composer an idea of where to work from.
The second step is deciding how the sound will interact with the gameplay. Rules will normally be laid out by the game designer describing what role the sound design will have within the game. The next stage is to find out which parts of the game should have ambient sound and music. This is known as 'spotting', and it involves "defining cue point entrances, exits, play-ins/play-outs, and game state changes, as well as deciding if the game's variables (such as player's health, surface properties, and so on) will be used to change sound parameters" (B source 4). To help the music fit the context of the game, a music cue list can be created. This breaks the storyboard or script down into segments and chapters, creating "an individual map for the game, as well as for each individual level".
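As a minimal sketch of the idea that game variables can drive sound parameters, the example below maps player health to a 'danger' music layer, a muffling filter and a heartbeat effect. The variable names, thresholds and parameters are invented for illustration and are not taken from any particular engine:

    def sound_params_for_health(health):
        """Map a game variable (player health, 0-100) to illustrative sound parameters."""
        danger_layer_volume = max(0.0, min(1.0, (50 - health) / 50))  # fades in below 50 health
        lowpass_cutoff_hz = 20000 if health > 25 else 2000            # muffle the mix when near death
        heartbeat_on = health < 15                                    # add a heartbeat effect when critical
        return {
            "danger_layer_volume": danger_layer_volume,
            "lowpass_cutoff_hz": lowpass_cutoff_hz,
            "heartbeat_on": heartbeat_on,
        }

    print(sound_params_for_health(80))  # calm: no danger layer, full bandwidth
    print(sound_params_for_health(10))  # critical: loud danger layer, muffled, heartbeat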
At these first stages the audio team will need to look for 'spots' for environments, action, pace, tension and release, objects, characters and personalities, and so on. Another important step is spotting the game in terms of emotion; "within a game there is normally movement or a certain rhythm in terms of emotional peaks and valleys" (B source 4). To help the sound designer see which sounds may interfere with one another and which need emphasis, an emotion map can be created showing "tension and release points". At this stage level descriptions are broken down into different sound types, for example ambient noises, weapon sounds, Foley and so on. These sound types are collected in an asset list, which can be used at a later stage to "track recording in order to reproduce sounds at a later stage, if necessary".
Production
At this stage the production of game sound normally takes place in several different locations. As in film sound production, sound libraries are used, and these effects are normally layered and manipulated to create the right result. Bigger companies may have a dedicated Foley studio, where Foley artists are hired to create original sound effects for the game. The same Foley techniques used in film are used for game audio.
One of the most useful techniques being used in game audio is digital signal processing (DSP) in real time. Being able to do this saves a lot of recording time for the sound developers. Before real-time DSP, effects on certain sounds had to be recorded individually; for example, "to get the effect of footsteps to change when walking from, say, a stone path into a cave, the effects would have to be pre-recorded onto the footsteps file" (B Source 4). Now only one sound sample of footsteps needs to be recorded, because the DSP filters can be set for each location, making the audio respond to the physics and graphics engines and creating more realistic-sounding effects in real time while the game is being played.
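A minimal sketch of this idea is shown below: one dry footstep recording is reused everywhere, and a per-surface filter supplies the 'stone path' versus 'cave' character at run time. The surface names, filter settings and the simple one-pole low-pass are invented for illustration and are not from any specific middleware:

    import math

    # Illustrative per-surface DSP settings: output gain and low-pass cutoff (Hz).
    SURFACE_DSP = {
        "stone": {"gain": 1.0, "cutoff_hz": 8000.0},
        "cave":  {"gain": 0.8, "cutoff_hz": 3000.0},
        "grass": {"gain": 0.6, "cutoff_hz": 5000.0},
    }

    def one_pole_lowpass(samples, cutoff_hz, sample_rate=48000):
        """Apply a very simple one-pole low-pass filter to a list of samples."""
        coeff = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
        out, prev = [], 0.0
        for s in samples:
            prev = (1.0 - coeff) * s + coeff * prev
            out.append(prev)
        return out

    def play_footstep(dry_samples, surface):
        """Return the same dry footstep, processed for whatever surface the player is on."""
        settings = SURFACE_DSP.get(surface, SURFACE_DSP["stone"])
        filtered = one_pole_lowpass(dry_samples, settings["cutoff_hz"])
        return [s * settings["gain"] for s in filtered]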
The final part of the audio production process is "the integration of the music, sound effects, and dialogue". The integration of audio into the game decides how the audio will be triggered within the game, and it also controls which parts of the audio respond to changes in the game's state or parameters. For example, "Music or ambience tracks may be triggered by location, by game state, by time-ins or time-outs, by players, or by various game events." A simple trigger table of this kind is sketched below.
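This minimal sketch assumes an invented set of event and cue names; a real game would route these through its audio middleware rather than a plain dictionary:

    # Illustrative trigger table: which audio action fires for which game event.
    TRIGGERS = {
        ("enter_location", "swamp"): ("play_ambience", "swamp_loop"),
        ("game_state", "combat"):    ("play_music", "combat_theme"),
        ("game_state", "explore"):   ("play_music", "explore_theme"),
        ("event", "boss_defeated"):  ("play_stinger", "victory_fanfare"),
    }

    def on_game_event(kind, value):
        """Look up and 'perform' the audio action tied to a game event."""
        action = TRIGGERS.get((kind, value))
        if action:
            verb, cue = action
            print(f"{verb}: {cue}")  # stand-in for the engine's actual audio call

    on_game_event("game_state", "combat")    # play_music: combat_theme
    on_game_event("event", "boss_defeated")  # play_stinger: victory_fanfare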
For the composer's music to work with the game, it must be able to integrate into it. Cutting the music into chunks, splits or loops can help create a much more dynamic score: an hour or so of music can be stretched out to many hours by cutting it up and looping it. To make it easier to cut, the pieces normally have to be quite rhythmic, and the composer will usually keep the music in the same key throughout, so that sections can be cut and looped without the audience realising.
The music composers for 'Red Dead Redemption' worked by creating many riffs and small compositions, each lasting less than a minute and all in the same key. After mastering each one individually, they put them into the game's music engine, where "segments of the musical material are constantly chosen at random and put together to form a piece of music" (B source 5).
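A minimal sketch of this kind of segment-based music engine is given below. The segment names and durations are invented for illustration, and a real engine would stream and crossfade audio files rather than print strings:

    import random

    # Short same-key segments, each under a minute (names and lengths invented).
    SEGMENTS = {
        "amin_guitar_riff":    28.0,  # seconds
        "amin_strings_pad":    45.0,
        "amin_harmonica_lick": 19.0,
        "amin_percussion_bed": 52.0,
    }

    def build_cue(target_seconds, rng=random):
        """Randomly chain pre-mastered segments until the cue reaches the target length."""
        playlist, total = [], 0.0
        while total < target_seconds:
            name = rng.choice(list(SEGMENTS))
            playlist.append(name)
            total += SEGMENTS[name]
        return playlist, total

    # Example: stretch a handful of sub-minute segments into roughly five minutes of music.
    cue, length = build_cue(300)
    print(round(length, 1), cue)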
Music can not only be cut; elements of it, such as DSP effects, tempo and instrumentation, can also be changed in real time in the game engine. Software such as Wwise and FMOD is increasingly being used for this kind of editing, because it lowers cost and production time and makes it easier to integrate a more dynamic score into the game.
Post-production
At this stage of production the main task is mixing the audio. The mixer has to find anything within the mix that sounds unnatural, is too repetitive or is unnaturally imbalanced. In current games the different stems of audio compete with each other because they all share the same 'aural space', creating a great risk of sounds sitting in the same frequency range and masking each other. This is also problematic in film, but the unpredictability of when and where different sounds will be heard in a game makes the mixing an even more difficult task. One strategy mixers have used to tackle this obstacle is prioritising sounds in real time: ducking the effects and music when dialogue occurs is an effective way of making sure essential information is passed on to the player. Another effective technique is making certain frequencies in the music quieter; this makes room for the dialogue to be heard while the music still keeps its presence.
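A minimal sketch of dialogue-driven ducking is shown below. The amount of gain reduction and the per-update step are invented for illustration, and real mixers often duck only the frequency bands that clash with speech rather than the whole music bus:

    def duck_music(music_gain_db, dialogue_active, duck_db=-9.0, step_db=1.5):
        """Move the music bus gain towards the ducked level while dialogue plays,
        and back towards 0 dB when it stops, a little on each update."""
        target = duck_db if dialogue_active else 0.0
        if music_gain_db > target:
            return max(target, music_gain_db - step_db)
        return min(target, music_gain_db + step_db)

    # Example: the gain falls towards -9 dB while a line is spoken, then recovers.
    gain = 0.0
    for frame in range(10):
        gain = duck_music(gain, dialogue_active=(frame < 6))
        print(frame, round(gain, 1))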
Strategies adopted by game audio developers
To make it easier to see the similarities and differences between film and game audio, they have been split into two sections below.
Similarities
Film and game audio use very similar, if not identical, recording techniques. Both have to overdub dialogue and ambience; game audio, however, is more reliant on overdubbing, because there is no original location sound and the ambience and dialogue have to be created from scratch. Both follow similar rules about what they want from the audio and how they want their audience to feel. The music is also very similar: both film and game productions often have scores arranged by professional composers and a dedicated team that deals with all the audio's needs.
Differences
Evaluation
Conclusion
Overall thoughts