Development:Audio

Revision as of 14:15, 16 December 2008

Introduction

These guidelines standardize the sound and music in order to achieve consistent quality across contributions throughout the game. The standards apply to:

  • music
  • voices for intership radio transmissions, and perhaps campaign fixers
  • other sounds

Sound Requirements

The following quality settings apply:

  • 44.1 kHz sample rate (the best that most microphones under $200 can produce)
  • Mono for voices and sound effects
  • Stereo for music
  • Ogg Vorbis format
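As an illustration of meeting these settings (assuming ffmpeg is installed; the file names are placeholders, not files from the project):

```shell
# Voice or sound effect: downmix to mono, resample to 44.1 kHz, encode Ogg Vorbis.
ffmpeg -i raw_take.wav -ac 1 -ar 44100 -c:a libvorbis voice_line.ogg

# Music: same settings, but keep two channels (stereo).
ffmpeg -i raw_track.wav -ac 2 -ar 44100 -c:a libvorbis music_track.ogg
```

oggenc (from vorbis-tools) would work equally well; ffmpeg is shown only because it handles the resampling and downmixing in one step.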

Music Requirements

Music Engine

The soundserver (integrated in VS) handles playing songs, while the Vega Strike engine handles sequencing them using the defined playlists (which are selected according to the situation by dj_lib.py).

The soundserver uses SDL for playback, which supports .mp3, .ogg, and .wav; FLAC support is doubtful (it may work on Windows). It also supports .mid, though MIDI will likely be dropped, at least temporarily, during the sound-system rewrite, since supporting MIDI with OpenAL alone is simply impossible. A replacement is planned for later, so don't despair.

.m3u lists work like Winamp playlists: one file per line, with relative paths resolved against the location of soundserver.exe (usually inside the bin folder), so entries should normally look like ../music/something. Comment lines start with '#' and must be on their own line; inline comments do not appear to work.
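A hypothetical playlist illustrating the format (the file names are invented for this example):

```
# Aera battle music - comments start with '#' and sit on their own line.
# Paths are relative to soundserver.exe, hence the leading ../music/.
../music/aera_battle1.ogg
../music/aera_battle2.ogg
../music/aera_battle_drums.ogg
```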

Playlists are stored in the user-data folder (usually under ~/.vegastrike), inside their own folder hierarchy (~/.vegastrike/playlists). Inside playlists, you may find additional grouping by situation:

  • battle
  • peace
  • threat

Inside each, you'll find a playlist for each faction. Usually, the faction is selected according to the system owner at the time battle/threat/peace begins. Inside peace you'll also find around_sig and away, which are selected according to whether or not you are near a significant unit (a dockable base or planet). All of this is configured in dj_lib.py, so if you add playlists you have to register them in dj_lib.py as well - though it is unlikely anyone will need to add playlists.
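The selection logic described above could be sketched like this. This is an illustration, not the actual dj_lib.py code, and the assumption that around_sig/away are subfolders of peace (rather than suffixed playlist names) is a guess:

```python
import os

def playlist_path(situation, faction, near_significant=False,
                  base=os.path.expanduser("~/.vegastrike/playlists")):
    """Sketch of locating a playlist by situation and faction.

    'situation' is one of 'battle', 'peace', 'threat'.  For 'peace',
    the around_sig/away split is applied depending on whether the
    player is near a dockable unit.  Directory layout is assumed.
    """
    if situation == "peace":
        sub = "around_sig" if near_significant else "away"
        return os.path.join(base, situation, sub, faction + ".m3u")
    return os.path.join(base, situation, faction + ".m3u")
```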

Inside playlists, you can (currently - until the extension commands are added) bias the frequency of some tracks simply by repeating them in the playlist. The engine keeps a record of recently played tracks and avoids playing the same track over and over.
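That behaviour - repetition as weighting, plus a recently-played filter - can be sketched as follows. This is a model of the description above, not the engine's actual code:

```python
import random

def pick_track(playlist, recent, history=3):
    """Pick a track from 'playlist' (a list that may contain repeats,
    which biases the odds toward the repeated tracks), while avoiding
    the last 'history' tracks played."""
    candidates = [t for t in playlist if t not in recent[-history:]]
    if not candidates:
        # Everything was played recently; fall back to the full list.
        candidates = playlist
    choice = random.choice(candidates)
    recent.append(choice)
    return choice
```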

(information provided by klauss)

Music Engine Future Plans

Eventually, we'll add sequencing commands to playlists, disguised as special comments (probably '#!<command>'). That will let us specify a Bayesian network for the tracks with (potentially) custom transitions: each track can have a certain subset of tracks follow it, each with an assigned probability. This is not that far in the future; it need not wait for the rewrite, as it is doable as an enhancement to the current engine.
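To make the idea concrete, here is a sketch of such a network. The '#!after <track> <next> <probability>' command syntax is invented for illustration - no such extension exists yet:

```python
import random

def parse_transitions(lines):
    """Parse hypothetical '#!after <track> <next> <prob>' playlist
    comments into {track: [(next_track, probability), ...]}."""
    table = {}
    for line in lines:
        if line.startswith("#!after"):
            _, track, nxt, prob = line.split()
            table.setdefault(track, []).append((nxt, float(prob)))
    return table

def next_track(table, current):
    """Choose a successor for 'current' weighted by its transition
    probabilities; None means no transitions are defined for it."""
    choices = table.get(current)
    if not choices:
        return None
    tracks, weights = zip(*choices)
    return random.choices(tracks, weights=weights, k=1)[0]
```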

(information provided by klauss)

Voice Acting

Voice acting provides in-game voices for ship-to-ship radio transmissions, and perhaps some day for campaign fixer conversations.

Current voice acting projects are as follows:

  • Oswald "flight tutorial" mission: completed by Turbo for Pyramid
  • Aera ship-to-ship communication: in progress by Turbo (see http://vegastrike.sourceforge.net/forums/viewtopic.php?f=29&t=13311)

Voice acting is easy to learn and fun to do. For instructions on voice acting, see Turbo's tutorial at http://www.willadsenfamily.org/us/don/ttlg/voice/va_tutorial.htm

Creating Alien Voices

A subset of voice acting is the creation of alien voices. The short tutorial below describes how Turbo created the voices for the Aera ship-to-ship communication.

This is the process I used to create alien-sounding speech without actually developing a language. If someone wants to do this for another race, please do, and start a new thread in http://vegastrike.sourceforge.net/forums/viewforum.php to get feedback on your work.

To begin, you should look at my voice acting tutorial to get some basics about sound editing tools and how to use them: http://www.willadsenfamily.org/us/don/ttlg/voice/va_tutorial.htm

There has been a lot of discussion about the automatic translator that does the "voice-over" of the alien speech. I used Microsoft Narrator's "Michael." MS Narrator's voice meets the criteria for a cold, soulless machine voice, although he can be a bit hard to understand. If you have a cable that lets you bridge your audio output to your microphone input, that should give the best quality once you experiment with volume settings. If not, put the microphone near the speaker and open all the programs you need (the script text the translator will read, and Audacity or other recording software). Start MS Narrator, start recording, then click on the text window so Narrator reads it. When it reaches the end, stop recording and trim the extra bits at the front and back that were not part of the text.

Next, remove any noise and clicks using Audacity's "Remove Noise" and "Click and Pop Removal" tools. Then, separate each line into a separate file for editing. For each of the translator's lines, normalize the translator speech to full volume, that is, amplify quieter parts for a consistent (but not constant) volume using the "Amplify" tool. Make any final edits, such as forcing silence on the parts between words to remove any lingering noise. Save each line into a separate file.
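The "amplify to full volume" step above is peak normalization. As a model of what Audacity's Amplify tool does with its default settings (this is an illustration on raw sample values, not Audacity's code):

```python
def normalize(samples, peak=1.0):
    """Scale all samples by one gain factor so the loudest sample
    reaches 'peak' - consistent, but not constant, volume."""
    loudest = max(abs(s) for s in samples)
    if loudest == 0:
        return list(samples)  # pure silence: nothing to amplify
    gain = peak / loudest
    return [s * gain for s in samples]
```

Because a single gain is applied to the whole selection, the dynamics within the line are preserved; only the overall level changes.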

The Aera speech is based on humpback whale songs I found on the Internet. I used whales because they came to mind when I read that Aera have vocal resonators (see Artstyle_guide:Aeran) in their heads, instead of vocal cords in their throats. Other alien races' speech could be created with the techniques described below, using different animal sounds or your own voice as the basis for each species.

I modified the raw whale sounds by removing noise and clicks, then changing the tempo to 350% and shifting the pitch down an octave. Audacity makes this easy: a tempo change in Audacity does not change the pitch, so you avoid the "Alvin and the Chipmunks" effect. Then I saved the individual bits of modified whale sound into separate files so I had a variety of tones, growls, and grunts to choose from. This took about 2 hours.
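The reason independent tempo and pitch tools matter is that naive resampling couples the two: playing a clip at double speed also raises its pitch a full octave. A small pure-Python demonstration (the tone and rates are arbitrary test values):

```python
import math

def sine(freq, seconds=1.0, rate=8000):
    """Generate a sine tone as a list of float samples."""
    n = int(seconds * rate)
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

def zero_crossings(samples):
    """Count negative-to-nonnegative transitions (one per cycle)."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)

tone = sine(100.0)       # 100 Hz test tone, 1 second at 8 kHz
sped_up = tone[::2]      # naive 2x speed-up: drop every other sample
# The sped-up clip packs the same number of cycles into half the
# duration, so at the original playback rate its pitch is an octave
# higher - the chipmunk effect Audacity's tempo tool avoids.
```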

Next, open each translator line one at a time. Mute the translator's speech so it stays silent while you work with the Aera speech, then import the Aera speech bits into a separate track of the same file. Generally I used tones to represent positive feelings, growls for negative feelings, and grunts as modifiers, but different aliens may do it differently. Change some individual parts to add variety: shift the pitch here or there, reverse a few parts if it sounds good, and add silence. It is best to do this with the (muted) translator speech visible on screen so you can see how the patterns line up, and it is easy to select a few seconds and play them to hear the alien speech without the voice over the top. For silent portions of the translator speech, place a particularly interesting part of the alien speech there. If the translator is difficult to understand for a particular word, make the alien speech silent at that moment; there is no reason an alien language would have the same tempo as its translation, after all. Keep the translator and alien tracks approximately the same length, because both will be mixed into the final stereo OGG anyhow.

Finally, deamplify the Aera speech to about 25% of the volume of the translator voice, by selecting that track and using the Amplify command. Then unmute the translator and listen to the alien voice and translator together. Make any final edits to the alien speech. If you can play the result with your eyes closed and understand the translator, while still being aware of the alien voice behind it, you are done with that line. This last part of the process, assembling the alien speech parts and final editing, took about 4 hours for the 37 lines in the source document.
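The final mixdown amounts to summing the two tracks with the alien speech scaled to roughly a quarter of the translator's level. As a sketch on raw samples (an illustration of the ratio, not how Audacity mixes internally):

```python
def mix(translator, alien, alien_gain=0.25):
    """Mix equal-length sample lists, with the alien speech at about
    25% of the translator's volume, as described above."""
    return [t + alien_gain * a for t, a in zip(translator, alien)]
```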