Audiogames are computer games played only by sound.
What does this mean?
Playable by sound means that you could close your eyes and still play, or even complete, a game based only on auditory feedback. The information that would usually be conveyed through the screen, like the world, your character, and the different kinds of items you have to interact with, has to find its place in audio.
Even in games where graphics are the main output medium, sound plays a crucial role in describing the world. If you play a shooter, you can usually hear footsteps all around you in 3D, representing where players are in the virtual world.
Let's run with the shooter example for a moment.
By default, if you wear headphones, 3D audio is limited. Usually only the left-right position, the so-called stereo panning, and the volume are modified. There are far more sophisticated methods of audio positioning, but this is the most common one.
You can create quite an immersive experience using only stereophonic sound. It is usually possible to determine roughly where something is, at least along two coordinates. In side scrollers, these are typically horizontal distance (left or right) and vertical distance. Several audio side scrollers use this well to show you where you are in the world and where the other items are.
Even in 3D, left, right, and volume can still convey a lot. Tradeoffs have to be made, though, so at least one of the main spatial cues has to go. Usually, you won't be able to tell whether an item is behind you or in front of you. Similarly, you won't be able to tell whether an item is above you or below you.
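To make the stereo approach concrete, here is a minimal sketch of panning plus distance attenuation. The function name, the panning width, and the linear falloff are all illustrative assumptions, not from any particular engine:

```python
# A minimal sketch of stereo panning plus distance attenuation,
# the "default" positioning described above. Names and constants
# are illustrative assumptions.

def stereo_params(listener_x, source_x, source_dist, max_dist=20.0):
    """Return (pan, volume) for a source relative to the listener.

    pan:    -1.0 = fully left, 0.0 = center, +1.0 = fully right
    volume:  1.0 at the listener, falling off linearly with distance
    """
    # Horizontal offset drives the pan; clamp to the stereo field.
    pan = max(-1.0, min(1.0, (source_x - listener_x) / 10.0))
    # Simple linear falloff; real engines often use inverse-square.
    volume = max(0.0, 1.0 - source_dist / max_dist)
    return pan, volume

# A dog 5 units to the right, 10 units away:
pan, vol = stereo_params(0.0, 5.0, 10.0)  # pan = 0.5, vol = 0.5
```

Note how the front/back ambiguity falls out of the math: a source 5 units to the right produces the same pan whether it is ahead of you or behind you.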
With binaural positioning, all these problems are fixed. A virtual ear is modeled, and each sound is modified based on how it would arrive at your ears. This includes the tiny time delays between the two ears: the ear closer to the source receives the sound before the other. Your head geometry also dampens the sound, so frequency filters have to be applied as well.
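The time-delay part of this can be estimated with the classic Woodworth approximation for interaural time difference. The constants below are textbook averages, and the function is a rough sketch, not how any specific binaural engine computes it:

```python
import math

# A rough sketch of the interaural time difference (ITD) described
# above, using the Woodworth approximation for a distant source.

HEAD_RADIUS = 0.0875    # meters, average human head
SPEED_OF_SOUND = 343.0  # meters per second, in air at ~20 degrees C

def itd_seconds(azimuth_rad):
    """Delay between the two ears for a source at the given azimuth
    (0 = straight ahead, pi/2 = directly to one side)."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(azimuth_rad) + azimuth_rad)

# A source directly to one side arrives roughly 0.66 ms earlier
# at the near ear than at the far one.
delay = itd_seconds(math.pi / 2)
```

A delay of well under a millisecond sounds tiny, but it is one of the main cues your brain uses to localize sound, which is why binaural rendering has to model it.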
If this interests you further, I will probably write a blog article about this soon.
I'm still confused.
Right, don't worry. I'm just building up an understanding of how sound is usually handled in games, since most people will be familiar with this.
Audio games make do with just this medium of representation.
You hear quiet wind in the background. There's an animal on your left and a talking person on your right. In front of and behind you, you can hear several other sources. The goal is to interact with the talking person.
Since you hear the talking person in front of you and to the right, you simply turn slightly to the right until the voice is centered.
As you start to make your way forward, several issues arise.
What's around me?
Usually, items can be made to make noise, and most items have some kind of ambience. A computer usually has a fan noise that can signify it. An animal, like a dog, breathes or walks around. These you can tell apart with no problem using just sound.
However, walls and small barriers don't make sound and usually don't have ambience attached to them. How can we convey those?
There are numerous ways to handle this, which I will also get into in another blog post. It could be anything from being told via text to speech or recorded voice clips, to echoing footstep sounds, to a scanner or radar that announces obstacles.
Right, so you're told there's a barrier. You sidestep a little to get around it, orienting yourself by the background noises or the talking person.
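As a toy illustration of the radar option, here is a sketch that scans forward through a tile map and reports the distance to the first wall. The map, the tile characters, and the function are all hypothetical:

```python
# A toy sketch of the "radar" option above: scan forward through a
# tile map and report the distance to the first wall. The map layout
# and '#' convention are hypothetical.

GRID = [
    "....#",
    ".....",
    "..#..",
]

def wall_distance(x, y, dx, dy, max_range=10):
    """Steps until a wall ('#') lies in direction (dx, dy), or None."""
    for step in range(1, max_range + 1):
        cx, cy = x + dx * step, y + dy * step
        if not (0 <= cy < len(GRID) and 0 <= cx < len(GRID[0])):
            return None  # off the map, nothing to report
        if GRID[cy][cx] == "#":
            return step
    return None

# Standing at (0, 2), scanning right: a wall two tiles ahead.
dist = wall_distance(0, 2, 1, 0)  # 2
```

The game would then turn that number into audio, anything from a spoken "wall, two steps" to a ping whose pitch or timing encodes the distance.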
As you approach the person, a dialog opens, prompting you to select something to introduce yourself with. What do you do now?
Usually, people who rely on their ears to use computers have speech synthesizers installed. We need those for our screen reading programs, to hear which item is selected on the screen. It's very easy to interface with these and have our programs talk to us directly.
Either that, or you provide another form of text to speech, like Windows' SAPI or an equivalent, to read the messages. Arrow keys should move through menus, and the selected option, as well as any status information, is announced. Enter selects the option, and our dialog is executed.
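The menu pattern just described can be sketched in a few lines. The `speak()` function here is a stand-in for a real speech call (a screen reader hook or a SAPI wrapper); the class and option names are purely illustrative:

```python
# A minimal sketch of the menu pattern above: arrow keys move the
# selection, and every move announces the new option via speech.
# speak() is a placeholder for a real TTS call (e.g. a SAPI wrapper).

def speak(text):
    print(text)  # placeholder: route to a screen reader or SAPI here

class Menu:
    def __init__(self, options):
        self.options = options
        self.index = 0
        speak(self.options[self.index])  # announce the initial selection

    def move(self, delta):
        # Wrap around the ends so the menu never goes silent.
        self.index = (self.index + delta) % len(self.options)
        speak(self.options[self.index])

    def select(self):
        speak("Selected " + self.options[self.index])
        return self.options[self.index]

menu = Menu(["Greet politely", "Wave", "Walk away"])
menu.move(1)            # announces "Wave"
choice = menu.select()  # returns "Wave"
```

The key design point is that every state change speaks: a sighted player glances at the highlight, while an ear-first player needs the highlight read out the moment it moves.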
Other things, like markers or crosshair information, need to be put into audio too. Usually, we have little blips and bloops for these that players have to get familiar with when starting to play.
What about other games?
Right, so non-realtime games, games that mainly use a menu system, and anything else along those lines are also super easy to handle. Simply have your speech or sound output announce whatever you want your user's attention on: in a menu, the currently selected option, plus any critical events happening in the game, like status messages, health updates, and other information.
I hope this wasn't too deep of an introduction. If it was, no worries, I will split all of this up into blog posts and take you through the design of an audio game step by step. Eventually. But not right now. :)