Sonification is the use of non-speech sound in an intentional, systematic way to represent information (Walker & Nees, 2011).
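
To make that definition concrete, here is a minimal sketch of my own (not from Walker & Nees) that maps a series of data values to pitches and writes the result to a WAV file.  It uses only the Python standard library, and every name and parameter choice in it is illustrative.

```python
# A minimal sonification sketch of my own: map each value in a data series to
# a pitch, so rising values are heard as rising tones. Standard library only;
# all names and parameter choices here are illustrative.
import math
import struct
import wave

SAMPLE_RATE = 44100                         # audio samples per second

def value_to_frequency(value, lo, hi, f_min=220.0, f_max=880.0):
    """Linearly map a data value onto a two-octave pitch range (A3 to A5)."""
    span = (value - lo) / (hi - lo) if hi != lo else 0.5
    return f_min + span * (f_max - f_min)

def sonify(series, path="series.wav", note_seconds=0.25):
    """Write a mono WAV in which each data point becomes one short tone."""
    lo, hi = min(series), max(series)
    frames = bytearray()
    for value in series:
        freq = value_to_frequency(value, lo, hi)
        for n in range(int(SAMPLE_RATE * note_seconds)):
            sample = 0.8 * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)                 # mono
        wav.setsampwidth(2)                 # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

# A quadratic series is heard as a falling-then-rising pitch contour.
sonify([x * x for x in range(-10, 11)])
```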

The fascinating Twenty Thousand Hertz podcast episode Video(less) Games describes options for games composed mostly or entirely of sound.  Gamers and developers discuss their motivations for contributing and the experience of play.  At about 15:09, Steve Saylor, a blind video gamer and game accessibility consultant, describes how he developed a rich series of audio cues that can be enabled.  These cues tell players about environmental features and action in the game.  Listen to hear the experience with the audio layer on and off.

Games composed mostly or entirely of sound are not new: the Twenty Thousand Hertz episode describes a text adventure game called Zork II that used a text-to-speech engine in the early 1980s.  But the idea of developing a convention for audio cues within a game, or even across multiple games, reminded me of the sonification of math equations I first saw in the Complex Images for All Learners accessibility guide from Portland Community College.  The DIAGRAM Center has a wonderful article on sonification with audio examples that can be played back at different speeds.  Sonification is also not new.  But the provision of multimodal data representations does not seem to be widespread in higher education, at least not in my experience.
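
As a rough illustration of what an equation sonification might involve (my own toy, not the DIAGRAM Center's or PCC's method), one can sample a function over its domain and hand the samples to the sonify helper sketched above.  Rendering the same contour with a longer tone per point mimics the slower playback speeds those audio examples offer.

```python
# Illustrative equation sonification, reusing sonify() from the sketch above:
# sample y = sin(x) over one period, then render it at two tempos. A longer
# tone per data point plays the same pitch contour more slowly.
import math

xs = [i / 10 for i in range(0, 63)]        # x from 0.0 to 6.2, roughly 2*pi
curve = [math.sin(x) for x in xs]          # y = sin(x): rise, fall, return

sonify(curve, path="sine_fast.wav", note_seconds=0.1)   # quick overview
sonify(curve, path="sine_slow.wav", note_seconds=0.4)   # slowed for detail
```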

Similar technologies are also being piloted in traditional sports such as tennis.  The New York Times published a story by Amanda Morris describing a new technology called Action Audio that aims to make sports accessible to people who are blind or have low vision.  Action Audio converts data, such as the feeds from the 10 to 12 cameras on an Australian Open tennis court, into 3-D sound in less than a second, allowing that audio to be broadcast alongside live radio commentary.  You can hear an Action Audio sample of an Australian Open tennis match.  To get the full benefit, use speakers or headphones with both left and right channels.
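
Action Audio's real pipeline is proprietary, so the following is only a loose, hypothetical illustration of the core idea: panning a tone between left and right channels according to the ball's horizontal position, which is the simplest form of spatialized sound.  Full 3-D audio would involve far more sophisticated rendering, but even two channels convey why stereo playback matters.

```python
# Toy spatial audio, loosely inspired by the Action Audio idea (not its real
# pipeline): pan a tone between left and right channels based on the ball's
# horizontal position, so listeners can track a rally by ear.
import math
import struct
import wave

SAMPLE_RATE = 44100
TONE_HZ = 440.0
STEP_SECONDS = 0.2             # one tone per tracked ball position

def pan_gains(x):
    """Equal-power pan: x = -1.0 (far left) .. +1.0 (far right)."""
    angle = (x + 1) * math.pi / 4             # maps to 0 .. pi/2
    return math.cos(angle), math.sin(angle)   # (left gain, right gain)

def spatialize(positions, path="rally.wav"):
    frames = bytearray()
    for x in positions:
        left, right = pan_gains(x)
        for n in range(int(SAMPLE_RATE * STEP_SECONDS)):
            s = 0.8 * math.sin(2 * math.pi * TONE_HZ * n / SAMPLE_RATE)
            frames += struct.pack("<hh", int(s * left * 32767),
                                         int(s * right * 32767))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(2)    # stereo: the left/right cue carries position
        wav.setsampwidth(2)    # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

# A ball crossing the court from left to right and back again.
spatialize([-1.0, -0.5, 0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0])
```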

These innovations make me think about the materials I create or make available to my students.  What would educators need to know to become proficient in the use, evaluation, and creation of multimodal data representations?  In the case of sonification, it might be enough to know where to find high-quality sonifications that have already been created.  It might require training in how to design and produce sonifications.  In terms of design, how can our existing base of research and theory guide our decisions?  These are fascinating questions that I would like to explore more thoroughly and bring back to the courses I teach.