Sonification is data display using any sound that isn’t speech.
If your phone has ever rung, you’ve experienced sonification. It’s simple data—“someone is ringing you” or, if you’ve got custom ringtones, perhaps “Mum is ringing you”.
Sound can communicate many kinds of data; an ambulance’s siren, for example, conveys the vehicle’s location and movement through its loudness, direction and pitch. A sound could also have a specific, learnt meaning, in which case we’d call it an ‘auditory icon’. Auditory icons are everywhere, culturally speaking: doorbells, air raid sirens, the Netflix ‘ta-dum’ sound.
The sonification on these pages is achieved through parameter mapping, which we need for more complex data. This works by taking a variable in the data and mapping it directly to a characteristic of the sound.
For example, you might take the wholesale price of electricity over time, which we used in our December 2021 article on adult social care.
You could map that price to the pitch of a pure sine wave ‘beep’ tone, with 0.2 seconds of sound for each month of data, like this.
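Here’s a minimal sketch of that mapping in SuperCollider, the tool we describe below. The numbers are placeholders rather than the real price series, and the patch is illustrative rather than our production code:

```supercollider
(
// Boot the audio server first: s.boot
// Placeholder values standing in for a monthly wholesale price series (£/MWh).
var prices = [45, 48, 52, 60, 74, 101, 143, 186, 212, 178, 155, 131];

// Map each price linearly onto pitch: the cheapest month plays at 220 Hz,
// the most expensive at 880 Hz.
var freqs = prices.linlin(prices.minItem, prices.maxItem, 220, 880);

// Play one 0.2-second sine 'beep' per month of data.
Routine({
    freqs.do({ |freq|
        {
            SinOsc.ar(freq) * 0.2 * EnvGen.kr(Env.perc(0.01, 0.19), doneAction: 2)
        }.play;
        0.2.wait;
    });
}).play;
)
```

Pinning the output to a couple of octaves (220 to 880 Hz) keeps every data point in a comfortable listening band, however wide the price swings are.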
Parameter mapping works particularly well with time series data, because sound happens over time anyway—our piece on the house price to wage ratio is a good example. If there’s an important time-based pattern in the data, we might add periodic clicks—for example, one click per year of data—to make time easier to track.
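To sketch that idea, a second routine like this one could run alongside the beeps above, sounding a click at every twelfth month (the month count here is again a placeholder):

```supercollider
(
// One click per year of monthly data, sharing the 0.2-seconds-per-month
// timing of the beep routine above.
var months = 36;   // placeholder: three years of data
Routine({
    months.do({ |i|
        if (i % 12 == 0) {
            // a short burst of band-passed noise reads as a 'click'
            {
                BPF.ar(WhiteNoise.ar(0.5), 3000, 0.3)
                    * EnvGen.kr(Env.perc(0.001, 0.03), doneAction: 2)
            }.play;
        };
        0.2.wait;
    });
}).play;
)
```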
Why do you sonify data?
There are three advantages of data sonification that are highly relevant to data journalism.
First, if we only display data using graphs, only sighted people can access it. That’s an ethical problem in itself, and it also means we’re missing out on the opinions and insights of people who are visually impaired.
Second, we notice things differently with different senses. Interacting with data using our ears as well as our eyes gives us more opportunities to notice patterns, engages people with a wider range of sensory learning styles and is much more fun. That’s why we sometimes combine sonification with animation.
Third, sound is emotionally impactful. Adding more creative options to data display gives us more ways to feel the data.
How do you make sonifications?
Almost all the sonifications here are made with SuperCollider, a free, open-source programming platform for sound synthesis. It was originally developed by James McCartney, a software engineer in the astronomy department at the University of Texas at Austin and later a CoreAudio engineer at Apple, and released in 1996. It is now developed and maintained by an active and enthusiastic community and works on macOS, Windows and Linux. Eli Fieldsteel’s YouTube channel has an excellent, free tutorial series if you’d like to learn.
We’d also like to thank the Philharmonia Orchestra for open-sourcing their sample library, which underpins many of the granular-synthesis-based sonifications here. We also use the BBC Sound Effects Archive, for example in our piece on forest growth.
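For a flavour of the granular technique, here’s a small SuperCollider sketch. It reads the example sample bundled with SuperCollider rather than a Philharmonia recording, and it shows one way of building a grain cloud rather than our exact patches:

```supercollider
(
// Granular synthesis from a sampled sound. Substitute the path to a
// (mono) Philharmonia sample to hear the instrumental timbres we use.
s.waitForBoot({
    Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav", action: { |buf|
        {
            GrainBuf.ar(
                numChannels: 2,
                trigger: Impulse.kr(20),            // 20 grains per second
                dur: 0.1,                           // each grain lasts 0.1 s
                sndbuf: buf,
                rate: 1,                            // playback rate: map data here for pitch
                pos: LFNoise1.kr(0.2).range(0, 1),  // drift through the file
                pan: LFNoise1.kr(5)                 // scatter grains across the stereo field
            ) * 0.3;
        }.play;
    });
});
)
```

Mapping a data series onto the grain `rate` or `pos` then turns the texture into a sonification.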
by Jay Richardson