Exploring the sound of data
Conversations with Data: #80
Welcome to our latest Conversations with Data newsletter.
This week's episode features data journalists Duncan Geere and Miriam Quick, the co-hosts of Loud Numbers, the new data sonification podcast. The pair speak to us about what sonification means for data storytelling and what kind of stories work best for this medium.
What we asked
Talk to us about Loud Numbers, the data sonification podcast you launched.
Miriam: Loud Numbers is a data sonification podcast created by Duncan and me. Data sonification is the process of turning data into sound, and we take it a step further by turning those sounds into music. In each episode, we introduce a data story, explain how we sonified it, and then play the sonification we’ve created.
Duncan: To us, sonification is the art and the science of turning data into sound. There are loads of different ways to do that, from the simple pinging of a smartphone notification all the way up to a complex eight-part symphony where the data governs how all the different melodic lines interact with each other.
Talk to us about your career and how it intersects with sonification.
Miriam: I've been a data journalist and a researcher since about 2011, but I started working with data and data visualisation through music. I did a PhD in Musicology at King's College London and used quite a lot of data in my doctorate. I've always been involved in music and data. For several years, I've been keen to work on a project that turns data into music in a systematic and musically exciting way.
Duncan: I started out in tech writing. I later moved into data journalism and began covering science and environmental topics. As I began to write about more complicated subjects, I realised that I wanted more tools at my disposal than just words. In my 20s, I did some DJing and was in a couple of bands. I thought it would be really cool to combine these things. I'd come across a couple of inspiring examples of sonification, but I wondered why there wasn't a sonification podcast out there. After waiting to see if anyone would develop one, I realised that I had to just do it myself.
What other sonification work had you done prior to this?
Miriam: I had some experience with sonification before. I'd done a project called Sleep Songs, a collaboration between myself and the information designer Stefanie Posavec. We and our husbands measured our breathing rates while asleep and took that data. Stefanie turned it into a visual artwork. I turned it into two pieces of music where the rhythm of the inner parts corresponds to the changes in our breathing over the eight hours that we were asleep.
How much music theory is required for data sonification?
Duncan: My musical knowledge is limited. It depends on what kind of sonification you want to do. If you want to sonify something that is very musically complex, then you'll definitely benefit from having some theoretical background. But you do not necessarily need it. There's a lot that can be done with tempo alone. For example, you can show how frequently events happen throughout history, or simply trigger specific audio clips at different volumes at different times. There's a fantastic sonification piece called "Egypt Building Collapses" by Tactical Technology Collective. You don't need any music theory or even code for that. You just line up the sound files on your timeline and then play them at different volumes.
Let's talk about your Tasting Notes episode where you sonify the experience of beer tasting.
Duncan: For this episode, we had 10 different beers, which made 10 different pieces of music, one per beer. Each one has 10 different parameters associated with it around taste, aroma and look. We got these numbers from Malin Derwinger, a professional beer taster in Sweden. We asked her to taste 10 different beers and give us her scores for them. The louder the sound associated with each parameter, the stronger that taste or that aroma.
Miriam: We're not creating these with the intent that people are going to be able to read specific numbers out of them. We wanted there to be an intuitive connection between the sounds we use to represent the tastes and the tastes themselves. For example, the fizziness has an upward sweep because of its bubbly sound, and the sweetness is a pleasant, harmonious chord because sweetness is a pleasant sensation. Bitterness is hard-edged. For malts, we used a guitar chord because we associate beer with being in a bar, listening to a band.
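The "louder means stronger" mapping Duncan describes can be sketched in a few lines of plain Ruby. This is an illustrative sketch only, not the Loud Numbers code, and the scores and parameter names below are hypothetical, not Malin Derwinger's actual data:

```ruby
# Hypothetical taste scores for one beer, on an assumed 0-10 scale.
scores = { "fizziness" => 7, "sweetness" => 3, "bitterness" => 9, "malt" => 5 }

# Map each score linearly to an amplitude between 0.0 (silent) and
# 1.0 (full volume), so a stronger taste plays its sound louder.
def score_to_amp(score, max_score: 10.0)
  (score / max_score).clamp(0.0, 1.0)
end

amps = scores.transform_values { |s| score_to_amp(s) }
# "bitterness", the strongest score, gets the loudest sound.
```

Each amplitude would then be attached to the instrument chosen for that parameter (the upward sweep, the harmonious chord, and so on).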
What data stories work best for sonification?
Miriam: The data stories that work best are those based on time-series data. It works particularly well for showing a clear and simple trend, for instance, something that gets larger or smaller over time. All but one of our Loud Numbers sonifications use time-series data; the exception is the beer episode. Stories about speed and pace work well too. There's a good New York Times video from 2012 about Olympic men's 100 metre finish times. It uses sonification to play a note every time one of the athletes crosses an imaginary finish line. It aims to show how narrow the margins are in sprint races from 1896 to 2012.
Duncan: There's also a fantastic New York Times article from 2017 about the rate of fire of automatic weapons. This is obviously a highly emotionally charged subject. Instead of using a gun sound, which I think would be quite tasteless, they use a very minimal sound to signify the speed at which these different weapons fire. I thought that was very interesting because one of the powers of sonification is that it can carry emotional weight in a way that a standard bar chart doesn't. You can create a sonification so loud that it hurts. You can never make a bar chart where the bar is so long that it hurts. Sound reaches people with an emotional intensity that is much harder to achieve with traditional visuals.
What other fields are using sonification?
Duncan: There are people working with sonification in journalism, but also in science, particularly earth sciences and astronomy. There are loads of astronomers who work on education, including partially sighted astronomers who use sonification to understand trends in complex datasets. Artists are also creating installations that involve sonification. Others using sonification are from the worlds of traditional music and computer-based music.
What is your technical process and methodology for sonifying data?
Duncan: For most of our sonifications, the tech stack is pretty simple. We use Google Sheets to get the data together and analyse it. Then we load that data into a piece of software called Sonic Pi. That's where it becomes sound. Sonic Pi is a coding language based on Ruby that allows you to automate the most tedious parts of the sonification process, like figuring out which data values to turn into which sound values.
So if a data value is eight, what does that mean in terms of volume, pitch or whatever else you're deciding? That process is called parameter mapping, and it's a really important part of sonification. Then we run the data through the code and it generates a series of notes or volumes. What comes out of it is more or less a complete piece of music or, worst case, a collection of sounds. Then we polish it up and turn it into an actual track in Logic Pro, which is a digital audio workstation. The sonification work happens in Sonic Pi, but Logic Pro is where it becomes music.
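The parameter-mapping step Duncan describes can be sketched in plain Ruby (the language Sonic Pi is built on). This is a minimal illustration under assumed ranges, not the actual Loud Numbers code: it linearly rescales raw data values into MIDI note numbers so that the smallest value becomes the lowest note and the largest becomes the highest.

```ruby
# Linearly rescale a data value from its source range into a target
# MIDI note range (48 = C3, 84 = C6 are assumed defaults).
def map_to_note(value, data_min:, data_max:, note_min: 48, note_max: 84)
  fraction = (value - data_min).to_f / (data_max - data_min)
  (note_min + fraction * (note_max - note_min)).round
end

data = [2, 5, 8, 13, 21]  # e.g. a simple rising time series
notes = data.map { |v| map_to_note(v, data_min: data.min, data_max: data.max) }
# A rising series maps to a rising melody, from note 48 up to note 84.
```

In Sonic Pi you would then step through `notes`, playing each one in turn, before exporting the result for polishing in Logic Pro.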
What challenges have you encountered with creating sonifications?
Miriam: From the very start of this podcast we wanted the tracks that we made to sound like music, not like experiments. The issue we had is that data and music don't follow the same rules: music has its own rules and structures. So when you translate data into sound without thinking carefully about the system, it can easily sound random and a little bit meandering or meaningless. We thought that wasn't enough; we wanted to make tracks that actually sound decent. You've got to think carefully to come up with systems that optimise the storytelling potential of the data or showcase the trend you want to reveal. A lot of it is trial and error, but a lot of it is really about simplification of the data.
What is next for data sonification? Do you think it will become mainstream?
Duncan: I think it will become one of the tools that people use in their toolbox. It will definitely become more acceptable, particularly as stories become more multimedia. This is because sonification really does have the potential to tell data stories powerfully and emotionally. It also conjures up what you might call new virtual worlds, particularly when it's combined with video, animation, and graphics. But sonification also works by itself.
Miriam: One thing that may increase the acceptability of sonification is the explosion in the popularity of podcasts in recent years. People are getting more used to absorbing information through audio alone. At the moment, sonification does have a novelty appeal. Some mainstream news organisations are starting to use this medium. The BBC and The New York Times have been using it for quite a few years now. The Economist have a COVID-19 podcast that includes a regular sonification feature. I think that it's going to become more and more familiar territory.
Drones aren't just for photojournalists. Data journalists can also take advantage of them for their stories. Monika Sengul-Jones explores how to boost your storytelling with this technology, as well as the potential pitfalls of using drones. She also provides a guide for journalists getting started. Read the full article here.
Data journalism training opportunity
Are you a freelance journalist reporting on development issues? Do you want to gain data journalism skills? Then the data bootcamp for freelancers is for you! Organised by the Freelance Journalism Assembly, this interactive 20-hour, two-week virtual training will teach you how to find, clean and analyse data. You'll also learn how to create data storytelling formats. Apply for one of the 25 scholarships. Deadline: 10 September, 24:00 CEST.
Our next conversation
Our September conversation will feature journalists Antonio Baquero from OCCRP and Maxime Vaudano from Le Monde. They will speak to us about working on OpenLux, a collaborative international investigation on the hidden side of the Luxembourg offshore industry. We will hear how they uncovered this story with Python and OCCRP's Aleph, an investigative data platform that helps reporters follow the money.
Tara from the EJC data team
PS. Are you interested in supporting this newsletter or podcast? Get in touch to discuss sponsorship opportunities.
If you experience any other problems, feel free to contact us at email@example.com