The Sonification of Postmates Data
You might think that turning data into sound is weird and unnatural, and you'd be right. But the thing is, turning data into anything is weird and unnatural. Taking a bunch of information on how a group of humans spends money to get their meals delivered throughout a week, grouping it together, doing some math, and then representing it as a line going across a page is just as weird.
It’s also fun to dig through our data and learn things like the fact that alcohol orders spike during national elections, that a famous person ordered a bunch of unusual stuff, or that people aren’t as excited by kale as they used to be. However, all of that is still barely scratching the surface.
At the end of the day, Postmates has a huge amount of data on a fundamental human behavior: eating. It’s easy to lose sight of how fascinating our data is when we’re focused on making the numbers reach goals we’ve set.
This is the reason why I wanted to turn our data into sound. I wanted to take a step back and highlight the fact that we’re capturing the activity of millions of human beings just being human.
We’re used to seeing charts and graphs; we all know how to make them and can quickly see what’s going on. But that’s the problem. It’s almost too easy, and the impact isn’t as strong as it should be. Here’s an example.
Look at this chart:
Let me guess what your thought process might be when interpreting it:
- Ok cool, the white line shows when deliveries happen throughout a day.
- The yellow line shows the average distance Postmates are traveling each hour of the day.
- Those two delivery peaks are definitely lunch and dinner.
- Dinner seems like a pretty big deal.
- It’s super dead at 5am.
- It looks like Postmates don’t need to travel as far during the highest peaks, and farther when there’s not a lot of activity.
- Yeah, this all makes sense, that’s what I’d expect it to look like.
And if that’s your thought process, you definitely understood the gist of the chart. But if you ask me, that analysis is missing the humanity of what this chart represents.
The word “represents” is important here. These lines aren’t anything on their own; they’re just lines. We’re seeing the combined behavior of tens of thousands of people ordering and delivering things. Human behavior drew these lines, not me.
This is the critical concept behind these sonifications. If human behavior can be sampled and aggregated to form a line on a page, can it also be used to make notes come out of my speakers? Of course it can! Music already has a language for turning data into sound: MIDI (Musical Instrument Digital Interface), which has been around for decades. MIDI is how a computer understands a musical performance; it records the pitch of each note, how hard it’s struck, and a number of parameters that can be used to control the sound (volume, for instance).
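To make that concrete: a MIDI note-on event is just three bytes, a status byte, a pitch number (0–127, where 60 is middle C), and a velocity (0–127, how hard the key was struck). This is a standard-MIDI fact, not code from this project; here's a minimal sketch:

```python
def note_on(pitch, velocity, channel=0):
    """Build the raw bytes of a MIDI note-on message.

    0x90 is the note-on status for channel 1 (channels are 0-indexed);
    pitch and velocity must each fit in 7 bits (0-127).
    """
    if not (0 <= pitch <= 127 and 0 <= velocity <= 127):
        raise ValueError("pitch and velocity must be in 0-127")
    return bytes([0x90 | channel, pitch, velocity])

print(note_on(60, 100).hex())  # middle C, struck fairly hard -> "903c64"
```

Everything else in a MIDI stream (timing, note-offs, controller changes for volume and so on) is built from the same kind of small, simple messages.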
If we’ve got data on when our customers request deliveries and can turn it into a chart, and computers can read data on when notes are played and turn it into music, it’s not crazy to think that we can transform our data into music.
Let’s hear what it sounds like to have this exact same data control the volume and pitches of a simple synthesizer:
We’re hearing a synth playing notes whose pitch is determined by the distance and whose volume is controlled by the number of deliveries.
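The mapping behind that is straightforward: linearly rescale each hour's value into MIDI's 0–127 range. Here's a hypothetical sketch of that idea; the hourly numbers below are made up for illustration, not real Postmates data, and a real sonification would likely also quantize pitches to a musical scale:

```python
# Toy hourly data (illustrative only, not real Postmates numbers).
hourly_deliveries = [12, 5, 3, 80, 150, 60]          # orders per hour
hourly_distance   = [2.8, 3.1, 3.4, 1.2, 0.9, 1.8]   # avg miles per delivery

def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] into [out_lo, out_hi]."""
    frac = (value - lo) / (hi - lo)
    return round(out_lo + frac * (out_hi - out_lo))

d_lo, d_hi = min(hourly_deliveries), max(hourly_deliveries)
m_lo, m_hi = min(hourly_distance), max(hourly_distance)

# Pitch follows distance (C3..C6); velocity follows delivery volume.
notes = [
    (scale(miles, m_lo, m_hi, 48, 84),
     scale(count, d_lo, d_hi, 20, 127))
    for count, miles in zip(hourly_deliveries, hourly_distance)
]
print(notes)
```

The busiest hour plays the loudest note (velocity 127), and the hour with the longest average trip plays the highest pitch.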
This is where we can start to get creative. We can use this exact same data to control something a lot more musical. Here’s what the same data sounds like when it’s controlling a colorful piano-based sample library.
Now that this same exact data has been converted to a format you have to experience rather than just look at, it makes it feel a little bit more human. It moves naturally, it feels like a person could have performed it. Technically, hundreds of thousands of people did actually perform this. How cool is that?
Is this a better way to gain insight from the data? Maybe not. Would I play this sound for investors to convey how we’re doing? Definitely not. Does it help remove us from the context of things that we’re familiar with to remind us that we’re looking at human behavior and not just a bunch of metrics? I hope so.
We’ve created twelve one-minute pieces, all generated from our data. They’re divided into three categories:
In Trends you’ll be able to hear the changing popularity of items and entire categories of food, what times of day specific items get ordered, and how seasonality impacts customer preferences.
All of the sounds you’ll hear in the trends category are based on sampled physical keyboard instruments. Pianos, mellotrons, anything that makes a sound from physical things you can see moving in the world. We’ll also use effects to give the sound some color.
The visuals are based on particles: individual particles flowing and lighting up in reaction to the data.
In Demand you’ll hear the sound of exponential growth, and how different cities and regions behave compared with one another.
The sounds in the demand category are all based on sampled string instruments: violins, violas, cellos, and basses, in sections and individually.
The visuals here are all based on light as it relates to textures: light through glass, reflected off surfaces, refracting through fabric, etc.
In Density you’ll hear how spread out our customers are throughout the country, and how both our Postmates and our merchants are distributed in different cities.
The instruments in the density section all come from synthesizers run through effects.
The visuals here are based on light and color, so you’ll see brightness and saturation reacting to the data.
Behind the Scenes
If you’re wondering how I created these, check out this Colab notebook I made to generate the MIDI file in the examples above. No programming or music production experience is required, just patience and curiosity. Copy it, go through the demo, plug in some of your own data, and get creative.
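For a sense of what generating a MIDI file involves, here's a rough pure-stdlib sketch. This is not the notebook's code (which used the python-midi package); it just writes a minimal format-0 Standard MIDI File, where each (pitch, velocity) pair becomes a quarter note:

```python
import struct

def vlq(n):
    """Encode an integer as a MIDI variable-length quantity."""
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    return bytes(reversed(out))

def write_midi(path, notes, ticks_per_beat=480):
    """Write (pitch, velocity) pairs as quarter notes in a
    minimal single-track (format-0) Standard MIDI File."""
    events = b""
    for pitch, velocity in notes:
        events += vlq(0) + bytes([0x90, pitch, velocity])        # note on, immediately
        events += vlq(ticks_per_beat) + bytes([0x80, pitch, 0])  # note off, one beat later
    events += b"\x00\xff\x2f\x00"  # end-of-track meta event
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, ticks_per_beat)
    track = b"MTrk" + struct.pack(">I", len(events)) + events
    with open(path, "wb") as f:
        f.write(header + track)

write_midi("demo.mid", [(60, 100), (64, 90), (67, 80)])  # a C major arpeggio
```

Any DAW or software synth can open the resulting file; swapping in notes derived from real data is all it takes to go from chart to sound.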
The sonifications you’re about to hear are a bit more complex than this, but this rough framework is where I started with every one of them.
Listen, Watch, Read
Without further ado, here’s the list of in-depth blog posts that explain exactly what’s going on in every one of these sonifications:
Transforming our data into sound might have been my idea a couple of years ago, but it would never have become anything beyond fun little sound files on my computer if it weren’t for the people who helped put all of this together. Massive thanks to everyone who worked on this:
Kevin Byrd: His direction is embedded deep into this entire project. I showed him a sonification I made a year ago, and he digested it, stewed on it, made a framework, and got everyone together to actually turn it into something. He also made the opening titles and owned every design decision.
Parteek Saran: Created all of the incredible visualizations.
Jessie Prothero: Distilled and captured the essence of this project in the main blog posts, edited my long rambling posts, wrote a whole bunch of the captions.
Justin Sanders: Crafted the copy for the rest of the captions.
April Conyers: Helped get the word out.
Andy Tu: Came up with the name “Audiorders”.
Dan Tomko: I took ideas directly from him for at least one of the datasets in this project. But mostly I just wanted to publicly challenge him to come up with the next crazy Postmates data project that the world should see. YOUR MOVE.
Anyone who helped make the python-midi package: I am not a programmer, and this idea would have stopped before it even started if it weren’t for the people who worked on this.
My awesome wife: I spent an insane amount of time making these that I could have spent hanging out with her, and she had a massive amount of useful feedback every step of the way. These would absolutely be worse without her.
Postmates is always looking for creative data-focused people to join our team. If you want to make things like this, check out https://careers.postmates.com/ and say that Alex sent you.