Financial Times’ Data Visualization Editor Alan Smith let us know about some new work they’ve been doing to explore using data sonification to create generative soundtracks for data visualizations.
He also shared a video, embedded above, that explains their approach to creating a soundtrack to an animated data visualization of 40 years of US economic data, using the data itself to trigger vocals and music.
Smith cites two main reasons for their interest in data sonification: making the visuals more memorable by communicating information in multiple ways, and making animations accessible to readers with visual disabilities.
Tools used:
- Historical yield curve data are freely available from the US Treasury.
- The data animation was created using the open source data visualization library D3.js, with WebMidi.js generating simultaneous MIDI (Musical Instrument Digital Interface) note messages.
- Apple’s Logic Pro X was used for sound generation.
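To make the D3.js-to-MIDI step concrete, here is a minimal sketch of the kind of mapping involved: scaling a data value (such as a bond yield) onto a MIDI note number that WebMidi.js could then send as a note message. The note range, scale range, and function name are assumptions for illustration, not the FT's actual mapping.

```javascript
// Linearly scale `value` from [minValue, maxValue] onto an integer
// MIDI note number in [lowNote, highNote] (defaults: C3 to C6).
function valueToMidiNote(value, minValue, maxValue, lowNote = 48, highNote = 84) {
  const t = (value - minValue) / (maxValue - minValue);
  const clamped = Math.min(1, Math.max(0, t)); // guard against out-of-range data
  return Math.round(lowNote + clamped * (highNote - lowNote));
}

// Example: a 2% yield in an assumed 0-15% range lands low in the note range.
const note = valueToMidiNote(2.0, 0, 15);

// In a browser with WebMidi.js enabled, the note could then be routed to a
// DAW such as Logic Pro X for sound generation, e.g.:
//   WebMidi.outputs[0].playNote(note);
```

Triggering the note from the same animation tick that D3 uses to update the chart is what keeps the audio and visuals in sync.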
You can see additional details, along with the final visualization, at the Financial Times site.
The proposal I’d heard was that people can subconsciously process background sounds (for example, judging when to change gear in a car from the engine RPM, or when it is safe to cross the road by ear), and that maybe this could be applied to traders working with live financial data.
There have been other projects that explored using sonification with live financial data to create ambient background sounds, giving you a sort of ‘ambient awareness’ of the state of the financial markets.
I changed my mind on data sonification after watching this video:
https://www.youtube.com/watch?v=Ocq3NeudsVk
+ditto… good video
This was literally the plot of a Douglas Adams novel.
I wish that the term “data sonification” were limited to taking data points, converting them to samples (amplitude values), and then letting a user play back the resulting audio at various sample rates until they hear something fun/useful/interesting/revealing/etc.
When data points are converted into MIDI notes and assembled as in the above example, I guess they are using sound to reveal the data, but it is so many steps removed that I think it warrants a different term. Data “tonification”, maybe?
When I first started using samplers about 20 years ago, I used to think about what it would sound like if you took various data sets (like atmospheric pressure, the ocean surface position at a particular location, or the sway of a skyscraper), made a long, high-resolution recording of the values, converted them into an amplitude waveform, and just heard it as a sonic quality. With the ocean example, what if the tides were a fundamental frequency and the waves were an overtone of some sort? With the sway of a tower, would it sound like the tine of a toy piano being “bowed” by the wind? Would the atmospheric pressure sound like a weird blorpy blob? No idea.
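The direct-to-samples idea described above (sometimes called audification) can be sketched in a few lines: normalize a data series into the [-1, 1] amplitude range that audio APIs expect, so the values can be played back as raw samples. The data here is a made-up stand-in for something like pressure readings; the function name is illustrative.

```javascript
// Normalize a data series to the [-1, 1] amplitude range expected by
// audio playback APIs (e.g. a Web Audio AudioBuffer channel).
function toAmplitudes(series) {
  const min = Math.min(...series);
  const max = Math.max(...series);
  const span = max - min || 1; // avoid divide-by-zero for flat data
  return series.map(v => ((v - min) / span) * 2 - 1);
}

// Example: a tiny run of hypothetical atmospheric-pressure readings (hPa).
const samples = toAmplitudes([1013.2, 1013.8, 1012.9, 1013.5]);

// In a browser, these values could be copied into an AudioBuffer and
// played back at different sample rates to "hear" the data set.
```

Playing the same amplitude array at different sample rates shifts which periodicities in the data land in the audible range, which is exactly the tides-as-fundamental, waves-as-overtone idea.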
Cool. I heard Romanian witches are also using MIDI now to improve telling the future for their clients