Belarusian composer Ales Tsurko has created a new project, microscale, that turns Wikipedia articles into music.
microscale is created in real-time from random Wikipedia articles. Each article functions as a step sequencer, where the letters are the sequencer steps and the track titles are regular expressions that switch the steps on and off.
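To make that mapping concrete, here is a minimal sketch (in TypeScript, not the project's actual code) of how a track title interpreted as a regular expression could switch the per-letter steps on and off. The Track interface, the sample name, and the example article text are all hypothetical.

```typescript
// Sketch of the idea: each character of an article is one sequencer step,
// and a step is switched "on" when the character matches the regex
// given by the track's title.

// Hypothetical track definition: the title doubles as the pattern.
interface Track {
  title: string;  // e.g. "[aeiou]" -- used as the regular expression
  sample: string; // name of the sample to trigger (placeholder)
}

// Returns one boolean per character: true = step on, false = step off.
function stepsFromArticle(articleText: string, track: Track): boolean[] {
  const pattern = new RegExp(track.title);
  return Array.from(articleText).map((ch) => pattern.test(ch));
}

// Example: vowels trigger a (hypothetical) "kick" sample.
const article = "Wikipedia is a free online encyclopedia.";
const kick: Track = { title: "[aeiou]", sample: "kick" };
console.log(stepsFromArticle(article, kick));
// -> [false, true, false, true, ...] one entry per letter/step
```

Editing the regex in a track title would then change which letters of the article fire that track, which is what makes the web version usable as an instrument.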
In an interview about the project, Tsurko says that ‘The concept of the album is to show that through transforming one media (text) into another media (music), the meaning can be transformed: the article has its own meaning, but the music has a completely different meaning.’
The web version is hackable: the regular expressions, samples, tempo and other parameters can be edited or replaced, allowing microscale to be used as an instrument and sequencer for the listener’s own music.
A physical/digital version is also available as an individually rendered CDR and one-time rendered download from Preserved Sound’s Bandcamp page. A preview is embedded below:
That example is impressively dynamic and ambient. Not at all what I expected to hear.
So, garbage in, garbage out.
Wouldn’t a ‘random’ button be easier?
Random implies a virtually uncontrollable, unstructured output, which, taken by itself, is insufficient for musical composition.
Taking information from articles, books, etc. results in a mix of structured and unstructured information, which, taken by itself, may be expected to yield better compositional results.
Fair enough 🙂
fair enough 🙂
You still don’t really know what you’re getting unless you study the algorithm or take notes over thousands of trials. But cool enough. It’s free, so I’d try it for kicks.
This is really cool. Very well done!