Here’s an application of evolutionary theory you don’t see every day: the evolution of music by natural (make that public) selection.
Researchers from the Department of Life Sciences at Imperial College London and the Media Interaction Group at the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan, joined forces to investigate how consumer preferences–as opposed to directed artistic efforts–can affect the evolution of music. They set out to answer some very interesting questions, including: Is it possible to make music without a composer? If so, what kind of music is made? What limits the evolution of music?
Inspired by research on evolution in microbes and studies on how art and music develop and change in response to cultural forces, the team created “Darwin Tunes,” a computer-based system for simulating natural selection within a “population” of audio clips. Darwin Tunes is powered by an algorithm that creates “digital genomes”–computer programs that, when executed, generate short loops of sound. Like a biological genome that serves as a blueprint for an organism, each digital genome specifies certain parameters–in this case, things like instrumentation and note placement. The algorithm does not receive any melodies, rhythms, or other human-created sounds as inputs, so the music created by Darwin Tunes is truly computer-generated.
Running the algorithm once produces a population of 100 audio loops that go through a number of “life cycles” during the course of the experiment. Which loops get to “reproduce” and which “die off” is determined by the ratings given by a group of nearly 7,000 human listeners, who use a five-point scale ranging from “I can’t stand it” to “I love it.” Those clips that are deemed most pleasing reproduce, and those that are hard on the ears go extinct. In evolutionary terms, listener ratings are the “selective pressure” acting on the population.
As with living organisms, the offspring of the audio loops differ from their parents for reasons that also mirror biological evolution. Each audio loop in the second generation is produced by combining the genomes of two first-generation loops (akin to sexual reproduction in nature). The genomes of the second generation are also modified with new, random musical “genetic material” akin to DNA mutations in nature. Each new generation is again rated by listeners.
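The cycle described above–rate, select, recombine, mutate, repeat–is the skeleton of a genetic algorithm. The following toy sketch illustrates that skeleton; it is not the actual Darwin Tunes implementation (whose genomes are programs that render audio and whose ratings come from human listeners). Here a "genome" is just a list of numbers, and the `rate` function is an arbitrary stand-in for listener scores.

```python
import random

POP_SIZE = 100      # Darwin Tunes used populations of 100 loops
GENOME_LEN = 16     # hypothetical genome length for this toy example
MUTATION_RATE = 0.05

def random_genome():
    # A genome here is just a list of numeric "genes"; in Darwin Tunes each
    # genome is a program that produces an audio loop when executed.
    return [random.random() for _ in range(GENOME_LEN)]

def rate(genome):
    # Stand-in for listeners' five-point ratings; this toy fitness function
    # arbitrarily rewards genomes whose genes are close to 0.5, mapping the
    # result onto a 1-to-5 scale.
    return 5 - 8 * sum(abs(g - 0.5) for g in genome) / GENOME_LEN

def crossover(parent_a, parent_b):
    # Combine two parent genomes at a random cut point,
    # akin to sexual reproduction.
    cut = random.randrange(1, GENOME_LEN)
    return parent_a[:cut] + parent_b[cut:]

def mutate(genome):
    # Randomly replace some genes with new material, akin to DNA mutation.
    return [random.random() if random.random() < MUTATION_RATE else g
            for g in genome]

def next_generation(population):
    # The highest-rated loops "reproduce"; the rest "die off".
    ranked = sorted(population, key=rate, reverse=True)
    survivors = ranked[:POP_SIZE // 2]
    children = []
    while len(children) < POP_SIZE:
        mom, dad = random.sample(survivors, 2)
        children.append(mutate(crossover(mom, dad)))
    return children

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(50):   # Darwin Tunes ran for thousands of generations
    population = next_generation(population)

mean_rating = sum(rate(g) for g in population) / POP_SIZE
print(round(mean_rating, 2))
```

Even with this crude selection scheme, the population's mean rating climbs generation after generation, which is the same dynamic the researchers observed as noise evolved toward music.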
By repeating this process a few thousand times, the research team found that clips changed over time–moving from sound that would most aptly be called “noise” to sound that qualified as “music.” The difference is easy to hear in the clips below, which contain loops produced initially by Darwin Tunes (generation zero), loops from generation 1,500, and loops from generation 3,000.
As any musician or music lover can tell you, the qualities that make a piece of music appealing are complex. To better understand which traits were being “selected for” in the Darwin Tunes populations, the researchers looked to the emerging field of music information retrieval (MIR) technology. MIR is what allows services like Pandora and iTunes to suggest new music based on the songs already on a user’s playlist. Using two MIR algorithms to analyze the various generations of clips (both those that evolved with listener input and controls that were randomly assigned ratings), the researchers identified two specific traits that were changing over time: the presence of chords commonly used in popular music and the complexity of rhythmic patterns in the music.
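To give a feel for what "extracting" those two traits might mean, here is a deliberately simplified sketch. Real MIR algorithms, including the ones the researchers used, analyze rendered audio; this toy works on hypothetical symbolic clips (lists of chord names and note durations, all invented for illustration) and scores them for pop-chord commonness and rhythmic variety.

```python
import math
from collections import Counter

# Hypothetical symbolic clips: lists of (chord, duration) pairs.
# A handful of chords that turn up constantly in popular music.
COMMON_CHORDS = {"C", "G", "Am", "F", "Dm", "Em"}

def chord_commonness(clip):
    # Fraction of the clip's chords drawn from the common-pop-chord set.
    chords = [chord for chord, _ in clip]
    return sum(c in COMMON_CHORDS for c in chords) / len(chords)

def rhythmic_complexity(clip):
    # Shannon entropy of the note-duration distribution: a wider variety
    # of durations yields higher entropy, i.e. a "more complex" rhythm.
    durations = Counter(dur for _, dur in clip)
    total = sum(durations.values())
    return -sum((n / total) * math.log2(n / total)
                for n in durations.values())

# Two invented example clips: one plain, one more varied.
simple = [("C", 1), ("G", 1), ("Am", 1), ("F", 1)]
busy = [("C", 0.5), ("B7", 0.25), ("F#", 1), ("G", 0.75)]

print(chord_commonness(simple), chord_commonness(busy))
print(rhythmic_complexity(simple), rhythmic_complexity(busy))
```

Tracking scores like these across generations is, in spirit, how the researchers could see common chords and rhythmic complexity accumulating in the evolving populations.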
While these two features are clearly important, the researchers conclude that there are many other musical factors in the evolution of these clips and that additional experiments using a wider variety of MIR algorithms would be interesting. The results of the study appear in today’s early online edition of the Proceedings of the National Academy of Sciences. You can listen to and rate clips by visiting the Darwin Tunes website.
Want to read more about the science of music? Check out the research conducted by the Pattern Analysis and Intelligent Systems Research Group at the University of Bristol. Their work to develop a mathematical equation that can predict hit songs was presented in December 2011 at the 4th International Workshop on Machine Learning and Music. Visit their Score a Hit website or download the short paper that appeared in the conference proceedings.
Written by Christine Hoekenga
Christine is a freelance writer, editor, and content strategist, specializing in science and nature. She holds a Bachelor's degree in Environmental Science and Media Studies and a Master's degree in Science Writing. She has been working in science communication and education for nearly a decade as a journalist, an organizer for conservation groups, and a museum educator. Before joining the Visionlearning team, she served as the New Media and Online Community Manager for the Webby award-winning Smithsonian Ocean Portal. Christine is assisting Visionlearning with developing new modules and glossary terms, managing the blog, and outreach through social media.