
Some one-off thoughts on an article I read...
A recent issue of The Economist had an interesting series on computers; the articles were all quite neat to read (that magazine has got to have the best writing of any publication), but one in particular caught my eye. It’s on “music intelligence” software (“Sounds Good?”, Economist, 10 Jun 06, p. 8 of the Technology Quarterly section).
According to the article, the computer programs analyze pieces of music into 30 parameters, among them sonic brilliance, octave, cadence, frequency range, and fullness of sound. Pop music producers can use these readings to predict which songs will be hits – or to advise the musicians on how to improve their songs.
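The article doesn’t say how the scoring actually works, so here is a minimal sketch of the general idea rather than Platinum Blue’s actual method: treat each song as a vector of parameter readings, then score a new song by how close it lands to past hits in that space. The parameter names and all the numbers below are invented for illustration.

```python
import math

# Hypothetical parameter readings, scaled 0-1; the real software
# reportedly measures about 30 parameters per song.
past_hits = [
    {"brilliance": 0.8, "cadence": 0.6, "fullness": 0.7},
    {"brilliance": 0.7, "cadence": 0.7, "fullness": 0.9},
]

def distance(a, b):
    # Straight-line (Euclidean) distance between two parameter sets.
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def hit_score(song):
    # The closer a new song sits to an existing hit, the higher the score.
    nearest = min(distance(song, hit) for hit in past_hits)
    return 1.0 / (1.0 + nearest)

demo = {"brilliance": 0.75, "cadence": 0.65, "fullness": 0.80}
print(round(hit_score(demo), 3))  # near 1.0 means "sounds like a hit"
```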
Disparate pieces of music can share underlying parameters: the article mentions that several U2 ditties have a bit in common with some Beethoven pieces. Trained ears apparently don’t detect this. Why? Mike McCready, co-founder of the software company Platinum Blue, explains it like this: “Songs conform to a limited number of mathematical equations.” I imagine that since it’s a mathematical thing rather than an ear thing, this would explain the trouble a trained ear has – when you’re focusing on the overt sound of the notes and rests rather than on the deeper relationships among the elements, it’s easy to overlook those connections.
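That quote hints at why two superficially unlike pieces can register as kin: once each piece is reduced to a parameter vector, “similarity” becomes a question of the angle between vectors, not of how the surfaces sound. Again, a hedged sketch with made-up numbers, not the company’s actual math:

```python
import math

def cosine_similarity(u, v):
    # 1.0 means the vectors point the same way in parameter space,
    # however different the pieces sound on the surface.
    dot = sum(x * y for x, y in zip(u, v))
    norms = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norms

# Invented parameter vectors for illustration only.
u2_track  = [0.9, 0.3, 0.7, 0.5]
beethoven = [0.8, 0.4, 0.6, 0.5]
print(round(cosine_similarity(u2_track, beethoven), 3))  # ~0.99: mathematically close
```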
The repercussions for the industry are obvious. Big companies will feel more confident backing an artist, because they’ll stand a better chance of a return on their investment. Artists in turn will profit – experienced ones, because they’ve already proved themselves, and newcomers too, since their work can be run through the machine before a contract is signed. The unstated consequence of all this is that the quality of music should improve across the board.
Copyright violation is another avenue for the software. Plagiarism lawsuits could use the readings as evidence, lending “objectivity” to the whole affair. It could even be used to catch infringement before it happens. (Though you wouldn’t have needed it when Bush shamelessly stole the riff from Bon Jovi’s “You Give Love a Bad Name.” Sampling is one thing, but that was pure theft.)
One thing the article does not consider is the prospect of learning from the program. If the trained ear can’t tell the similarities between U2 and Beethoven, maybe that’s because it isn’t listening the way the software does. If the producer – or the artist – keys into that mode (pun fully intended), they can eventually dispense with the software altogether. My guess is that musicians and producers perform this kind of calculation already, but are only aware of it at a superficial level; the link between music and mathematics has been noted many times before, making this latest development just a push deeper into the inquiry.
Another thing the article does not consider is what quality is. What if, say, you have the slickest song in the world – but the most insipid lyrics in history? In other words, can we really say there’s hope that Britney Spears’s music will improve? That side of the music hasn’t had a program written for it yet, though surely someone is working on it. One could argue that nobody listens to the words anyways, so there’s no need to bother. Brian Eno discovered this early on in his own career (Eric Tamm, Brian Eno: His Music and the Vertical Color of Sound, p. 120), but is this universally true? And if so, why do songwriters insist on penning meaningful lyrics? Perhaps they’re just behind the times.
As suggested above, it could be that the music of the words themselves has yet to be incorporated into the software. Some nonsense just sounds better than others – I’m even tempted to say that some gibberish rings “truer” than others, though I cannot back this up.
And while I’m at it, what would those programs make of Mizar? I love how Amazon says,
“People who bought 'The King of the Stars' also bought:
No titles available.”
http://www.mizar.us/
Go on, check it out. You know you wanna.