Neural Nets for Generating Music

Kyle McDonald, Medium, Artists and Machine Intelligence, Sept 02, 2017
Commentary by Stephen Downes

This is an interesting and very detailed examination of attempts to create music using artificial intelligence. It tracks what are (to my mind) two major stages in the evolution of this work: first, the shift from symbolic representations of music to actual samples of music; and second, the shift to convolutional neural networks: "Convolutional networks learn combinations of filters. They’re normally used for processing images, but WaveNet treats time like a spatial dimension." It makes me think: that's why humans have short-term memory (STM). Not as a staging area for long-term memory (LTM) but as a way of treating time as a spatial dimension. There's the obligatory question of whether these will replace humans, posed at the very end of the article (to no effect whatsoever), and a look at the use of these techniques to generate spoken-word audio.
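The quoted idea, treating time like a spatial dimension, can be made concrete with a toy sketch. This is not WaveNet itself (which stacks many dilated convolutions with gated activations); it is a minimal, hypothetical illustration of a single dilated *causal* 1D convolution, where each output sample is computed only from the current and past samples of a waveform, exactly as a spatial filter would slide over pixels:

```python
import numpy as np

def causal_conv1d(x, kernel, dilation=1):
    """Dilated causal 1D convolution over a waveform.

    Each output sample y[t] depends only on x[t] and earlier
    samples (x[t - dilation], x[t - 2*dilation], ...), so the
    time axis is filtered the way an image axis would be,
    without ever looking into the future.
    """
    k = len(kernel)
    pad = dilation * (k - 1)
    # left-pad with zeros so the output has the same length as the input
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    y = np.zeros(len(x))
    for t in range(len(x)):
        for i in range(k):
            # tap i reaches back i * dilation samples in time
            y[t] += kernel[i] * xp[t + pad - i * dilation]
    return y

# Example: a two-tap averaging filter over current and previous sample.
x = np.array([1.0, 2.0, 3.0, 4.0])
print(causal_conv1d(x, [0.5, 0.5]))  # → [0.5 1.5 2.5 3.5]
```

Increasing `dilation` widens how far back in time each filter tap reaches without adding parameters, which is how stacked dilated convolutions achieve the long audio receptive fields the article describes.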


Copyright 2015 Stephen Downes ~ Contact: stephen@downes.ca
Last Updated: Sept 21, 2017 5:21 p.m.