Process

The developer records a short musical tune on the keyboard or uses a prerecorded one. They then select a generative model for their favourite genre from the pre-set options (classical, jazz, rock, or pop). The melody and the model selection are sent to the DeepComposer network via the AWS cloud, which returns a polyphonic output, that is, a complete AI-composed piece of music.

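As a rough illustration of how a recorded melody can be represented for a generative model (a generic sketch, not AWS's actual pipeline), keyboard input captured as MIDI note events is often encoded as a piano-roll matrix. The function name, note format, and time resolution below are assumptions made for the example.

```python
import numpy as np

def melody_to_piano_roll(notes, fs=16, pitches=128):
    """Encode a melody as a binary piano-roll matrix.

    notes: list of (midi_pitch, start_sec, end_sec) tuples.
    fs:    time steps per second (resolution of the roll).
    Returns an array of shape (pitches, total_steps) with 1s
    wherever a note is sounding.
    """
    total_steps = int(np.ceil(max(end for _, _, end in notes) * fs))
    roll = np.zeros((pitches, total_steps), dtype=np.float32)
    for pitch, start, end in notes:
        roll[pitch, int(start * fs):int(end * fs)] = 1.0
    return roll

# Example: the first four notes of a simple C-major melody.
melody = [(60, 0.0, 0.5), (62, 0.5, 1.0), (64, 1.0, 1.5), (67, 1.5, 2.5)]
roll = melody_to_piano_roll(melody)
print(roll.shape)  # (128, 40) at 16 steps per second
```
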
Behind The Scenes

Here's what happens behind the scenes. Once your melody reaches the AWS cloud, two neural networks train against each other: a Generator and a Discriminator. The Generator composes the rest of the music, while the Discriminator gives the Generator feedback on whether the result sounds good or bad, steadily improving the overall output.
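
This Generator/Discriminator feedback loop is the standard Generative Adversarial Network training scheme. The minimal PyTorch sketch below shows the general idea on piano-roll data; it is a generic illustration under assumed layer sizes and shapes, not DeepComposer's actual model or training code.

```python
import torch
import torch.nn as nn

# Illustrative shapes only: 128 pitches, 32 time steps per segment.
PITCHES, STEPS, NOISE_DIM = 128, 32, 64

# Generator: turns random noise into a fake piano-roll segment.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 512), nn.ReLU(),
    nn.Linear(512, PITCHES * STEPS), nn.Sigmoid(),  # values in [0, 1]
)
# Discriminator: scores a segment as real or generated (raw logit).
discriminator = nn.Sequential(
    nn.Linear(PITCHES * STEPS, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_rolls):
    """One adversarial update on a batch of real piano rolls,
    flattened to shape (batch, PITCHES * STEPS)."""
    batch = real_rolls.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator: learn to tell real segments from generated ones.
    fake_rolls = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = bce(discriminator(real_rolls), real_labels) + \
             bce(discriminator(fake_rolls), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: use the Discriminator's feedback to make fakes
    # that score as "real".
    g_loss = bce(discriminator(generator(torch.randn(batch, NOISE_DIM))),
                 real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# Dummy batch of "real" rolls just to show the call shape.
d_loss, g_loss = train_step(torch.rand(8, PITCHES * STEPS))
```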

Ultimately, this is an attempt by AWS to make it easier for developers to learn AI and machine learning. The keyboard lets developers get started with Generative Adversarial Networks (GANs) regardless of their background in ML or music, and the output is an original composition of their own. The result can be exported as a MIDI file or an MP3 and shared on SoundCloud.

AWS workshops following this event will include this setup so that anyone can try composing with the keyboard and learn from it. If you would rather try it on your own, a preview is available on the AWS console.

There is more to revel in at this five-day event. Follow us if you want to hear about all of it, or buy a ticket to catch the show in Las Vegas.

Source: AWS Blog