folk-rnn (v3) tune #6197

As the world’s foremost interpreter of folk-rnn-generated tunes (it would be great if others joined in!), I bring you another great tune. Tune #6197 can be found in the folk-rnn v3 Session Book Vol. 3 of 4. Here it is as output by the system:

6197orig.png

It’s already pretty great. I really like both the A and B parts of the tune. I remove the pickup to each part and include them in their final bars. I also make the ending of the A part generated by folk-rnn the second ending, because it modulates to G for the B part. I add a first ending to stay in D.

The penultimate bar of the B part needs just a bit of adjustment: I move the D to an E and descend from the high C back to the E at the beginning of the last bar. It’s a nice variation of the first ending of the A part. With some bouncing bass and harmony added, we have a smashing tune. The C bass really resonates. (Sorry about all the clacking!)

6197.png
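
For anyone curious how folk-rnn produces such material in the first place, here is a minimal, hypothetical sketch of the general idea behind it: a recurrent network trained to predict the next character of ABC transcriptions, then sampled one character at a time to “compose” new tunes. This is an illustration only, not the actual folk-rnn code; the corpus file, architecture, and hyperparameters here are invented.

import torch
import torch.nn as nn

corpus = open("abc_corpus.txt").read()   # hypothetical corpus of ABC tunes
vocab = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(vocab)}

class CharLSTM(nn.Module):
    """Predicts the next character of an ABC transcription."""
    def __init__(self, vocab_size, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, num_layers=3, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

model = CharLSTM(len(vocab))
# ... training (cross-entropy against next-character targets) omitted ...

def sample(model, seed="X:", length=400, temperature=1.0):
    """Generates one tune by sampling the model a character at a time."""
    model.eval()
    idx = torch.tensor([[stoi[c] for c in seed]])
    out, state = list(seed), None
    with torch.no_grad():
        for _ in range(length):
            logits, state = model(idx, state)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            idx = torch.multinomial(probs, 1).view(1, 1)
            out.append(vocab[idx.item()])
    return "".join(out)

print(sample(model))   # untrained, this yields gibberish; trained, ABC-like tunes

Lowering the sampling temperature makes the model more conservative; raising it makes the “compositions” wilder, which is part of what makes curating and arranging the outputs, as above, so interesting.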

Music in the Age of Artificial Creation

Artificial intelligence has been making headlines with its sometimes alarming progress in skills previously thought to be the preserve of humans. Now computers are “composing” music. As part of the 2017 Being Human festival, we are organising a concert/lecture titled “Music in the Age of Artificial Creation”. We aim to demystify artificial intelligence in music practice, and to allay fears of human obsolescence.

Monday, 20 November 2017, 19:00
St. Dunstan and All Saints, Stepney
Tickets: £5 in advance (£10 at the door)

At the event you will learn how we can make computers teach themselves specific musical styles, and how we can then use these systems to create new music. You will go behind the scenes of these systems and see how they can be tools for augmenting human creativity, not replacing it. You will hear human musicians play several works composed by and with such systems, and learn exactly what the computer contributed to each one. The programme of the evening includes the following:

CFP: ML4Audio @ NIPS 2017

Audio signal processing is currently undergoing a paradigm change, in which data-driven machine learning is replacing hand-crafted feature design. This has led some to ask whether audio signal processing is still useful in the “era of machine learning.” There are many challenges, new and old, including the interpretation of learned models in high-dimensional spaces, problems in data-poor domains, adversarial examples, high computational requirements, and research driven by companies with large in-house datasets that is ultimately not reproducible.
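
To make that contrast concrete, here is a small illustrative sketch (the file name and layer sizes are invented): the same audio is passed through a fixed, expert-designed feature extractor (MFCCs via librosa) and through a one-dimensional convolutional front end whose filters would instead be learned from data.

import librosa
import torch
import torch.nn as nn

y, sr = librosa.load("example.wav", sr=22050)   # hypothetical input file

# Hand-crafted: a fixed pipeline (mel filterbank, log, DCT) designed by experts.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)

# Data-driven: the "filterbank" is a set of trainable parameters, to be
# learned end-to-end from labelled audio rather than designed by hand.
frontend = nn.Conv1d(1, 64, kernel_size=441, stride=220)   # ~20 ms windows, ~10 ms hop
features = frontend(torch.from_numpy(y).unsqueeze(0).unsqueeze(0))

print(mfcc.shape, features.shape)

The learned front end only pays off with enough training data, which is exactly why the data-poor domains and in-house-dataset problems mentioned above matter.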

ML4Audio (https://nips.cc/Conferences/2017/Schedule?showEvent=8790) aims to promote progress, systematization, understanding, and convergence of applying machine learning in the area of audio signal processing. Specifically, we are interested in work that demonstrates novel applications of machine learning techniques to audio data, as well as methodological considerations of merging machine learning with audio signal processing. We seek contributions in, but not limited to, the following topics:
– audio information retrieval using machine learning;
– audio synthesis with given contextual or musical constraints using machine learning;
– audio source separation using machine learning;
– audio transformations (e.g., sound morphing, style transfer) using machine learning;
– unsupervised learning, online learning, one-shot learning, reinforcement learning, and incremental learning for audio;
– applications/optimization of generative adversarial networks for audio;
– cognitively inspired machine learning models of sound cognition;
– mathematical foundations of machine learning for audio signal processing.

ML4Audio will accept five kinds of submissions:
1. novel unpublished work, including work-in-progress;
2. recent work that has already been published or is in review (please clearly refer to the primary publication);
3. review-style papers;
4. position papers;
5. system demonstrations.

Submission format: Extended abstracts as PDF in NIPS paper format, 2–4 pages excluding references. Submissions do not need to be anonymised, and may be accepted as either talks or posters. If accepted, final papers must be uploaded to arXiv.org.

Submission link: https://easychair.org/conferences/?conf=ml4audio

Important Dates:
Submission Deadline: October 20, 2017
Acceptance Notification: October 31, 2017
Camera Ready Submissions: November 30, 2017
Workshop: December 8, 2017

(Note that the main conference is sold out already. Presenters of accepted workshop papers will still be able to register for the workshops.)

This workshop especially targets researchers, developers, and musicians in academia and industry working in music information retrieval (MIR), audio processing, speech processing, musical HCI, musicology, music technology, music entertainment, and composition.

Invited Speakers:
Sander Dieleman (Google DeepMind)
Douglas Eck (Google Magenta)
Marco Marchini (Spotify)
Others to be decided

Panel Discussion:
Sepp Hochreiter (Johannes Kepler University Linz),
Invited speakers
Others to be decided

ML4Audio Organisation Committee:
– Hendrik Purwins, Aalborg University Copenhagen, Denmark (hpu@create.aau.dk)
– Bob L. Sturm, Queen Mary University of London, UK (b.sturm@qmul.ac.uk)
– Mark Plumbley, University of Surrey, UK (m.plumbley@surrey.ac.uk)

PROGRAM COMMITTEE:
Matthias Dorfer (Johannes Kepler University Linz)
Monika Dörfler (University of Vienna)
Shlomo Dubnov (UC San Diego)
Philippe Esling (IRCAM)
Cédric Févotte (IRIT)
Emilia Gómez (Universitat Pompeu Fabra)
Jan Larsen (Technical University of Denmark)
Marco Marchini (Spotify)
Rafael Ramirez (Universitat Pompeu Fabra)
Gaël Richard (TELECOM ParisTech)
Jan Schlüter (Austrian Research Institute for Artificial Intelligence)
Joan Serrà (Telefónica)
Malcolm Slaney (Google)
Gerhard Widmer (Austrian Research Institute for Artificial Intelligence)
Others to be decided

Case Study — Machine learning about sexual orientation?

“All we really know is that a deep neural net can draw a distinction between these two groups [in the dataset] for reasons that we don’t really understand.”

Source: Case Study — Machine learning about sexual orientation?

This is a nice summary of some of the problems with the conclusions drawn in this research study. I would also suggest that the shape of a person’s face, gay or straight or whatever, is far more affected by the shape of their parents’ faces than by the presence of a “gay gene.”
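
A toy illustration of that point, on entirely synthetic data: the classifier below separates two “groups” almost perfectly, yet the only informative features are stand-ins for presentation (grooming, angle, expression); the “physiological” features carry no signal at all.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)

# "Physiology": identically distributed in both groups (no real signal).
physiology = rng.normal(0.0, 1.0, (n, 5))
# "Presentation": correlated with group membership (the confound).
presentation = group[:, None] * 1.5 + rng.normal(0.0, 1.0, (n, 3))

X = np.hstack([physiology, presentation])
X_tr, X_te, y_tr, y_te = train_test_split(X, group, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))   # high, despite no physiological signal
print("weights:", clf.coef_.round(2))       # the weight sits on the last 3 features

High accuracy here says nothing about physiology; it only says the two groups differ somehow in the data. That is exactly the gap between what the study’s network measures and what its authors conclude.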

Bonny An Ade Nullway, a folk-rnn (v1) tune

Here is a tune generated by the first version of folk-rnn, which titled it “Bonny An Ade Nullway.”

Bonnyorig.png

Here is a synthesis compliments of the Endless folk-rnn traditional music session:

I like how the first part gradually stretches upwards, reaching the high g in measure 5 and coming back down to the low E in measure 6. I make one small change, adding an accidental on the c in bar 7. I’m not so enthused about the second part: even though it has its moments, I totally scrap it and begin fresh. I want to mimic the stretching we hear in the first part, so I write a melody that climbs and climbs to a high b in bar 13. The second section of this part provides a little response that is not so dramatic. I start the second part with an immediate clash, an F natural. Playing that against the D major chord is fun! That makes an interesting piece that deserves to accompany an interesting dance.

Bonnymin.png