A Derp Deep Learning Ditty

I decided to try something new: I will start a timer and then begin looking through and judging material generated by folk-rnn, recording my observations as I go, until I find something that is worth working with. How long will it take? How many transcriptions will I have to discard? How hard will it be to pick a good cherry? How much will I have to change the computer-generated material?

18h00
I open the folk-rnn v3 Session Book, Volume 4 of 4, and scroll randomly to transcription #8588:

M:4/4
K:Gmaj
(3de^f|:g2d2 e2d2-|dBAG Bcde|a2^c2 ed^fe|
d2gg -gede|g2d2 e2de-|edec dec2|dc2A Bded|
c2BA G2G2:||:d2GA G2GG|G2AB Add2-|d2AA -dA^F2|
AGBc BAG2|d2AG G2G2|d2A^G A2^F2|d2Ad edd2|
^f2ed gfe2:|

[Staff notation of transcription #8588]

Looks like we have ourselves a reel. It has an AABB form. There are no counting errors. I like the held-over note in the first bar, and some of the syncopation. But what's the c# doing in bar 3? It is natural in bars 2 and 6-8. And bar 14 has a G#. Pulling to A major? Do I even have that note on my D/G diatonic accordion? (Yes, under one of those top-most accidental buttons.)
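(Claims like "no counting errors" are easy to check mechanically. Below is a minimal sketch of one way to do it, assuming the music21 toolkit, which this post does not use, can parse the transcription once an X: index field is prepended; it prints each bar's total duration against the 4/4 meter and lists notes carrying explicit accidentals.)

# Minimal sketch: check transcription #8588 for counting errors and list
# accidentals per bar. Assumes the music21 toolkit (pip install music21)
# and that its ABC parser accepts this folk-rnn output; not a definitive check.
from music21 import converter, meter

abc_8588 = """X:8588
M:4/4
K:Gmaj
(3de^f|:g2d2 e2d2-|dBAG Bcde|a2^c2 ed^fe|
d2gg -gede|g2d2 e2de-|edec dec2|dc2A Bded|
c2BA G2G2:||:d2GA G2GG|G2AB Add2-|d2AA -dA^F2|
AGBc BAG2|d2AG G2G2|d2A^G A2^F2|d2Ad edd2|
^f2ed gfe2:|
"""

score = converter.parse(abc_8588, format='abc')
ts = score.recurse().getElementsByClass(meter.TimeSignature)[0]
bar_length = ts.barDuration.quarterLength  # 4.0 quarter notes per bar in 4/4

for m in score.recurse().getElementsByClass('Measure'):
    # The pickup bar is expected to come up short; every other bar should match.
    flag = 'ok' if m.duration.quarterLength == bar_length else 'CHECK'
    accidentals = [n.nameWithOctave for n in m.notes
                   if n.isNote and n.pitch.accidental is not None]
    print(f"bar {m.number}: {m.duration.quarterLength} beats [{flag}] {accidentals}")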

18h15

After futzing around for 15 minutes, I don’t think #8588 is salvageable. It feels like incoherent noodling. It’s a failure. Let’s go to the next one, transcription #8589:

M:4/4
K:Emin
d|BGG2 Bddc|BGG2 Bedc|BGG2 Bddg|
aged B3c|BGG2 B^cd=c|BGG2 Bded|BGG2 Bdge|
dBcB A2B/2c/2d|e3d e3^f|g2de ^fdd2|e2e2 dBB2|
dcBA GABd|e3d egg2|^fdde dcBd|e2^fe dBB2|
c2Bc ABcd|

[Staff notation of transcription #8589]

The first 8 bars are in G, and the second 8 are in E minor. The transcription seems to be missing repeats, though. Let's remove the pickup and add repeats at the expected places, bars 8 and 16.

18h27

I have been futzing around with each section. The A melody is really boring. Can it be made more animated?

18h35

Wait a second. That boring three-note figure in the A part can be harmonised in either G or E minor. The alternation of G and Em sounds so cheesy. Like a Leonard Cohen song. Let's use it. I make only a very minor change to bar 4, and add a simple chord progression. I play slowly to increase the cheese. The A part is finished.

19h00

The B part needs the most work, I think. I keep the main idea of the melody, but change its ending in bar 10. That also changes bars 13-14. I compose bars 11-12 anew, and vary them in bars 15-16. Finally, I add a second ending to take us back to the A part. I add a chord progression that is more complex than the A part's.

[Staff notation of the edited tune]

19h30

We have arrived at a really sappy song (which I call "A Derp Deep Learning Ditty") in an hour and a half, looking at only two folk-rnn v3 transcriptions. Let's learn it, record it, and make a corny video to accompany it (complete with a rap chorus).

Here it is without the rap:


Machine Learning and Human Behavior Postdoc positions

This looks like a fantastic opportunity!

Please apply by October 26, 2017

Emilia Gómez leads a novel research initiative inside the European Commission’s Joint Research Centre, on the topic of machine learning and human behaviour. Music will be an important aspect and use case of the project.

In this context, three postdoc positions in the area of machine learning and human behavior are open for appointment from January 1, 2018, at the Joint Research Centre (European Commission) in Seville, Spain. The fully funded positions are available for a period of three years.

The Joint Research Centre (JRC) is the European Commission's science and knowledge service, which employs scientists to carry out research in order to provide independent scientific advice and support to EU policy. The JRC Centre for Advanced Studies (JRC-CAS) was established to enhance the JRC's capabilities to meet emerging challenges at the science-policy interface. JRC-CAS is now launching a three-year interdisciplinary project to understand the potential impact of machine learning on human behavior and societal welfare. It will be carried out at the JRC centre in Seville, Spain. There will be close collaboration with the Music Technology Group and the Department of Information and Communication Technologies of Universitat Pompeu Fabra in Barcelona, Spain.

The project will (1) provide a scientific understanding of machine vs human intelligence; (2) analyze the influence of machine learning algorithms on human behavior; and (3) identify issues that may require a policy intervention. Music will be an important use case in the project. We are looking for postdoctoral researchers to join this challenging endeavour.

Particular areas of interest:

  • Fairness, accountability, transparency, explainability of machine learning methods.
  • Social, ethical and economic aspects of artificial intelligence.
  • Human-computer interaction and human-centered machine learning.
  • Digital and behavioural economy.
  • Application domains: music, arts, transport, social networks, health, energy.

We are looking for highly motivated, independent, and outstanding postdoc candidates with a strong background in machine learning and/or human behavior. An excellent research track record, the ability to communicate research results, and involvement in community initiatives are expected. Candidates should have EU/EEA citizenship.
The JRC offers an enriching multi-cultural and multi-lingual work environment with lifelong learning and professional development opportunities, and close links to top research organizations and international bodies around the world. Postdoctoral researchers receive a competitive salary and excellent working conditions, and will define their own research agenda in line with the project goals.

JRC-Seville is located in the Cartuja 93 scientific and technological park. Seville is the fourth-largest city in Spain. With more than 30 centuries of history (a gateway to the Americas for two centuries, and a main actor in the first circumnavigation of the Earth), three UNESCO World Heritage Sites, and a privileged climate, it combines its historical and touristic character with consolidated economic development and innovation potential.

You may obtain further information about the scientific aspects of the Postdoc positions from Dr. Emilia Gómez (project scientific leader: emilia.gomez@upf.edu) and at the following web pages http://recruitment.jrc.ec.europa.eu/?site=SVQ&type=AX&category=FGIV and https://ec.europa.eu/jrc/en/working-with-us/jobs.

Please apply by October 26, 2017

folk-rnn (v2) tune #4542

As the world’s foremost interpreter of folk-rnn-generated tunes, I bring you another nice tune I have found. Tune #4542 can be found in the folk-rnn v2 Session Book Vol. 2 of 10. Here it is as output by the system:

[Staff notation of tune #4542 as output by folk-rnn]

I recently came across this tune when I randomly opened The Endless folk-rnn Traditional Music Session. It often happens that when I look at that page I find something interesting. Here is the synthesis of the tune above as I heard it:

The tune has an AABB structure with 8 measures in each part. folk-rnn chose a 3/4 meter, but most of the transcription is better expressed in 6/8. There is a minor miscounting error in bar 16, but everything else is counted correctly.
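(If you want to audition a transcription like this yourself, one simple route is to parse its ABC and write a MIDI file. Here is a minimal sketch, assuming the music21 toolkit, which this post does not use; the tune in the sketch is a stand-in, since #4542 appears here only as an image, so paste the real ABC body in its place.)

# Minimal sketch: render an ABC transcription to MIDI for a quick listen.
# music21 (pip install music21) is an assumed dependency, and the ABC below
# is a stand-in tune, not tune #4542.
from music21 import converter

abc_text = """X:1
T:stand-in tune
M:6/8
L:1/8
K:Dmaj
DFA dAF|GBd gdB|AFD dFA|B3 A3|]
"""

score = converter.parse(abc_text, format='abc')
score.write('midi', fp='tune.mid')  # hypothetical output path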

I am drawn to this tune because of its lilt. It goes up and down and takes all kinds of twists and turns that make it hard for me to catch my breath. And once I do catch my breath the melody resolves. I really like the change in bar 11 from 2 groups of 3 quavers to 3 groups of 2 quavers. This creates a nice contrast.

Nonetheless, there are some things I change to make the tune better. I change the meter to 6/8. I drop the first measure of the B part down an octave, and add a second ending to the A part to help the transition. I change bar 10 to accelerate back to the upper register, and rework bar 11 into a gradual descent. Finally, I add a harmonic progression that fits on my diato, and voilà!

[Staff notation of the edited tune #4542]

And here I play the 6/8 piece with a 2/4 feel. Enjoy!

folk-rnn (v3) tune #6197

As the world's foremost interpreter of folk-rnn-generated tunes (it would be great if others joined in!), I bring you another great tune. Tune #6197 can be found in the folk-rnn v3 Session Book Vol. 3 of 4. Here it is as output by the system:

[Staff notation of tune #6197 as output by folk-rnn]

It's already pretty great. I really like both the A and B parts of the tune. I remove the pickup to each part and include it in the final bar of that part. I also make the A-part ending generated by folk-rnn the second ending, because it modulates to G for the B part, and I add a first ending that stays in D.

The penultimate bar of B needs just a bit of adjustment. I move the D to an E and descend from the high C back to the E at the beginning of the last bar. It's a nice variation of the first ending of the A part. With some bouncing bass and harmony added, we have a smashing tune. The C bass really resonates. (Sorry about all the clacking!)

[Staff notation of the edited tune #6197]

Music in the Age of Artificial Creation

Updated program: https://highnoongmt.wordpress.com/2017/11/06/music-in-the-age-of-artificial-creation-an-illustrated-concert/

Artificial intelligence has been making headlines with its sometimes alarming progress in skills previously thought to be the preserve of the human. Now these computers are "composing" music. As part of the 2017 Being Human festival, we are organising a concert/lecture titled "Music in the Age of Artificial Creation". We aim to demystify artificial intelligence in music practice, and to allay fears of human obsolescence.

Monday, November 20, 2017, 19h00
St. Dunstan and All Saints Stepney
Tickets: £5 (£10 at the door)

At the event you will learn how we can make computers teach themselves about specific musical styles, and how we can then use them to create new music. You will go behind the scenes of these systems, and see how they can be tools for augmenting human creativity, not replacing it. You will hear human musicians play several works composed by and with such systems, and learn exactly what the computer contributed to each one. The programme of the evening includes the following:

CFP: ML4Audio @ NIPS 2017

Audio signal processing is currently undergoing a paradigm change, where data-driven machine learning is replacing hand-crafted feature design. This has led some to ask whether audio signal processing is still useful in the “era of machine learning.” There are many challenges, new and old, including the interpretation of learned models in high dimensional spaces, problems associated with data-poor domains, adversarial examples, high computational requirements, and research driven by companies using large in-house datasets that is ultimately not reproducible.

ML4Audio (https://nips.cc/Conferences/2017/Schedule?showEvent=8790) aims to promote progress, systematization, understanding, and convergence of applying machine learning in the area of audio signal processing. Specifically, we are interested in work that demonstrates novel applications of machine learning techniques to audio data, as well as methodological considerations of merging machine learning with audio signal processing. We seek contributions in, but not limited to, the following topics:
– audio information retrieval using machine learning;
– audio synthesis with given contextual or musical constraints using machine learning;
– audio source separation using machine learning;
– audio transformations (e.g., sound morphing, style transfer) using machine learning;
– unsupervised learning, online learning, one-shot learning, reinforcement learning, and incremental learning for audio;
– applications/optimization of generative adversarial networks for audio;
– cognitively inspired machine learning models of sound cognition;
– mathematical foundations of machine learning for audio signal processing.

ML4Audio will accept five kinds of submissions:
1. novel unpublished work, including work-in-progress;
2. recent work that has been already published or is in review (please clearly refer to the primary publication);
3. review-style papers;
4. position papers;
5. system demonstrations.

Submission format: extended abstracts as PDF in NIPS paper format, 2-4 pages excluding references. Submissions do not need to be anonymised. Submissions may be accepted either as talks or as posters. If accepted, final papers must be uploaded to arxiv.org.

Submission link: https://easychair.org/conferences/?conf=ml4audio

Important Dates:
Submission Deadline: October 20, 2017
Acceptance Notification: October 31, 2017
Camera Ready Submissions: November 30, 2017
Workshop: Dec 8, 2017

(Note that the main conference is sold out already. Presenters of accepted workshop papers will still be able to register for the workshops.)

This workshop especially targets researchers, developers, and musicians in academia and industry in the areas of MIR, audio processing, speech processing, musical HCI, musicology, music technology, music entertainment, and composition.

Invited Speakers:
Sander Dieleman (Google DeepMind)
Douglas Eck (Google Magenta)
Marco Marchini (Spotify)
Others to be decided

Panel Discussion:
Sepp Hochreiter (Johannes Kepler University Linz),
Invited speakers
Others to be decided

ML4Audio Organisation Committee:
– Hendrik Purwins, Aalborg University Copenhagen, Denmark (hpu@create.aau.dk)
– Bob L. Sturm, Queen Mary University of London, UK (b.sturm@qmul.ac.uk)
– Mark Plumbley, University of Surrey, UK (m.plumbley@surrey.ac.uk)

PROGRAM COMMITTEE:
Matthias Dorfer (Johannes Kepler University Linz)
Monika Dörfler (University of Vienna)
Shlomo Dubnov (UC San Diego)
Philippe Esling (IRCAM)
Cédric Févotte (IRIT)
Emilia Gómez (Universitat Pompeu Fabra)
Jan Larsen (Danish Technical University)
Marco Marchini (Spotify)
Rafael Ramirez (Universitat Pompeu Fabra)
Gaël Richard (TELECOM ParisTech)
Jan Schlüter (Austrian Research Institute for Artificial Intelligence)
Joan Serrà (Telefonica)
Malcolm Slaney (Google)
Gerhard Widmer (Austrian Research Institute for Artificial Intelligence)
Others to be decided