“Who is the author when the machine creates?”

And how does modern copyright law apply? Those are the questions at the center of J.-M. Deltorn, “Deep creations: Intellectual property and the automata,” Front. Digit. Humanit., vol. 4, no. 3, 2017. These are interesting questions that apply not only to deep learning, but also to the numerous other applications of machinery to human creation.

Deltorn identifies three “hurdles” bearing on the difficulty of answering these questions:

  1. clarity of the “causal chain”: it’s not very clear how a real-world creation comes to be when such machines are involved.
  2. “multiplicity of interactions”: it’s not easy to enumerate who or what contributes what to such a creative machine.
  3. “new forms of interaction”: the model is operating together with an artist.

After a nice review of machine generation in a variety of arts, and a good review of the Berne Convention for the Protection of Literary and Artistic Works, Deltorn centers his discussion of the two questions above around two tasks: “style transfer” and “training data selection”.

“Style transfer” refers to separating “an artwork’s style from its subject-matter, and to subsequently transpose this style onto another object.” There are many examples. Does an artist’s selection of source material constitute enough originality that the resulting work would be protected? Deltorn argues that simply transferring the “style” of one image to an image of a person’s choice is not likely to exceed the minimum level of originality that would protect the new work. However, as the number of choices made by the artist increases, e.g., taking and mixing the styles of several sources, the originality increases.
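To give a rough sense of what such a system actually optimises (this is the objective of the well-known neural style transfer of Gatys et al.; Deltorn’s discussion is more general, so take it as an illustration rather than his formulation), the generated image x is chosen to minimise a weighted sum of a content loss against a source photograph p and a style loss against an artwork a:

    \mathcal{L}_{\text{total}}(p, a, x) = \alpha \, \mathcal{L}_{\text{content}}(p, x) + \beta \, \mathcal{L}_{\text{style}}(a, x)

The content term compares deep-network feature maps of p and x, and the style term compares Gram matrices (feature correlations) of a and x summed over layers. In the simplest case the human choices reduce to p, a, and the weights alpha and beta, which illustrates just how few decisions the “artist” actually makes.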

“Training data selection” refers to the role of the artist as “curator” of the material on which a model is trained. By the nature of training, the possibility exists that a computer-generated output will reproduce training material verbatim, and so the artist must remain vigilant against such overfitting. When overfitting is avoided, does the artist’s curation of training data constitute enough originality that the resulting work would be protected? Deltorn argues that such a role is so small a contribution that rewarding it with ownership of the endless output a machine can produce would harm the very function of copyright, which is to promote creativity: “Care should be taken, therefore, to limit copyright attribution to the creations that are indeed the locus of a ‘creative spark,’ a human one, that is, and not just the electric glint of a computational engine.”
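One simple way to exercise that vigilance in practice (my own sketch, not something Deltorn proposes; the tokenisation and the max_n threshold are arbitrary and domain-dependent) is to scan generated outputs for long verbatim matches against the training corpus:

    # A toy check for verbatim reproduction of training material: flag any
    # generated sequence that shares a long n-gram with the training data.
    def ngrams(tokens, n):
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def longest_copied_run(generated, training_corpus, max_n=20):
        """Length of the longest n-gram in `generated` that also appears
        verbatim in any work of `training_corpus` (a list of token lists)."""
        for n in range(max_n, 0, -1):
            gen = ngrams(generated, n)
            if any(gen & ngrams(work, n) for work in training_corpus):
                return n
        return 0

    # Tokens could be ABC symbols, notes, words, etc.
    training = [list("abcdefgabc"), list("xyzxyzxyz")]
    print(longest_copied_run(list("qqabcdefgq"), training))  # prints 7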

Though the article focuses on work using a specific approach to statistical machine learning, it also nicely situates itself in the wider and richer history of machine-generated art. Considering that Deltorn comes from the domain of intellectual property and not engineering, he does very well in accurately relating the principles and methods of machine learning (although at times I feel he oversells the power of these methods when it comes to creativity). The references assemble a variety of sources relevant to these issues as well. Here are some I want to explore:

  1. Bridy, A. (2012). Coding creativity: copyright and the artificially intelligent author. Stanford Technology Law Review 5: 1–28.
  2. Jacobson, W.P. (2011). Robot’s record: protecting the value of intellectual property in music when automation drives the marginal costs of music production to zero. Loyola of Los Angeles Entertainment Law Review 32: 31–46.
  3. O’Hear, A. (1995). Art and technology: an old tension. Royal Institute of Philosophy Supplements 38: 143–58. doi:10.1017/S1358246100007335
  4. Zeilinger, M. (2016). Digital art as ‘Monetised Graphics’: enforcing intellectual property on the blockchain. Philosophy & Technology 1–27. doi:10.1007/s13347-016-0243-1

HORSE2017 – Registration is open!

Registration is now open for HORSE 2017.

Submission is also still open.

Have you uncovered a “horse” in your domain?

We invite presentations exploring issues surrounding “horses” in applied machine learning. One of the most famous “horses” is the “tank detector” of early neural networks research (https://neil.fraser.name/writing/tank): after great puzzlement over its success, the system was found to just be detecting sky conditions, which happened to be confounded with the ground truth. Humans can be “horses” as well, e.g., magicians and psychics. In contrast, machine learning does not deceive on purpose, but only makes do with what little information it is fed about a problem domain. The onus is thus on a researcher to demonstrate the sanity of the resulting model; but too often it seems evaluation of applied machine learning ends with a report of the number of correct answers produced by a system, and not with uncovering how a system is producing right or wrong answers in the first place.
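To make the flavour of the problem concrete, here is a toy illustration (entirely my own, and not the original tank-detector data or model): a classifier that scores well only because a nuisance feature happens to track the labels, and falls apart the moment that confound is removed.

    # A toy "horse": the label is confounded with a nuisance feature ("brightness"),
    # so a classifier can score highly without learning anything about the real task.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    label = rng.integers(0, 2, n)                   # ground truth: tank / no tank
    brightness = label + 0.1 * rng.normal(size=n)   # confound: sunny vs. overcast
    content = rng.normal(size=(n, 10))              # features with no real signal

    X = np.column_stack([brightness, content])
    clf = LogisticRegression().fit(X[:800], label[:800])
    print("accuracy, confound present:", clf.score(X[800:], label[800:]))  # near 1.0

    X_test = X[800:].copy()
    X_test[:, 0] = rng.normal(size=200)             # break the confound at test time
    print("accuracy, confound removed:", clf.score(X_test, label[800:]))   # near 0.5

Reporting only the first number would make this system look like a great tank detector; only the second reveals the “horse”.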

We are looking for contributions to the day in the form of 20-minute talk/discussions about all things “horse”. We seek presentations from both academia and industry. Some of the deeper questions we hope to explore during the day are:

  • How can one know what their machine learnings have learned?
  • How does one know if and when the internal model of a system is “sane”, or “sane enough”?
  • Is a “general” model a “sane” model?
  • When is a “horse” just overfitting? When is it not?
  • When is it important to avoid “horses”? When is it not important?
  • How can one detect a “horse” before sending it out into the real world?
  • How can one make machine learning robust to “horses”?
  • Are “horses” more harmful to academia or to industry?
  • Is the pressure to publish fundamentally at odds with detecting “horses”?

Please submit your proposals (one page max, or two pages if you have nice figures) by July 15, 2017 to b.sturm@qmul.ac.uk. Notification will be made July 25, 2017. Registration will then be opened soon after.

Ethics: notes

I’ve had philosophy courses before. A “Philosophy of Religion” course in 1995 completely dismantled in 5 minutes my then-strong belief in the fine-tuning argument for God’s existence (the Anthropic Principle) — in fact, it wasn’t even treated as a contender since it’s so clearly flawed. A “Philosophy of Science” course in 1996 disabused me of the notion that Science is a clean and tidy pursuit directed by facts of nature. And “Women’s Studies” and “Black Studies” courses helped me see how much my views of the world, and the USA in particular, are coloured by privilege. I never did have a course in ethics, however. Now that I am reading up on the subject (because I am reflecting on “the ethics of research in music generation” … more on that below), it seems to me to be as basic to philosophy as water is to life.

Ethics, or moral philosophy, deals with how humans should behave (normative ethics), and the nature of moral judgement (metaethics). Moral philosophy strikes at the heart of what it means (or not) to say something is “good”, and how one may (or not) claim it to be “true”. These are, maybe more than any other, fundamental questions about human existence. They have remained relevant since the ancient Greeks and before. And the questions are still not settled — which may be more of a testament to the conditions of human existence and the imperfection of language, than anything to do with philosophy being a flawed method for seeking truth and testing claims.

Anyhow, over the past few weeks I have been reading about ethics in preparation to write a case study for a manuscript. I am looking at the ethical problems with work that builds statistical models of music transcriptions in order to generate more like it. Put that way, it doesn’t seem such behaviour is particularly “bad” — but this can be a problem with language. One can also describe such research as forcing wholly unqualified participants into a community. Or, exploiting a collective resource in ways that were not intended by the community. There are also arguments that the research aims may not be “bad”, but that the research could lead to “bad” things, e.g., musicians losing work, music becoming even more under-valued, etc. (There are also legal implications, but that’s a question for law and not ethics.)

So far, I have read three texts: D. Robinson and C. Garratt, “Introducing Ethics: A Graphic Guide”, 2005; S. Blackburn, “Ethics: A Very Short Introduction”, 2003; and H. J. Gensler, “Ethics: A Contemporary Introduction”, 1998. Each one has its merits. Robinson and Garratt’s illustrated guide gives a quick overview of the variety of questions in moral philosophy, and a chronological review of the many schools of thought from the Greek stoics to post-modernism. Blackburn provides a nice introduction to ethics, but casts aside the chronological approach for one addressing the skeptic’s claim that ethics is impossible (we cannot agree). Gensler’s work goes deeper than the other two texts, and shows strong insight into the varieties of views of ethics.

Gensler’s book discusses several views of ethics and metaethics. He discusses the merits and problems with the following theories of ethics:

  1. Cultural relativism: What is “good” is defined by the majority in a society. Just do that which the majority approves of.
  2. Subjectivism: What is “good” is defined by my likes and feelings. Just do that which you feel is good.
  3. Idealism: What is “good” is that which one would like were they fully informed and impartial in their concern about everyone. Just do that which such an “ideal observer” would do.
  4. Supernaturalism: What is “good” is that which God desires. Just do what God wills.
  5. Intuitionism: The term “good” is undefinable, but there are objective moral truths. Just follow your moral intuition.
  6. Emotivism: The term “good” is just an exclamation of emotion. Moral statements are not genuine truth claims. Just follow your feelings.
  7. Prescriptivism: Moral judgements aren’t literally true or false. Instead, what one “ought to do” is a prescription of what everyone should do in similar cases (universal). Use moral reasoning to emphasise consistency.
  8. Consequentialism: Do that which has the best consequences, regardless of what that is.
  9. Nonconsequentialism: Some things are bad because they are, not because of their consequences. There are certain duties everyone ought to follow.

Gensler devotes several chapters to discussing consistency principles and their many properties. He claims that consistency is fundamental to moral rationality. The “four basic consistency principles” he presents are:

  1. logicality: consistency of logic in beliefs
  2. ends-means: consistency in will, keeping the “means in harmony with our ends”
  3. conscientiousness: consistency between moral beliefs and actions and desires
  4. impartiality: consistency of evaluations in similar conditions regardless of individuals involved.

He derives the following four “derivative principles” from these:

  1. Golden rule: “Treat others only as you consent to being treated in the same situation”
  2. self-regard: “Treat yourself only as you’re willing to have others treat themselves in the same situation”
  3. future-regard: “Treat yourself (in the future) only as you’re willing to have been treated by yourself (in the past)”
  4. Universal law: “Act only as you’re willing for anyone to act in the same situation, regardless of imagined variations of time and person”

What I have really come to appreciate from this reading is how clear the pitfall of language becomes. For instance, Matthew 7:12 in the King James version of the Bible gives a version of the Golden Rule:

Therefore all things whatsoever ye would that men should do to you,
do ye even so to them: for this is the law and the prophets

As stated, this has the following problem: it permits a sadomasochist to humiliate and demoralise others. It also tells a hospital patient who wants their appendix removed to remove the surgeon’s appendix. It’s the law! (God is not a good logician, and doesn’t appreciate the distinction between the law and what is good. Why He chose flawed human language to reveal Hisself to His creation is another strange mystery.) The Golden Rule formulated by Gensler (“Treat others only as you consent to being treated in the same situation”) asks one to ponder a hypothetical situation as they are now, and not as they would be in that situation. An example Gensler gives is an adult watching a child about to stick a knife in an outlet. Imagining myself as the child, I don’t desire to be spanked (or electrocuted). So spanking is against the Biblical version of the Golden Rule unless I want to be spanked as an adult. However, knowing the situation as an adult, knowing the consequences of sticking a knife in the socket, I would then consent as a child to have the shock of being spanked rather than the shock of electricity. (I’m not entirely sure I agree with this, and looking around I can see that Gensler’s statement of the Golden Rule has its criticisms. For instance, what does he mean by “consent”?)

Anyhow, the end of the text synthesizes all the ideas in a chapter discussing abortion. Gensler argues that, in order to be consistent in one’s moral rationality, one must be against abortion. His argument goes as follows:

  1. “If you’re consistent and think abortion is normally permissible, then you’ll consent to the idea of your having been aborted in normal circumstances.”
  2. “You don’t consent to the idea of your having been aborted in normal circumstances.”
  3. Therefore, “If you’re consistent, then you won’t think that abortion is normally permissible.”
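Stripped of its content, the argument is a simple modus tollens. Writing C for “you are consistent”, P for “you think abortion is normally permissible”, and Q for “you consent to the idea of your having been aborted in normal circumstances” (my shorthand, not Gensler’s), its form is:

    (C \land P) \to Q, \qquad \neg Q \qquad \therefore \qquad \neg(C \land P), \;\text{i.e.,}\; C \to \neg P

So the logic is valid; everything hangs on whether the two premises are true.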

The first premise results from Gensler’s derivative consistency principle, the Golden rule, which he derives as follows:

  1. “If you are consistent and think that it would be all right for someone to do A to X, then you will think that it would be all right for someone to do A to you in similar circumstances.”
  2. “If you are consistent and think it will be all right for someone to do A to you in similar circumstances, then you will consent to the idea of someone doing A to you in similar circumstances.”
  3. “If you are consistent and think that it would be all right to do A to X, then you will consent to the idea of someone doing A to you in similar circumstances.”

The truth of the second premise in his argument depends on the individual. If both premises are true, then the third follows. Gensler illustrates this with an interesting hypothetical situation. Consider a sadistic pregnant woman who injects herself with a drug that will not harm her but will make the fetus blind for the rest of its life. Would I think it right, and consent to, that being done to me when I was a fetus? Now consider that the drug is one that causes death then and there instead. Would I think it right, and consent to, that being done to me when I was a fetus? If at some stage of development one’s answer to the first question is no, and the second is yes, then one is not thinking consistently. One cannot reject the blindness drug but accept the abortion drug and stay consistent.

One aspect of this argument that I find neat is that it doesn’t turn on the problematic definition of “human being”. However, it has its share of problems, and Gensler even admits it “leaves some of the details fuzzy.” A minor problem is that if I had a choice to be brought into the world under the care of a sadistic mother or to be aborted, I would definitely choose the latter because of what I know now (one of the notions that Gensler emphasises as important about his formulation of the Golden Rule). Consider instead that the mother is not a sadist, but for some reason injected herself with the blindness drug because she believes that is the best way for her child to live in the world. Would I then say this is wrong, or that I do not consent to it? Who am I to say that experiencing the world being blind is deficient or incomplete? Hence, I cannot outright reject consenting to being injected with the blindness drug.

This gets to the fuzzy meaning of “consent”, which is the fatal problem identified in Boonin-Vail, “Against the Golden Rule argument against abortion” (1997). If Gensler means “consent” literally, then how can I consent to an idea of something that has already happened in the past? If “consent” means “approving of”, then Gensler’s argument runs into the problem that there is no inconsistency if one believes something is morally permissible, but does not approve of it (for example, a legitimate President of the USA may not approve of the news but still believe the news is morally permissible). If “consent” means “morally permissible”, then the second premise becomes, “You do not find morally permissible the idea of your having been aborted in normal circumstances”, which is essentially what the argument is trying to prove. If “consent” means “desire”, then the conclusion of Gensler’s Golden Rule derivation becomes: “If you are consistent and think that it would be all right to do A to X, then you will desire the idea of someone doing A to you in similar circumstances.” This reduces Gensler’s Golden Rule to the defective form he attempts to fix, however (it does not prohibit sadomasochism).

Another problem is we can rewrite Gensler’s argument in another way:

  1. “If you’re consistent and think abortion is abnormally permissible, then you’ll consent to the idea of your having been aborted in abnormal circumstances.”
  2. “You don’t consent to the idea of your having been aborted in abnormal circumstances.”
  3. Therefore, “If you’re consistent, then you won’t think that abortion is abnormally permissible.”

The second premise is not necessarily true, and so the conclusion does not necessarily follow. It depends on what “abnormal” means. Would it be that my mother would otherwise die in childbirth? Why should I now say, “I consent to the idea of being born knowing that my mother will die because of it”? Knowing what I know now, that seems exceptionally selfish. It would have robbed my siblings of a mother, and my father of his wife. (That’s a bit utilitarian.)

Another more tortured possibility is:

  1. “If you’re consistent and think contraception is normally permissible, then you’ll consent to the idea of your having never been born in normal circumstances.”
  2. “You don’t consent to the idea of your having never been born in normal circumstances.”
  3. Therefore, “If you’re consistent, then you won’t think that contraception is normally permissible.”

How can I apply what I know now to a time and place in which I don’t exist? Hence, the second premise seems meaningless.

Anyhow, even if much of the above doesn’t apply to my forthcoming writing, it is timely reading because it has helped me realise some important things about the current situation of political discourse in the USA and the world:

  1. No one has all the answers. No one has a monopoly on morality. Every moral argument has its merits and weak points.
  2. There are good reasons why many ethical issues are unresolved, and will continue to be debated (since the time of the Greeks and before): every new participant in the conversation has to learn from the beginning, language is imperfect, and it’s easy to become distracted.
  3. Too often people take criticism of an argument as a criticism of themselves. If I perceived someone attacking me, I would assume a defensive mode. This can reduce my receptiveness to thinking critically. So, when I talk with people about moral views that I find questionable, I try to do so without invoking a personal battle.
  4. Discussing ethics is not about proving who is right or wrong, but discussing together the merits and weaknesses of moral arguments. Discussion, not “truth”, is the “end”.
  5. Looking into the assumptions and premises leading to one’s moral judgements can actually be fun but a bit scary. But it is necessary.
  6. To be a rational person, I must admit to the weak points in my moral judgements. These weak points will always exist.

folk-rnn at the QMUL Ideas Festival 2017

Last week, QMUL organised the “Ideas Festival”, which gave the staff a whole day of learning about the variety of interesting research going on within the organisation. As part of the day’s activities, I arranged for a group of us to play four tunes generated by folk-rnn. Below is the video evidence!

Here’s The Drunken Pint

Here’s X:488 (March to the Mainframe)

Here’s The Mal’s Copporim

Here’s The Glas Herry Comment

Optoly Louden, a folk-rnn original

Here’s a version of a cracking good tune generated by the first version of folk-rnn.

The verbatim transcription generated by folk-rnn is:

T: Optoly Louden
M: 6/8
L: 1/8
K: Gmaj
BdB GBd|AGA cBA|GBd gdB|AcA AGA|BdB GBd|cAG Fdc|DGB cAB|1 GAG G2A:|2 GAB G3|||:gag fgd|ege dBG|A2A FGA|BGE EDE|GAG GBd|ede gfg|edc BAG|1 EGF GBd:|2 cBG G2A|||:BdB ~c3|dBd edc|BdB g2d|BAG AGE|BdB cec|dBA ~G3|Aed edB|1 AGF G2e:|2 AGF G2D||
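For anyone who wants to hear the transcription without engraving software, here is a minimal sketch of one way to do it (this is not how the synthesis below was made): parse the ABC with the music21 library and write a MIDI file. The file name is hypothetical, an “X:1” reference-number line is assumed to have been added (ABC importers expect one), and the ~ ornaments are stripped since not every importer understands them.

    # Sketch: render the ABC transcription above as MIDI with music21.
    from music21 import converter

    with open("optoly_louden.abc") as f:      # hypothetical file holding the ABC above
        abc = f.read().replace("~", "")       # strip the ornament markers

    score = converter.parse(abc, format="abc")
    score.write("midi", fp="optoly_louden.mid")  # a crude "synthesis" via MIDI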

Here’s the dots:
[image: Optolyv01.png]

And here’s the synthesis of the above:

In my version of this tune I cut the C part. I also don’t like the endings of the A and B parts, so I remove those and just leave the alternatives to the musicians (these transcriptions are only meant to remember how a tune goes). Finally, I make a half-step change to the second note in measure 4. And that’s all! Here’s the dots of “Optoly Louden” (v1.0) with a chord progression that fits nicely on my diato:

[image: Optolyv10.png]