
AI art and uncanniness

AI model training and copyright infringement

When it comes to AI art (or “art”), it is hard to find a nuanced position that respects creative workers’ labour rights, free expression, copyright law’s vital exceptions and limitations, and aesthetics.

I am, on balance, opposed to AI art, but there are some important caveats to that position. For starters, as a matter of law, I think it is unequivocally wrong to say that scraping works and training a model with them infringes copyright. This is not a moral position (I will get to that in a second) but rather a technical one. Break down the steps of training a model, and it quickly becomes apparent why it is technically wrong to call this a copyright infringement. First, the act of making transient copies of works – even billions of works – is unequivocally fair use. Unless you think search engines and the Internet Archive should not exist, you should support scraping at scale. And unless you think that Facebook should be allowed to use the law to block projects like Ad Observer, which gathers samples of paid political disinformation, you should support scraping at scale even when the site being scraped objects (at least sometimes).

After making transient copies of lots of works, the next step in AI training is subjecting them to mathematical analysis. Again, this is not a copyright violation.

Making quantitative observations about works is a longstanding, respected and important tool for criticism, analysis, archiving and new acts of creation. Measuring the steady contraction of the vocabulary in successive Agatha Christie novels turns out to offer a fascinating window into her dementia.
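
To get a feel for how modest this kind of measurement is, here is a minimal sketch in Python: it computes a simple vocabulary-richness score (the type-token ratio) for a series of texts. The file names and the measure itself are illustrative assumptions, not the specific method used in the Christie research.

```python
# Minimal sketch: tracking vocabulary richness across a series of texts.
# The file names are hypothetical; the type-token ratio is one simple,
# illustrative measure, not the method used in the Christie studies.
import re

def type_token_ratio(text: str) -> float:
    """Distinct words (types) divided by total words (tokens)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

for path in ["christie_1920.txt", "christie_1972.txt"]:  # hypothetical files
    with open(path, encoding="utf-8") as f:
        print(path, round(type_token_ratio(f.read()), 4))
```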

Programmatic analysis of scraped online speech is also critical to the burgeoning formal analyses of the language spoken by minorities, producing a vibrant account of the rigorous grammar of dialects that have long been dismissed as “slang”.

Since 1988, the UCL Survey of English Usage has maintained its International Corpus of English project, and scholars have plumbed its depths to draw important conclusions about the wide variety of Englishes spoken worldwide, especially in postcolonial English-speaking countries.

The final step in training a model is publishing the conclusions of the quantitative analysis of the temporarily copied documents as software code. Code itself is a form of expressive speech – and that expressivity is key to the fight for privacy because the fact that code is speech limits how governments can censor software.

Are models infringing? Well, they certainly can be. In some cases, it is clear that models “memorized” some of the data in their training set, making the fair use, transient copy into an infringing, permanent one. That is generally considered the result of a programming error, and it could certainly be prevented (say, by comparing the model to the training data and removing any memorizations that appear).
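
To illustrate what that post-processing might look like, here is a hedged sketch: it prompts a model with the opening of each training item and flags any item whose continuation the model reproduces near-verbatim. The generate function is a hypothetical stand-in for whatever text-generation call a given model exposes; real deduplication pipelines are considerably more sophisticated.

```python
# Rough sketch of a memorization check, under the assumptions above.
from typing import Callable, Iterable, List

def find_memorized(
    generate: Callable[[str], str],   # hypothetical: prompt -> continuation
    training_items: Iterable[str],
    prefix_len: int = 50,
    match_len: int = 100,
) -> List[str]:
    flagged = []
    for item in training_items:
        prefix, rest = item[:prefix_len], item[prefix_len:]
        # If the model completes the prefix with the training text itself,
        # treat the item as memorized and mark it for removal.
        if rest and generate(prefix).startswith(rest[:match_len]):
            flagged.append(item)
    return flagged
```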

Not every seeming act of memorization is a memorization, though. While specific models vary widely, the amount of data from each training item retained by the model is minimal. For example, Midjourney retains about one byte of information from each image in its training data. If we are talking about a typical low-resolution web image of, say, 300 kB, that would be one three-hundred-thousandth (about 0.00033%) of the original image.
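
For the curious, the back-of-the-envelope arithmetic behind that figure, taking the one-byte-per-image estimate above at face value:

```python
# Back-of-the-envelope arithmetic for the figure above.
image_bytes = 300_000    # a ~300 kB web image
retained_bytes = 1       # the rough per-image estimate quoted above
fraction = retained_bytes / image_bytes
print(f"{fraction:.7f} of the image, i.e. {fraction:.5%}")  # 0.0000033, i.e. 0.00033%
```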

Typically, in copyright discussions, when one work contains 0.00033% of another, we do not even raise the question of fair use. Rather, we dismiss the use as de minimis (short for de minimis non curat lex or “The law does not concern itself with trifles”).

Busting someone who takes 0.00033% of your work for copyright infringement is like swearing out a trespassing complaint against someone because the edge of their shoe touched one blade of grass on your lawn.

But some works or elements of work appear online many times. For example, the Getty Images watermark appears on millions of similar images of people standing on red carpets and runways, so a model that takes even an infinitesimal sample of each one of those works might still end up being able to produce a whole, recognizable Getty Images watermark.

The same is true for wire-service articles or other widely syndicated texts: dozens or even hundreds of copies of these works might be in training data, resulting in the memorization of long passages from them. This might be infringing (we are getting into some gnarly, unprecedented territory here), but again, even if it is, it would not be a big hardship for model makers to post-process their models by comparing them to the training set, deleting any inadvertent memorizations. Even if the resulting model had zero memorizations, this would do nothing to alleviate the (legitimate) concerns of creative workers about the creation and use of these models.

The AI art debate

So, here is the first nuance in the AI art debate: as a technical matter, training a model is not a copyright infringement. Creative workers who hope to use copyright law to prevent AI from changing the creative labour market will likely be very disappointed in court. But copyright law is not a fixed, eternal entity. We write new copyright laws all the time. If current copyright law does not prevent the creation of models, what about a future copyright law?

Well, sure, that is a possibility. The first thing to consider is the possible collateral damage of such a law. The legal space for scraping enables a wide range of scholarly, archival, organizational and critical purposes. We would have to be very careful not to inadvertently ban, say, the scraping of a politician’s campaign website, lest we enable liars to run for office and renege on their promises while insisting that they never made those promises in the first place. We would not want to abolish search engines or stop creators from scraping their own work off sites that are going away or changing their terms of service.

Now, onto quantitative analysis: counting words and measuring pixels are not activities you should need permission to perform, with or without a computer, even if the person whose words or pixels you are counting does not want you to. You should be able to look as hard as you want at the pixels in Kate Middleton’s family photos or track the rise and fall of the Oxford comma, and you should not need anyone’s permission to do so.

Finally, there is the issue of publishing the model. There are plenty of published mathematical analyses of large corpora that are useful and unobjectionable. I love a good Google N-gram.

And large language models fill all kinds of important niches, like the Human Rights Data Analysis Group’s LLM-based work helping the Innocence Project New Orleans (IPNO) extract data from wrongful conviction case files.

AI training and creative jobs protection

So that is nuance number two: if we decide to make a new copyright law, we must be sure that we do not accidentally crush these beneficial activities that do not undermine artistic labour markets.

This brings me to the most important point: passing a new copyright law that requires permission to train an AI will not help creative workers get paid or protect our jobs.

Getty Images pays photographers the least it can get away with. Publishers’ contracts have swollen from inches long to miles long – ghastly rights grabs that take everything from writers but still shift legal risks onto them.

Publishers like the New York Times bitterly oppose their writers’ unions.

These large corporations already control the copyrights to gigantic amounts of training data, and they have the means, motive and opportunity to train models with it. They are engaged in this activity right now.

Big games studios are already acting as though there were a copyright in training data: they require their voice actors to begin every recording session with the words “I hereby grant permission to train an AI with my voice” – and if you do not like it, you can hit the bricks.

If you are a creative worker hoping to pay your bills, it does not matter whether your wages are eroded by a model produced without paying your employer for the right to do so or whether your employer got to double dip by selling your work to an AI company to train a model and then used that model to fire you or erode your wages.

Individual creative workers rarely have any bargaining leverage over the corporations that license our copyrights. That is why copyright’s 40-year expansion (in duration, scope and statutory damages) has resulted in larger, more profitable entertainment companies and lower payments – in real terms and as a share of the income generated by their work – for creative workers.

As Rebecca Giblin and I write in our book Chokepoint Capitalism, giving creative workers more rights to bargain with giant corporations that control access to our audiences is like giving your bullied schoolkid extra lunch money – it is just a roundabout way of transferring that money to the bullies.

The music industry precedent

There is a historical precedent for this struggle: the fight over music sampling. Forty years ago, it was not clear whether sampling required a copyright license, and early hip-hop artists took samples without permission, the way a horn player might drop a couple of bars of a well-known song into a solo.

Many artists were rightfully furious over this. The “heritage acts” (the music industry’s euphemism for “Black people”) who were most sampled had been given very bad deals and had seen very little of the fortunes generated by their creative labour. Many of them were desperately poor despite having made millions for their labels. When other musicians started making money off that work, they got mad.

In the following decades, the sampling system changed, partly through court cases and partly through the commercial terms set by the Big Three labels: Sony, Warner and Universal, who control 70% of all music recordings. Today, you generally cannot sample without signing up to one of the Big Three (they are reluctant to deal with indies), and that means taking their standard deal, which is very bad and also signs away your right to control your samples.

So, a musician who wants to sample has to sign the bad terms offered by one of the Big Three labels and then hand $500 out of their advance to one of them for the sample license. That $500 typically does not go to another artist – it goes to the label, which shares it with its executives and investors. This system makes every artist poorer.

But it gets worse. Putting a price on samples changes the kind of music that can be economically viable. If you wanted to clear all the samples on an album like Public Enemy’s It Takes a Nation of Millions To Hold Us Back or the Beastie Boys’ Paul’s Boutique, you would have to sell every CD for $150, just to break even.

Sampling licenses do not just make every artist financially worse off; they also prevent the creation of music of the sort that millions of people enjoy. But it gets even worse. Some older, sample-heavy music cannot be cleared at all. Most of De La Soul’s catalogue was unavailable for 15 years, and even though some of their seminal music finally returned in March 2023, the band’s frontman, Trugoy the Dove, did not live to see it – he died in February 2023.

Reflections on AI art

This is the third nuance: even if we can craft a model-banning copyright system that does not catch many dolphins in its tuna net, it could still make artists poorer. Back when sampling started, it was not clear whether it would ever be considered artistically important. Early sampling was crude and experimental. Musicians who trained for years to master an instrument were dismissive of the idea that clicking a mouse was “making music”. Today, most of us do not question the idea that sampling can produce meaningful art – even musicians who believe in licensing samples.

Having lived through that era, I am prepared to believe that maybe I will look back on AI “art” and say, “damn, I cannot believe I never thought that could be real art”.

But I would not give odds on it.

I do not like AI art. I find it anodyne and boring. As Henry Farrell writes, it is uncanny and not in a good way.

Farrell likens the work produced by AIs to the movement of an Ouija board’s planchette, something that “seems to have a life of its own, even though its motion is a collective side-effect of the motions of the people whose fingers lightly rest on top of it.” This is “spooky-action-at-a-close-up”, transforming “collective inputs … into apparently quite specific outputs that are not the intended creation of any conscious mind.”

Look, art is irrational in the sense that it speaks to us at some non-rational or sub-rational level. Caring about the tribulations of imaginary people or being fascinated by pictures of things that do not exist (or that are not even recognizable) makes no sense. There is a way in which all art is like an optical illusion for our cognition, an imaginary thing that captures us the way a real thing might.

But art is amazing. Making art and experiencing art makes us feel big, numinous, irreducible emotions. Making art keeps me sane. Experiencing art is a precondition for all the joy in my life. Having spent most of my life as a working artist, I have come to the conclusion that the reason for this is that art transmits an approximation of some big, numinous, irreducible emotion from an artist’s mind to our own. That is it: that is why art is amazing.

AI does not have a mind. It does not have an intention. The aesthetic choices made by AIs are not choices, they are averages. As Farrell writes, “LLM art sometimes seems to communicate a message, as art does, but it is unclear where that message comes from, or what it means. If it has any meaning at all, it is a meaning that does not stem from organizing intention” (the emphasis is mine).

Farrell cites Mark Fisher’s The Weird and the Eerie, which defines weird in easy-to-understand terms (“that which does not belong”) but really grapples with eerie.

For Fisher, eeriness is “when there is something present where there should be nothing, or there is nothing present when there should be something.” AI art produces the seeming of intention without intending anything. It appears to be an agent, but it has no agency. It is eerie.

Fisher talks about capitalism as eerie. Capital is “conjured out of nothing” but “exerts more influence than any allegedly substantial entity.” The “invisible hand” shapes our lives more than any person. The invisible hand is fucking eerie. Capitalism is a system in which insubstantial non-things – corporations – appear to act with intention, often at odds with the intentions of the human beings carrying out those actions.

So, will AI art ever be art? I do not know. There is a long tradition of using random, irrational or impersonal inputs as the starting point for human acts of artistic creativity. Think of divination.

Or Brian Eno’s Oblique Strategies.

I love making my little collages for this blog, though I would not call them important art. Nevertheless, piecing together bits of other people’s work can make fantastic, important work of historical note.

Even though painstakingly cutting out tiny elements from others’ images can be a meditative and educational experience, I do not think that using tiny scissors or the lasso tool is what defines the “art” in collage. If you can automate some of this process, it could still be art.

Conclusions

Here is what I do know. Creating an individually bargainable copyright over training will not improve the material conditions of artists’ lives – all it will do is change the relative shares of the value we create, shifting some of that value from tech companies that hate us and want us to starve to entertainment companies that hate us and want us to starve.

As an artist, I am foursquare against anything that stands in the way of making art. As an artistic worker, I am entirely committed to things that help workers get a fair share of the money their work creates, feed their families and pay their rent.

I think today’s AI art is bad, and I think tomorrow’s AI art will probably be bad, but even if you disagree (with either proposition), I hope you will agree that we should be focused on making sure art is legal to make and that artists get paid for it.

Just because copyright will not fix the creative labour market, it does not follow that nothing will. If we are worried about labour issues, we can look to labour law to improve our conditions. That is what Hollywood writers did in their groundbreaking 2023 strike.

Now, the writers had an advantage: they could engage in “sectoral bargaining”, where a union bargains with all the major employers at once. That is illegal in nearly every other kind of labour market. But if we are willing to entertain the possibility of getting a new copyright law passed (that will not make artists better off), why not the possibility of passing a new labour law (that will)? Sure, our bosses will not lobby alongside us for more labour protection the way they would for more copyright (think for a moment about what that says about who benefits from copyright versus labour law expansion).

But all workers benefit from expanded labour protection. Rather than going to Congress alongside our bosses from the studios and labels and publishers to demand more copyright, we could go to Congress alongside every kind of worker, from fast-food cashiers to publishing assistants to truck drivers, to demand the right to sectoral bargaining. That is a hell of a coalition.

And if we do want to tinker with copyright to change the way training works, let us look at collective licensing, which cannot be bargained away, rather than individual rights that can be confiscated at the entrance to our publisher, label or studio’s offices. These collective licenses have been a huge success in protecting creative workers.

Then there is copyright’s most extreme wild card: The US Copyright Office has repeatedly stated that works made by AIs are not eligible for copyright, which is the exclusive purview of works of human authorship. This has been affirmed by courts.

Neither AI companies nor entertainment companies will pay creative workers if they do not have to. But for any company contemplating selling an AI-generated work, the fact that it is born in the public domain presents a substantial hurdle because anyone else is free to take that work and sell it or give it away.

Whether or not AI “art” will ever be good art is not what our bosses are thinking about when they pay for AI licenses: instead, they are calculating that they have so much market power that they can sell whatever slop the AI makes and pay less for the AI license than they would pay for a human artist’s work. As is the case in every industry, AI cannot do an artist’s job, but an AI salesman can convince an artist’s boss to fire the creative worker and replace them with AI.

They do not care if it is slop – they just care about their bottom line. A studio executive who cancels a widely anticipated film prior to its release to get a tax credit is not thinking about artistic integrity. They care about one thing: money. The fact that AI works can be freely copied, sold or given away may not mean much to a creative worker who actually makes their own art, but I assure you, it is the only thing that matters to our bosses.


Recommended citation: DOCTOROW, Cory. AI art and uncanniness. Mosaic [online], September 2024, no. 201. ISSN: 1696-3296. DOI: https://doi.org/10.7238/m.n201.2402
