
The wild machines

At the end of 2022, the Museum of Modern Art (MoMA) in New York unveiled Unsupervised, the first piece created with artificial intelligence to be exhibited with full honours in that sanctuary of contemporary art. Refik Anadol, its creator, had trained a neural network on data from the museum’s collection and had set instructions and constraints, but from there he let an automated system generate, on its own, a flow of continuously moving images. The artist prompted. The machine performed.

For now, the terrain remains conditioned by algorithmic unpredictability. AI creation projects strive to explore the machine’s hidden possibilities, to steer the result and to control both the process and the resulting work, but the evidence confirms that artificial creativity is random and somewhat untamable.

We do not know what is going on inside the algorithmic maze. We try writing long prompts and setting the context in the hope that the machine will execute our will, but AI is not easily tamed. The machine does as it pleases. Sometimes the result delights us; often it baffles us. Or tricks us.

For some, this unpredictability can be inspiring. In professional settings, however, so much inconsistency causes perplexity. There, people ask for control, certainty, confidence… Organizations are wary of this still-volatile technology, but they also assume they have no choice but to try it and work to fit it into today’s processes. They hope the wild horse can be tamed.

Hallucinations that do not stop

They ask, for example, for an end to hallucinations, that is, to the errors these systems make, which persist despite improvements in the capacity and refinement of current language models.

A few months ago, a prompt from Professor Gary Marcus went viral: it asked ChatGPT to generate an image of an empty room with no elephant in it: “Above all, no elephant in the room.” The result could not have been more eloquent.

Figure 1. The result of the prompt. Source: Gary Marcus

In April, the World Health Organization introduced S.A.R.A.H., a virtual avatar available to advise the public 24 hours a day. However, it was sometimes wrong or gave contradictory answers to the same query.

McDonald’s cancelled an AI-managed self-service ordering project it had launched with IBM technology because of the number of order errors the system generated. It struggled to identify some customers’ accents, was confused by background noise and did not understand complex orders.

Months pass and new models emerge, but text generators still invent facts, and image generators still get the number of fingers on a human hand wrong.

Programmed to invent

It is difficult, in principle, to imagine such an unstable technology inspiring confidence. Still, the accelerated deployment of generative AI rests on the belief that this stage will soon be overcome and that it is worth diving in before it is too late.

However, several studies published throughout the year question this conviction: the stage of precision may never come.

An article published in MIT Technology Review explains that language models invent facts because they are programmed to do so. They do not look up accurate data; they generate text, images or whatever else from the statistical probabilities suggested by their training data.
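To make that idea concrete, here is a minimal sketch of what generating from statistical probabilities means, with a toy vocabulary and invented probabilities (a real model does the same over tens of thousands of possible tokens at every step):

```python
import random

# Toy illustration: a language model does not look facts up; at each
# step it samples the next token from a probability distribution.
# The vocabulary and probabilities below are invented for this example.
next_token_probs = {
    "Paris": 0.80,   # the most likely continuation
    "Lyon": 0.15,    # plausible but wrong
    "Narnia": 0.05,  # rare and clearly invented
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# "Prompt" the toy model several times: the answer varies by chance.
for _ in range(5):
    print("The capital of France is", random.choices(tokens, weights=weights)[0])
```

Run it a few times: most answers are right, some are plausible but wrong, and nothing in the mechanism itself distinguishes one from the other.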

“In AI, everything is a hallucination, but we only call it that when we notice it is wrong. The problem is that large language models are so good at what they do that what they invent looks right almost all the time. And that makes it hard to trust them.”

Another article, this one by a group of Google researchers, recognizes that much of the manipulation and disinformation generated with AI stems neither from manipulators’ bad intentions nor from explicit violations of the applications’ content or service policies. They simply take advantage of the ease with which these tools make things up. “A feature, not a bug.”

The impossible control

We want AI not to hallucinate, but we also want it to heed us, to be docile to our wishes and easy to control. We embrace AI as a tool and expect it to behave as one. In practice, however, we must feed it long lists of instructions and plenty of context data to make it adapt to what we want.

Question 1. Can AI really be controlled? Expert opinions are hesitant.

“As long as large language models are probabilistic, there will be an element of chance in what they produce. Roll 100 dice and you will get a pattern. Roll them again and you will get another one. Even if the dice, like large language models, are weighted to produce some patterns more often than others, the results will not always be identical. One error in 1,000 (or 100,000) adds up to a lot of errors when we consider how many times this technology is used every day,” says the MIT Technology Review article.
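The quote’s arithmetic is easy to make explicit. In this back-of-the-envelope sketch, both the error rate and the daily query volume are illustrative assumptions, not measured figures:

```python
# Illustrative assumptions: one wrong answer per 1,000 responses,
# and one hundred million queries per day worldwide.
error_rate = 1 / 1_000
queries_per_day = 100_000_000

expected_errors_per_day = error_rate * queries_per_day
print(f"{expected_errors_per_day:,.0f} wrong answers per day")
# prints: 100,000 wrong answers per day
```

Even a seemingly negligible failure rate, multiplied by the scale at which this technology is used, produces errors by the tens of thousands every day.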

A tool that is not a tool

We treat AI as a tool, but it may not be one, at least not as we have understood tools so far. Perhaps we are talking about something else and are still trying to relate to AI as if it were a sort of sophisticated hammer.

While we are not yet sure whether AI can be trusted the way a hammer can, big tech companies are moving towards new stages that give AI more responsibility and power. That is the path being followed, for example, with so-called artificial intelligence agents: programs with some autonomy to act on our behalf, following guidelines we have set for them. For now they handle simple actions (find me a restaurant for tomorrow night, plan a trip…), but the ambition is to address greater complexity.
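As a rough illustration, here is a minimal, hypothetical sketch of such an agent: a loop that plans a step, checks it against the guidelines we have set, and acts. Every name and the toy planner are invented for the example; no real agent framework or API is shown.

```python
# Hypothetical sketch of an AI agent acting on delegation within
# user-defined guidelines. The planner is a hard-coded stand-in for
# the model; a real agent would ask an LLM what to do next.

ALLOWED_ACTIONS = {"search_restaurants", "book_table"}  # delegated scope
MAX_STEPS = 5                                           # bounded autonomy

def plan_next_action(step: int) -> str:
    # Stand-in for the model's planning.
    plan = ["search_restaurants", "book_table", "send_payment"]
    return plan[step] if step < len(plan) else "stop"

def execute(action: str) -> str:
    # Stand-in for a tool call (web search, booking API, ...).
    return f"{action}: ok"

def run_agent(goal: str) -> None:
    print("Goal:", goal)
    for step in range(MAX_STEPS):
        action = plan_next_action(step)
        if action not in ALLOWED_ACTIONS:
            print(f"Refusing '{action}': outside the delegated scope")
            return
        print(execute(action))
    print("Stopped after", MAX_STEPS, "steps")

run_agent("find me a restaurant for tomorrow night")
```

The point of the sketch is the guard clause: the agent acts freely inside the margin we have granted and is stopped the moment it steps outside it, which is precisely the framework of autonomy discussed below.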

In a short time, bots may go from being our imperfect assistants to acting on delegation: from helping us do things to doing them directly. At this new stage, we will no longer need control. Trusting will be enough. Like Refik Anadol, we will let the machine act. Unsupervised.

Figure 2. Unsupervised, Museum of Modern Art (MoMA), New York. Source: Refik Anadol

Question 2. Will we rely on automated agents if we have not moved past the current phase of assistants’ errors and hallucinations?

It depends on the margin we are willing to grant and to tolerate. Today we are wary of the volatility of these “tools”, but our weakness in managing the world’s complexity has forced us to rely on technology like never before. Everything is too difficult for us, and our capabilities have a limit. We need help.

If humans cannot face the world ahead alone, we will have to find a solution and rely on something. Humanity has now placed its faith in technologies like AI. If these technologies are inaccurate, we will have to resign ourselves to what they can offer. Probability instead of accuracy. Error as the lesser evil. Hallucination as normality.

A tool must not fail. For an AI, it will be enough to establish a tolerable margin of action. A framework of autonomy. If they respect it, we will not mind trusting wild machines we cannot completely control. There will be no need. We will make them our assistants for complexity and let them act on our behalf within certain parameters. Perhaps they will get things right but, given the impossibility of their offering certainty, we will agree to build a world of probabilities and adapt to the whims of their randomness. We will cross our fingers and let the machine do more than us.

Documentation

DOUGLAS HEAVEN, Will (2024). “Why does AI hallucinate?”. MIT Technology Review [online]. Available at: https://www.technologyreview.com/2024/06/18/1093440/what-causes-ai-hallucinate-chatbots/

LACORT, Javier (2024). “La IA todavía no está lista para entender nuestros pedidos de comida. Un gigante como McDonald’s acaba de comprobarlo”. Xataka [online]. Available at: https://www.xataka.com/robotica-e-ia/ia-todavia-no-esta-lista-para-entender-nuestros-pedidos-comida-gigante-como-mcdonalds-acaba-comprobarlo

MAIBERG, Emanuel (2024). “Google: AI Potentially Breaking Reality Is a Feature Not a Bug”. 404 Media [online]. Available at: https://www.404media.co/google-ai-potentially-breaking-reality-is-a-feature-not-a-bug/

MARCUS, Gary (2024). “There Must be Some Misunderstanding”. Marcus on AI [online]. Available at: https://garymarcus.substack.com/p/there-must-be-some-misunderstanding

WORLD HEALTH ORGANIZATION (n.d.). “Meet S.A.R.A.H. A Smart AI Resource Assistant for Health”. World Health Organization [online]. Available at: https://www.who.int/campaigns/s-a-r-a-h


Recommended citation: ROSÉS, Joan. Les màquines salvatges. Mosaic [online], October 2024, no. 201. ISSN: 1696-3296. DOI: https://doi.org/10.7238/m.n201.2405
