Techno-optimist chants to unleash the power of AI innovation and throw off the burden of regulation were given stark oxygen by JD Vance shortly after he became US Vice President, when he told an audience at the AI Action Summit in Paris that regulation favoured only the lazy incumbents: the old corporate behemoths who wanted to close the door of innovation behind them.
I disagree with techno-optimists about regulation for exactly the same reason they reject it: because our current form of capitalism is the most remarkably creative force in the universe. It combines the extraordinary biological cognitive adaptations of Homo sapiens – coevolved with rich human culture – with the scientific method, and with fierce competition among nation states, corporations, and meritocratised citizens, all left as unhindered as possible thanks to the protection of freedoms by democracy and the rule of law. The 20th century accelerated this with powerful infrastructures: the internet, vibrant urban precincts, global trade and transport. Add in AI, a technology that is itself extraordinarily creatively fertile, and the curve continues up, up and away into the 21st century.
This is not a force that requires further unleashing. As with an accelerating car, maximum acceleration might be great for a drag race or a teenage fantasy, but it is not the best setting for a family drive to the countryside. Having created a creative behemoth so powerful that it can destroy worlds, transfer unimagined economic and political power into the hands of elites, and generally cause chaos for ordinary citizens, we need to radically reconfigure its interface so that its controls serve socially beneficial ends. This will not happen through unfettered market freedoms. Look no further than the history of big tobacco.
Depending on how you look at it, the domain of creative arts might be seen as either the least or the most significant place where this matters. Like most of my peers growing up in 1980s and 1990s London, I developed a progressivist mindset rooted in modernism: art seemed entirely about innovation. What are you doing that’s new, different or challenging? Where are things heading, and how is technology taking us there? Even traditional forms were infused with progressivism, from the development of transpositional complexity through equal temperament to the surprising notion that Stravinsky’s rhythms in The Rite of Spring were revolutionary. The seemingly ancient art of folk music was transformed by the introduction of microphones on voices, and jazz’s visionary assertion of African American cultural excellence, echoed again in hip-hop and Detroit techno, combined sensations of technological, political, aesthetic and cultural progress.
With today’s multiple dilemmas regarding AI’s impact on cultural industries, we might simply see history repeating and conclude that allowing innovation to run free is the lesson of art history. However, there are good reasons to see AI as quite different. Firstly, it is not “just another tool”. We are edging closer to systems that express complex agencies embedded as corporate actors within sociocultural systems, as well as systems that operate at such a scale that they can both swamp and predict human action. Secondly, it promotes a platformised existence. According to the logic of deep learning, AI power comes with scale, meaning it benefits large corporations. Even without AI, a move towards creative practices happening on privately owned platform fiefdoms beyond the control of their users is worthy of scrutiny. Thirdly, AI, at its worst, is parasitic on creative cultures, sucking up every piece of creative work it can access without providing recompense for the value it extracts. Recent rulings on copyright and AI training data (legal victories by Anthropic and Meta) show there is no established protection against this. I do not think it is unreasonable to say, as the judge in the recent Anthropic ruling suggested, that AI acts somewhat like people, learning from and finding “inspiration” in its training data. However, such a radical unlevelling of the playing field, the associated shift of power, and the potential disruption of the skills and vibrancy of creative cultures should be viewed as a special case of exceptionally unfair use. The ruling may have been legally defensible, but the outcome is grossly wrong.
None of these three factors has any precedent in the history of human creativity. But each is a big-picture issue. In a close-up view, it makes sense to consider creative AI as simply another creative technology: instead of learning to draw or to edit images in Photoshop or Illustrator, someone will learn image prompt engineering, in-painting, and model fine-tuning; the technology will enhance the artist’s ability so that one person can produce more in less time, across a broader range of technical disciplines – we will all become art directors – but it is still creation as we know it, with a human at the helm, operating a series of advanced tools. Photography and synthesisers did the same.
In this new world, vibrant creative subcultures will still thrive making and sharing AI art, perhaps making a good living from it. AI art can be as valid a form of cultural expression as any other. Old-timers can tut about hip-hop producers lacking talent, taking shortcuts, and stealing from earlier material, while a new generation recognises and rewards what they clearly see as great art within the new rules of a new paradigm. AI can be just the same.
This is the close-up view, which holds while the technology is still at a nascent stage. Viewed through this human-centred AI approach, technology developers are creating the tools people need, addressing creative interface challenges such as generative accuracy, consistency, and refinement, as well as technical issues related to computational efficiency and data management, making smarter AI models that offer more user freedom and control. Often, this tends towards the simple formula outlined by Ben Shneiderman in his book Human-Centered AI: that the best systems combine powerful computer autonomy with equally strong human autonomy. Current AI systems provide significant computer autonomy, but this is futile until they are better designed to empower people and keep them in control. While innovative AI companies make ambitious claims about “democratising” creative expression through a streamlined route to artistic results, their tools often seem to produce a “dullocratising” effect – dampening those qualities that most enable expression in a creative interface: freedom and control. The frontier of AI progress is irregular and unpredictable, but it is entirely possible that a sweet spot can be identified, where creative AI’s controllability and generative power are optimally balanced.
Again, this is all business-as-usual creative-technological change. My generation of music-makers was frenzied in its exploration of the sampler, the quintessential technology of 90s jungle music, with which drum breaks were sped up and chopped about into astounding sonic mosaics. A documentary on that scene revealed how open it was to all. If you got hold of some basic kit, affordable on a modest salary, read the manuals, and spent a bit of time imitating your peers, you were away: no years of conservatory training needed, just talent and imagination. In that revolutionary zeal, you can imagine how unbothered we were that drum machines and synthesisers might have been putting live musicians out of business, and that samplers, even more so, allowed pilfering from music’s creative history. There was no other way to produce this incredible sound. The new generation will, without doubt, achieve equally amazing things with AI.
But an easy mistake would be to assume that, while all this may be true, there is nothing else concerning about creative AI. While the close-up view sees AI super-powering a new generation of artists, we need to monitor how such technologies – staggeringly autonomous, all-seeing, platform-based, and culturally extractive – serve other actors: a group of corporations and their investors who may outwardly value creative cultures, but who are wedded to the Silicon Valley ethos of disruption, which prioritises its masters’ profit and power.
At this macro scale, creative AI threatens many changes that are unlike anything that has come before. Perhaps most alarming is the potential closing-off of the map of cultural content, with the world of artistic expression not only saturated but claimed by these new, powerful agents. They hold a giant map of cultural space in their hands, while individual artists have only an intuitive awareness of their immediate locale. Like colonial powers, these actors are bringing vast knowledge resources to divide that cultural map into territories, closing it off to more localised individual expression. The risk is that our freedom of expression, and the cultural value that comes with it, is shut out as the map becomes overpopulated with corporate activity, just as real public spaces can become unnavigable under private encroachment. In practical terms, imagine being unable to produce something seemingly original without it either being “inadmissible” on the platforms you depend upon to communicate with the world, or immediately (subtly, indirectly) co-opted by cultural analytics giants (which always seem a step ahead).
Similarly, we can anticipate an increasing capability for corporate power to game the complex systems of attribution. Our cultural space is powered by economic flows of royalties that have transformative potential for the lives and livelihoods of creators, but often seem opaque and out of reach. Even in a world where generative AI plays fair and pays royalties to the creators of its training data (an outcome now dwindling in likelihood after Meta’s and Anthropic’s copyright successes), the technical impossibility of fairly attributing sources of “inspiration” (something copyright does not account for) points to royalty configurations that are further than ever from fairly representing creative efforts, talents or identities.
A logical next step is the ability to predict or second-guess cultural flows. The modelling of creativity and of cultural variation, organisation, stability, and change may conceivably converge on a point where a computationally powerful corporation could spawn 20 compelling variations of a TikTok trend, run the numbers on an emerging urban subculture and flood the market with imitations, or develop any number of other tricks to stay ahead of cultural trends. Just as mortgage, mobile phone and insurance deals contain tricky terms and conditions we cannot understand, giving large corporations a clear power advantage in private agreements, small cultural actors are vulnerable to disempowerment by giant, knowledge-wielding competitors. This is one thing if it simply means large corporations direct more revenue their way (like insurance or phone companies), but quite another if they become fully empowered to “steer culture”, or even gain the ability to undermine natural, healthy cultural vibrancy and communal expression. Ultimately, since cultural expression is political expression, this has implications for democracy and participatory citizenship that are already beginning to emerge.
Over-personalisation has the potential to compound this issue. The gradual rise of personalised content provision, the vanguard of corporate AI’s influence on culture in the form of content recommendation and feed scheduling, is already having well-documented negative social effects, from misinformation bubbles to extremism. The voices in our heads have always been those of our friends, family, teachers and colleagues, plus a couple of more sinister entities: historically, big organised religions, and in the 20th century, big powerful states. Into the 21st century, corporations have become a new dominant voice, building direct channels to each of us, adaptive to each of us. As this expands through the platformisation of our lives, it becomes more entrenched and covers more areas. With or without AI, the platformisation of creative production (not just dissemination) is on its way, bringing ever closer corporate intrusion into our cultural interaction.
Lastly, even where all actors are well-intentioned, AI can create endless bull-in-a-glass-shop effects. It is easy to construct a narrative of conflict between nefarious and noble actors as each of the above themes develops. Sometimes, however, bad actions are more negligent than nefarious. In the wider world of AI adoption, consider the office worker who uses AI to turn 5 bullet points into a 5,000-word document, only for a fellow worker to reduce that document back to 5 bullet points: a form of anti-data-compression that is embarrassingly dysfunctional and wasteful, given the productivity claims of AI. Effects of creative AI that are anti-social only through collective negligence, not through any particular malice or greed, include the flooding of our attention with advertising, the excessive and wasteful use of resources, and the increase of noise over signal in human communication and connection. I call these “bull-in-a-glass-shop effects”: there is nothing malicious about leading your pet bull into the glass shop, but in hindsight, the day did not work out as planned. It might be right to consider these effects as a form of pollution: the unintended negative consequences of an otherwise positive activity.
These are all profound but real risks that run alongside the relatively benign experience of AI as just another routine revolution in cultural production. They may sound alarmist; I’m not claiming they will happen, but by imagining them we stand a better chance of curtailing the negative social effects of a technology before they occur, not years later (as with cars, tobacco and social media). We are often presented with the question: is AI “just another tool” or is it a “collaborator”? I’ve previously entertained the latter as a philosophical possibility, but now I’m wary of it: collaborators have shared or compatible intentions, which are hard to ascertain through a veil of complex terms and conditions. I might be excited about machines with complex intentions, but scared of who, in reality, they serve.
There is now a crucial need for people from various backgrounds in the cultural and creative industries, from technologists to curators and cultural policymakers, to consider and articulate how real or imaginary these risks are, what impacts on creative cultures they may actually have, and how to develop entirely new policy frameworks that lessen the negative and promote the positive. One of the most significant challenges is to clearly determine what is good and what is bad, a task that should not be left solely to techno-optimists and accelerationists, but nor, I admit, to academic or artistic liberal elites with ideological visions of righteous culture (and a financial stake in the outcome).
Many, but not all, of the myriad AI ethics principles established by nation states and corporations at least set a precedent for taking this action. Notably, in my home country, Australia, the National AI Ethics Principles list, as number one, the protection of human, societal, and environmental wellbeing. It is encouraging that this clear identification of “societal wellbeing” provides a platform to address exactly the issues mentioned above. However, it is important to recognise that many AI ethics frameworks do not prioritise social considerations, reducing ethics to a realm of individual transactions, i.e., operating within a neo-liberal framework.
But societal wellbeing remains wide open to interpretation: the interpretation is there for the taking by whoever can best articulate what matters about creative production and expression. For now, the narrative is being led by corporations claiming the beneficial, democratising effects of creative AI technology, and unfortunately the counter from concerned artists has fallen into an “AI copyright trap”: it depends on a principle that is also individualist, at best very carefully balanced between restrictions and freedoms, and at worst already grossly unfair and made to serve corporate interests. The conflation of scales that this article is concerned with – between just-another-technology-revolution at the scale of individual expression and a complete transformation of the creative sphere at the scale of global political economics – represents a similar trap: we comprehend the individual continuity, but not the cultural transformation.
As for the three things that are fundamentally new about the creative AI revolution – the agentic capability of AI, the platformisation of creative expression, and the extraction of cultural history – none of them need be inevitable. We can design AI that leaves human creativity in control, that is platform-free, or that is in some way ‘ethically’ trained (or, more likely, ethically applied, as many systems are). An important part of the solution lies in pluralistic approaches to AI that show how things can be done differently. But such alternative visions are most useful if they empower cultural or political will to curtail socially negative effects – like those described above – arising elsewhere, especially in the mainstream. This is much like how many cities have embraced a resurgence of cycling and walking over motorised transport through structured change. The cultural and creative sector is far from developing a plan for what this looks like, and more than ever it needs independent research engaged in public dialogue to do so.
Recommended citation: BOWN, Oliver. Predicting culture. Mosaic [online], July 2025, no. 204. ISSN: 1696-3296. DOI: https://doi.org/10.7238/m.n204.2509