On Large Language Models

Perhaps you would expect this piece to be called ‘On Artificial Intelligence’. It is not, because what we currently call an ‘Artificial Intelligence’ is neither of those things. It is not intelligent… and it is all too real.

This article is one from my viscera, a neo-Luddite manifesto, my scream into the void of complacency.

This article will examine Large Language Models (LLMs) as they are already prevalent and are increasingly being adopted in everyday life. The same arguments can, however, be applied to algorithms trained on and producing images, video, and sound. You may recognise these tools, including ChatGPT, Claude, DeepSeek, Dall-E, and Midjourney, among others.

The Arguments For

In the interests of a thorough discussion, I will first lay out the arguments for LLMs as a creative tool. My own bias should be noted, however, as you read the following.

I was speaking to some colleagues during rehearsal recently, all three of whom advocated for the use of ChatGPT and similar tools in refining or drafting text. It saves time when writing applications, they reasoned. It makes it feasible to apply for things they otherwise couldn’t submit, they explained. It takes that one resource which is so scarce for freelancers - time - and returns some of it, they elaborated. As my friends said this, there was a discomfort in my marrow (and I voiced it), but it was hard to begrudge them these luxuries.

Elsewhere, the argument goes: LLMs are the latest in a lineage of technological development. The printing press, radio, and television all disrupted the existing information and cultural landscapes, and despite all the pearl-clutching each one expanded the diversity of stories and storytelling¹. LLMs are a tool, this line of reasoning goes, like any other. They increase the accessibility of creativity - anyone can make, and human attention can now be completely devoted to the finer crafts².

There is hope (and emerging evidence³) that LLMs can be “a net good”⁴, with applications ranging from productivity increases to environmental harm reduction and early intervention in medicine. Some proponents even suggest that this is the utopian technological advance that will finally allow humans to significantly reduce their labour.

The Arguments Against

For the purposes of this article I will focus on the ethical implications of these technologies, but I would be remiss to omit concerns about sustainability, accuracy, and equity, which I will briefly canvass.

There is emerging evidence that LLMs and other generative models could have a significant environmental impact due to their high carbon intensity. For instance, Google is investing heavily in these models, and the company’s 2023 emissions were 48% higher than its 2019 emissions⁵. Forecasts are uncertain, but electricity demand from the data centres that power LLMs could triple between 2023 and 2028⁶. Simon Clark provides a balanced summary of the literature⁷.

Turning to the issue of accuracy, LLMs get things wrong all the time - as often as 37% of the time, even by friendly metrics⁸. Because they are trained on the canon of online material, they inherit our biases and blind spots. And as the uptake of LLMs continues - and as generated material is used to train new and existing models - we risk losing all meaningful information on the internet.

In pop culture, humanity has projected human instincts onto depictions of ‘AI’, which misses this broader point. LLMs do not pose the risk of authoritarian control of all information - they risk fracturing the information space until there is no shared reality. In the recent film Mission: Impossible - The Final Reckoning, the protagonist faces off against The Entity, an artificial intelligence trained on vast amounts of online material and intent on devouring the remainder. What this film and its 2023 predecessor allude to (but fail to articulate) is the threat that The Entity poses to reality, to truth. The Entity - like HAL 9000 and Ultron before it - depicts valid fears about human leadership, but has failed to contribute to a culture that is clear-eyed about the real impacts of generative models.

As artists and independents, we use the internet all the time. Whether you’re writing fan fiction, sharing video on social platforms, or researching other artists, truth should matter to you.

Examining the economic model behind this new tech obsession, LLMs appear to be free for now - that’s a huge part of what my independent artist peers are drawn to. But how long until the massive technology companies behind these tools start monetising them? Like Netflix, Spotify, and Uber before them⁹, OpenAI (the company behind ChatGPT) and its peers will face no meaningful economic or regulatory obstacle to charging whatever price they like for a product that is seen as indispensable. But don’t take my word for it - Sam Altman recently conceded that the not-for-profit structure is akin to “a millstone around the company’s neck”¹⁰. It’s all part of the Silicon Valley ‘move fast and break things’ ethos - OpenAI, with its lofty promises¹¹, was initially founded and co-chaired by Altman and Elon Musk. Oh, and of course Musk and the other billionaires running these models, or anyone with access to them, have altered the models for their own commercial or political purposes¹² - and may continue to do so.

Instead of ‘democratising’ art through a new product, what if we considered addressing the underlying inequalities that prevent people from making art? Karla Ortiz (quoted in designer Matt Corral’s thoughtful piece on generative art) observed that:

If we’re going to talk about what really stops people from pursuing art, it’s those economic issues. It’s issues like a lack of universal health care, few meaningful grants for artists, and the fact that most of us are just a missed paycheck away from homelessness and hunger. This doesn’t solve that.¹³


The truth is: LLMs do not think. They are a roided-up version of the predictive text on your phone, trained on the stolen work of innumerable artists.

Perhaps I should be using only dispassionate, economic language to make my case. Perhaps emotional appeals play into the tech-bro narrative that we are a bunch of whiny entitled artists. But I am emotional, and I do not apologise for saying this to my fellow artists: You should be enraged! Every word, image and video you have published online is being consumed and regurgitated by these algorithms. Your article for the school newspaper, your Facebook comment welcoming your baby niece to the world, your advocacy for the causes you believe in. Even these words that I have spent weeks crafting for you will soon be shredded, tamped down into a great congealed mass, and then spat out to erect a ghoulish papier-mâché paragraph, a hollow shell void of meaning that some unsuspecting user will take as gospel. It sickens me to think of this and it should repulse you too. We often make art and craft language to be shared, yes, but not to be scraped from our canvasses by a machine and sold as intelligence!

In Conclusion (Inconclusive)

LLMs represent such a perversion to me because - unlike Uber or Spotify, which pose a threat to a single profession or medium - they exploit the class of working artists as a whole. They take a thriving ecosystem of independent artists and turn it into a field of resources to be extracted.

It seems likely we have already lost. So many of the tools we use in our day-to-day freelance work are already offering new ‘AI’ services or rebranding existing features as ‘AI’: Squarespace, Mailchimp, Photoshop, Canva, Podbean, Gmail, and Google search (in the latter’s case, through an option that appears at the top of the page and cannot be disabled). A recent poll¹⁴ found mixed attitudes toward LLMs¹⁵, but significantly increased support compared to 2024¹⁶. The ever-prescient Arundhati Roy has scant suggestions for hope:

Perhaps we, even the most privileged among us, are just a group of redundant humans gathered here with an arcane interest in language generated by fellow redundants.¹⁷

Like Roy, I can offer no clear path forward. I would ask you to consider boycotting LLMs and their incestuous siblings in the worlds of image, sound, and video. This is both an ethical stand to take and a defence against your data being directly harvested by the companies themselves. As for the rest of your work, consider where you can return to person-to-person contact, live art, and analogue methods. Have discussions with your colleagues about this debate. Support (and hold to account) emerging models trained only on consenting artists’ work. Continue to open up your medium and your practice to those who feel like outsiders. State that your art doesn’t use LLMs (and state when it does). Seek out artists doing the same, and be open to expending more time or money to do this. Stare the robots down across the table and go all in on community.

In my mission statement for this blog, I promised to welcome equivocation over dogma. I still aim to do so, and I have reservations about sending such a ‘strongly worded letter’ into the world. I hope that I have at least laid out some potential positives about LLMs. It is possible that I will return to this subject in the future with muted concerns, but right now all I can do is consider the situation in front of me and acknowledge its impact on my life and art. I promise that I have thought deeply about these issues and, whether or not we ultimately agree, I hope you can commit to doing the same.

¹ McCullough, J, A more moderate take on AI art, [online video], 30 June 2025, https://www.youtube.com/watch?v=dutpBSKj8JY, retrieved 13 July 2025, 5:51 and 7:19

² Corral, M, ‘The harm & hypocrisy of AI art’, blog post, 2024, section ‘Democratising art’ paragraphs 1-2, https://www.corralldesign.com/writing/ai-harm-hypocrisy, retrieved 24 June 2025

³ Clark, S, Should I feel guilty using AI?, [online video], 2 April 2025, https://youtu.be/5sFBySzNIX0?si=_iLneX5M1z0K3vOH, retrieved 1 July 2025, 11:57

⁴ Altman, S, interview with Thompson, B, ‘An Interview with OpenAI CEO Sam Altman About Building a Consumer Tech Company’, Stratechery 20 March 2025, https://stratechery.com/2025/an-interview-with-openai-ceo-sam-altman-about-building-a-consumer-tech-company, retrieved 14 July 2025

⁵ Clark, S, Should I feel guilty using AI?, [online video], 2 April 2025, https://youtu.be/5sFBySzNIX0?si=_iLneX5M1z0K3vOH, retrieved 1 July 2025, 8:39

⁶ Graham, E, Fulghum, N & Altieri, K, ‘Global Electricity Review 2025’, Ember, 8 April 2025, https://ember-energy.org/app/uploads/2025/04/Report-Global-Electricity-Review-2025.pdf, retrieved 16 July 2025, page 35

⁷ Clark, S, Should I feel guilty using AI?, [online video], 2 April 2025, https://youtu.be/5sFBySzNIX0?si=_iLneX5M1z0K3vOH, retrieved 1 July 2025

⁸ Fiddean, E, ‘OpenAI claims its newest chatbot GPT-4.5 should 'hallucinate less'. How is that measured?’, ABC Science, 20 March 2025, paragraphs 11-16, https://www.abc.net.au/news/science/2025-03-20/openai-generative-ai-hallucinations-chatbot-gpt45-test/105041122, retrieved 24 June 2025

⁹ Funnell, A, ‘Uber and Netflix run at a loss — part of a growth-before-profit ideology that's spreading’, ABC Radio National, 8 June 2019, paragraphs 14-18, https://www.abc.net.au/news/2019-06-08/uber-netflix-run-at-a-loss-and-they-dont-care-/11185434, retrieved 24 June 2025

¹⁰ Altman, S, interview with Thompson, B, ‘An Interview with OpenAI CEO Sam Altman About Building a Consumer Tech Company’, Stratechery 20 March 2025, https://stratechery.com/2025/an-interview-with-openai-ceo-sam-altman-about-building-a-consumer-tech-company, retrieved 14 July 2025

¹¹ OpenAI, ‘OpenAI Charter’, 9 April 2018, paragraphs 3-4, https://web.archive.org/web/20230714043611/https://openai.com/charter, retrieved (via Wayback Machine) 24 June 2025

¹² Capoot, A & Kolodny, L, ‘Musk’s Grok AI chatbot says it “appears that I was instructed” to talk about “white genocide”’, CNBC, 15 May 2025, paragraphs 1-3, https://www.cnbc.com/2025/05/15/grok-white-genocide-elon-musk.html, retrieved 13 July 2025

¹³ Ortiz, K in Corral, M, ‘The harm & hypocrisy of AI art’, blog post, 2024, section ‘The fight back’ paragraph 5, https://www.corralldesign.com/writing/ai-harm-hypocrisy, retrieved 24 June 2025

¹⁴ This data should be treated with caution, as the poll was commissioned by Google and some of the questions offer false binaries to respondents.

¹⁵ Jackson, C, ‘Google/Ipsos Multi-Country AI Survey 2024 – Topline’, Ipsos, 14 January 2025, https://www.ipsos.com/sites/default/files/ct/news/documents/2025-01/Google%20Ipsos%20Multi%20Country%20AI%20Study%20Topline%20for%202025%20.pdf, retrieved 14 July 2025

¹⁶ Ipsos, ‘Google/Ipsos Multi-Country AI Survey 2025’, 14 January 2025, paragraphs 1-3, https://www.ipsos.com/en-us/google-ipsos-multi-country-ai-survey-2025, retrieved 14 July 2025

¹⁷ Roy, A, Azadi Freedom. Fascism. Fiction., Chicago, Illinois, Haymarket Books, 2020, page 11
