Tech corporations like to sell a utopian version of their products. OpenAI’s ChatGPT and Microsoft’s Copilot promise us a personal assistant, ready to write our emails for us, plan our daily activities or even advise us on social relationships. Meanwhile, image generators like Midjourney or Adobe’s Firefly allow us to generate images of the pope wearing a funny jacket. Suno can generate something that is supposed to pass for music, while OpenAI’s Sora can conjure up videos of anything we desire. Soon we won’t ever have to think or create again!
It’s not just Big Tech and the usual tech enthusiasts that push this narrative. More impactfully, liberal governments like Starmer’s in the UK have fully embraced the ‘AI revolution’, with Scholz following closely behind.
But some on the left also have illusions. Some believe Artificial Intelligence (AI) will ‘democratize art’, making the bold claim that being unable to draw is undemocratic. Others go as far as claiming ‘fully automated luxury space communism’ is right around the corner, thanks to AI. Arguably these opinions are fringe and rarely leave the internet. But some left-wing social movements and political parties have happily turned to image generators to create pictures of Marx as Santa, to promote demonstrations and events, and to produce the viral ‘all eyes on Rafah’ image.
Proponents of AI also point to advances in fields like medicine and materials science. It is important to note that the AI models used for this are trained specifically for that purpose, and are not the same as the mass-deployed models we are seeing today.
What are the downsides to all of this? Shouldn’t we embrace technology that promises to make our lives easier? Some tech CEOs like Elon Musk and Sam Altman like to talk about the dangers of AI. Some desperately plead for governments to regulate them, while others almost messianically claim that only they can safeguard humanity from the inevitable superintelligence they seek to create. These narratives are a convenient distraction from the actual, current impact the technology is having.
It’s time to look critically at these technologies. There are many reasons to criticize AI for its broader effects on civil society: job losses, mass surveillance, privacy concerns, military use, disinformation, loss of culture, decline of cognitive skills, and the beautifully phrased ‘enshittification of the internet’. All important. But I will discuss what makes these systems turn: the expropriation of vast amounts of data, without compensation for those who did the work in the first place; the often invisible labour behind it, frequently in the global South; and the immense energy cost associated with AI.
Data Theft
Let’s start with the basics: Generative AI needs data, and lots of it. These technologies do not write text or conjure up images because they are intelligent creations, but because they are fed with enough human-created data (written text, video, audio, images, etc.) that they can predict what word, or pixel, comes next. Feed it enough pictures of cats, and it will be able to predict what a cat looks like (though maybe with some added paws).
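For readers who want a concrete sense of what ‘predicting the next word’ means, here is a toy sketch. It is a hypothetical, drastically simplified stand-in (a word-pair counter, nothing remotely like a real large language model), but it shows the core move: learn from human-written text, then regurgitate the statistically most likely continuation.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count, for every word, which word tends to follow it in the training text.
    words = text.split()
    counts = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, word):
    # Return the most frequently observed follower of `word`, or None if unseen.
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Tiny stand-in for the scraped internet the article describes.
corpus = "the cat sat on the mat the cat ran"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "cat" — it follows "the" most often
```

The point of the sketch: nothing here ‘understands’ cats; the output is entirely a function of the human-made text that went in, which is exactly why the question of where that data comes from matters.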
This data comes from somewhere. To get it, tech corporations scrape the entire internet. All of Wikipedia, Reddit posts, any image that was ever uploaded to Instagram – you name it, and it has probably been used to train an AI model, without the creator’s consent.
Arguably X (Twitter) posts are not hard labour. But it is more complicated when larger works are affected: books, academic publications, music, visual art, all expropriated by Big Tech without the creators’ consent, and fed into a machine to woo investors and boost stock valuations. This uncompensated labour is used to make a few CEOs and shareholders incredibly rich. It is also being used to automate away creators’ means of income. A graphic designer promoting their work on Instagram to find clients sees that work used without consent by a corporation that explicitly states its desire to remove the need for graphic designers in the future.
This is theft, plain and simple.
Copyright & Resistance
In theory, this is where copyright comes in. An author who owns the rights to their book theoretically has a say over what happens to it, including whether to allow AI models to be trained on their work. However, AI companies argue that anything is fair game. After all, if they respected copyright law they would never get the amounts of data needed to train their models.
Does this mean that stronger regulations enforcing copyright law can protect creators from having their work stolen? Not necessarily. Aside from concerns with copyright itself, which include corporations hoarding large amounts of intellectual property (which they are often happily selling off to AI companies), western governments are happy to go along with the argument that tech companies are making. According to them, Generative AI will benefit society as a whole. At most, the EU forces companies to include an ‘opt out’ option, where individuals can refuse to have their work trained on, but they must specifically state this. Artists and activists, by contrast, have generally been calling for opt-in, where explicit permission is required before a work can be used as training data.
Various organizations and artists have filed lawsuits against tech corporations over copyright violations. Several major cases are in court, mostly in the US. Examples range from the New York Times suing OpenAI over training on its entire archive, to a series of lawsuits filed by digital artists, to high-profile authors like George R.R. Martin suing over having his entire Game of Thrones series used as training data. Perhaps these cases can set a precedent and rein in AI companies a little.
Another way that artists in particular have protected their work is ‘data poisoning’. Researchers at the University of Chicago created two tools that apply an invisible filter to a work, rendering it useless to AI companies. While Glaze protects against style mimicry, Nightshade is more aggressive: it tricks the AI model into seeing something other than what is actually depicted. Feed enough of these poisoned images to an AI model, and it risks collapse.
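To give a feel for the mechanism, here is a hypothetical illustration, not the actual Glaze or Nightshade algorithms (those compute carefully optimized adversarial perturbations against real models, where this sketch uses plain random noise). The point it demonstrates is only the premise these tools rest on: a per-pixel change far too small for a human to notice still alters the numbers a model trains on.

```python
import numpy as np

def perturb(image, strength=2, seed=0):
    # Add a tiny, visually negligible change to every pixel value.
    # Real poisoning tools optimize this perturbation to mislead a target
    # model; the random noise here merely shows how small the change is.
    rng = np.random.default_rng(seed)
    noise = rng.integers(-strength, strength + 1, size=image.shape)
    return np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)

image = np.full((4, 4, 3), 128, dtype=np.uint8)  # stand-in for a real artwork
poisoned = perturb(image)

# No pixel moved by more than `strength` out of 255 — invisible to a viewer,
# yet every training-time pixel value the scraper ingests has been altered.
max_change = int(np.abs(poisoned.astype(int) - image.astype(int)).max())
print(max_change)
```

The real tools’ contribution is choosing *which* imperceptible change to make so that the model mislabels the content, which is why random noise alone would not poison anything.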
Increasing pressure on governments to regulate AI training is the ‘Statement on AI training’, with the following core demand: “The unlicensed use of creative works for training Generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”
So far, this statement has been signed by over 40,000 people, including major artists like Kate Bush, Robert Smith, Kazuo Ishiguro and Julianne Moore. Signatories to the statement also include many cultural organizations, among them the SAG-AFTRA union, whose 2023 strike listed protection from AI as a core demand.
Neocolonialism
The theft of data by Big Tech has caused widespread scandal. Much less attention is given to workers in the global South tasked with ‘labelling’ data. For an AI model to work, it needs to know what it is looking at, which means a human has to label it first. Humans also sort out all of the bad, offensive and harmful content included in the indiscriminate scraping of the internet.
To do this, AI companies employ subcontractors. An investigation by TIME in 2023 revealed that OpenAI relied on a self-declared ‘ethical AI company’ called Sama. It paid workers in Kenya between 1.32 and 2 dollars an hour to sift through endless amounts of content, working nine-hour days at a grueling pace. This content contained large amounts of harmful material, like images of sexual abuse (including of minors), which workers had to label textually. Of the four workers interviewed by TIME, all reported being traumatized by the experience, with one experiencing recurring visions as a result of the abuse witnessed. In May 2024, Kenyan data workers wrote an open letter to Joe Biden, calling on the US to hold its companies accountable for the labor violations they are committing abroad.
Kenya is just one example. Similar reports have come out of Venezuela, where data labellers earn on average just 90 cents an hour. Meanwhile in Lebanon, Syrian refugees are often the ones doing this work, as strict work permit laws exclude them from the regular job market. They too report harsh conditions and low pay, with one worker detailing 14 days’ work to afford just 10 days of food. Nor is this limited to data labelling. Generative AI models require large amounts of computing power, with Graphics Processing Units (GPUs) especially sought after to train ever larger models. These computer chips require rare earth minerals, often dug up in war-torn areas where both corporations and rebel groups commit grave human rights violations at mining sites. Just recently, the Democratic Republic of Congo filed a criminal complaint against Apple over the use of blood minerals in its supply chain.
Climate Breakdown
Finally, the data centers required to run AI models consume growing amounts of energy in a time of accelerating climate breakdown. A single ChatGPT query consumes 10 times the energy of a Google search, equivalent to running a light bulb for 20 minutes. Microsoft’s energy usage is currently 29% higher than in 2020, with the company officially dropping its (already dubious) claim of being carbon neutral. Google’s energy usage has increased by 48% since 2019, something it attributes to its investment in Generative AI. However, research by The Guardian has shown that emissions by data centers might be 662% higher than the official figures from tech companies.
Meanwhile, the International Energy Agency estimates that in 2022, at the start of the current AI boom, data centers (used for more than just AI) accounted for around 2% of all global emissions. It expects that by 2026, data centers’ energy consumption will have grown by between 35% and 128%, an increase equivalent at the upper end to the annual energy consumption of Germany. Wells Fargo estimates that energy usage for AI will increase by a staggering 550% by 2026.
Country-level estimates are just as stark: in technology hub Ireland, data centers are projected to use 35% of the country’s electricity by 2026 (up from a current 21%). Fear of rolling blackouts led the Irish energy grid operator to forbid construction of new data centers near Dublin until 2028. Already, Ireland’s data centers use more energy than all its urban homes combined, half of it from fossil fuel sources. For the US, data center power usage could account for 9% to 25% of national energy usage by 2030 (currently 4%). This not only drives up local energy prices, but also puts immense pressure on existing energy infrastructure.
It’s not just energy either: water usage has also increased dramatically, to cool servers. By 2027, AI-related water use might be six times the water use of Denmark. Yet a quarter of the world’s population lacks access to clean water and sanitation, and global water demand is expected to be 40% greater than available supply by 2030. Moreover, many data centers are located, or are being built, in water-scarce areas, like Latin America and the southern US. While Uruguay was experiencing a drought in 2023 that caused its capital Montevideo to run out of drinking water, Google announced plans for a new data center consuming 7.6 million liters of fresh water per day, sparking mass protests. Google adjusted its plans and promised to rely on air conditioning instead. But activists and academics continue to criticize the project over carbon emissions and hazardous waste disposal.
Is it worth it?
Large-scale AI models are fundamentally unnecessary. While they might occasionally allow us to perform tasks faster (albeit with huge drawbacks), they rarely allow us to do anything new. We can already write books, draw pictures, or make movies. Is speeding up that process, when it’s built on even more suffering, truly worth it? Our planet is literally on fire and we’ve hit 1.5°C of global warming. Maybe we shouldn’t waste vast amounts of resources on something we don’t need.
It is obvious that Big Tech doesn’t have the best interests of people and planet at heart. That becomes ever clearer as their involvement with far-right figures like Donald Trump and Alice Weidel grows. Instead of buying into their products and promises, let’s stand in solidarity with those negatively affected by the tech. Although a unified mass movement is still lacking, public sentiment is rapidly turning against AI. We can start by supporting labour struggles and uplifting artists and creators who reject Generative AI.
Imagine what good we could do with the hundreds of billions being poured into Generative AI instead!
So, next time you are tempted to ask ChatGPT a question or want a picture of a cat with too many paws, realize that it, like all large-scale AI models, is built on the logic of capitalist accumulation — expropriation, exploitation, and the relentless extraction of the natural resource base.