What's the problem with AI? An interview with Paris Marx
Paris Marx is a tech critic, author, podcaster, and international speaker. He hosts the podcasts ‘Tech Won’t Save Us’ and ‘System Crash’, writes the newsletter Disconnect, and is the author of Road to Nowhere. We reached out to ask his thoughts on AI: what its problems are, and why the left should take it seriously.
Thank you for taking the time to speak with us. In Ireland, we’re quite familiar with the issues raised by data centres. We have 82 data centres, which currently use about 20 per cent of our electricity. Another 14 are under construction, and 40 have planning permission. This has had an immense impact on Ireland’s resource usage. The development of AI will dramatically increase the need for these “energy vacuums”, dashing any hope of reducing energy demand and transitioning rapidly to renewables.
Can you explain to our readers why that is? What is AI and why is it so energy intensive?
AI, or artificial intelligence, is a term that’s been around since the 1950s and can describe a wide array of technologies. Over the past two years, we’ve been hearing a lot about generative AI, in reference to chatbots like ChatGPT, image generators like Midjourney, or video generators like Sora. But AI can also be something as basic as the algorithms that suggest correct spellings when you use Microsoft Word or the next word when you’re texting. Google’s search algorithm that displays suggested results based on your query is also a form of AI. Those uses are not so energy or resource intensive, but generative AI is entirely different.
Effectively, companies like OpenAI have set out to create large AI models that can theoretically be applied to virtually any task. They’re trained using immense amounts of computation — amounts that only some of the largest and most well-capitalised companies in the world tend to have access to — and almost unimaginable amounts of data scraped from the web, which can include everything from the content of news websites, books, and films posted online to the personal photos and information we’ve posted on social media platforms over the years. Companies then develop AI tools using those models, and it takes a lot of computation — and thus energy and resources — to actually generate something new based on them.
Newer models like DeepSeek (out of China) claim to have made that process more efficient, but it remains to be seen how much their advances actually reduce the resource demands of generative AI. And even then, companies like Nvidia and Microsoft have claimed that once generative AI becomes more efficient — basically, once outputs can be made with relatively less computation — demand for those tools will actually increase, meaning the total amount of computation will need to continue to rapidly expand. It’s a version of the Jevons paradox: efficiency gains lower the cost per use, and the resulting growth in use swallows the savings. That computation takes the form of hyperscale data centres — sprawling facilities filled with tens of thousands of servers that draw on large quantities of electricity and water to power and cool all those computers. The past two years have seen a significant global expansion in the number of data centres under construction — and if tech companies get their way, that will continue.
The energy needs of AI are one thing, but what about the material footprint? I think most people don’t consider how “the cloud” and all the tools we use online, including ChatGPT and other AI resources, have an actual, physical presence in the real world. How is the use and expansion of AI going to exacerbate the rush for “critical minerals”?
It’s an important question. We often focus on energy and water because they’re direct inputs into data centres, but all those servers, hard drives, graphics processing units, and other forms of computer hardware require a lot of minerals to produce. They’re also run constantly and at high intensity, so they wear out and are replaced far more frequently than a personal computer would be. Data centre companies claim they recycle some of those components, but they’re not forthcoming about what percentage actually goes through that process and stays out of the landfill.
We’re already seeing a push to expand mining operations the world over to supply the shift to renewable energy and electric vehicles, and the need for minerals to make computer components only accelerates that further. All of that mining comes with harmful effects — from local environmental pollution that affects land, animals, and nearby communities to serious health impacts for workers and those living in the vicinity of extraction sites. Those activities are often centred in the Global South, but the effort to expand mining is also placing pressure on countries in Europe and North America to weaken environmental standards for new mining projects, especially as “critical minerals” become a central feature of geopolitical conflicts.
Mining companies, tech companies, governments, and militaries are aligned on a vision of the future that requires far more extraction, regardless of the social and environmental consequences. That doesn’t mean there aren’t alternative paths we could collectively take that reduce the computation we need, the energy we demand, and, as a result, the quantity of minerals we must extract. But none of that will happen as long as the current economic paradigm governs our decision-making and constrains the futures available to us.
On the left, you hear mostly bad news about AI: the impact on artists, the lack of transparency, concerns around ownership and the content it produces, and of course the ecological impacts. What impact does the current ownership model have on how AI is developed? And, despite all this, are there any positive aspects to AI? Can you say something about how these uses are currently constrained, and whether there is any potential for another version of AI that operates for public good rather than private profit?
On this, I’m a bit more sceptical. Certainly you can point to AI being used in climate modelling or other potentially beneficial applications, but you also need to consider the broader consequences of AI rolling out in our present society. Dan McQuillan told me that when he wrote his book Resisting AI, he actually set out to write a book about “AI for good,” but after getting into the research, he completely changed his view. Because AI is fundamentally a technology that ranks things, including people, it’s well placed to be seized by ascendant fascist forces to provide a technological veneer to an extreme project of dismantling the state, restricting access to public services, and abusing people’s rights — particularly those of minorities. McQuillan wrote all of this well before we saw Elon Musk charge into the White House with his DOGE agency and chainsaw.
This is also not just a generative AI problem, but one with AI more broadly. Well before the advent of ChatGPT, governments had already begun rolling out algorithmic systems in social welfare, health care, visa processing, and other areas of government jurisdiction, with terrible consequences. The deployment of those tools justified discrimination and the restriction of benefits, whether it was more people of colour being denied visas because of biased training data or poorly designed systems that kicked people off their benefits, as occurred in Australia in what became known as the ‘Robodebt scandal’. It eventually resulted in a settlement of A$1.8 billion for people the algorithmic system had wrongly accused of owing debts, forcing them to prove the system was wrong instead of the agency having to prove it was right. The system was eventually terminated, but only after 443,000 people’s lives had been harmed or destroyed — and some had even been pushed to take their own lives out of desperation.
There are real human costs to these technologies, and speaking of AI for good while they’re being seized to justify and advance a fascist project feels glib, if not outright insulting. Maybe there is some positive use of AI we can imagine, but those discussions centre the technology at the expense of all the other means we have of addressing problems in society, and they assume that technologies designed to serve Silicon Valley capitalism can easily be turned around for more positive uses.