A Severe Lack of Situational Awareness

Dirk Songuer
6 min read · Jun 19, 2024


Recently I have been thinking about the negative externalities of the current hype around Artificial Intelligence, and I see a disconnect between the acceleration of required resources and the outcomes actually achieved.

And nothing illustrates this as well as the recent 165-page AI manifesto “Situational Awareness” by Leopold Aschenbrenner, formerly of OpenAI’s Superalignment team.

In his introduction, Aschenbrenner unironically writes:

“Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.”

In my personal view, it requires a severe lack of situational awareness to see this as desirable, given our current struggle to become net-zero societies.

Acceleration and Outcomes

Training large language models consumes a lot of resources. GPT-3, for example, is estimated to have used just under 1,300 megawatt hours (MWh) of electricity, about as much as 130 US homes consume in a year. GPT-4 is estimated to have used between 51,772 and 62,318 MWh of electricity, roughly 40 to 50 times the amount of the previous generation.
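For context, here is a back-of-envelope check of those figures. It is a minimal sketch using the estimates above plus an assumed average annual consumption of about 10 MWh per US home; none of the constants are published by OpenAI.

```python
# Rough sanity check of the training-energy estimates cited above.
# All values are third-party estimates or assumptions, not published figures.

GPT3_TRAINING_MWH = 1_300                  # estimated GPT-3 training energy
GPT4_TRAINING_MWH = (51_772, 62_318)       # estimated GPT-4 training energy range
US_HOME_ANNUAL_MWH = 10.0                  # assumed annual consumption of one US home

homes = GPT3_TRAINING_MWH / US_HOME_ANNUAL_MWH
ratio_low = GPT4_TRAINING_MWH[0] / GPT3_TRAINING_MWH
ratio_high = GPT4_TRAINING_MWH[1] / GPT3_TRAINING_MWH

print(f"GPT-3 training ≈ annual electricity of {homes:.0f} US homes")
print(f"GPT-4 training ≈ {ratio_low:.0f}x to {ratio_high:.0f}x GPT-3")
# -> about 130 homes, and roughly 40x to 48x GPT-3
```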

And it’s not just energy. GPT-4 was trained in OpenAI’s data centre in West Des Moines, Iowa. A lawsuit by local residents revealed that in July 2022, the month before OpenAI finished training GPT-4, the data centre used about 6% of the district’s water. These systems also constantly require new hardware to train and serve queries faster, meaning more electronic components produced and faster obsolescence of existing ones, adding to e-waste.

In a recent talk at Microsoft Build, Microsoft CTO Kevin Scott talked about the compute requirements for the next generation of models, hinting that they would require another huge jump in resources.

Microsoft CTO Kevin Scott on future compute requirements for LLMs

And yet, I wonder if the impact of using these models is accelerating in a similar manner. Don’t get me wrong — I do think that LLMs and generative AI in general have their use cases. But I wonder if their impact, their outcomes, their actual return on investment, are growing in a similarly exponential manner. Because I don’t think they are.

LLMs have fundamental issues that prevent them from doing specific things. You might call it hallucinating, lying, whatever, but at their core LLMs don’t have a concept or model of reality. And it is unclear whether they ever can. That is a problem. We do get more elaborate language and prettier pictures, but fundamentally, these systems are often simply wrong. And I don’t see them getting exponentially more right.

This is why AI companies now work on a veneer of sexualized and manipulative user interfaces that talk to you in a flirty, funny voice. Because why wouldn’t you want to be attracted to your iPhone? But also to distract you from the fact that these things are not really turning into a revolution.

Feeding the Imaginary Beast

In January 2020, Microsoft proudly announced their goal to be carbon negative by 2030. Since then, Microsoft’s greenhouse gas emissions have risen by about 30%, largely driven by its AI initiatives. Besides power, there are also the issues of increased fresh water consumption and e-waste from new data centres. In an interview with Bloomberg, Microsoft president Brad Smith admitted that these ESG goals are essentially off the table:

“In 2020, we unveiled what we called our carbon moonshot. That was before the explosion in artificial intelligence. So in many ways the moon is five times as far away as it was in 2020, if you just think of our own forecast for the expansion of AI and its electrical needs.”

Yes, but let’s be clear: You accelerated away from the moon, Brad. The moon didn’t suddenly move by itself. Your justification for that acceleration is that “the good AI can do for the world will outweigh its environmental impact.”

Google also pledged to “achieve net-zero emissions and 24/7 carbon-free energy by 2030” — a goal that has just become considerably harder with their focus on generative AI.

Aschenbrenner argues similarly in his manifesto, predicting AI compute clusters that will draw 1GW of power in 2026, 10GW in 2028, and 100GW in 2030. He even mentions that 10GW is roughly the power consumption of a small to medium-sized US state, while 100GW would be more than 20% of total US electricity production.
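Those numbers are easy to sanity-check. Here is a minimal sketch; the figure of roughly 4,200 TWh of annual US electricity generation is a ballpark I am supplying, not one from the manifesto.

```python
# Compare the predicted cluster power draw with average US electricity generation.
# The annual generation figure is an assumed ballpark, not from the manifesto.

US_ANNUAL_GENERATION_TWH = 4_200   # assumed annual US electricity generation
HOURS_PER_YEAR = 8_760

avg_us_power_gw = US_ANNUAL_GENERATION_TWH * 1_000 / HOURS_PER_YEAR  # TWh/yr -> average GW

for year, cluster_gw in [(2026, 1), (2028, 10), (2030, 100)]:
    share = cluster_gw / avg_us_power_gw
    print(f"{year}: {cluster_gw:>3} GW cluster ≈ {share:.1%} of average US generation")
# -> the 100 GW cluster lands at roughly 21%, in line with the ">20%" claim
```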

His justification is that:

“On our fleets of 100s of millions of GPUs by the end of the decade, we’ll be able to run a civilization of billions of them, and they will be able to “think” orders of magnitude faster than humans. They’ll be able to quickly master any domain, write trillions of lines of code, read every research paper in every scientific field ever written (they’ll be perfectly interdisciplinary!) and write new ones before you’ve gotten past the abstract of one, learn from the parallel experience of every one of its copies, gain billions of human-equivalent years of experience with some new innovation in a matter of weeks, work 100% of the time with peak energy and focus and won’t be slowed down by that one teammate who is lagging, and so on.”

Great, so nothing concrete about how such a system will help us become net-zero societies, just wild speculation. The belief is that these AI systems might be so much smarter than us that they will surely figure something out, while we just shovel natural resources into their furnaces.

Again, this hinges on faith. Faith that eventually LLMs will overcome their inherent limitation of not understanding what reality is. When Aschenbrenner writes “We are building machines that can think and reason”, it is based on a very peculiar definition of what “reason” and “thinking” mean. A definition that linguists, media theorists, anthropologists, and even computer scientists consider misleading, even dangerous.

When Sam Altman asks ChatGPT to “behave like a superintelligence, please,” it’s performative. I can ask my Skoda Roomster to please pretend to be a Lamborghini, but expecting that it then drives like one is delusional.

What if we don’t care that you might be right?

In his Parting Thoughts, Aschenbrenner asks us to consider what happens if he and all the other SF-folk are right. May I propose a different question for consideration: What if we started the discussion with whether we should burn all those gigawatts, use up all that fresh water, and generate all that e-waste, not whether we could?

Aschenbrenner pre-empts this question, as he frames the whole thing as inevitable, whether it is good or not. Thank you for this wonderful example of futures appropriation. I don’t buy it.

To create such a superintelligent AI, Aschenbrenner wants to assemble a tight alliance of individuals, companies, and democracies. Ok then, consider this:

What are the chances that all these individuals, companies, and democracies just throw their ESG initiatives out of the window and bet on this one hypothetical card as our lord and saviour?

No, seriously, consider it. Any company participating can kiss its ESG goals goodbye, just like Microsoft, just like Google, just like anybody else currently involved. States can throw out their climate goals, pretty much immediately withdrawing from the Paris Agreement and any other environmental treaty.

Meanwhile, most users aren’t even aware of the ecological footprint of using ChatGPT, Microsoft Copilot, Google Gemini, and other generative AI platforms. In fact, no company offering such services wants to say what exactly the ESG cost is. And they certainly don’t want enterprise customers to know the real numbers, or they might have to add them to their annual ESG reports.

But they should. If we agree that this is somehow desirable and inevitable, then we should be honest about it. So, here is my challenge to all organizations engaging with generative AI and these LLMs:

Analyze your usage numbers, get a proper footprint figure from your service providers (and please ask them to include training, water, and e-waste as well), then add that cost to your ESG report.
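To make the challenge concrete, here is a minimal sketch of what such an estimate could look like. Every constant and the query volume are placeholders I am supplying for illustration, not figures published by any provider; the whole point is that providers should replace them with real, audited numbers.

```python
# Hypothetical per-organization footprint estimate for generative AI usage.
# Every constant below is an assumed placeholder, not a published provider figure.

ENERGY_PER_QUERY_KWH = 0.003   # assumed inference energy per query
PUE = 1.2                      # assumed data-centre power usage effectiveness
GRID_KG_CO2_PER_KWH = 0.4      # assumed grid carbon intensity
WATER_L_PER_KWH = 1.8          # assumed cooling water per kWh of IT load

def estimate_footprint(queries_per_year: int) -> dict:
    """Rough inference-only footprint; amortized training, hardware
    manufacturing, and e-waste would still need to be added on top."""
    energy_kwh = queries_per_year * ENERGY_PER_QUERY_KWH * PUE
    return {
        "energy_mwh": energy_kwh / 1_000,
        "co2_tonnes": energy_kwh * GRID_KG_CO2_PER_KWH / 1_000,
        "water_m3": energy_kwh * WATER_L_PER_KWH / 1_000,
    }

print(estimate_footprint(queries_per_year=5_000_000))
```

Even with these made-up numbers, the exercise shows what an honest line item in an ESG report would have to contain; the real figures, including training and hardware turnover, are exactly what providers currently don’t disclose.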

If my thesis is correct, there is a point where the positive outcomes won’t be worth the resources used. And given that the stakes are our climate, I don’t think we should “move fast and break things,” but rather look at these developments from a perspective of societal desirability.

Written by Dirk Songuer

Living in Berlin / Germany, loving technology, society, good food, well designed games and this world in general. Views are mine, k?