Last week I found out that ChatGPT Plus gives me access to Sora video generation. I hopped over to the Explore section and, before I knew it, had scrolled through the most absurd, awful, beautiful, and artistic images and videos for half an hour with a smile on my face.
I love exploring cutting-edge technology. I am still amazed by the fact that people like you and me have created such advanced algorithms, and that they’ll only get better as we move forward. It’s an achievement of humanity, and it enables people who have creative ideas, but not the skill set to express them, to do so anyway. I use ChatGPT daily for work and personal projects, and it has changed the way I work.
But right when I was about to hit the “generate” button, I stopped. This tool is packaged so conveniently that it is easy to forget what is happening behind the scenes. Let’s break down what it means to generate an AI video, and the moral dilemma that comes with it.
Screenshot of Sora.
Generative AI: behind the scenes
Generative AI is a class of machine-learning software that can create new content, such as images, videos, or text, based on patterns learned from existing data. The approach is not new, but it gained traction in the last couple of years after some remarkable breakthroughs in model architectures and training efficiency. These enabled the quality of the generated content to cross the threshold of “good enough” for many use cases, such as art, music, and even software development.
Creating GenAI models such as LLMs (Large Language Models) and diffusion models usually consists of three phases of compute:
- The training phase
- The fine-tuning phase (e.g. RLHF, Reinforcement Learning from Human Feedback)
- The inference phase (for each prompt)
The models are trained on massive datasets, which are fed through the algorithm running on powerful computers in data centers. These data centers are often sited in regions with cheap energy, such as Texas in the US. Their energy consumption is enormous, and the carbon footprint of training and running these models is significant.
Energy usage in perspective
To put the energy consumption of these tools in perspective, let’s look at some numbers.
The training phase of OpenAI’s GPT-3 model, for example, is estimated¹ to have consumed around 1,287 MWh of energy and produced ~550 tons of CO2 emissions. This is equivalent to the annual energy consumption of about 514 Dutch households² or the carbon footprint of about 163 gasoline cars driving for a year³.
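As a sanity check, we can reproduce these comparisons with some back-of-envelope arithmetic. The per-household and per-car figures below are assumptions derived from the article’s own ratios, not official statistics:

```python
# Back-of-envelope check of the GPT-3 training figures cited above.
# The household and car constants are assumed values, chosen to be
# roughly consistent with the cited Dutch averages.

TRAINING_ENERGY_KWH = 1_287_000   # ~1287 MWh (estimated, source 1)
TRAINING_CO2_TONS = 550           # ~550 t CO2 (estimated, source 1)

KWH_PER_DUTCH_HOUSEHOLD_YEAR = 2_500  # assumed avg. electricity use
CO2_TONS_PER_CAR_YEAR = 3.37          # assumed avg. gas-car emissions

households = TRAINING_ENERGY_KWH / KWH_PER_DUTCH_HOUSEHOLD_YEAR
cars = TRAINING_CO2_TONS / CO2_TONS_PER_CAR_YEAR

print(f"~{households:.0f} households")  # roughly matches the ~514 cited
print(f"~{cars:.0f} cars")              # roughly matches the ~163 cited
```

The point of the exercise is not precision, but scale: one training run equals hundreds of household-years of electricity.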
That’s just the training phase. A single inference is much less energy-intensive, but depending on the popularity of the model, inference can surpass training in total energy consumption. Google has estimated that 60% of its AI workload energy goes into inference, versus 40% for training⁴. For example, according to researcher Jesse Dodge, one query to ChatGPT uses approximately as much electricity as lighting one light bulb for about 20 minutes⁵. Now imagine that figure for an image or even a video generation: orders of magnitude higher.
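We can turn the light-bulb comparison into a rough per-query number. The bulb wattage is an assumption (the source does not specify a bulb type), so treat this as a sketch, not a measurement:

```python
# Rough per-query energy estimate from the "light bulb for 20 minutes"
# comparison. BULB_WATTS is an assumption: a modern LED bulb.

BULB_WATTS = 10    # assumed LED bulb; an incandescent would be ~6x higher
MINUTES_ON = 20

wh_per_query = BULB_WATTS * MINUTES_ON / 60            # watt-hours per query
kwh_per_million_queries = wh_per_query * 1_000_000 / 1000

print(f"~{wh_per_query:.1f} Wh per query")
print(f"~{kwh_per_million_queries:.0f} kWh per million queries")
```

Even with a frugal LED assumption, a million chat queries land in the same ballpark as a household-year of electricity, which is why inference at scale adds up.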
The moral dilemma
These numbers are good to internalize and can help put the effect of these AI tools in perspective. Actions have consequences. But does this mean I should feel guilty about using them? Am I really contributing to the climate crisis by using AI tools? How do I decide when my usage of AI is justified, and when it is not?
Although I think that we cannot answer these questions with a simple yes or no, and that it is a personal choice, let me be brave and try to answer them anyway. Also, let’s not forget that there are benefits to the development of AI too.
Should I feel guilty?
No. Guilt is, in my opinion, not a productive emotion in this situation. I am not breaking the law, nor am I bringing direct harm to anyone. But that doesn’t mean I should now generate a video every hour just for fun, just because I can. Replace guilt with feeling responsible. With feeling in control. With feeling conscious.
Am I contributing to the climate crisis?
Yes. Not only with AI usage, but with many decisions we make throughout the day. We have looked at the numbers and cannot deny them. There will always be debate about the scale and urgency of climate impact, but the fact that our actions have an effect is undeniable.
When is AI usage justified?
This is a personal answer and might be different for you. But for me, it’s about feeling like my AI usage has a net-positive contribution to the world. I say “feeling” on purpose here: I’m not claiming to perform a moral analysis on every prompt I enter. But it helps to have some kind of gut check to steer my behavior.
And how do I know my usage is net-positive? I don’t. But educating ourselves on the matter, like we are doing in this blog, will at least help us make realistic considerations.
To make this answer less vague and more concrete, here are some examples:
What I would not use AI for
- Automating content creation on social media and creating brainrot
- Proxying all my coworkers’ questions to ChatGPT and sending back the AI answers as if they were my own
- Retrieving basic facts that can just as easily be found on Google
- Generating images or videos of things that already exist in the real world
What I would use AI for
- Learning about subjects that are difficult to find information on online
- Getting help with very specific problems, in work or personal life
- Learning about the AI tool itself
- Finding inspiration when I get stuck in ideation
AI-generated image of a steak that probably did not have to exist. Source
Disclaimer and discussion
Although I’ve tried to be as accurate as possible with the numbers in this article, much of the data consists of estimates. Companies are not transparent about their energy consumption and carbon footprint, as it might hurt how customers perceive them. What can we do, as users of AI tools, to push for more transparency and control?
Sources
1. https://news.climate.columbia.edu/2023/06/09/ais-growing-carbon-footprint
2. https://www.cbs.nl/nl-nl/cijfers/detail/81528NED
3. https://yourmoby.nl/mobiliteit-in-nederland/uitstoot-verminderen/autos
4. https://research.google/blog/good-news-about-the-carbon-footprint-of-machine-learning-training/
5. https://www.npr.org/2024/07/12/g-s1-9545/ai-brings-soaring-emissions-for-google-and-microsoft-a-major-contributor-to-climate-change