Kilian Vieth-Ditlmann, Deputy Head of Policy & Advocacy at AlgorithmWatch © Studio Monbijou CC-BY-4.0

14 March 2024

"There is a great need for more tools and aids to finally make AI sustainable in practice."

Despite all the technological breakthroughs of the last five years that can be attributed to the development of artificial intelligence, we must not forget the ecological, social and economic costs that AI solutions often entail. In view of the growing number of AI systems, a discussion about how AI can be made more sustainable is urgently needed.

The aim of sustAIn - a joint project between AlgorithmWatch, the Institute for Ecological Economy Research (Institut für ökologische Wirtschaftsforschung) and the DAI-Labor - was to develop tools for assessing the sustainability of AI systems in practice. The project, which ran from November 2020 to October 2023, sought to raise organizations' awareness of the sustainability of their AI systems and to provide them with tools to make those systems more sustainable in the future.

Now that the project is complete, we caught up with Kilian Vieth-Ditlmann, Deputy Head of Policy & Advocacy at AlgorithmWatch, to find out more about the results of the project, the response to the tool, the impact of the EU's AI Act and the challenges and hurdles for sustainable AI solutions.

Hello Mr. Vieth-Ditlmann, thank you very much for taking the time. Is the topic of sustainability in the field of AI discussed openly enough and addressed by companies that use or develop it?

Unfortunately, no. Artificial intelligence systems often work only by exploiting resources. Nevertheless, they are currently often given the benefit of the doubt: the technology, it is assumed, will sort everything out. In fact, AI has great social potential, but its use also entails risks and harmful consequences. This is not discussed openly enough.

In the course of the sustAIn project, you developed a self-assessment tool to evaluate the sustainability of AI systems in practice more precisely. How has the tool been received overall?

The self-assessment tool is attracting a lot of interest, even though it is only available in German so far. We receive feedback from companies as well as from research and academia.

What feedback have you received from companies and what conclusions do you draw from it?

Among other things, companies want to understand the evaluation logic of the tool, make suggestions for additional questions and would like an English version. We were a little surprised at how much feedback we received; it clearly shows that there is a great need for more tools and aids to finally make AI sustainable in practice. Is AI a way to tackle the climate crisis, or an environmental sin worse than flying? Now that we are experiencing an AI hype, this question is becoming increasingly urgent. Accordingly, we will continue this work and complement it with the development of a comprehensive AI impact assessment. It will also become even more important in the future to make data on environmental risks transparent.

Although it was not possible to develop a Sustainable AI Index, you did draw up a comprehensive set of criteria and indicators for assessing sustainability. What does this look like, and can you give us a practical example?

Sustainable AI respects planetary boundaries, does not reinforce problematic economic dynamics and does not jeopardize social cohesion. In the sustAIn project, we have defined 13 criteria on this basis that organizations should take into account in order to produce and use AI more sustainably.

Ecological sustainability continues to attract the most attention. AI systems are the opposite of ecologically sustainable if their use consumes more resources than it conserves or regenerates. In addition to the material consumption of the hardware, their immense energy consumption and the associated emissions are an obstacle on the path to ecological sustainability.

A big question mark hangs over the application phase of AI systems, for example. In technical jargon, this phase is called "inference". The development and training of AI models are very complex processes that consume a relatively large amount of energy. At the same time, however, the number of such processes is limited, and they are largely completed at a foreseeable point in time. Each individual application, or inference, of an AI system, on the other hand, generally consumes relatively little energy. However, inference sometimes takes place extremely frequently.
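To see why this matters, it helps to compare the one-off cost of training against the cumulative cost of inference. The short Python sketch below does exactly that with purely hypothetical figures; the training energy, per-inference energy and request volume are assumptions chosen for illustration, not values reported by the sustAIn project or measured from any real system.

    # Back-of-the-envelope comparison of one-off training energy versus
    # cumulative inference energy. All figures are illustrative assumptions,
    # not measurements from the sustAIn project or any real system.

    TRAINING_ENERGY_KWH = 1_300_000    # assumed one-off cost of training a large model
    ENERGY_PER_INFERENCE_KWH = 0.0003  # assumed energy per single inference request
    INFERENCES_PER_DAY = 50_000_000    # assumed daily request volume

    daily_inference_kwh = ENERGY_PER_INFERENCE_KWH * INFERENCES_PER_DAY
    breakeven_days = TRAINING_ENERGY_KWH / daily_inference_kwh

    print(f"Inference energy per day: {daily_inference_kwh:,.0f} kWh")
    print(f"Days until cumulative inference exceeds training: {breakeven_days:.0f}")

Under these assumptions, the accumulated inference energy overtakes the one-off training cost after roughly three months, which is why the footprint of a heavily used system ends up being dominated by the application phase rather than by training.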

At the end of 2022, researchers from Facebook AI stated in a scientific paper that trillions of inferences take place in Facebook's data centers every day, a number that had doubled over the preceding three years. The significant increase in inferences has also led to the expansion of specialized infrastructure: between the beginning of 2018 and mid-2019, the number of servers specifically designed for inference in Facebook's data centers increased 2.5-fold. For a company like Facebook, this mass of inferences is generated by recommendation and ranking algorithms, for example. These algorithms run every time one of the almost three billion Facebook users worldwide accesses the platform and views content in their newsfeed. Other typical applications that contribute to high inference numbers on online platforms are image classification, object recognition in images, and translation and speech recognition services based on large language models.
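The scale of that figure is easy to sanity-check: if every newsfeed load triggers the ranking model once per candidate post, the numbers multiply up very quickly. The sketch below uses the user count cited above together with two assumed parameters (visits per user per day and posts scored per visit), both of which are illustrative guesses rather than reported values.

    # Rough order-of-magnitude check of the "trillions of inferences per day"
    # figure. Only the user count comes from the interview; sessions per day
    # and items ranked per newsfeed load are assumptions for illustration.

    users = 3_000_000_000            # almost three billion users, as cited above
    sessions_per_day = 5             # assumed average platform visits per user per day
    items_ranked_per_session = 100   # assumed candidate posts scored per newsfeed load

    inferences_per_day = users * sessions_per_day * items_ranked_per_session
    print(f"{inferences_per_day:.1e} ranking inferences per day")  # prints 1.5e+12

Even with modest assumptions, recommendation and ranking alone land in the trillions-per-day range, consistent with the figure from the Facebook AI paper.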

The greenhouse gas emissions of AI solutions are probably the most obvious indicator for assessing sustainability. Which of the other areas you analyzed would you highlight? Which findings were surprising?

The social sustainability of AI is also extremely important. Socially sustainable development and application of artificial intelligence focuses on people, society and fair living conditions. This includes transparency and companies taking responsibility. There should also be an awareness of fairness in the development and application of AI. End users, affected persons and other stakeholders should be involved in the AI design process. AI planning and development teams should be diverse and interdisciplinary. The application context of an AI system must also be taken into account during its development. AI systems should therefore be adaptable and retrainable if they are used in different cultural contexts; in particular, local knowledge and data sets should be used.

This also applies, for example, to large text- or image-generating models whose output suggests that we are dealing with facts or realistic images. Ultimately, however, it is only realistic-looking content that spreads certain cultural codes. So the surprise is that we are still a long way from sustainable AI, and that we have even less data about it than we initially expected.

Both the success of an AI system and its evaluation stand or fall with the available data. That data situation is often very opaque, for example with regard to the energy consumption of AI solutions. What political steps are needed to bring the issue more to the fore?

There is hardly any information available on the energy consumption of AI systems and the emissions they cause, which makes it difficult to develop political solutions to reduce those emissions. In the European Climate Law of 2021, the Council and the European Parliament set the target of making the EU climate neutral by 2050. However, there was initially no concrete political response to the environmental harms caused by the production of AI. When the European Commission published its draft AI regulation (Artificial Intelligence Act/AI Act) in April 2021, it contained no mandatory environmental requirements for providers or users of AI models.

The final version of the European AI Regulation now provides for first important steps towards environmental protection. The environment is one of the legal interests explicitly named as worthy of protection. Standardized reporting and documentation procedures for the efficient use of resources by AI systems must now be drawn up. These procedures should help to reduce the energy and other resource consumption of high-risk AI systems during their life cycle and to promote the energy-efficient development of general-purpose AI (GPAI) models. This will enable new research into, and insights about, the environmental sustainability of AI.

The EU's AI Act, which the German government now also intends to approve, names the environmental risks of artificial intelligence for the first time. Do you think this is enough, or is it just a first step towards transparency?

Under the AI Regulation, providers of GPAI models that are trained on large amounts of data and have a high energy consumption must document this energy consumption precisely. The Commission had completely neglected this aspect in its first draft, which is why research organizations repeatedly called for the energy consumption of AI models to be made transparent. The Commission now has the task of developing a suitable methodology for assessing energy consumption. GPAI models that pose a systemic risk must meet more stringent requirements: for example, their providers must establish internal risk management measures and testing procedures. These measures and procedures must be approved by a dedicated authority to ensure that providers comply with the requirements. These are all important steps, but of course only a start, and a basis on which further regulatory approaches can be developed in the future.

Thank you very much for the interview.