At this year's PRIO1 - The Climate Network Climate Award in Mannheim, the focus was on a question that also shapes the profile of the Competence Center for Leadership Experience (CCLE):
How can artificial intelligence (AI) help to shape climate protection and sustainable leadership effectively - and what needs to happen at the level of behavior, organization and responsibility to achieve this?
Prof. Dr. Burkhard Schmidt represented the CCLE on a high-calibre panel alongside
- Karen Paul, Head of IT at Greenpeace, and
- Nike Klaubert, SAP.
The panel was moderated by, and embedded in, the strong network of PRIO1 - The Climate Network.
In the following, we summarize the key findings - and show what they mean in concrete terms for our students and practice partners.
1 AI alone will not solve the climate problem - behavior is the bottleneck
The discussion made it very clear:
Climate protection is no longer a knowledge problem, but an implementation and behavior problem.
From the perspective of economic and climate psychology, this means:
- People make decisions not only rationally, but under the influence of emotions, routines, social norms and incentive systems.
- Technological solutions - including AI - will only be effective if they take this reality into account.
- Whether AI ultimately reduces emissions or merely makes existing patterns more efficient depends on the goals, structures and decision-making architectures in which it is embedded.
This shifts the focus away from the question "What can AI do technically?" toward "How do we design contexts in which AI makes sustainable behavior more likely?"
2 Three core psychological perspectives of the CCLE
The panel highlighted three perspectives that also characterize the CCLE's self-image.
a) Decision architectures instead of technological folklore
Psychological and behavioral economics research shows that small changes in the decision-making environment can have major effects:
- Defaults: Preset options (e.g. sustainable delivery options, or train instead of flight as the standard choice) strongly influence behavior.
- Social norms: Information about how others act sets anchors ("best-in-class" benchmarks, team comparisons).
- Feedback & gamification: Feedback on one's CO₂ footprint, progress indicators and playful elements encourage engagement.
This is precisely where AI can come in - by making such structures dynamic and personalized. The question is not so much whether AI does something, but how we design it in a scientifically sound way.
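To make the mechanisms above concrete, here is a minimal Python sketch of a default nudge combined with CO₂ feedback and a social-norm anchor. Everything in it is an illustrative assumption: the `TravelOption` type, the CO₂ figures, and the peer statistic are invented for this example, not data from the panel.

```python
from dataclasses import dataclass

# Hypothetical travel options; names and CO2 figures are illustrative only.
@dataclass
class TravelOption:
    name: str
    co2_kg: float

OPTIONS = [TravelOption("train", 14.0), TravelOption("flight", 152.0)]

def present_options(options, default_name="train"):
    """Return the options with the sustainable choice preselected (a 'default' nudge)."""
    ordered = sorted(options, key=lambda o: o.name != default_name)
    return ordered[0], ordered  # the preselected option comes first

def feedback_message(chosen, options, peer_share_sustainable=0.68):
    """Combine CO2 feedback with a social-norm anchor (peer share is an assumed statistic)."""
    best = min(options, key=lambda o: o.co2_kg)
    saved = max(o.co2_kg for o in options) - chosen.co2_kg
    norm = f"{peer_share_sustainable:.0%} of your colleagues chose the {best.name}."
    return (f"Your trip: {chosen.co2_kg:.0f} kg CO2 "
            f"({saved:.0f} kg below the worst option). {norm}")

default, shown = present_options(OPTIONS)
print(default.name)                      # the train is preselected
print(feedback_message(default, OPTIONS))
```

The point of the sketch is the structure, not the numbers: the system does not forbid anything, it only changes which option is effortless and what feedback the user sees.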
b) Dealing with AI psychologically: calibrating trust
A second focus of the panel was the psychologically intelligent use of AI:
- People vacillate between naïve enthusiasm for technology ("AI will get it right") and blanket rejection ("I don't want to know anything about that").
- Added to this are cognitive biases such as automation bias: when a system outputs a result, we tend to question it less critically.
- From a psychological perspective, the task is to calibrate trust:
- Transparency about which goals are being optimized,
- understandable explanations (Explainable AI),
- real opportunities for intervention and responsibility on the human side.
AI should be a tool in everyday organizational life - not a substitute for professional judgment and leadership.
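What "calibrated trust" could look like in a concrete tool can be sketched as follows. This is a hypothetical illustration of the three requirements above, not any real system: the supplier data, function names and the optimization rule are assumptions.

```python
# A minimal sketch of 'calibrated trust' in a decision-support tool:
# the system exposes which goal it optimizes, explains its recommendation,
# and cannot act without explicit human sign-off. All names and data
# below are illustrative assumptions.

def recommend_supplier(suppliers):
    """Rank suppliers by CO2 intensity and expose both goal and reasoning."""
    best = min(suppliers, key=lambda s: s["co2_per_unit"])
    return {
        "recommendation": best["name"],
        "optimizes_for": "lowest CO2 per unit (price is NOT considered)",
        "explanation": (f"{best['name']} has the lowest CO2 intensity "
                        f"({best['co2_per_unit']} kg/unit) of {len(suppliers)} suppliers"),
    }

def apply_decision(proposal, human_confirmed: bool):
    """The human retains the final decision; the tool alone cannot act."""
    if not human_confirmed:
        return "pending human review: " + proposal["explanation"]
    return "approved: " + proposal["recommendation"]

suppliers = [{"name": "A", "co2_per_unit": 2.1}, {"name": "B", "co2_per_unit": 1.4}]
proposal = recommend_supplier(suppliers)
print(proposal["optimizes_for"])                      # the goal is always visible
print(apply_decision(proposal, human_confirmed=False))
```

The design choice worth noting: the explanation and the stated optimization goal are part of every output, so the human reviewer can see what the system did not consider (here, price) before signing off.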
c) Ethics, bias and greenwashing
The third focus was the question of how we address bias, fairness and greenwashing in the AI context:
- Biased training data, biased targets or homogeneous development teams can lead to AI scaling existing inequalities.
- Sustainability ratings or climate scoring supported by AI can - if designed incorrectly - contribute to greenwashing instead of creating transparency.
- What is therefore needed are
- clearly documented targets ("What does this system actually optimize - and for whom?"),
- independent audits,
- interdisciplinary governance structures in which technology, ethics, psychology and practice work together.
3 What does this mean for our students?
For students of business psychology and related degree programs at the HWG Ludwigshafen, this topic opens up a broad field of learning and application:
- Competence building in "AI Literacy" and climate psychology
- Understanding how AI systems fundamentally work - and where their psychological implications lie.
- Understanding how emotions, norms and decision-making architectures influence sustainable behavior.
- Practical projects and case studies
- Analysis of real AI applications in the context of sustainability: Where do they support meaningful changes in behavior, where do they reinforce rebound effects or greenwashing?
- Development of their own concepts for "human-centered" AI solutions in companies.
- Theses and research projects
- Empirical work on topics such as "Trust in AI", "Acceptance of AI-based sustainability tools", "Bias perception in AI applications" or "Psychological mechanisms of digital climate interventions".
- Cooperation with practice partners in which students systematically collect data, design and evaluate interventions.
The CCLE sees itself as a bridge: between scientific evidence, didactically good teaching formats and real fields of application in companies and organizations.
4 What does this mean for our practice partners?
For companies, administrations and organizations that work with AI and sustainability (or are planning to do so), the work of the CCLE results in several concrete approaches to cooperation:
- Analysis of existing AI and sustainability applications
- Behavioral-psychological evaluation: Does the system actually support sustainable decisions - or does it merely optimize existing routines?
- Analysis of possible bias risks, greenwashing dangers and acceptance hurdles
- Co-design of new solutions
- Development of prototypes for AI-supported decision support in purchasing, sustainability management or HR and management work.
- Design of decision architectures and feedback systems based on scientifically proven mechanisms.
- Leadership and qualification formats
- Workshops and programs for managers on "AI, Leadership & Climate Action":
- How do I read AI-supported reports correctly?
- How do I maintain responsibility and judgment?
- How do I anchor climate and sustainability targets in target systems and incentive structures?
- Support for change processes in which new digital tools are introduced.
- Research and pilot projects
- Joint projects in which AI-based sustainability tools are tested in practice and scientifically monitored.
- Involving students in project work and theses to bring new perspectives and analyses into the organization at an early stage.
5 Conclusion: Mission for the CCLE
The panel at the PRIO1 Climate Prize impressively demonstrated how important it is to think about leadership, AI and sustainability together - and to consider the psychological dimension in a central way.
For the Competence Center for Leadership Experience (CCLE) at the Ludwigshafen University of Business and Society, this means:
- We combine scientific evidence from business psychology, climate psychology and leadership research
- with concrete applications in education, practical projects and cooperation,
- to promote a leadership culture that works with AI and sustainability in a responsible, evidence-based and future-oriented way.
If you are interested in an in-depth exchange, joint projects or formats related to "AI, Leadership & Climate Action" as a student, education or practice partner, the CCLE team looks forward to hearing from you.
Together, we can help ensure that AI not only impresses technologically - but also becomes a real lever for sustainable change in organizations.