KI @ HWG LU
As of: 25.08.2025
Artificial intelligence (AI) is also becoming increasingly important in everyday university life at HWG Ludwigshafen. AI offers many opportunities for our university tasks: it can enrich our activities in studies and education, in administration, research and transfer, and make them more efficient, for example when we research content, write texts, generate code, or evaluate and process data. The HWG Ludwigshafen wants to integrate AI systems into everyday university life, particularly with regard to the qualification of students.
At the same time, we are aware that there are also risks associated with AI systems, particularly with regard to data protection and copyright regulations. AI systems also have the potential to challenge the scientific integrity of our activities. In addition, social and ethical boundaries in dealing with AI systems must be recognized and taken into account.
The HWG Ludwigshafen therefore advocates not only a creative and competent but also a responsible and prudent approach to AI systems, for which the following guidelines must be observed.
Guidelines for the legally compliant use of AI systems at HWG Ludwigshafen
The following requirements apply to all students.
They apply equally to all other members of the university, i.e. all employees, civil servants, academic and student assistants, professors, deputy professors and interns (hereinafter referred to as "education staff and employees"). For education staff and employees, these requirements apply in conjunction with the "Guidelines on the Use of AI at the University of Business and Society". Education staff and employees can find the guideline on the intranet (please log in to the intranet beforehand).
According to the guideline, lecturers working as self-employed persons are themselves responsible for observing the legal regulations on the use of AI (e.g. the EU AI Act and the GDPR). They can use the following guidelines for orientation.
Before use
- Students, education staff and employees must complete mandatory training before using AI systems. The training consists of the modules "Excursion into generative AI" and "What risks arise with ML?" from the AI Campus course "Introduction to AI".
- Read the terms of use of the AI system you wish to use carefully and observe them for your use.
- To ensure the best possible data transfer security, you should establish a VPN connection if you are using the AI service from outside the university's IT networks.
During use
- Check whether you intend to use copyrighted material as input for the AI service. This applies, for example, to the input of illustrations, copies of texts and examination results. If this is the case, you must ensure its lawful use, e.g. by obtaining the author's consent, and document this in writing. Sections 44a and 44b UrhG (German Copyright Act) may open up further possibilities for the use of copyright-protected content; check on a case-by-case basis whether their requirements are met.
- You may not disclose personal data of third parties within the meaning of Art. 4 para. 1 GDPR as input. This includes real name, contact details, email address, nationality, age, marital status, photos, and audio or video recordings.
- You may not use any particularly sensitive personal data of third parties within the meaning of Art. 9 para. 1 GDPR as input. This includes ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic or biometric data processed for the purpose of uniquely identifying a natural person, health data, and data concerning a natural person's sex life or sexual orientation.
- Personal data of third parties must be anonymized before being used as input so that no conclusions can be drawn about the person (see the sketch after this list).
- If AI services support decisions or assessments, you must document in a comprehensible manner that the decision is primarily based on human judgment and that it therefore does not constitute an automated individual decision pursuant to Art. 22 para. 2 GDPR.
- When writing the input, applicable contractual agreements, legal requirements and internal university guidelines must be complied with to the best of your knowledge and belief. This applies in particular to agreements on data protection, confidentiality and the protection of official secrets, as well as to statutory requirements.
- Materials from confidential sources may not be used as input.
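The following minimal sketch illustrates what rule-based redaction of obvious identifiers could look like before text is passed to an AI service. The redact function and its patterns are illustrative assumptions, not part of the guidelines; genuine anonymization usually requires more thorough, context-aware tooling and manual review, since names and other indirect identifiers are not reliably caught by simple patterns.

```python
# Minimal sketch: rule-based redaction of obvious identifiers before using text
# as input to an AI service. The patterns below are illustrative assumptions;
# real anonymization needs context-aware tools and manual review, since names
# and other indirect identifiers are not reliably caught by simple patterns.
import re

def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)         # email addresses
    text = re.sub(r"\+?\d[\d /()-]{7,}\d", "[PHONE]", text)             # phone-like number sequences
    text = re.sub(r"\b\d{1,2}\.\d{1,2}\.\d{2,4}\b", "[DATE]", text)     # dates such as 01.02.1990
    return text

print(redact("Reachable at max.mustermann@example.org or +49 621 1234567, born 01.02.1990."))
# -> "Reachable at [EMAIL] or [PHONE], born [DATE]."
```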
After use
- Based on the responses of the AI service, no automated decisions pursuant to Art. 22 para. 1 GDPR may be made which produce legal effects concerning the data subject or similarly significantly affect them, for example in labor and personnel law decisions. It must be ensured that the final decision-making authority lies with a natural person, e.g. yourself.
- When distributing or publishing the output of the AI system, it must be checked for copyright-protected content, which can arise, for example, if the AI produces coincidental similarities to existing works.
- If published texts are subject to editorial control and a responsible person can be identified, they do not have to be marked as AI-generated. Automated AI-generated texts that are published without prior control must be labeled as AI-generated in accordance with Art. 50 para. 4 of the EU AI Act (examples are chatbots or automated social media posts).
- The output of the AI system should be checked according to scientific principles, including for technical accuracy. To this end, the procedures and guidelines for safeguarding good scientific practice at the HWG Ludwigshafen should be observed.
Examples of unauthorized use (not exhaustive)
- You copy copyrighted works such as photographs or newspaper articles as input into an AI service. This use is not permitted without the consent of the copyright holder.
- You use personal or particularly sensitive personal data of third parties as part of an input, e.g. the real name, email address, ethnic origin, biometric or medical data. This use is not permitted. Personal data must be anonymized.
- You copy application documents, personal cover letters, CVs or other sensitive documents of third parties into the AI service in non-anonymized form as part of an input. This use is not permitted.
- You copy non-public documents such as tender submissions, self-disclosures or annual reports as input into an AI service. This use is not permitted without the consent of the person concerned.
- You copy examination results from students or trainees, such as written exam solutions or drawings, as input into an AI service to assist you with corrections. This use is not permitted.
- You use confidential materials such as committee minutes from closed meetings, non-public research results or classified information as part of an input. This use is not permitted.
- You use AI services to carry out an automated assessment of documents that has a detrimental effect on the persons concerned, and you adopt the result from the AI service without comprehensive human review. This use is not permitted.
AI systems
What are (generative) AI systems?
An AI system is "a machine-based system that [...] infers how to generate outputs such as predictions, content, recommendations or decisions for explicit or implicit goals from the inputs it receives" (EU AI Act, Art. 3 No. 1).
Generative AI is a sub-area of AI. It is characterized by the fact that it is able to independently generate completely new content that goes far beyond the predictions of non-generative AI. This is achieved through complex calculations of probabilities, paired with AI learning processes known as machine learning (ML) and based on huge amounts of data. The instructions to the AI as to what it should process or generate are known as prompting.
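For readers who want to see what prompting looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library with a small, freely available model run locally. The model name is only an illustrative assumption; it is not one of the systems provided or recommended by the HWG Ludwigshafen.

```python
# Minimal prompting sketch (assumption: the transformers library and the small
# open model "gpt2" are installed and run locally; not a provided or endorsed system).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The prompt is the instruction/input; the model continues it token by token,
# based on statistical probabilities rather than factual knowledge.
prompt = "A large language model is"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```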
The currently most popular representatives of generative AI are LLMs (Large Language Models), which aim to generate and understand human language. LLMs include the well-known commercial conversational agents ChatGPT, Gemini, Perplexity etc., as well as the AI systems provided by HWG Ludwigshafen.
Many generative AI models are now multimodal: they can generate text from audio files, or generate graphics, code, videos, etc. from text. Despite these impressive capabilities, generative AI (just like non-generative AI) does not have (factual) knowledge.
The output (i.e. what the AI outputs as a result) that follows the input (i.e. what is entered by the user) is still generated exclusively on the basis of complex statistical probabilities. The objective of the system is not factual accuracy, but solely the output of coherent text (or clear images, etc.). This can lead to hallucinations, i.e. simply false assertions. Another problem is distortions or bias contained in the training data. Therefore, you need to check all outputs you receive from an AI not only for factual accuracy, but also for potentially more subtle bias.
The ability to communicate with these systems in human language (spoken or written) tempts us to attribute emotions, intelligence and knowledge to these systems that they do not actually possess, and also to forgive errors that would simply be seen as such in normal "machines". This is known as the ELIZA effect.
Similar to other PC applications or cell phone apps, AI systems also process user data. This includes login data such as email addresses, names, IP addresses or dates of birth, and possibly credit card details or places of residence. In the case of generative AI systems, the inputs to the systems can also be linked to this data, meaning that everything that is communicated to the AI or uploaded to the AI is assigned to the user.
If you would like to learn more about how AI and generative AI work, we recommend the free online course from the AI Campus "Introduction to AI".
Use of AI systems in studies and education
Various AI systems are used at the HWG Ludwigshafen. For the area of studies and education (S&L), the decisive questions are which systems students have access to and whether their use may be made mandatory in courses and examinations. Whether an AI system can be used on a mandatory basis depends on two key questions:
- Can students use an AI system in compliance with data protection regulations?
- Is the use of an AI system free of charge for students?
If these two requirements for an AI system are not met, students cannot be obliged to use this particular AI system. However, lecturers and students can use such AI systems voluntarily and on their own responsibility in their studies and education.
Please note: The "Guidelines for the legally compliant use of AI systems at the HWG Ludwigshafen" listed on this website must be observed, regardless of whether the respective AI system is provided by the HWG Ludwigshafen or not, and regardless of whether its use is mandatory or voluntary.
Category 1 "Data protection is ensured, provided free of charge": can be used mandatorily in S&L
The following generative and non-generative AI systems are available to teachers and students at all times and can be used for all tasks in studies and education. Students can be obliged to use these AI systems in courses and examinations. It is ensured that data is processed in compliance with data protection regulations.
Generative AI systems provided:
The HWG Ludwigshafen provides all students, lecturers and employees with access to the generative AI systems on the platform of the "Edu-KI-Chat", which the Virtual Campus Rhineland-Palatinate (VCRP) offers for Rhineland-Palatinate universities (Edu-KI-RLP). On this platform, you will find several free, generative AI models that deliver different outputs depending on the topic and prompt. The system is constantly being further developed and expanded.
Registration takes place via Shibboleth. This means that the login data is encrypted and no link can be established between your login data and your chat history. Chat content is not logged, cached or used for training purposes.
You can access the platform and the generative AI systems using your usual login details at the following link: https://chat.edu-ki-rlp.de/login
Provided systems with AI functionality:
Education staff and students can at any time access software with AI functionality that is provided in the university's PC pool rooms and, if necessary, can also be used remotely.
Examples are: IBM SPSS, Tableau, RStudio, SAP BusinessObjects Analysis, SAP HANA Studio / BW Modeling Tools, SAP Analytics Cloud and others.
Category 2 "Data protection can be ensured, free provision is possible": can be made mandatory in S&L after preparatory work
AI systems that can be installed on students' own PCs:
AI systems and tools for working with AI that students can install free of charge on private computers and use locally may also be used for all tasks in studies and education. If the software is installed on private devices, lecturers are obliged to instruct students on how to install and use it in compliance with data protection regulations. In general, the software must be used locally and not in the cloud.
Examples include Anaconda for Python or R, Microsoft Power BI, R for Windows, Altair RapidMiner, RStudio, KNIME.
If educators are unsure whether the conditions for data protection-compliant and mandatory use are met, they are welcome to contact the AI Literacy Center for advice.
AI systems in cloud environments:
For certain AI systems, data protection-compliant use is also possible in a cloud environment, for example via Amazon Web Services with a server location in Frankfurt. When using such services, lecturers are obliged to instruct students in data protection-compliant use. If these services are made available to students free of charge, they can also be used in all areas of study and education.
If teachers are unsure whether the requirements for data protection-compliant and mandatory use are met in order to use additional AI systems in cloud environments, they are welcome to contact the AI Literacy Center for advice.
Many AI systems that are operated in the cloud do not meet the data protection requirements. This applies in particular to commercial generative AI systems such as ChatGPT or DeepL. Therefore, students may not be obliged to use these systems. However, voluntary and autonomous use is possible (see following section).
Category 3 "Data protection on own responsibility, no guaranteed provision by the HWG": may be used in S&L, but not mandatory
Students, education staff and employees may use AI systems that are not subject to mandatory use under categories 1 and 2 on their own responsibility for their studies or official tasks. This applies, for example, to the generative AI systems ChatGPT and DeepL or AI products that use Google Cloud or Microsoft Azure.
However, students may not be obliged to use these AI systems in courses or examinations.
This means the following:
- If the use of specific AI systems is required for an examination, AI systems of categories 1 and 2 must be used, provided that mandatory use is possible under the respective requirements.
- If the use of AI systems is optional for an examination, or it is irrelevant from the lecturer's point of view which AI systems are used, students can use both the AI systems provided and other AI systems. Students are therefore free to decide which AI system they want to use. Educators must be able to assess the independent examination performance of individual students, even if they use different AI systems.
- Different application scenarios are conceivable in courses depending on the intended learning outcomes:
- Educators can use the AI systems provided and also other AI systems, for example as part of a demonstration by the educators themselves or via the students' own voluntary use.
- If the lecturers require that specific tasks in the courses are completed by all students using the same AI systems, category 1 or 2 AI systems must be used for this mandatory use, in compliance with the requirements specified in each case.
- If it is irrelevant from the lecturer's point of view which AI system is used in the course (i.e. it is up to the students whether they use the AI systems provided or not), students may be obliged to use the AI systems provided if they do not wish to use any other AI systems.
AI Literacy Center: Jointly developing AI skills in studies and education
The HWG Ludwigshafen aims to actively promote its students in the competent use of artificial intelligence (AI). The desired development of AI skills is to be systematically anchored in studies and education through the development of AI literacy.
With the AI Literacy Center, the departments of Studies and Education, in cooperation with the IT Service Center, support lecturers in designing, planning and implementing quality-assured courses for AI literacy training.
Further information can be found on the AI Literacy Center website.
Please also note
Additional information for students
Students receive information from the responsible teaching staff and/or the guidelines of the Departments or degree programs on whether AI systems are permitted in courses and examinations, how the use of AI systems should be documented, and how the output should be marked.
Lecturers decide, and give reasons for, whether or not AI systems may be used in a course or examination. If the use of AI systems is permitted, it must be clear to you as a student that the responsibility for the entire text, including AI-based results or passages, lies with you as the author. As a student, you therefore have a duty to ensure that the content of the AI-generated text sections or other results is correct and communicatively appropriate.
The HWG Ludwigshafen has a university-wide standardized declaration of independence for the final thesis, which also provides information on the use of AI systems. The declaration of independence requires that the question of whether and which AI systems may be used in the thesis and how the output is documented is agreed with the first assessor. The declaration of independence can be found on the SSC website for the thesis.
Additional information for educators
Lecturers and educators will receive additional information and recommendations from the Departments of Studies and Education on the use of AI systems in teaching and examinations on the intranet (please log in to the intranet beforehand).
Contact us
If you have any questions, please contact the AI Literacy Center at KI@hwg-lu.de.