AI in studies and teaching

What is AI and how do I deal with it during my studies? You can find (initial) answers to these questions below:


  • What is artificial intelligence (AI)?

    AI is intelligence realised by machines or programs. We usually differentiate between weak AI (or narrow AI) and strong AI (or artificial general intelligence). While weak AI can solve clearly delimited problems, strong AI (which has not yet been realised) would show general intelligent abilities and could thus solve any task requiring intellectual capabilities. The popular, successful AI tools are examples of weak AI, such as programs defeating human world champions in chess or Go, or programs generating images from text prompts. Language models such as GPT are also regarded as weak AI (for an exemplary description of the capabilities of GPT-4 see, for example, Bubeck et al., 2023). For a broader discussion of the capabilities, risks and opportunities of modern language models see, for example, Bender et al., 2021; Brown et al., 2020.

    AI systems are available in many different forms and complete a variety of tasks. There are AI systems that can play various board games and video games. Moreover, there are AI systems that support physicians in diagnosing diseases (expert systems), that identify objects in images, sort waste automatically, decide on credit allocation, drive cars autonomously, etc. Recently, one particular category of AI systems, so-called language models such as ChatGPT, has attracted special attention and is gaining significance.

  • What AI tools are available?

    There is no definitive answer to this question, since new and improved tools enter the market every day. While language models with a chat function dominated the first phase of the hype, image and video generators are now available as well and are developing rapidly.

    A brief overview of the AI tools on the market in the area of studying and teaching can be found on the website www.vkkiwa.de/ki-ressourcen/

  • May I use AI tools for my studies?

    Frankly speaking, there is no simple answer to this question. For example, students of Computer Science develop and experiment with language models themselves. Students of Translation Studies use AI for translations. Every text-processing program contains small AI-based helpers. Students enrolled in teacher education programmes learn how school students prepare homework assignments and presentations and how to deal with the use of AI in this context. And many courses examine the impact of AI tools on society, culture and technology from an academic perspective.

    Of course, exams and academic theses must clearly demonstrate what YOU know and are able to do. We do not hold exams to assess AI tools.

    Therefore, teachers and examiners in each discipline decide whether and how students may use AI tools as aids. For this purpose, the University of Vienna has provided guidelines for our teachers. A crucial aspect in the context of AI is transparency. Teachers have to specify permitted materials before the course and/or exam. In view of the wide range of available tools, this is not an easy task.

  • What can I do if teachers do not provide any information?

    For the sake of clarity, ask teachers in the first course unit or before the exam whether and in what form you may use AI tools.

  • What do I need to consider when using AI tools?

    Whether you use AI tools in private life or for your studies, you should consider a few questions and topics:

    • What data does the AI tool require from me, and am I prepared to share these data (especially personal data)? What are the interests of the company providing the AI tool? Never enter personal data or secrets into an AI tool! This applies to both your own personal data and the personal data of others.
    • On what data was the AI tool trained? Can I be sure that the output is correct, complete and, especially, unbiased? When using AI tools, you are always responsible for how you use the results. Therefore, checking the output of AI tools is an important task. AI tools can hallucinate: they may present completely fabricated texts as the truth or cite seemingly reliable sources that do not exist. Be careful!
    • Document the prompts you entered and when you entered them, the results that the AI tool generated and the way in which you used these results (a minimal sketch of such a log follows below).
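    How you keep such a record is up to you; even a few lines of self-written code suffice. The following minimal Python sketch is purely illustrative (the file name, function name and example entries are invented for this sketch, not an official tool); it appends each prompt, the generated output and a note on how you used it to a JSON file, together with a timestamp:

      import json
      from datetime import datetime, timezone

      LOG_FILE = "ai_usage_log.json"  # hypothetical file name

      def log_ai_usage(tool, prompt, output, how_used):
          """Append one documented AI interaction to the log file."""
          entry = {
              "timestamp": datetime.now(timezone.utc).isoformat(),
              "tool": tool,
              "prompt": prompt,
              "output": output,
              "how_used": how_used,
          }
          try:
              with open(LOG_FILE, "r", encoding="utf-8") as f:
                  log = json.load(f)
          except FileNotFoundError:
              log = []  # first use: start a new log
          log.append(entry)
          with open(LOG_FILE, "w", encoding="utf-8") as f:
              json.dump(log, f, ensure_ascii=False, indent=2)

      # Example usage (the entry below is made up):
      log_ai_usage(
          tool="ChatGPT",
          prompt="Summarise the main arguments of Bender et al. (2021).",
          output="<generated summary>",
          how_used="Starting point only; checked against the original paper.",
      )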

    Example: using ChatGPT in a safe way 

    Large language models such as ChatGPT do not have a source of truth due to their network architecture, as OpenAI describes in its blog article at the beginning of the section on 'Limitations': “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly […]“. Therefore, we have to emphasise again that chatbots are by no means knowledge machines. Quite the opposite: in the end, it is always the obligation of the human using the AI tool to check the accuracy of the output and to assume responsibility for the result. A flow chart by Aleksandr Tiulkanov illustrates this in an impressive way:

    Illustration: Aleksandr Tiulkanov (© https://creativecommons.org/licenses/by/4.0/deed.de)

  • What changes with regard to writing academic theses?

    When writing an academic thesis, please discuss with your supervisor in advance which aids you may use and how you have to document transparently that you completed the work independently. This may differ between disciplines. In any case, you should discuss the following aspects (your supervisors are aware of them):

    Discuss with your supervisor how you correctly approach academic research and writing.

    For your academic thesis to be assessed as an independent achievement, you must

    • have documented any aids used
      • in the section where the aids were used and
      • in the description of the thesis’ methods;
    • have cited the intellectual property of other persons according to the disciplinary rules and referenced it in the bibliography;
    • have the right to use copyrighted images and media in the thesis;
    • have documented all (raw) data, protocols and analyses generated during preparation in a transparent way, accessible at any time;
    • have been transparent about any texts and images generated by (AI) tools, and their adjustments, throughout the thesis preparation process;
    • have explicitly mentioned any contribution to the content by third parties (such as data processing, analysis, etc.) and have acknowledged these persons accordingly (e.g. in the acknowledgements);
    • not have made use of any impermissible support regarding content (e.g. ghostwriting) in addition to the approved supervision;
    • have mentioned any content overlaps with your contributions in courses (e.g. bachelor’s paper, seminar paper).

    Discuss with your supervisor how you can best approach your thesis project. You will also have to confirm the aspects above when submitting your academic thesis. If there is a suspicion that your thesis was not an independent achievement, a procedure under study law is initiated (comparable to the procedure in cases of plagiarism, ghostwriting, etc.).

  • What happens if I used AI tools even though they were not permitted as aids?

    The same rules as in cases of cheating apply. If the teacher or examiner suspects that you used unauthorised aids, a procedure under study law is initiated. For the purpose of this procedure, observations and evidence are collected. You also have the right to make a statement. If it turns out that you did not observe the rules, an ‘X’ will be entered in your transcript of records and the exam counts as an attempt.

Scientists from the University of Vienna have briefly summarised what AI technically is, how it is trained and what the ethical and legal consequences of its use are. You can find out more about these topics in the courses offered by the University of Vienna.

 


  • How are AI systems developed and trained?

    The development of AI systems usually consists of many steps and comprises several iterations. If we ignore many details, we can simply describe the development process as the construction of a so-called model and the training (learning) of this model based on data or interactions.

    For complex tasks, so-called ‘neural networks’ are often used as models. This type of model was initially inspired by the processing of signals in the human brain and was decisive for the successes of AI systems in the past years. There are different specialised neural networks depending on the type of data to be processed (e.g. special models for images, sequences, language or graphs). For processing language, so-called ‘transformer models’ (Vaswani et al., 2017) have enjoyed popularity in recent years.
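    To give a flavour of the computation inside such models, here is a minimal NumPy sketch of the scaled dot-product attention operation described by Vaswani et al. (2017). It is a simplified illustration (single head, no learned projection matrices, toy data), not the full architecture:

      import numpy as np

      def scaled_dot_product_attention(Q, K, V):
          """Q, K, V: (sequence_length, d) arrays of queries, keys, values."""
          d = Q.shape[-1]
          scores = Q @ K.T / np.sqrt(d)  # similarity of each query to each key
          # Softmax over each row turns scores into attention weights:
          weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
          weights = weights / weights.sum(axis=-1, keepdims=True)
          return weights @ V             # weighted mixture of the values

      # Toy example: 3 tokens, 4-dimensional representations.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(3, 4))
      out = scaled_dot_product_attention(X, X, X)  # self-attention
      print(out.shape)  # (3, 4): one updated representation per token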

    Neural networks often have a large number of adjustable parameters that can be changed during training. To give you an idea of the scale: the model underlying ChatGPT has approximately 175 billion parameters.
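    As a rough, purely illustrative calculation (the small network below is our own example, not a real system), the parameter count of a simple fully connected network follows directly from its layer sizes, which makes the scale of 175 billion tangible:

      # Each dense layer has n_in * n_out weights plus n_out biases.
      layer_sizes = [784, 512, 256, 10]  # a small example network

      params = sum(n_in * n_out + n_out
                   for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
      print(params)                     # 535818 parameters
      print(175_000_000_000 // params)  # ~300,000 times fewer than 175 billion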

    Training a model means adjusting the model’s parameters to achieve a specified target. For this, so-called target functions (or error functions) are usually defined, and the model parameters are adjusted to optimise the relevant target function. Depending on the problem and the availability of data, different approaches are used to adjust the parameters. Generally, we may differentiate between three central approaches (the supervised case is sketched in code after the list):

    • In supervised learning, the target values that the model should predict are given, and the model’s parameters are adjusted so that the model matches these target values as closely as possible on a provided training data set (a set of data used for training the model).
    • In unsupervised learning, no target values are given, and the model’s parameters are adjusted so that the model recognises structure in the data.
    • In reinforcement learning, the model (in this case often referred to as an agent) interacts with its environment through actions. It receives information about the state of this environment and about whether its actions result in a reward. Based on this information, the agent adjusts its behaviour to receive as many rewards as possible.
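    As an illustration of the supervised case, the following minimal sketch (the toy data are invented for this example) adjusts the two parameters of a linear model by gradient descent so as to minimise a squared-error target function:

      import numpy as np

      # Training data: inputs x with target values y (here roughly y = 2x + 1).
      x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
      y = np.array([1.1, 2.9, 5.2, 7.0, 8.8])

      w, b = 0.0, 0.0        # the model's adjustable parameters
      learning_rate = 0.01

      for step in range(2000):
          prediction = w * x + b
          error = prediction - y
          loss = (error ** 2).mean()        # the target (error) function
          # Gradients of the loss with respect to the parameters:
          grad_w = 2 * (error * x).mean()
          grad_b = 2 * error.mean()
          w -= learning_rate * grad_w       # adjust parameters to reduce the loss
          b -= learning_rate * grad_b

      print(round(w, 2), round(b, 2))  # close to the underlying 2 and 1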

    Example: ChatGPT

    The language model underlying the AI tool ChatGPT, for example, is trained by means of supervised and reinforcement learning. For the supervised learning, approximately 570 gigabytes of text data from books, websites (e.g. Wikipedia), etc. were used; in total, about 300 billion words. As the target function for this training, the model underlying ChatGPT had to predict the probability of subsequent words based on partial sentences. Through this type of training, the model acquired information about the statistical structure of the language and can generate text. In applications, first evaluations of a model trained only by means of supervised learning showed partially unsatisfactory results, especially due to so-called hallucinations or non-adherence to instructions (Ouyang et al., 2022). To improve the model accordingly, it was refined by means of reinforcement learning. Here, several possible answers to prompts are first assessed by humans. Based on these assessments, a reward model is trained, which is then used to further adjust the parameters of the language model. The resulting model, together with several safeguards that should ensure that ChatGPT does not answer dangerous or illegal prompts, or answers them appropriately, makes up the AI tool ChatGPT in its current version.
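    The following toy example (the vocabulary and the raw scores are invented for illustration) shows the kind of target function involved: a model’s raw scores for possible next words are turned into probabilities, and training minimises the cross-entropy against the word that actually follows:

      import numpy as np

      vocabulary = ["mat", "dog", "moon", "car"]
      # Hypothetical raw scores (logits) a model might produce for the
      # partial sentence "The cat sat on the ...":
      logits = np.array([3.2, 0.1, -1.0, -2.3])

      probabilities = np.exp(logits) / np.exp(logits).sum()  # softmax
      for word, p in zip(vocabulary, probabilities):
          print(f"{word}: {p:.3f}")

      actual_next_word = "mat"
      loss = -np.log(probabilities[vocabulary.index(actual_next_word)])
      print(f"cross-entropy loss: {loss:.3f}")  # small when confident and right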

    Not all AI systems are trained in such a multi-stage process using different approaches, but all AI systems are based on data in some form or another. This is important because the properties of the data usually influence the properties of the resulting AI system.

  • What are desirable properties of AI systems, and do the systems have these properties?

    When using AI systems, a common question is what properties these systems have and how these properties influence the output of the system. For example, for AI systems deciding on the allocation of loans, it is important that they treat different groups of persons with the same credit rating equally and do not discriminate, i.e. that they treat the persons within these groups fairly. This is also relevant to language models such as ChatGPT, since such properties may, for example, influence the type of answers given and subsequently influence humans.

    An entire subfield of AI research addresses the properties of AI systems, namely fairness, transparency and accountability, in addition to questions about the trust humans place in AI systems, which is usually related to the three properties mentioned before. It is often desirable that AI systems are fair, i.e. treat different groups of persons the same, and that they are transparent, meaning that their mode of operation or the basis of the system’s predictions or decisions can be made comprehensible. The property of accountability addresses who assumes, or is allocated, responsibility for erroneous or problematic predictions or actions.

    Because AI systems are trained on data, the systems are generally not fair but inherit the biases present in the data. For example, AI systems deployed in different applications have been identified that systematically disadvantage women or assign lower chances of successful probation to persons with certain characteristics. There are different techniques and methods to counter these biases during training, but many of these approaches are still at the development or research stage.
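    As an illustration of how such a bias can at least be measured, the following sketch (the decisions and group labels are made up) computes one common fairness statistic, the demographic parity gap: the difference between a model’s positive-decision rates for two groups with the same credit rating:

      import numpy as np

      # 1 = loan approved, 0 = rejected, for applicants with equal ratings:
      decisions_group_a = np.array([1, 1, 0, 1, 1, 0, 1, 1])
      decisions_group_b = np.array([1, 0, 0, 1, 0, 0, 1, 0])

      rate_a = decisions_group_a.mean()  # 0.75
      rate_b = decisions_group_b.mean()  # 0.375
      print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}")
      print(f"demographic parity gap: {abs(rate_a - rate_b):.2f}")  # 0 = parity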

  • Is AI neutral?

    As mentioned before regarding the properties of AI systems, AI is generally not neutral, because it is trained on data that might contain biases.

    Example:

    In the case of ChatGPT, for example, the system’s political orientation has been analysed by prompting the model to answer questions from political orientation tests. Here, ChatGPT generally showed a rather left-leaning political orientation (Rozado, 2023). This is relevant insofar as it shows that ChatGPT, and indeed any other AI model, cannot be considered neutral. We must evaluate the predictions of these systems in this light.

  • Is AI ethical(ly justifiable)?

    AI technologies have always provoked ethical considerations (see Coeckelbergh, 2020). Ethics can generally be described as a deliberation that tries to answer the questions of what is good and right. This usually refers not to what is factually good and right but to what is morally good and right. Morality concerns norms and ideals for action related to a flourishing life or other values.

    The ethics of AI is debated widely. Security, data protection, informational self-determination, discrimination and bias, as well as accountability and the attribution of actions are being discussed. Reviewing the catalogues of ethical rules for AI, we can see that they are very similar and arrive at a fairly uncontroversial set of principles and values (Jobin et al., 2019; Hagendorff, 2020). In contrast, there are vigorous debates about the design of AI technologies (at the academic and technical level) and their regulation (at the political and legal level) based on these moral perspectives. In addition, other considerations often play a role, for example regarding Europe as a business and research location, aiming at a use of AI technologies that is both dynamic and ethically acceptable. Regarding regulation, a risk-based approach is emerging that strongly regulates high-risk technologies (such as self-driving cars) and places high requirements on the use of such AI systems, while barely regulating low-risk technologies (such as reverse vending machines) (see the proposal for and discussions about the EU AI Act, European Commission, 2023b).

    AI tools are increasingly used in education, accompanied by ethical considerations concerning teachers, learners and organisations from different perspectives. Often, the already known problems of AI systems (security, discrimination, data protection) play an overarching role here (Witt et al., 2020). At all levels of application, the use of AI systems in the field of education raises ethical problems that might require considerable effort to solve, but that can usually be solved.

    Far more exciting, but also more difficult to answer, are overarching ethical problems concerning how human self-relations and relations with the world change as a result of technological advancements. AI systems strongly challenge the human self-understanding and understanding of the world. Previously, the human self-understanding was strongly defined as distinct from ‘mere machines’, but these anthropologies might be shaken by high-performance generative AI systems. Where generative AI systems become real assistants in people’s daily lives, they represent interaction partners having real effects and are thus part of human communication communities.

    We can assume that the use of AI systems in higher education also changes what we understand as being human, what a (well-)educated person is and how we treat humans (see also the research project BiKiEthics, se-ktf.univie.ac.at/bikiethics/). The notion of education itself stands for the open questions about human developmental capacities, human capabilities for knowledge and accountability, and about a flourishing life and just societies. Insofar as higher education institutions are oriented towards these topics, and insofar as AI systems are heavily changing the context of these considerations, they have to answer the question of their meaning and purpose as research and educational institutions anew.

    The question of how our University uses generative AI such as ChatGPT also raises the question of what our educational objectives are. The answers depend on the level at which the considerations take place. Although the University as an institution has to find a general answer to this, there might be different answers at the level of subjects and disciplines. Finally, these questions have to be answered by the responsible teachers together with the learners.

  • What about AI and the law?

    Although the disruptive potential of AI has been on the legal agenda for some time (European Commission, 2019), and although the European Commission in particular has been addressing this topic for a while (see European Commission, 2023a), the activities at the European level have so far only resulted in proposed legislation, especially a proposal for a regulation (European Commission, 2023b) and a proposal for a directive on AI liability (European Commission, 2022). At the moment, there is no ‘hard’, specific European law that could serve as guidance. This applies to a general AI regulation and, especially, to specific regulation in the field of education.

    The same applies to the national Austrian legal situation, which at the moment only encompasses a very broad AI strategy. There are no specific, enforceable regulations at all. However, this strategy at least contains the following statement (Bundesministerium für Klimaschutz, Umwelt, Energie, Mobilität, Innovation und Technologie (BMK), 2022, p. 50): AI should be used by teachers and learners in terms of individualisation and didactic innovation throughout the entire education chain. A requirement for this is the development of AI-based tools that are combined with certain learning methods, as well as the associated production of evidence regarding their effectiveness through accompanying research.

    This means that there are mainly vague provisions that can be legally enforced only to a limited extent. On the one hand, these provisions especially stress the potential of AI use. On the other hand, they often also list the inherent challenges and risks. These should be tackled with legal instruments that still need to be created. Until then, we can only apply non-AI-specific legislation to AI situations. Due to the peculiarities of language models, their fast development and the difficulties in clarifying the challenges related to their use, it makes sense, and can even be considered a duty, to provide at least some orientation at the university level.

    This raises questions regarding data protection law and copyright law that need to be answered with reference to the general rules.

  • Which data protection regulations do I have to consider when using AI tools?

    Broadly speaking, data protection law protects natural persons (i.e. all humans) against the unlawful processing of their personal data. The main prerequisite for the application of data protection law thus is that an AI tool processes personal data. Article 4(1) of the General Data Protection Regulation (GDPR) defines ‘personal data’ as “any information relating to an identified or identifiable natural person [...]; an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person”. Personal data are thus present not only if the affected person is immediately identified (for example, “Luca Müller is participating in my course”) but also if the person is at least identifiable:

     Examples

    • "The student with the matriculation number 01234567 is taking part in my course."
    • "The student, who will introduce herself by name in my office hours next week, is taking part in my course."
    • "The (only) student who is in the 4th semester, has already passed exam X with a grade of sufficient and is doing a semester abroad in Paris in the coming semester, is taking part in my course."

    If personal data are concerned, processing them is generally prohibited and only permitted in exceptional cases. These are cases in which the affected person has consented to the processing of their data, or in which the processing is (exceptionally) permitted by a legal provision. Such legal permission can exist, for example, if a body has to process data to meet its (statutory) responsibilities or to fulfil a contract. Therefore, universities are allowed to process data (e.g. also on learning platforms such as Moodle) without students consenting to the processing, if the data processing is required for the purposes at hand and adheres to the principle of proportionality.

    However, special caution is required if personal data do not remain in Europe but are transmitted to a so-called third country (e.g. the US, China, India) in which, at least from the European perspective, there is no comparably suitable level of data protection. The transfer of personal data to third countries is subject to especially strict regulations that should ensure that the data remain protected even where lower standards apply. For these reasons, it is generally not recommended to transfer personal data to a third country, for example the US, if this transfer is not specifically regulated (in general, by means of an agreement).
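    As a purely illustrative sketch (the patterns, names and example text are our own; real personal data are much harder to detect reliably, and this replaces no legal assessment), obvious identifiers can be stripped from a text before it is entered into an AI tool:

      import re

      # Crude, illustrative patterns; real identifiers are far more varied.
      PATTERNS = {
          r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",   # e-mail addresses
          r"\b\d{7,8}\b": "[MATRICULATION-NO]",     # matriculation-number-like IDs
      }

      def redact(text: str) -> str:
          """Replace obviously identifying patterns with placeholders."""
          for pattern, placeholder in PATTERNS.items():
              text = re.sub(pattern, placeholder, text)
          return text

      print(redact("Student 01234567 (luca.mueller@example.org) asked about ..."))
      # -> Student [MATRICULATION-NO] ([EMAIL]) asked about ...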

  • Which copyright regulations do I have to consider when using AI tools?

    Copyright (only) protects original intellectual creations in the fields of literature, music, fine arts or film (section 1 of the Austrian Copyright Act, UrhG), so-called "Werke" or "works". A work is protected as soon as it is created; no additional steps, in particular no permission or registration, are required. However, a condition is that the work has a (certain) originality or, simply put, that the work is the result of a creative process and reaches a so-called level of creativity (Schöpfungshöhe). Copyright does not protect completely trivial, irrelevant results of activities that do not require any creativity. Authors (copyright holders) are therefore always natural persons. By creating the work, they also acquire the right to decide independently on the exploitation of their work, for example by concluding a licence agreement.

    Exploitation acts protected by copyright law are, in particular, reproduction and making a work available on the Internet. A work is also reproduced if it is copied digitally. Often, authors conclude licensing agreements with natural or legal persons exploiting their work (e.g. employers, publishers, film producers), which (may) specify the transfer of the exploitation rights against payment. If authors transfer their exploitation rights exclusively, they also transfer any authority regarding acts of exploitation and can thus no longer decide on the exploitation of their work.

    ‘Works’ created by AI lack the human creativity resulting in their (concrete) realisation and are thus usually not protected by copyright. Therefore, no exploitation rights arise, and usually no licence is needed. This may be different if the instructions leading to the production of the work, namely the prompts, themselves have a sufficient level of creativity, for example because they are especially precise or elaborate. If there is no work, there is also no copyright. Thus, notices referring to copyright (“Copyright XYZ”) are misleading in such cases.

    The training data needed for developing an AI system may be protected by copyright. Even if they are copyright-protected, it may be permissible, under certain conditions, to use these training data without permission. This is especially the case if the use constitutes a form of text and data mining (see section 42h of the Austrian Copyright Act). However, this is a (complicated) question of the individual case that requires an assessment of the situation (see Kramer, 2023). In comparable cases, lawsuits are already pending (see, for example, Klaiber, 2023).

  • Resources and further information
    • Bender, Emily M., et al. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" ACM Conference on Fairness, Accountability, and Transparency (2021).
    • Brown, Tom, et al. "Language models are few-shot learners." Advances in Neural Information Processing Systems 33 (2020).
    • Bubeck, Sébastien, et al. "Sparks of artificial general intelligence: Early experiments with GPT-4." arXiv preprint arXiv:2303.12712 (2023).
    • Bundesministerium für Klimaschutz, Umwelt, Energie, Mobilität, Innovation und Technologie (BMK) (2022). Strategie der Bundesregierung für Künstliche Intelligenz – Artificial Intelligence Mission Austria 2030 (AIM AT 2030). Available online at www.bmk.gv.at/themen/innovation/publikationen/ikt/ai/strategie-bundesregierung.html, last accessed on 14.06.2023.
    • Coeckelbergh, Mark. AI Ethics. The MIT Press Essential Knowledge Series, 2020.
    • Datenschutz-Grundverordnung (DSGVO, General Data Protection Regulation). Available online at eur-lex.europa.eu/legal-content/DE/TXT/HTML/, last accessed on 14.06.2023.
    • European Commission (2019). Directorate-General for Communication, von der Leyen, U., Politische Leitlinien für die künftige Europäische Kommission 2019–2024 – Rede zur Eröffnung der Plenartagung des Europäischen Parlaments, 16 July 2019. Available online at data.europa.eu/doi/10.2775/01339, last accessed on 19.06.2023.
    • European Commission (2022). Vorschlag für eine Richtlinie des Europäischen Parlaments und des Rates zur Anpassung der Vorschriften über außervertragliche zivilrechtliche Haftung an künstliche Intelligenz (Richtlinie über KI-Haftung). Available online at eur-lex.europa.eu/legal-content/DE/TXT/HTML/, last accessed on 19.06.2023.
    • European Commission (2022). Ethical guidelines on the use of artificial intelligence and data in teaching and learning for educators. Available online at education.ec.europa.eu/news/ethical-guidelines-on-the-use-of-artificial-intelligence-and-data-in-teaching-and-learning-for-educators, last accessed on 23.06.2023.
    • European Commission (2023a). A European approach to artificial intelligence. Available online at digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence, last accessed on 14.06.2023.
    • European Commission (2023b). Vorschlag für eine Verordnung zur Festlegung harmonisierter Vorschriften für künstliche Intelligenz (Gesetz über künstliche Intelligenz) und zur Änderung bestimmter Rechtsakte der Union. Available online at eur-lex.europa.eu/legal-content/DE/TXT/, last accessed on 17.06.2023.
    • Hagendorff, Thilo. "The Ethics of AI Ethics: An Evaluation of Guidelines." Minds & Machines (2020). Available online at doi.org/10.1007/s11023-020-09517-8, last accessed on 23.06.2023.
    • Jobin, Anna, Marcello Ienca, and Effy Vayena. "The global landscape of AI ethics guidelines." Nature Machine Intelligence 1, no. 9 (2019): 389–399. Available online at doi.org/10.1038/s42256-019-0088-2, last accessed on 23.06.2023.
    • Klaiber, Hannah (2023). Nächste KI-Klage: Stable Diffusion und Midjourney sollen Urheberrechte verletzen. t3n Online-Magazin. Available online at t3n.de/news/stable-diffusion-sammelklage-stability-ai-midjourney-deviantart-1527577/, last accessed on 19.06.2023.
    • Kramer, Josefine (2023). EU will Auskunft über die Trainingsdaten von ChatGPT. t3n Online-Magazin. Available online at t3n.de/news/ai-act-eu-trainingsdaten-chatgpt-urheberrecht-1549442/, last accessed on 19.06.2023.
    • Ouyang, Long, et al. "Training language models to follow instructions with human feedback." Advances in Neural Information Processing Systems 35 (2022).
    • Rozado, David. "The political biases of ChatGPT." Social Sciences 12.3 (2023): 148.
    • Schlimbach, R., Khosrawi-Rad, B., and Robra-Bissantz, S. "Quo Vadis: Auf dem Weg zu Ethik-Guidelines für den Einsatz KI-basierter Lern-Companions in der Lehre?" HMD 59 (2022): 619–632. Available online at doi.org/10.1365/s40702-022-00846-z, last accessed on 23.06.2023.
    • Urheberrechtsgesetz (UrhG, Austrian Copyright Act). Available online at www.ris.bka.gv.at/GeltendeFassung.wxe, last accessed on 14.06.2023.
    • Vaswani, Ashish, et al. "Attention is all you need." Advances in Neural Information Processing Systems 30 (2017).
    • Witt, Claudia de, Florian Rampelt, and Niels Pinkwart. Whitepaper "Künstliche Intelligenz in der Hochschulbildung" (2020). Available online at doi.org/10.5281/zenodo.4063722, last accessed on 23.06.2023.
