At this time of rapid technological development and the ever-expanding possibilities of artificial intelligence (AI), it is imperative that Palacký University Olomouc is at the forefront of innovation while maintaining ethical standards and integrity. This document reflects a commitment to balance the potential of AI to improve teaching, research, and administrative processes with the need to protect personal data, intellectual property, and academic honesty.
The following recommendations should serve as a guide for all members of the UP academic community and contribute to the development of a safe, transparent, and innovative environment in which AI can support our academic goals and missions.
This text should be regarded as a work in progress: it will be updated periodically to reflect the latest developments in AI and to ensure that our working practice remains consistent with current knowledge and best practices. The text is in line with the conclusions of the UP Pedagogical Committee on the use of AI in teaching.
All AI systems (generative or otherwise) at UP must be used in such a way as to comply with the ethical principles of UP as defined by the Code of Ethics for UP staff and students, as follows:
(a) Artificial intelligence must be used in a meaningful and responsible manner – so that it serves as a tool to improve our knowledge and skills and to make our teaching and research activities at UP more effective. This means using AI as a learning, managerial, administrative, and editorial tool, not as a full replacement for creative activity. Therefore, it is not recommended to use generative AI tools to formulate the text itself, i.e. the author’s own claims, conclusions, arguments, etc.
(b) AI must not be misused for plagiarism. The use of generative AI in professional texts and other creative work, including artistic output, must be declared.
(c) Neither staff nor students may claim to be the authors of texts and other creative outputs generated using AI.
(d) AI must not be used to deliberately misrepresent the results of science and research (e.g. purposeful modification and falsification of data).
(e) AI must not be misused to create and disseminate misinformation.
(f) All AI-generated outputs must be verified.
(g) Liability always rests with the human being who misuses AI in any of the ways described above.
When using AI, personal data and other sensitive information (such as potential UP intellectual property) that is fed into AI systems must be handled responsibly. It should be understood that such data is usually made available to a third party and that the owner then loses control over it. It is therefore important to ensure that there is no breach of privacy, no leakage of sensitive information outside the organisation, no discrimination, etc.
Meanwhile, it is also necessary to be mindful of the possible leakage of intellectual property and of the protection of third-party rights. This concerns, e.g., patents, research datasets, project plans, as well as foreign-language translations of materials containing the sensitive data mentioned above. If data requiring special protection is to be processed within AI systems, specific licences must be used to ensure that the data entered into these systems does not leak outside the organisation/institution.
The use of AI systems in research and education requires a responsible approach. Particularly, the potential ethical, security, and legal issues associated with the creation and dissemination of AI-generated content must be taken into consideration. AI systems may produce compelling but potentially false or misleading content, which may affect the integrity of research results and educational materials. There is also a risk of misuse of technology to create misinformation and manipulative content.
Therefore, it is essential to integrate ethical standards and rules into all phases of the research and educational process, thereby ensuring that the eventual application of AI is transparent, justified, and in accordance with UP’s ethical principles and standards.
When using new (especially generative) forms of AI, one must anticipate that these tools are still flawed and may provide information that is false and misleading. Therefore, AI outputs should always be verified. The quality of AI-generated outputs depends heavily on the way educators/students enter their specifications (requests, commands) into these systems – i.e. the prompts. In the educational process, it is necessary to distinguish situations in which the use of AI is effective and expedient from situations in which the use of AI is ineffective or directly undesirable (e.g. when testing students’ factual knowledge and skills). Similarly, it is necessary to choose appropriate and effective pedagogical methods in educational activities that include the deployment/use of AI.
AI can be used to create tools for testing students’ knowledge and skills, e.g. to create series of test questions and tests of different types. Again, however, the potential for error in these systems needs to be taken into account, and human oversight is therefore essential. Thus, generative AI can be used for self-testing, generating test questions for specific learning materials, as well as for analysing student work at different levels (e.g. based on criteria defined by the educator).
When supervising theses, it should be anticipated that some students may actively use advanced AI tools when writing their theses. The use of AI in such works must therefore comply with the principles defined in this recommendation. However, it is difficult to detect whether students have used AI tools when writing their theses.
It is thus necessary to introduce a gradual change in the verification of the authorship of the thesis in those fields that require it. This may take the form of ongoing consultations reviewing the text and verifying the student’s knowledge of the topic during the thesis defence.
At the same time, it should also be pointed out that every individual has the fundamental right not to be subjected to a decision that has a significant impact on them when that decision is based solely on automated processing. Thus, if an educator decides to use AI in any form of assessment of students' activities (e.g. evaluation of their written work), the assessment should always include the educator's own judgement, not one made solely by AI.
Generative AI (and other tools using elements of AI) can be used as an aid to improve the quality of a text, especially stylistically, but not creatively (e.g. not in the formulation of conclusions or arguments based on the results of scientific research). If generative AI is used in a way that directly influences the content of the text (diagrams, graphical abstracts, flowcharts, as well as the structure of the text), this should always be acknowledged. If generative AI is used only to modify the form of the text, it is not necessary to acknowledge its use (similarly, we do not mention the use of automatic spell-checkers and reference citation systems).
doc. Mgr. Lucie PLÍHALOVÁ, Ph.D.
(guarantor, Vice-rector for Science and Research)
prof. Mgr. Kamil KOPECKÝ, Ph.D.
(for the UP RO AI Committee)
Examples of acknowledging the use of AI tools:
Perplexity AI was used in writing this text when compiling a reference list of academic resources on the subject.
ChatGPT (GPT4) was used in writing this text, specifically chapters 1–2.
For more information on generative artificial intelligence, please visit our website www.ai.upol.cz.