Evaluation Schema
Frieda Klaus
Created on November 12, 2024
Transcript
Evaluation Schema for AI in education on data, privacy, ethics, and EU values

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them.
Contents
Summary of the Evaluation Schema
01 Assessing current AI use, AI maturity and related AI ethical concerns
02 AI governance and monitoring
03 Operational considerations
04 Pedagogical considerations
05 Other considerations
06 References
Summary of the Evaluation Schema
01 Assessing current AI use, AI maturity and related AI ethical concerns
02 AI governance and monitoring
03 Operational considerations
04 Pedagogical considerations
05 Other considerations
Assessing current AI use, AI maturity and related AI ethical concerns
1. Which of the following technologies are currently used in the educational centre?
2. Do any of these or other technologies used in the educational centre rely on AI?
3. Does the centre have an official AI policy/ strategy?
4a. Does the educational centre officially endorse and support the use of specific AI tools, or is their use limited to individual teachers who have chosen to incorporate them independently in their teaching practices?
4b. If the answer to 4a is "yes", who has access and uses these tools?
4c. If the answer to 4a is "yes", does the educational centre provide training/ support for the use of these tools?
AI governance and monitoring
- Adherence to relevant regional/ national/ European policies and legislation
- Issues of privacy, data protection, technical robustness and safety
- Diversity, non-discrimination, fairness and equity
- Transparency, accountability and oversight
EU AI Act
The new EU AI Act (European Parliament and Council of the European Union, 2024) establishes a shared-responsibility approach, where AI providers, developers and also deployers all play a role in ensuring compliance with AI regulations. The EU AI Act classifies AI systems into different risk levels, with distinct obligations for each. These classifications are designed to ensure the safety and ethical use of AI. The primary classifications are:
- Unacceptable Risk
- High-Risk
- Minimal/ Limited Risk
Source: EU Artificial Intelligence Act
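Purely as an illustration of the tiered structure described above (the example use cases and the lookup helper are invented for this sketch, not part of the Act; real classification requires legal assessment of the specific system and deployment context), the risk levels can be pictured as a small mapping:

```python
from enum import Enum

# Illustrative sketch only: the risk tiers named in the EU AI Act,
# paired with simplified example use cases for discussion purposes.
class RiskTier(Enum):
    UNACCEPTABLE = "Unacceptable Risk"       # prohibited practices
    HIGH = "High-Risk"                       # strict obligations before deployment
    MINIMAL_LIMITED = "Minimal/ Limited Risk"  # at most transparency duties

# Hypothetical examples (not a legal determination).
EXAMPLE_USES = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
    "AI used to grade or admit students": RiskTier.HIGH,
    "spell-checker in a word processor": RiskTier.MINIMAL_LIMITED,
}

def tier_of(use_case: str) -> RiskTier:
    """Look up the sketched risk tier for an example use case."""
    return EXAMPLE_USES[use_case]

print(tier_of("AI used to grade or admit students").value)  # High-Risk
```

The point of the sketch is that obligations attach to the tier, not the tool name: an educational deployment that influences access to education would sit in a stricter tier than an identical model used for spell-checking.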
Personal and sensitive data
- Many AI tools can be used without collecting personal/ sensitive data. If this is the case, the simplest approach is to consider whether the particular tool is aligned with current national/ regional/ European legislation. For example:
  - Is the educational centre in compliance with relevant data protection regulations (e.g., the GDPR, published by the EU in 2016)?
  - Does it have a system in place to avoid data breaches?
- Issues of privacy and data protection are complex, and in most cases it is challenging for a centre to ensure compliance independently of the AI provider. According to the EU AI Act, the responsibility for ensuring privacy and data protection compliance is likely to rest with AI developers and providers, and this compliance will be evaluated by an external organisation rather than the educational centre.
- It is important to also consider:
  - The Institute for Ethical AI in Education, in its 2021 report, suggests it is important to strike a balance “between privacy and the legitimate use of data for achieving well-defined and desirable educational goals” (p. 8).
  - Additionally, educational institutions must be aware of the broader regulatory framework affecting AI tools, including the Digital Services Act (DSA; European Parliament and Council of the European Union, 2022), which sets out additional requirements for online platforms. The DSA mandates greater transparency and accountability from platforms, including those using AI, particularly in terms of content moderation and the handling of user data. Institutions must ensure that AI systems used in education comply with both the GDPR and the DSA to ensure privacy, transparency, and user safety.
Operational considerations
- Ensuring human agency and oversight in the teaching process
- Ensuring training/ support for AI implementation/ use
Ensuring human agency and oversight in the teaching process
- Is the teacher role clearly defined so as to ensure that there is a teacher in the loop while the AI system is being used? How does the AI system affect the didactical role of the teacher?
- Are the decisions that impact students made with teacher agency, and is the teacher able to notice anomalies or possible discrimination?
- Are procedures in place for teachers to monitor and intervene, for example in situations where empathy is required when dealing with learners or parents?
- Are there monitoring systems in place to prevent overconfidence in or overreliance on the AI system?
- Do teachers and school leaders have all the training and information needed to effectively use the system and ensure it is safe and does not cause harm or violate the rights of students?
- Is there a mechanism for learners to opt out if concerns have not been adequately addressed?
Pedagogical considerations
- Empowering teachers and trainers
- Ensuring students are prepared for an AI-driven workforce
- Common ethical aspects related to assessment and AI, academic misconduct
- Copyright and intellectual property
- Towards a balanced approach to AI use in teaching/ training/ learning and how AI may influence the development of competencies but also social well-being
Ensuring students are prepared for an AI-driven workforce
- Practical AI training equips students with real-world skills for the job market, and it is crucial that it be provided consistently across courses/ programs. While it is hard to predict which tools will be in use in a specific field in five years' time, an individual with practical experience in AI will find it easier to adapt to new AI tools.
- If the centre does not yet include this type of training, it is important to prioritise it in the curriculum.
- The centre could create a research team investigating what AI is being used for in the field of training, and reach out to industry and organisations for insight and potentially even for support in providing this training (if it lacks the resources to do so independently).
- This can be done gradually for different courses/ programs but will be a continuous process, given the leaps in the development of AI and its increasing use in multiple fields.
Empowering teachers and trainers
- AI can help teachers by automating repetitive tasks and freeing up time for more creative work, but training and support are crucial for this change, no matter a person’s educational or economic background.
- The centre should provide opportunities to educate and support AI users through internal or external seminars/ workshops, as well as online and work-based learning options. All of these formats offer valuable avenues for learning and can be tailored to accommodate diverse preferences and circumstances.
- Encouraging collaboration between staff and the sharing of experiences/ materials can also be beneficial in creating a community and keeping up to date with developments.
- To develop AI literacy, educational institutions can draw on established frameworks, such as UNESCO’s AI Competency Frameworks for Students and Teachers (UNESCO, 2024a; UNESCO, 2024b), which outline the skills and knowledge necessary for effective engagement with AI technologies. The AI Pioneers Supplement to DigCompEdu (Bekiaridis, 2024) also highlights the AI skills required by teachers in educational contexts.
Common ethical aspects related to assessment and AI, academic misconduct
More specifically:
7. If AI tools are being used in assessments, do they align with fair and inclusive evaluation practices?
8. How can the centre ensure that AI does not replace but rather supports educators in assessing student learning?
9. Are students educated about the ethical use of AI in learning and assessment?
10. What measures are in place to prevent academic misconduct related to AI?
Towards a balanced approach to AI use in teaching/ training/ learning and how AI may influence the development of competencies but also social well-being
11. Has the use of AI allowed teachers/ trainers to save time, thus increasing the capacity of the educational centre?
12. Has the use of AI improved the teaching materials and methods, thus extending the capabilities of the centre?
13. Has AI implementation improved or worsened assessment quality, including fairness?
14. Has AI implementation improved or worsened accessibility?
15. Has AI implementation improved the personalisation of teaching content?
16. Has AI implementation improved or worsened learner engagement?
17. Has AI implementation improved or worsened learner performance/ grades/ outcomes/ access to the labour market?
18. How has AI implementation influenced the development of students’ transversal or 21st-century skills and emotional wellbeing? (UNESCO, 2014; Van Laar et al., 2017; Vincent-Lancrin & van der Vlies, 2020)
Other considerations
1. How sustainable and environmentally friendly is the AI tool?
2. Is the design of the AI tool ethical?
3. Is the data collected by the AI tool currently used, or could it be used in the future, for commercial purposes?
4. Is the AI tool open-source?
This section is focused on teachers, learners, and IT staff. Here we consider ethical issues related to training and support for teachers, trainers, staff and students regarding AI. Providing this training allows the centre to ensure human agency and oversight when using AI; to promote AI literacy, thus empowering both teachers and learners; to foster democratic participation in educational policy planning and AI practices; and, finally, to support equity and accountability.
While the use of AI in assessment raises valid concerns, it is important to focus on fostering a fair and responsible approach to using AI in educational settings, rather than solely emphasizing the risk of academic misconduct. Instead of relying on AI detection tools, which currently seem to be unreliable or biased, at least when they are not trained on adequate datasets (Jiang et al., 2024), it may be more effective to rethink how assessments and learning activities are designed.
One approach is to focus on designing assessments that encourage authentic engagement with the learning process. For example, assessments can be designed to promote critical thinking and problem-solving, areas where AI may not fully replicate human reasoning. Tasks like in-class discussions, project-based assignments, and oral presentations can make it difficult for students to rely solely on AI tools while fostering deeper learning.
Another strategy is to embrace the use of AI as a legitimate support tool in assessments, rather than viewing it purely as a potential source of misconduct. By integrating AI in a way that encourages its responsible use, institutions can adjust assessment criteria to account for AI-assisted work. Some institutions, like University College London, have provided guidelines on how to incorporate AI as an assistive or integral component of assessment, aligning with the growing acceptance of AI as a learning tool rather than a threat (University College London, n.d.).
This section is relevant to the centre’s senior management and IT staff or AI providers/developers collaborating with the centre to provide AI tools. Key issues such as adherence to relevant regional/national/European policies and legislation, issues of privacy and data protection, transparency and accountability, diversity, non-discrimination, fairness and equity are addressed in this section. As readers will note, some of the issues such as adherence to relevant legislation and data protection laws are straightforward to address when laws are already in place but other issues such as non-discrimination and fairness can be more complex. These sections aim to distill the core concepts and provide guidance for institutions and individuals on how to best serve their students and staff, taking into account available information and current policies.
7. Is there sufficient training/ support for all individuals who will interact with the AI tools?
8. Is there sufficient training/ support available regarding the ethical use of AI?
9. Is the AI tool easy to use?
1. Is there support for teachers/ trainers so that they can adjust their teaching to the use of AI? For example, consider the following aspects:
- AI officially in use
- AI in use but not officially
- AI not in use
Regarding the data the AI tool collects, it is important to also consider:
- What/ how much data is collected
- Who has access to this data
- How the data is used
- Whether the amount of data is more than necessary
- Whether individual users can withdraw their consent to their data being used (also related to user autonomy)
- Whether any data breaches or inadvertent sharing of personal/ sensitive information have occurred and, if so, what measures have been taken to avoid similar issues in the future
This first section provides guiding questions to help gain a general idea of the centre’s AI maturity, based on its current use and understanding of AI and other technologies. The concept of AI maturity (JISC, 2022) evaluates how extensively institutions use AI and other digital technologies and how well this usage is supported and endorsed at an organisational level. The following sections are based on Chan’s (2023) dimensions, adjusted with Şenocak et al.’s (2024) review and the European Commission’s (2022) ethical guidelines. They are also informed by works on AI ethics in education (e.g., Holmes et al., 2022, 2023; Nguyen et al., 2023; Council of Europe, 2023) and guidelines from educational and other institutions (e.g., AI HLEG, 2019; Chinese University of Hong Kong, 2023; Monash University, n.d.; Russell Group, 2023; University College London, n.d.).
References
- AI HLEG (High-Level Expert Group on Artificial Intelligence) (2019). Ethics guidelines for trustworthy artificial intelligence. European Commission. Retrieved from: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419
- Attwell, G., Bekiaridis, G., Deitmer, L., Perini, M., Roppertz, S., Stieglitz, D., & Tutlys, V. (2021). Artificial intelligence & vocational education and training. How to shape the future. Taccle AI. Retrieved from: https://taccleai.eu/wp-content/uploads/2021/12/TaccleAI_Recommendations_UK_compressed.pdf
- Bekiaridis, G. (2024). Supplement to the DigCompEDU Framework. Outlining the skills and competences of educators related to AI in education (Attwell, G. Ed.). AIPioneers.org. Retrieved from: https://aipioneers.org/supplement-to-the-digcompedu-framework
- CAST (2024). Universal Design for Learning Guidelines version 3.0. Retrieved from https://udlguidelines.cast.org
- Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38. https://link.springer.com/article/10.1186/s41239-023-00408-3
- Chinese University of Hong Kong (2023). Use of Artificial Intelligence Tools in Teaching, Learning and Assessments: A Guide for Students. Retrieved from: https://www.aqs.cuhk.edu.hk/documents/A-guide-for-students_use-of-AI-tools.pdf
- Council of Europe (2023). Human rights by design future-proofing human rights protection in the era of AI. Retrieved from: https://rm.coe.int/follow-up-recommendation-on-the-2019-report-human-rights-by-design-fut/1680ab2279
- European Commission, Directorate-General for Education, Youth, Sport, and Culture. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union. https://data.europa.eu/doi/10.2766/153756
- European Parliament and Council of the European Union. (2022). Regulation (EU) 2022/2065 on a Single Market for Digital Services (Digital Services Act). Official Journal of the European Union, L 277, 1–102.
- European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L, 2024/1689.
- European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). Official Journal of the European Union, L 119, 1–88. https://eur-lex.europa.eu/eli/reg/2016/679/oj
- Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 1-23. https://doi.org/10.1007/s40593-021-00239-1
- Holmes, W., Iniesto, F., Anastopoulou, S., & Boticario, J. G. (2023). Stakeholder perspectives on the ethics of AI in distance-based higher education. International Review of Research in Open and Distributed Learning, 24(2), 96-117. https://doi.org/10.19173/irrodl.v24i2.6089
- JISC (2022). AI in tertiary Education. A summary of the current state of play. JISC Repository. Retrieved from https://repository.jisc.ac.uk/8783/1/ai-in-tertiary-education-report-june-2022.pdf
- Martínez-Comesaña, M., Rigueira-Díaz, X., Larrañaga-Janeiro, A., Martínez-Torres, J., Ocarranza-Prado, I., & Kreibel, D. (2023). Impacto de la inteligencia artificial en los métodos de evaluación en la educación primaria y secundaria: revisión sistemática de la literatura. Revista de Psicodidáctica, 28(2), 93-103. https://doi.org/10.1016/j.psicod.2023.06.001
- Monash University (n.d.). Assessment policy and process. Retrieved from: https://www.monash.edu/learning-teaching/teachhq/Teaching-practices/artificial-intelligence/assessment-policy-and-process
- Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B. P. T. (2023). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28(4), 4221-4241. https://doi.org/10.1007/s10639-022-11316-w
- Redecker, C. (2017). European Framework for the Digital Competence of Educators: DigCompEdu (Punie, Y. Ed.). Publications Office of the European Union. https://doi.org/10.2760/178382
- Russell Group (2023). Russell Group principles on the use of generative AI tools in education. Retrieved from: https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf
- Şenocak, D., Bozkurt, A., & Koçdar, S. (2024). Exploring the Ethical Principles for the Implementation of Artificial Intelligence in Education: Towards a Future Agenda. In Transforming Education With Generative AI: Prompt Engineering and Synthetic Content Creation (pp. 200-213). IGI Global.
- The Institute for Ethical AI in Education (2021). The Ethical Framework for AI in Education. Buckingham.ac.uk. Retrieved from: https://www.buckingham.ac.uk/wp-content/uploads/2021/03/The-Institute-for-Ethical-AI-in-Education-The-Ethical-Framework-for-AI-in-Education.pdf
- Tommasi, F., & Perini, M. (2024). Guidelines to design your own AI projects and initiatives (Wubbels, C. & Sartori, R. Eds.). AIPioneers.org Retrieved from: https://aipioneers.org/knowledge-base/report-guidelines-to-design-your-own-ai-projects-and-initiatives/
- University College London (n.d.). Using AI tools in assessment. Retrieved from: https://www.ucl.ac.uk/teaching-learning/generative-ai-hub/using-ai-tools-assessment
- UNESCO (2014). UNESCO Education Policy Brief (Vol. 2), Skills for holistic human development. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000245064/PDF/245064eng.pdf.multi
- UNESCO (2019). Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000366994
- UNESCO (2023a). ChatGPT and artificial intelligence in higher education. IESALC UNESCO. Retrieved from: https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf
- UNESCO (2023b). Guidance for generative AI in education and research. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000386693
- UNESCO (2024a). AI competency framework for students. UNESCO.
- UNESCO (2024b). AI competency framework for teachers. UNESCO.
- Van Laar, E., Van Deursen, A. J., Van Dijk, J. A., & De Haan, J. (2017). The relation between 21st-century skills and digital skills: A systematic literature review. Computers in human behavior, 72, 577-588. https://doi.org/10.1016/j.chb.2017.03.010
- Van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1(3), 213-218. https://doi.org/10.1007/s43681-021-00043-6
- Vincent-Lancrin, S., & van der Vlies, R. (2020). Trustworthy artificial intelligence (AI) in education: Promises and challenges. OECD Education Working Papers, No. 218. OECD Publishing. https://doi.org/10.1787/a6c90fa9-en
- World Health Organization. (2015). Public health implications of excessive use of the internet, computers, smartphones, and similar electronic devices: Meeting report, Main Meeting Hall, Foundation for Promotion of Cancer Research, National Cancer Research Centre, Tokyo, Japan, 27–29 August 2014. World Health Organization. https://apps.who.int/iris/handle/10665/184264
- Yan, L., Sha, L., Zhao, L., Li, Y., Martinez‐Maldonado, R., Chen, G., Li, X., Jin, Y., & Gašević, D. (2024). Practical and ethical challenges of large language models in education: A systematic scoping review. British Journal of Educational Technology, 55(1), 90-112. http://dx.doi.org/10.1111/bjet.13370
8. Is the purpose of using the AI tool clear to all individuals involved?
9. Is there a procedure in place that permits stakeholders to present concerns and feedback regarding the use of the AI tool and the influence it has on teaching, learning and overall well-being?
10. How does the centre plan to monitor/ audit the AI tool’s performance long-term in order to ensure overall alignment with intended outcomes?
The school must outline in its AI policy or code of conduct the specific rules that must be followed in relation to the ethical use of AI in learning and assessment.
Students and teachers must understand when AI use is appropriate for learning, when they need to disclose its use, and when it is prohibited. A clear understanding of AI's limitations and ethical implications in education is essential for maintaining academic integrity and accountability.
Beyond outlining the rules in an official document, providing training via workshops, learning pills or other resources is the best way to educate on the ethical use of AI for learning and assessment.
We consider that the following questions, put forward by the European Commission in its Ethical Guidelines (European Commission, 2022), can be very useful in evaluating this aspect:
1. What policies and regulations have schools included in their decision-making processes on the use of AI to date?
2. Are there regional/ national/ international policies/ regulations that must be taken into account?
5. Does the centre provide practical training in AI tools that are used in the specific field of study/ training?
6. Does the centre provide general training (seminars/ workshops/ resources) supporting the development of students’ AI-specific skills (e.g., prompt engineering, ethical considerations)?
Using AI in assessment allows educational centres and teachers to track progress more often, provide standardised feedback, and grade certain types of assignments. This can potentially improve the learning experience.
AI can be used to supplement evaluation, to provide more ongoing assessment during the learning process, and to help teachers attend to more students and cater to their specific needs while spending less time on certain tasks. However, fully automating grading and evaluation is not the goal: teachers should always maintain oversight and be responsible for final decision-making, since they can consider the abilities of the student in different contexts and types of tasks and understand their educational needs and progress in a way that is not possible based on AI alone.
Nevertheless, it is crucial that using these AI tools is not a step back in fairness and inclusivity. Institutions should ensure that the AI’s assessment methods do not lead to bias and that all students, regardless of background, are evaluated equitably.
Centres that already use multiple technologies are better positioned to implement AI, as they likely have both the technical infrastructure and staff expertise to deploy it responsibly, understanding its limitations and ethical implications.
If one of the technologies already in use relies on AI, a first step would be to consider what this technology does and who interacts with this AI or is exposed to it. It is important to consult with staff and students who are using/ will use the AI in order to ensure democratic participation in educational policy planning and empowerment.
This section focuses almost exclusively on teachers, trainers and students, those most involved in the pedagogical aspects of AI use in education. The guiding questions also support the development of policies that encourage the empowerment of teachers and learners in their respective tasks and that push for students to be better prepared for an AI-driven workforce (particularly important in the context of Adult Education and Vocational Education and Training: also see Attwell et al., 2021; UNESCO, 2019). Moreover, this section takes into consideration common ethical dilemmas related to the use of AI in assessment as well as issues related to copyright. Lastly, the section contemplates whether and how the potential influences of AI on the development of competencies and societal well-being can be addressed.
Ethical design is related to the creation of AI tools that are inclusive, safe and support student well-being.
It can be achieved, firstly, by following the Principles of Universal Design for Learning (CAST, 2024). By incorporating universal design principles, educational AI tools can better serve diverse learners, including those with disabilities or learning differences, fostering an inclusive learning environment. For example, universal design principles can include features like adjustable font sizes, text-to-speech capabilities, and alternative content delivery methods.
Another aspect of designing AI tools ethically is considering the potential of tools to be addictive, encouraging excessive engagement or overuse. This is an essential consideration given the increasing number of studies showing the effects of excessive screen time and social media or internet addiction on mental health (Tang et al., 2021; World Health Organization, 2015).
In the AI Governance and Monitoring section we have already considered the issue of personal or sensitive data. However, even when this data is not collected by the AI tool, other data may still be collected and used for commercial purposes.
For example, AI-based personalised learning platforms may gather anonymous data on students’ progress, strengths, and weaknesses, which can be used to improve the platform and possibly sold to third-party companies that create other educational products and are interested in learner profiles.
Centres should know whether this is the case for the AI tool they are using and whether stakeholders are aware and can potentially opt out of their data being used. Many centres or individuals may not be against their data being used for commercial purposes, but it is important that they are informed and able to decide.
There are free and paid versions of many AI tools. If the centre decides to incorporate an AI tool, it is important to ensure access for everyone.
While many individuals may be comfortable using a new tool without specific training and can troubleshoot issues themselves, this will not always be true for everyone. As a result, some individuals will be at a disadvantage if the centre does not provide training.
Ensuring both access and training avoids the risk of widening the digital divide between users of AI. It also promotes transparency, fairness and accountability, minimises discrimination, and empowers staff and students while maintaining human oversight. The following sections provide more information regarding these issues.
While it may be difficult for the centre to evaluate each of these aspects, it is important to consider the benefits and potential drawbacks of the use of the AI tool(s) and try to mitigate their negative consequences.
One relatively simple approach is to have regular meetings (for example, at the end of a teaching module, semester or year) and ask stakeholders their opinion about the above questions. Separate meetings could be held with students, teachers or trainers, and administrative or IT staff. Keep in mind that, depending on the AI tool being used, some of the questions may not be relevant, so they can be eliminated. For example, if AI is not being used for assessment, Question 10 can be omitted.
Participating in research related to these aspects, or dedicating some time to reviewing the existing evidence, can also help to evaluate these issues, although it is admittedly time-consuming and not always an option for all centres.
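The idea of tailoring the question set to the centre's situation can be sketched in code. This is purely illustrative (the topic tags and the `relevant_questions` helper are invented for the example; the schema itself prescribes no such mechanism): questions tagged as assessment-specific are dropped when AI is not used for assessment.

```python
# Illustrative sketch: filtering the schema's guiding questions by relevance.
# Question numbers echo the schema; the "topic" tags are invented here.
questions = [
    (11, "Has the use of AI allowed teachers/trainers to save time?", "general"),
    (13, "Has AI implementation improved or worsened assessment quality?", "assessment"),
    (10, "What measures are in place to prevent academic misconduct related to AI?", "assessment"),
]

def relevant_questions(qs, uses_ai_for_assessment):
    """Keep assessment-tagged questions only when AI is used for assessment."""
    return [q for q in qs if uses_ai_for_assessment or q[2] != "assessment"]

# A centre that does not use AI for assessment would discuss only the rest.
for num, text, _ in relevant_questions(questions, uses_ai_for_assessment=False):
    print(f"{num}. {text}")
```

The same tagging idea extends to other contexts, such as dropping data-collection questions for tools that collect no personal data.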
3. Is personal/ sensitive data being collected by the AI tool(s) (to be) used in the educational centre?
4. How aware are employees of the importance of data privacy and their role in protecting it? Do employees receive data privacy and protection training regularly?
This fourth and final section of the evaluation schema concentrates on a number of issues that are hard to address (Şenocak et al., 2024). These are sustainability, ethical design and commercialization.
AI has a large environmental impact and addressing sustainability issues is an important step as we move forward (Van Wynsberghe, 2021). Educational centres could therefore consider how to ensure sustainability either by selecting less resource-intensive AI tools and/or by selecting tools and companies following more sustainable practices.
Even if the centre is using AI, this does not necessarily mean it has an official AI policy/ strategy. However, it is important to consider developing such a policy/ strategy at the same time as considering implementing/ expanding the use of AI.
There are multiple guidelines addressing AI policy and ethical concerns. While AI legislation continues to evolve, the EU AI Act (European Parliament and Council of the European Union, 2024) stands out as a key document to consider, alongside regional and national legislation, when developing an official AI policy or strategy (this is further discussed in the next section on AI governance and monitoring).
Some things to consider are: the ethical use of AI; legal and regulatory compliance; data privacy and security; integrating relevant AI in the curriculum; potential collaboration with industry, which can facilitate both AI integration in the curriculum and staying updated on AI advancements; resource allocation; and staff training.
When AI tools are open-source, users can access and modify them. This can contribute to better transparency, data protection, safety and oversight (Yan et al., 2024), and also to flexibility and customisation.
2. Are the datasets used to train the AI sourced ethically and legally?
3. Who owns the rights to AI-generated content?
4. Does this use align with ethical and academic standards?
5. Is there a free version of the tool and, if not, can the centre ensure access for all its members? More generally, can the centre ensure accessibility for all users? Are there barriers to its use by some individuals?
6. Is the content appropriate and adjusted to the target groups’ needs?
7. Are there biases, and how can they lead to unfairness or discrimination?