Evaluation Schema
Frieda Klaus
Created on November 12, 2024
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them.
Evaluation Schema for AI in education on data, privacy, ethics, and EU values
Contents
01 Summary of the Evaluation Schema
02 Assessing current AI use, AI maturity and related AI ethical concerns
03 AI governance and monitoring
04 Operational considerations
05 Pedagogical considerations
06 Other considerations
References
01
Summary of the Evaluation Schema
01 Assessing current AI use, AI maturity and related AI ethical concerns
02 AI governance and monitoring
03 Operational considerations
04 Pedagogical considerations
05 Other considerations
02
Assessing current AI use, AI maturity and related AI ethical concerns
Which of the following technologies are currently used in the educational centre?
Do any of these or other technologies used in the educational centre rely on AI?
Does the centre have an official AI policy/strategy?
4a. Does the educational centre officially endorse and support the use of specific AI tools, or is their use limited to individual teachers who have chosen to incorporate them independently in their teaching practices?
4b. If the answer to 4a is "yes", who has access and uses these tools?
4c. If the answer to 4a is "yes", does the educational centre provide training/support for the use of these tools?
03
AI governance and monitoring
Adherence to relevant regional/national/European policies and legislation
Issues of privacy, data protection, technical robustness and safety
Diversity, non-discrimination, fairness and equity
Transparency, accountability and oversight
Personal and sensitive data
- Many AI tools can be used without collecting personal/sensitive data. If this is the case, then the simplest approach is to consider whether the particular tool is in line with current national/regional/European legislation, for example the General Data Protection Regulation (European Union, 2016).
- If personal/sensitive data is collected, then it is important to discuss the issue with the AI provider in order to have all the necessary information regarding:
- What/how much data is collected
- Who has access to this data
- How the data is used
- Whether the amount of data is more than necessary
- Whether individual users can withdraw their consent to their data being used (also related to user autonomy)
- Whether any data breaches or inadvertent sharing of personal/sensitive information have occurred and, if that is the case, what measures have been taken to avoid similar issues in the future
- The Institute for Ethical AI in Education, in its 2021 report, suggests it is important to strike a balance "between privacy and the legitimate use of data for achieving well-defined and desirable educational goals" (p. 8).
04
Operational considerations
Ensuring human agency and oversight in the teaching process
Ensuring training/ support for AI implementation/ use
Ensuring human agency and oversight in the teaching process
Is the teacher role clearly defined so as to ensure that there is a teacher in the loop while the AI system is being used? How does the AI system affect the didactical role of the teacher?
Are the decisions that impact students made with teacher agency, and is the teacher able to notice anomalies or possible discrimination?
Are there monitoring systems in place to prevent overconfidence in or overreliance on the AI system?
Do teachers and school leaders have all the training and information needed to effectively use the system and to ensure it is safe, does not cause harm and does not violate the rights of students?
Is there a mechanism for learners to opt out if concerns have not been adequately addressed?
Are procedures in place for teachers to monitor and intervene, for example in situations where empathy is required when dealing with learners or parents?
05
Pedagogical considerations
Empowering teachers and teaching
Ensuring students are prepared for an AI-driven workforce
Common ethical aspects related to assessment and AI, academic misconduct
Towards a balanced approach to AI use in teaching/training/learning, and how AI may influence the development of competencies as well as social well-being
Common ethical aspects related to assessment and AI, academic misconduct
4. If AI tools are being used in assessments, do they align with fair and inclusive evaluation practices?
5. How can the centre ensure that AI does not replace but rather supports educators in assessing student learning?
Towards a balanced approach to AI use in teaching/training/learning, and how AI may influence the development of competencies as well as social well-being
8. Has the use of AI allowed teachers/trainers to save time, thus increasing the capacity of the educational centre?
9. Has the use of AI improved the teaching materials and methods, thus extending the capabilities of the centre?
10. Has AI implementation improved or worsened assessment quality, including fairness?
11. Has AI implementation improved or worsened accessibility?
12. Has AI implementation improved the personalisation of teaching content?
13. Has AI implementation improved or worsened learner engagement?
14. Has AI implementation improved or worsened learner performance/grades/outcomes/access to the labour market?
15. How has AI implementation influenced the development of students' transversal or 21st-century skills and emotional wellbeing? (UNESCO, 2014; Van Laar et al., 2017; Vincent-Lancrin & van der Vlies, 2020)
06
Other considerations
1. How sustainable and environmentally friendly is the AI tool?
2. Is the design of the AI tool ethical?
3. Is the data collected by the AI tool used currently, or could it be used in the future, for commercial purposes?
4. Is the AI tool open-source?
2. Does the centre provide practical training in AI tools that are used in the specific field of study/training?
3. Does the centre provide general training (seminars/workshops/resources) supporting the development of students' AI-specific skills (e.g., prompt engineering, ethical considerations)?
This section is more focused on teachers, learners, and IT staff (Chan, 2023). Here we consider ethical issues related to training and providing support for teachers, trainers, staff and students regarding AI. Providing this training allows the centre to ensure human agency and oversight when using AI, to promote AI literacy thus empowering both teachers and learners, to foster democratic participation in educational policy planning and AI practices and finally, to support equity and accountability.
Beyond providing clear guidelines regarding the use of AI for learning and assessment, it is important to review the measures that are in place to avoid or limit academic misconduct related to AI, and the tools that exist to identify it in case it occurs.
One way of avoiding or limiting academic misconduct using AI is to rethink assessment and learning activities.
Another way of avoiding or limiting academic misconduct is to provide teachers and trainers with tools that enable them to detect misuse of AI, such as AI plagiarism checkers, which are used to check for originality and can also help maintain academic integrity.
This section is more relevant to the centre’s senior management and IT staff or AI providers/developers collaborating with the centre to provide AI tools. Key issues such as adherence to relevant regional/national/European policies and legislation, issues of privacy and data protection, technical robustness and safety, transparency and accountability, and diversity, non-discrimination, fairness and equity are addressed in this section. As readers will note, some of the issues, such as adherence to relevant legislation and data protection laws, are straightforward to address when laws are already in place, but other issues, such as non-discrimination and fairness, can be more complex. However, the goal of this and the following sections is to simplify the main ideas behind these concepts and provide some indications as to what individuals and centres should consider in order to act in the best interest of their students and employees, based on the information they have and current policy.
7. Is there sufficient training/support for all individuals who will interact with the AI tools?
8. Is there sufficient training/support available regarding the ethical use of AI?
9. Is the AI tool easy to use?
AI officially in use
AI in use but not officially
AI not in use
This first section provides some guiding questions that should help gain a general idea of the centre’s AI maturity based on its current use and understanding of AI and other technologies. The concept of AI maturity and the related questions are based on a previous report (JISC, 2022) and tackle issues such as to what degree AI or other digital technologies are already in use and to what degree this use is supported and endorsed by the centre. The following sections are broadly based on the dimensions proposed by Chan (2023), although they have been adjusted taking into account the review by Şenocak et al. (2024) and the ethical guidelines proposed by the European Commission (2022). Undoubtedly, they are also informed by other works on ethics in AI and education (e.g., Holmes et al., 2022; Holmes et al., 2023; Nguyen et al., 2023; but also Council of Europe, 2023) and guidelines proposed by educational and other institutions (e.g., AI HLEG, 2019; Chinese University of Hong Kong, 2023; Monash University, n.d.; Russell Group, 2023; UCL, n.d.).
References
- AI HLEG (High-Level Expert Group on Artificial Intelligence) (2019). Ethics guidelines for trustworthy artificial intelligence. European Commission. Retrieved from: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419
- Attwell, G., Bekiaridis, G., Deitmer, L., Perini, M., Roppertz, S., Stieglitz, D., & Tutlys, V. (2021). Artificial intelligence & vocational education and training. How to shape the future. Taccle AI. Retrieved from: https://taccleai.eu/wp-content/uploads/2021/12/TaccleAI_Recommendations_UK_compressed.pdf
- Bekiaridis, G. (2024). Supplement to the DigCompEDU Framework. Outlining the skills and competences of educators related to AI in education (Attwell, G. Ed.). AIPioneers.org. Retrieved from: https://aipioneers.org/supplement-to-the-digcompedu-framework
- CAST (2024). Universal Design for Learning Guidelines version 3.0. Retrieved from https://udlguidelines.cast.org
- Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38. https://link.springer.com/article/10.1186/s41239-023-00408-3
- Chinese University of Hong Kong (2023). Use of Artificial Intelligence Tools in Teaching, Learning and Assessments: A Guide for Students. Retrieved from: https://www.aqs.cuhk.edu.hk/documents/A-guide-for-students_use-of-AI-tools.pdf
- Council of Europe (2023). Human rights by design future-proofing human rights protection in the era of AI. Retrieved from: https://rm.coe.int/follow-up-recommendation-on-the-2019-report-human-rights-by-design-fut/1680ab2279
- European Commission, Directorate-General for Education, Youth, Sport, and Culture. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union. https://data.europa.eu/doi/10.2766/153756
- European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). Official Journal of the European Union, L 119, 1–88. https://eur-lex.europa.eu/eli/reg/2016/679/oj
- Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 1-23. https://doi.org/10.1007/s40593-021-00239-1
- Holmes, W., Iniesto, F., Anastopoulou, S., & Boticario, J. G. (2023). Stakeholder perspectives on the ethics of AI in distance-based higher education. International Review of Research in Open and Distributed Learning, 24(2), 96-117. https://doi.org/10.19173/irrodl.v24i2.6089
- JISC (2022). AI in tertiary education: A summary of the current state of play. JISC Repository. Retrieved from: https://repository.jisc.ac.uk/8783/1/ai-in-tertiary-education-report-june-2022.pdf
- Martínez-Comesaña, M., Rigueira-Díaz, X., Larrañaga-Janeiro, A., Martínez-Torres, J., Ocarranza-Prado, I., & Kreibel, D. (2023). Impacto de la inteligencia artificial en los métodos de evaluación en la educación primaria y secundaria: revisión sistemática de la literatura. Revista de Psicodidáctica, 28(2), 93-103. https://doi.org/10.1016/j.psicod.2023.06.001
- Monash University (n.d.). Assessment policy and process. Retrieved from: https://www.monash.edu/learning-teaching/teachhq/Teaching-practices/artificial-intelligence/assessment-policy-and-process
- Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B. P. T. (2023). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28(4), 4221-4241. https://doi.org/10.1007/s10639-022-11316-w
- Redecker, C. (2017). European Framework for the Digital Competence of Educators: DigCompEdu (Punie, Y. Ed.). Publications Office of the European Union. https://doi.org/10.2760/178382
- Russell Group (2023). Russell Group principles on the use of generative AI tools in education. Retrieved from: https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf
- Şenocak, D., Bozkurt, A., & Koçdar, S. (2024). Exploring the Ethical Principles for the Implementation of Artificial Intelligence in Education: Towards a Future Agenda. In Transforming Education With Generative AI: Prompt Engineering and Synthetic Content Creation (pp. 200-213). IGI Global.
- The Institute for Ethical AI in Education (2021). The Ethical Framework for AI in Education. Buckingham.ac.uk. Retrieved from: https://www.buckingham.ac.uk/wp-content/uploads/2021/03/The-Institute-for-Ethical-AI-in-Education-The-Ethical-Framework-for-AI-in-Education.pdf
- Tommasi, F., & Perini, M. (2024). Guidelines to design your own AI projects and initiatives (Wubbels, C. & Sartori, R. Eds.). AIPioneers.org Retrieved from: https://aipioneers.org/knowledge-base/report-guidelines-to-design-your-own-ai-projects-and-initiatives/
- UCL (n.d.). Using AI tools in assessment. Retrieved from: https://www.ucl.ac.uk/teaching-learning/generative-ai-hub/using-ai-tools-assessment
- UNESCO (2023a). ChatGPT and artificial intelligence in higher education. IESALC UNESCO. Retrieved from: https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf
- UNESCO (2023b). Guidance for generative AI in education and research. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000386693
- UNESCO (2019). Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000366994
- UNESCO (2014). UNESCO Education Policy Brief (Vol.2), Skills for holistic human development. Retrieved from: unesdoc.unesco.org/ark:/48223/pf0000245064/PDF/245064eng.pdf.multi
- Van Laar, E., Van Deursen, A. J., Van Dijk, J. A., & De Haan, J. (2017). The relation between 21st-century skills and digital skills: A systematic literature review. Computers in human behavior, 72, 577-588. https://doi.org/10.1016/j.chb.2017.03.010
- Van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1(3), 213-218. https://doi.org/10.1007/s43681-021-00043-6
- Vincent-Lancrin, S., & van der Vlies, R. (2020). Trustworthy artificial intelligence (AI) in education: Promises and challenges. OECD Education Working Papers, No. 218. OECD Publishing. https://doi.org/10.1787/a6c90fa9-en
- World Health Organization. (2015). Public health implications of excessive use of the internet, computers, smartphones, and similar electronic devices: Meeting report, Main Meeting Hall, Foundation for Promotion of Cancer Research, National Cancer Research Centre, Tokyo, Japan, 27–29 August 2014. World Health Organization. https://apps.who.int/iris/handle/10665/184264
- Yan, L., Sha, L., Zhao, L., Li, Y., Martinez‐Maldonado, R., Chen, G., Li, X., Jin, Y., & Gašević, D. (2024). Practical and ethical challenges of large language models in education: A systematic scoping review. British Journal of Educational Technology, 55(1), 90-112. http://dx.doi.org/10.1111/bjet.13370
8. Is the purpose of using the AI tool clear to all individuals involved?
9. Is there a procedure in place that permits stakeholders to present concerns and feedback regarding the use of the AI tool and the influence it has on teaching, learning and overall well-being?
10. How does the centre plan to monitor/audit the AI tool's performance long-term in order to ensure data protection and privacy, fairness, and overall alignment with intended outcomes?
We consider that the following questions put forward by the European Commission in the Ethical guidelines published in 2022 (p. 19) can be very useful in evaluating this aspect.
1. What policies and regulations have schools included in their decision-making processes on the use of AI to date?
2. Are there regional/national/international policies/regulations that must be taken into account?
1. Is there support for teachers/trainers so that they can adjust their teaching to the use of AI? For example, consider the following aspects:
The school must outline in its AI policy or code of conduct the specific rules that must be followed in relation to the ethical use of AI in learning and assessment.
Students as well as teachers must know in which contexts using AI for learning is appropriate, whether they have to acknowledge AI use, and in which contexts AI use is prohibited. Understanding the limits and ethical implications of using AI for learning is indispensable and can reinforce accountability and integrity.
Beyond outlining the rules in an official document, providing training via workshops, learning pills or other resources is the best way to educate on the ethical use of AI for learning and assessment.
Using AI in assessment allows educational centres and teachers to track progress more often, provide standardised feedback, and grade certain types of assignments. This can potentially improve the learning experience.
Nevertheless, it is crucial that using these AI tools is not a step back in fairness and inclusivity. Institutions should ensure that the AI’s assessment methods do not lead to bias and that all students, regardless of background, are evaluated equitably.
AI can be used to supplement evaluation, to provide more ongoing assessment during the learning process, and to support teachers so they can attend to more students and cater to their specific needs while spending less time on certain tasks. However, fully automating grading and evaluation is not the goal: teachers should always maintain oversight and be responsible for final decision-making, since they can consider the abilities of the student across different contexts and types of tasks and understand their educational needs and progress in a way that is not possible based on AI alone.
The greater the number of technologies already in use by the centre, the more likely it is that the centre already has the technological resources and support staff to implement AI, and to do so while aware of its limitations and the ethical concerns related to its use.
If one of the technologies already in use relies on AI, a first step would be to consider what this technology does and who interacts with this AI or is exposed to it. It is important to consult with staff and students who are using/will use the AI in order to ensure democratic participation in educational policy planning and empowerment.
This section focuses almost exclusively on teachers, trainers and students, those most involved in the pedagogical aspects of AI use in education. The guiding questions also support the development of policies that encourage the empowerment of teachers and learners in their respective tasks and that push for students to be better prepared for an AI-driven workforce (particularly important in the context of Adult Education and Vocational Education and Training; see also Attwell et al., 2021; UNESCO, 2019). Moreover, this section takes into consideration common ethical dilemmas related to the use of AI in assessment, academic misconduct and following a balanced approach to AI use in teaching, training and learning. Lastly, the section contemplates whether and how the potential influences of AI on the development of competencies and societal well-being can be addressed.
Ethical design is related to the creation of AI tools that are inclusive, safe and supportive of student well-being.
It can be achieved, firstly, by following the principles of Universal Design for Learning (CAST, 2024). By incorporating universal design principles, educational AI tools can better serve diverse learners, including those with disabilities or learning differences, fostering an inclusive learning environment. For example, such principles can translate into features like adjustable font sizes, text-to-speech capabilities, and alternative content delivery methods.
Another approach to designing AI tools ethically is to consider the potential of tools to be addictive, encouraging excessive engagement or overuse. This is an essential consideration given the growing number of studies showing the effects of excessive screen time, social media or internet addiction on mental health (World Health Organization, 2015).
In the AI governance and monitoring section we have already considered the issue of personal or sensitive data. However, even when this data is not collected by the AI tool, other data may still be collected and used for commercial purposes.
For example, AI-driven personalised learning platforms may collect anonymised data regarding students’ progress, strengths and weaknesses that can later be used to improve the platform but also be sold to third-party companies developing other educational products and interested in learner profiles and difficulties.
Centres should know whether this is the case for the AI tool they are using and whether stakeholders are aware and can potentially opt out of their data being used. Many centres or individuals may not be against their data being used for commercial purposes, but it is important that they be informed and able to decide.
They also promote transparency, fairness and accountability, minimise discrimination, empower staff and students, and maintain human oversight. The following sections provide some more information regarding these issues.
There are free and paid versions of many AI tools. If the centre decides to incorporate an AI tool, it is important to ensure access for everyone.
While many individuals may be comfortable using a new tool without specific training and troubleshooting issues independently, this will not be true for everyone. As a result, some individuals will be at a disadvantage if the centre does not provide training.
Ensuring both access and training avoids the risk of widening the digital divide between users of AI.
While it may be difficult for the centre to evaluate each of these aspects, it is important to consider the benefits and potential drawbacks of the use of the AI tool(s) and try to mitigate their negative consequences.
One relatively simple approach is to hold regular meetings (for example at the end of a teaching module, semester or year) and ask stakeholders their opinion on the above questions. Separate meetings could be held with students, with teachers or trainers, and with administrative or IT staff. Keep in mind that, depending on the AI tool being used, some of the questions may not be relevant and can be eliminated; for example, if AI is not being used for assessment, question 10 can be omitted.
Participating in research related to these aspects or dedicating some time to reviewing the existing evidence can also help to evaluate these issues, although it is admittedly time-consuming and not always an option for all centres.
3. Is personal/sensitive data being collected by the AI tool(s) (to be) used in the educational centre?
4. How aware are employees of the importance of data privacy and their role in protecting it? Do employees receive data privacy and protection training regularly?
This final section of the evaluation schema concentrates on a number of issues that are often less contemplated and admittedly hard to address (Şenocak et al., 2024): sustainability, ethical design and commercialisation. Şenocak et al. (2024) also include autonomy, but in this document questions related generally to user autonomy (the possibility to withdraw consent) and specifically to teacher and learner autonomy (the possibility to opt out of AI use or override certain AI-driven suggestions) have been included in the previous sections. The remaining issues could also be included in the AI governance and monitoring section, but given their complexity they are presented separately here, since it may not always be possible to answer the pertinent questions, or to select AI tools that fully align with sustainability and ethical design values and that can guarantee that none of the data collected will be used for commercial purposes. The section aims to briefly explain these issues and provide guiding questions that can serve as first steps towards raising awareness around them and investigating how to address them better in the future.
AI has a large environmental impact and addressing sustainability issues is an important step as we move forward (Van Wynsberghe, 2021). Educational centres could therefore consider how to ensure sustainability either by selecting less resource-intensive AI tools and/or by selecting tools and companies following more sustainable practices.
Even if the centre is using AI, this does not necessarily mean it has an official AI policy/strategy. However, it is important to consider developing such a policy/strategy at the same time as it considers implementing or expanding the use of AI.
There are multiple guidelines regarding AI policy and ethical concerns, and while AI legislation is still limited, it is important to review relevant documents in order to create an official AI policy/strategy (also mentioned in the next section on AI governance and monitoring).
Some things to take into consideration are: the ethical use of AI, legal and regulatory compliance, data privacy and security, integrating relevant AI in the curriculum, potential collaboration with industry (which can facilitate both AI integration in the curriculum and staying updated on AI advancements), resource allocation, and staff training.
When AI tools are open-source, users can access and modify them. This can contribute to better transparency, data protection, safety and oversight (Yan et al., 2024), and also to flexibility and customisation.
5. Is there a free version of the tool and, if not, can the centre ensure access for all its members? More generally, can the centre ensure accessibility for all users? Are there barriers to its use by some individuals?
6. Is the content appropriate and adjusted to the target group's needs?
7. Are there biases, and how can they lead to unfairness or discrimination?