AI Pioneers Ethics Evaluation Schema
Alexia Antzaka
Created on October 27, 2024
AI Pioneers Evaluation schema for AI in education on data, privacy, ethics, and EU values
Transcript
Evaluation schema for AI in education on data, privacy, ethics, and EU values
Summary of the Evaluation Schema
Assessing current AI use, AI maturity and related AI ethical concerns
AI governance and monitoring
Operational considerations
Pedagogical considerations
Other considerations
References
Assessing current AI use, AI maturity and related AI ethical concerns
1. Which of the following technologies are currently used in the educational centre?
Learning analytics, cloud computing, big data/data mining, machine learning, virtual reality, augmented reality, mobile learning, Internet of Things, adaptive learning, remote learning, 3-D technology, robotics, online networking platforms/social media, e-learning platforms, e-assessment.
2. Do any of these or other technologies used in the educational centre rely on AI?
3. Does the centre have an official AI policy/strategy? (Possible situations: AI not in use; AI in use but not officially; AI officially in use.)
4a. Does the educational centre officially endorse and support the use of specific AI tools, or is their use limited to individual teachers who have chosen to incorporate them independently in their teaching practices?
4b. If the answer to 4a is “yes”, does the educational centre provide training/support for the use of these tools?
i. Both support and training are provided.
ii. Support is provided by technical staff but not training.
iii. Support is provided informally by those who already use the tools, but no training is provided.
iv. Neither support nor training is provided.
4c. If the answer to 4a is “yes”, who has access to and uses these tools? Everyone, administration, teachers, trainers, learners.
AI Governance and monitoring
ADHERENCE TO RELEVANT REGIONAL/NATIONAL/EUROPEAN POLICIES AND LEGISLATION
ISSUES OF PRIVACY, DATA PROTECTION, TECHNICAL ROBUSTNESS AND SAFETY
DIVERSITY, NON-DISCRIMINATION, FAIRNESS AND EQUITY
TRANSPARENCY, ACCOUNTABILITY AND OVERSIGHT
AI Governance and monitoring: Adherence to relevant regional/national/European policies and legislation
1. What policies and regulations have schools included in their decision-making processes on the use of AI to date?
2. Are there regional/national/international policies/regulations that must be taken into account?
AI Governance and monitoring: Issues of privacy, data protection, technical robustness and safety
3. Is personal/sensitive data being collected by the AI tool(s) (to be) used in the educational centre?
4. How aware are employees of the importance of data privacy and their role in protecting it? Do employees receive data privacy and protection training regularly?
AI Governance and monitoring: Diversity, non-discrimination, fairness and equity
5. Is there a free version of the tool and if not, can the centre ensure access to all its members? More generally, can the centre ensure accessibility for all users? Are there barriers to its use by some individuals?
6. Is the content appropriate and adjusted to the target-group’s needs?
7. Are there biases and how can they lead to unfairness or discrimination?
AI Governance and monitoring: Transparency, accountability and oversight
8. Is the purpose of using the AI tool clear to all individuals involved (students, teachers, administrators etc.)?
9. Is there a procedure in place that permits stakeholders to present concerns and feedback regarding the use of the AI tool and the influence it has on teaching, learning and overall well-being?
10. How does the centre plan to monitor/audit the AI tool’s performance long-term in order to ensure data protection and privacy, fairness, and overall alignment with intended outcomes?
Operational considerations
ENSURING HUMAN AGENCY AND OVERSIGHT IN THE TEACHING PROCESS
ENSURING TRAINING/SUPPORT FOR AI IMPLEMENTATION/USE
Operational considerations: Ensuring human agency and oversight in the teaching process
We consider that the following questions, put forward by the European Commission in the Ethical guidelines published in 2022 (p. 19), can be very useful in evaluating this aspect.
1. Is the teacher’s role clearly defined so as to ensure that there is a teacher in the loop while the AI system is being used? How does the AI system affect the didactical role of the teacher?
2. Are decisions that impact students made with teacher agency, and is the teacher able to notice anomalies or possible discrimination?
3. Are procedures in place for teachers to monitor and intervene, for example in situations where empathy is required when dealing with learners or parents?
4. Is there a mechanism for learners to opt out if concerns have not been adequately addressed?
5. Are there monitoring systems in place to prevent overconfidence in or overreliance on the AI system?
6. Do teachers and school leaders have all the training and information needed to effectively use the system and ensure it is safe, does not cause harm, and does not violate the rights of students?
Operational considerations: Ensuring training/support for AI implementation/use
7. Is there sufficient training/support for all individuals who will interact with the AI tools?
8. Is there sufficient training/support available regarding the ethical use of AI?
9. Is the AI tool easy to use?
Pedagogical considerations
EMPOWERING TEACHERS AND TEACHING
ENSURING STUDENTS ARE PREPARED FOR AN AI-DRIVEN WORKFORCE
COMMON ETHICAL ASPECTS RELATED TO ASSESSMENT AND AI, ACADEMIC MISCONDUCT
TOWARDS A BALANCED APPROACH TO AI USE IN TEACHING/TRAINING/LEARNING AND HOW AI MAY INFLUENCE THE DEVELOPMENT OF COMPETENCIES BUT ALSO SOCIETAL WELL-BEING
Pedagogical considerations: Empowering teachers and teaching
1. Is there support for teachers/trainers so that they can adjust their teaching to the use of AI? For example, consider the following aspects:
a. Is there support regarding adjustment of the curriculum and activities to include AI for teaching, or teaching aimed at developing AI skills?
b. Is there support to adjust assessment to include or exclude the use of AI (for example, adjusting questions so AI can be used, or opting for in-person, paper-and-pencil assessments to avoid AI use)?
c. Are there meetings/workshops discussing the degree of human oversight of AI tools and maintenance of control over decision-making?
d. Is there support to develop the DigCompEdu areas (Professional Engagement, Digital Resources, Teaching and Learning, Assessment, Empowering Learners, Facilitating Learners’ Digital Competence)?
Pedagogical considerations: Ensuring students are prepared for an AI-driven workforce
2. Does the centre provide practical training in AI tools that are used in the specific field of study/training? (Yes / No)
3. Does the centre provide general training (seminars/workshops/resources) supporting the development of students’ AI-specific skills (e.g., prompt engineering, ethical considerations)?
Pedagogical considerations: Common ethical aspects related to assessment and AI, academic misconduct
4. If AI tools are being used in assessments, do they align with fair and inclusive evaluation practices? More specifically:
a. Does the AI system inadvertently favour certain student groups?
b. Are the metrics it uses relevant and representative of each student’s abilities and aligned to the curriculum?
5. How can the centre ensure that AI does not replace but rather supports educators in assessing student learning? More specifically:
a. Is the AI system’s role in assessments supplementary, providing data that aids the teacher rather than replacing their judgement?
b. Does the teacher have final authority over grades and evaluations?
6. Are students educated about the ethical use of AI in learning and assessment?
7. What measures are in place to prevent and identify academic misconduct related to AI?
Pedagogical considerations: Towards a balanced approach to AI use in teaching/training/learning and how AI may influence the development of competencies but also societal well-being
8. Has the use of AI allowed teachers/trainers to save time thus increasing the capacity of the educational centre?
9. Has the use of AI improved the teaching materials and methods thus extending the capabilities of the centre?
10. Has AI implementation improved or worsened assessment quality including fairness?
11. Has AI implementation improved or worsened accessibility?
12. Has AI implementation improved the personalisation of teaching content?
13. Has AI implementation improved or worsened learner engagement?
14. Has AI implementation improved or worsened learner performance/grades/outcomes/access to the labour market?
15. How is AI implementation influencing the development of students’ transversal or 21st-century skills (UNESCO, 2014; Van Laar et al., 2017; Vincent-Lancrin & van der Vlies, 2020) and emotional wellbeing? Consider the following skills/aspects:
a. Digital literacy
b. Collaboration, communication and teamwork (interpersonal skills)
c. Emotional wellbeing and intrapersonal skills (e.g. self-motivation, perseverance)
d. Creativity, critical thinking and problem solving
e. Global citizenship (tolerance…)
Other considerations
1. How sustainable and environmentally friendly is the AI tool?
a. What is the energy usage of this tool, and are there options for using less resource-intensive versions?
b. Are the companies behind the tool committed to sustainable practices, such as using renewable energy for their servers?
2. Is the design of the AI tool ethical?
a. Does it incorporate universal design, is it accessible, and does it meet the needs of students with special educational needs?
b. Does it avoid addictive features that could encourage overuse and dependency, and does it ensure security?
3. Is the data collected by the AI tool used currently or could it be used in the future for commercial purposes?
4. Is the AI tool open-source?
References
- AI HLEG (High-Level Expert Group on Artificial Intelligence) (2019). Ethics guidelines for trustworthy artificial intelligence. European Commission. Retrieved from: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419
- Attwell, G., Bekiaridis, G., Deitmer, L., Perini, M., Roppertz, S., Stieglitz, D., & Tutlys, V. (2021). Artificial intelligence & vocational education and training: How to shape the future. Taccle AI. Retrieved from: https://taccleai.eu/wp-content/uploads/2021/12/TaccleAI_Recommendations_UK_compressed.pdf
- Bekiaridis, G. (2024). Supplement to the DigCompEDU Framework. Outlining the skills and competences of educators related to AI in education (Attwell, G. Ed.). AIPioneers.org. Retrieved from: https://aipioneers.org/supplement-to-the-digcompedu-framework
- CAST (2024). Universal Design for Learning Guidelines version 3.0. Retrieved from https://udlguidelines.cast.org
- Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38. https://link.springer.com/article/10.1186/s41239-023-00408-3
- Chinese University of Hong Kong (2023). Use of Artificial Intelligence Tools in Teaching, Learning and Assessments: A Guide for Students. Retrieved from: https://www.aqs.cuhk.edu.hk/documents/A-guide-for-students_use-of-AI-tools.pdf
- Council of Europe (2023). Human rights by design future-proofing human rights protection in the era of AI. Retrieved from: https://rm.coe.int/follow-up-recommendation-on-the-2019-report-human-rights-by-design-fut/1680ab2279
- European Commission, Directorate-General for Education, Youth, Sport, and Culture. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union. https://data.europa.eu/doi/10.2766/153756
- European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). Official Journal of the European Union, L 119, 1–88. https://eur-lex.europa.eu/eli/reg/2016/679/oj
- Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 1-23. https://doi.org/10.1007/s40593-021-00239-1
- Holmes, W., Iniesto, F., Anastopoulou, S., & Boticario, J. G. (2023). Stakeholder perspectives on the ethics of AI in distance-based higher education. International Review of Research in Open and Distributed Learning, 24(2), 96-117. https://doi.org/10.19173/irrodl.v24i2.6089
- JISC (2022). AI in tertiary education: A summary of the current state of play. JISC Repository. Retrieved from: https://repository.jisc.ac.uk/8783/1/ai-in-tertiary-education-report-june-2022.pdf
- Martínez-Comesaña, M., Rigueira-Díaz, X., Larrañaga-Janeiro, A., Martínez-Torres, J., Ocarranza-Prado, I., & Kreibel, D. (2023). Impacto de la inteligencia artificial en los métodos de evaluación en la educación primaria y secundaria: revisión sistemática de la literatura. Revista de Psicodidáctica, 28(2), 93-103. https://doi.org/10.1016/j.psicod.2023.06.001
- Monash University (n.d.). Assessment policy and process. Retrieved from: https://www.monash.edu/learning-teaching/teachhq/Teaching-practices/artificial-intelligence/assessment-policy-and-process
- Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B. P. T. (2023). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28(4), 4221-4241. https://doi.org/10.1007/s10639-022-11316-w
- Redecker, C. (2017). European Framework for the Digital Competence of Educators: DigCompEdu (Punie, Y. Ed.). Publications Office of the European Union. https://doi.org/10.2760/178382
- Russell Group (2023). Russell Group principles on the use of generative AI tools in education. Retrieved from: https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf
- Şenocak, D., Bozkurt, A., & Koçdar, S. (2024). Exploring the Ethical Principles for the Implementation of Artificial Intelligence in Education: Towards a Future Agenda. In Transforming Education With Generative AI: Prompt Engineering and Synthetic Content Creation (pp. 200-213). IGI Global.
- The Institute for Ethical AI in Education (2021). The Ethical Framework for AI in Education. Buckingham.ac.uk. Retrieved from: https://www.buckingham.ac.uk/wp-content/uploads/2021/03/The-Institute-for-Ethical-AI-in-Education-The-Ethical-Framework-for-AI-in-Education.pdf
- Tommasi, F., & Perini, M. (2024). Guidelines to design your own AI projects and initiatives (Wubbels, C. & Sartori, R. Eds.). AIPioneers.org Retrieved from: https://aipioneers.org/knowledge-base/report-guidelines-to-design-your-own-ai-projects-and-initiatives/
- UCL (n.d.). Using AI tools in assessment. Retrieved from: https://www.ucl.ac.uk/teaching-learning/generative-ai-hub/using-ai-tools-assessment
- UNESCO (2023a). ChatGPT and artificial intelligence in higher education. IESALC UNESCO. Retrieved from: https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf
- UNESCO (2023b). Guidance for generative AI in education and research. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000386693
- UNESCO (2019). Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000366994
- UNESCO (2014). UNESCO Education Policy Brief (Vol. 2): Skills for holistic human development. Retrieved from: unesdoc.unesco.org/ark:/48223/pf0000245064/PDF/245064eng.pdf.multi
- Van Laar, E., Van Deursen, A. J., Van Dijk, J. A., & De Haan, J. (2017). The relation between 21st-century skills and digital skills: A systematic literature review. Computers in human behavior, 72, 577-588. https://doi.org/10.1016/j.chb.2017.03.010
- Van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1(3), 213-218. https://doi.org/10.1007/s43681-021-00043-6
- Vincent-Lancrin, S., & van der Vlies, R. (2020). Trustworthy artificial intelligence (AI) in education: Promises and challenges. OECD Education Working Papers, No. 218. OECD Publishing. https://doi.org/10.1787/a6c90fa9-en
- World Health Organization. (2015). Public health implications of excessive use of the internet, computers, smartphones, and similar electronic devices: Meeting report, Main Meeting Hall, Foundation for Promotion of Cancer Research, National Cancer Research Centre, Tokyo, Japan, 27–29 August 2014. World Health Organization. https://apps.who.int/iris/handle/10665/184264
- Yan, L., Sha, L., Zhao, L., Li, Y., Martinez‐Maldonado, R., Chen, G., Li, X., Jin, Y., & Gašević, D. (2024). Practical and ethical challenges of large language models in education: A systematic scoping review. British Journal of Educational Technology, 55(1), 90-112. http://dx.doi.org/10.1111/bjet.13370
- In the AI Governance and monitoring section we have already considered the issue of personal or sensitive data. However, even when this data is not collected by the AI tool, other data may still be collected and used for commercial purposes.
- For example, AI-driven personalised learning platforms may collect anonymised data regarding students’ progress, strengths and weaknesses that can later be used to improve the platform, but also be sold to third-party companies developing other educational products and interested in learner profiles and difficulties.
- Centres should know whether this is the case for the AI tool they are using and whether stakeholders are aware and can potentially opt out of their data being used. Many centres or individuals may not be against their data being used for commercial purposes, but it is important that they be informed and able to decide.
If the centre already includes this type of training in the curriculum, it is important to maintain and review it, ensuring that all students can take full advantage of it.
- If the educational centre officially endorses/supports the use of specific AI tools and an AI strategy/policy already exists, it should be reviewed, and the centre can consider how to adjust it to expand the use of AI to other areas if this is its aim.
- If an AI policy/strategy does not exist, the centre can begin to develop one, starting with the areas that are most familiar (those in which AI is already in use) and then expanding to other areas/tools. Support and training will once again be relevant.
- The greater the number of technologies already in use by the centre, the more likely it is that the centre already has the technological resources and support staff to implement AI, and that it can do so while being aware of its limitations and the ethical concerns related to its use.
- If one of the technologies already in use relies on AI a first step would be to consider what this technology does and who interacts with this AI or is exposed to it. It is important to consult with staff and students who are using/will use the AI in order to ensure democratic participation in educational policy planning and empowerment.
Having the option to opt out of AI-driven processes when concerns arise (particularly in relation to learning needs) promotes transparency and student agency. Establishing such a mechanism assures students and parents that they have control over AI’s influence on their learning experience. For example, if an AI-based personalised learning tool fails to recognize areas where the student needs more challenging resources, it should be possible to raise this issue and either adjust the AI settings or temporarily opt out of the recommendations entirely.
- While AI has the potential to empower teachers by automating certain repetitive tasks and allowing for more creativity in others, training and support are essential to make this transition, regardless of individuals’ educational or socioeconomic background.
- The centre should organise internal or external seminars/workshops to cater for these needs and ensure regular support.
- Encouraging collaboration between staff and sharing of experiences/materials etc. can also be beneficial in creating a community and keeping up to date with developments.
- The centre can also consider the identified AI competencies (Bekiaridis, 2024) and whether it supports its employees’ development of these competencies, or whether there are regional organisations or private companies that can help.
- Even if the centre is using AI, this does not necessarily mean it has an official AI policy/strategy. However, it is important to consider developing such a policy/strategy at the same time as it considers implementing/expanding the use of AI.
- Some things to take into consideration are: the ethical use of AI, legal and regulatory compliance, data privacy and security, integrating relevant AI in the curriculum, potential collaboration with industry that can facilitate both AI integration in the curriculum and staying updated on AI advancements, resource allocation, staff training.
- There are multiple guidelines regarding AI policy and ethical concerns and while AI legislation is still limited it is important to review relevant documents in order to create an official AI policy/strategy (also mentioned in the next section on AI governance and monitoring).
As mentioned above, this final dimension relates to issues that are more complex to address but of which awareness is critical; gradually, better approaches to addressing them should be found. More specifically: sustainability, ethical design and commercialization.
When AI is used in decision making it is essential for teachers to understand that they have the ultimate responsibility for any outcomes or decisions. For instance, if an AI-driven tool is used for formative assessment, teachers need to interpret the information and analysis provided by the AI tool rather than follow recommendations mechanically. They must also be informed about potential biases and discrimination and be able to correct them. This balance ensures that AI enhances rather than limits the teacher’s role as a facilitator of learning.
AI has a large environmental impact and addressing sustainability issues is an important step as we move forward (Van Wynsberghe, 2021). Educational centres could therefore consider how to ensure sustainability either by selecting less resource-intensive AI tools and/or by selecting tools and companies following more sustainable practices.
The following subsection tackles the issue of training more specifically but it is worth mentioning that this question particularly highlights the role of teachers and school (or educational centre) leaders in safeguarding the rights of students.
This section is more focused on teachers, learners, and IT staff (Chan, 2023). Here we consider ethical issues related to training and providing support for teachers, trainers, staff and students regarding AI. Providing this training allows the centre to ensure human agency and oversight when using AI, to promote AI literacy thus empowering both teachers and learners, to foster democratic participation in educational policy planning and AI practices and finally, to support equity and accountability.
- It is important for the centre to raise awareness and teach staff including trainers and teachers about data privacy and protection. This ensures that they have a better understanding of how to protect their own data as well as the data they work with.
- Such training should be organised periodically (e.g. it could be organised yearly as well as being included in onboarding processes).
Considering these questions, the centre can start outlining the main issues that must be considered when developing/updating its AI strategy/policy.
This section is more related to teachers, trainers and students (learners). Its aim is to promote their empowerment and ensure students are prepared for an AI-driven workforce. This aspect is particularly important in the context of Adult Education and Vocational Education and Training. Moreover, it takes into consideration common ethical dilemmas related to AI use in assessment and academic misconduct but also taps into evaluating whether there is a balanced approach to AI use in teaching/training/learning and how AI may influence the development of competencies but also societal well-being. In this case, the questions are presented in four subsections. Once again, the pop-up information aims to guide users/educational centres in answering the questions and evaluating their answers.
The aim of this initial section is to derive a general idea of the centre’s AI maturity based on their current use of AI and other digital technologies (JISC, 2022) and the existence of related policies. Answering the questions provided below will provide a better understanding of the complexity and degree of support that will be required to implement or extend the use of AI in the educational centre. We present some guidelines on how to interpret possible answers.
The issue of regular monitoring and audits has already been raised in the previous section. This question highlights that these audits should include questions tapping into whether stakeholders may be relying too heavily on AI, especially in decision-making processes, in order to avoid unquestioning acceptance of its recommendations and the undermining of human professional judgement.
This subsection may be more complicated to evaluate and is something that should be considered in the long term. The aim is to use the following questions to understand how the use of the AI tool(s) is affecting teaching and learning, whether or not it has led to improvements, and how it has affected well-being. The first two questions are based on the report published by JISC in 2022, which discusses the possibilities of AI to extend capabilities and increase capacity.
- While it may be difficult for the centre to evaluate each of these aspects, it is important to consider the benefits and potential drawbacks of the use of the AI tool(s) and to try to mitigate any negative consequences.
- One relatively simple approach is to hold regular meetings (for example, at the end of a teaching module, semester or year) and ask stakeholders their opinion on the above questions. Separate meetings could be held with students, teachers or trainers, and administrative or IT staff. Keep in mind that, depending on the AI tool being used, some of the questions may not be relevant, so they can be eliminated. For example, if AI is not being used for assessment, question 10 can be omitted.
- Participating in research related to these aspects or dedicating some time to review the existing evidence can also help to evaluate these issues although it is admittedly time consuming and not always an option for all centres.
Explaining why an AI tool is being used in a specific context is important so that individuals understand what it can offer and are also in a position to evaluate whether or not it is fulfilling its purpose. The purpose of the tool should be explained in accessible language through information sessions, handouts or online and stakeholders’ questions and concerns should be addressed.
- Beyond providing clear guidelines regarding the use of AI for learning and assessment it is important to review the measures that are in place to avoid or limit academic misconduct related to AI and the tools that exist to identify it, in case it occurs.
- One way of avoiding or limiting academic misconduct using AI is to rethink assessment and learning activities.
- One option is to design activities and assessments to limit AI use (for example, paper and pencil quizzes performed in class are less likely to allow AI use than multiple choice questions performed at home).
- Another option is to allow or even incorporate AI use in the activity and adjust the assessment criteria. Some educational institutions provide guidance on how to incorporate AI as an assistive or integral component of assessment (UCL, n.d.).
- Another way of avoiding or limiting academic misconduct is to provide teachers and trainers with the tools that enable them to detect misuse of AI, such as AI plagiarism checkers that are used to check for originality and can also help maintain academic integrity.
- The centre must provide adequate and regular support/training through internal or external seminars/workshops.
- This is essential to secure human oversight, agency, as well as transparency and accountability when using AI tools for both basic and advanced users.
- It also allows individuals to use the tools to their full potential, thus empowering them in their teaching or learning.
Stakeholders including students, teachers, administrative, and IT staff should be able to provide feedback on the use of the AI tool. This could be implemented in different ways (email, suggestion box, periodic meetings etc.) and contributes to accountability, democratic participation and long-term monitoring (see question below).
- While training on specific AI tools essential to the field is important in an ever-changing labour market, general knowledge on extensively used tools that rely on AI (search engines, face/voice recognition, large language models) ensures human agency, autonomy and empowerment.
- Ethical issues are particularly important and should be explained with practical examples and discussed.
This first section provides some guiding questions that should help gain a general idea of the centre’s AI maturity based on its current use and understanding of AI and other technologies. The concept of AI maturity and the related questions are based on a previous report (JISC, 2022) and tackle issues such as the degree to which AI or other digital technologies are already in use and the degree to which this use is supported and endorsed by the centre. The following sections are broadly based on the dimensions proposed by Chan (2023), although they have been adjusted taking into account the review by Şenocak et al. (2024) and the ethical guidelines proposed by the European Commission (2022). Undoubtedly, they are also informed by other works on ethics in AI and education (e.g., Holmes et al., 2022; Holmes et al., 2023; Nguyen et al., 2023; but also: Council of Europe, 2023) and by guidelines proposed by educational and other institutions (e.g., AI HLEG, 2019; Chinese University of Hong Kong, 2023; Monash University, n.d.; Russell Group, 2023; UCL, n.d.).
This section is more relevant for the centre’s senior management and IT staff or AI providers/developers. It poses questions that should guide individuals and centres to understand whether they have taken into account relevant regional/national/European policies and legislation, issues of privacy and data protection, technical robustness and safety, transparency and accountability, diversity, non-discrimination, fairness and equity. The questions are therefore separated into subsections that reflect these categories. For many centres, some of these questions will have to be addressed to the company or individuals they rely on for the implementation of AI. It is important to consider that these questions can guide the centre to ensure sufficient oversight for the purposes the tool will be used for, as well as transparency and accountability. Once again, we present some explanations regarding the aim of each question and guidelines for its interpretation.
- When the AI tool is meant to be used by teachers, trainers, learners and, generally, staff who are not experts in informatics and technology, it is important to keep in mind that the tool should be simple enough for individuals to access and use it without excessive amounts of training, which would reduce the time they have to dedicate to other activities.
- Ease of use is also likely to contribute to equal access for all.
The issue of bias in AI tools has been discussed extensively. The European Commission’s (2022) guidelines suggest posing the following questions:
- Are there procedures in place to ensure that AI use will not lead to discrimination or unfair behaviour for all users?
- Does the AI system documentation or its training process provide insight into potential bias in the data?
- Are procedures in place to detect and deal with bias or perceived inequalities that may arise? (European Commission, 2022, p. 20)
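Questions like these can be partly operationalised with simple statistical checks. As a rough, hypothetical sketch (the metric, data fields and threshold are illustrative assumptions, not part of the European Commission’s guidelines), the “four-fifths” disparate-impact rule compares positive outcome rates, such as pass rates, across groups and flags any group falling well behind the best-performing one:

```python
# Hypothetical sketch: flag groups whose positive-outcome rate falls below
# `threshold` times the best-performing group's rate (the "four-fifths" rule).
from collections import defaultdict

def disparate_impact(records, group_key, outcome_key, threshold=0.8):
    """Return per-group positive-outcome rates and a list of flagged groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += 1 if r[outcome_key] else 0
    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    flagged = [g for g, rate in rates.items() if rate < threshold * best]
    return rates, flagged

# Example with made-up assessment data:
records = [
    {"group": "A", "passed": True}, {"group": "A", "passed": True},
    {"group": "A", "passed": True}, {"group": "A", "passed": False},
    {"group": "B", "passed": True}, {"group": "B", "passed": False},
    {"group": "B", "passed": False}, {"group": "B", "passed": False},
]
rates, flagged = disparate_impact(records, "group", "passed")
```

A check of this kind does not prove discrimination, but a flagged group is a concrete trigger for the procedures the questions above call for.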
- Many AI tools can be used without collecting personal/sensitive data. If this is the case, then the simplest approach is to consider whether the particular tool is in line with current national/regional/European legislation. For example:
- Is the educational centre in compliance with relevant data protection regulations (e.g., the GDPR published by the EU in 2016)?
- Does it have a system in place to avoid data breaches?
- If personal/sensitive data is collected, then it is important to discuss the issue with the AI provider in order to have all the necessary information regarding:
- what/how much data is collected
- who has access to this data
- how the data is used
- whether the amount of data is more than necessary
- whether individual users can withdraw their consent to their data being used (also related to user autonomy)
- whether any data breaches or inadvertent sharing of personal/sensitive information have occurred and, if that is the case, what measures have been taken to avoid similar issues in the future
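The consent point in this checklist can be made concrete in code. The following is a minimal, hypothetical sketch (the class and method names are invented for illustration, not a real library): personal data is only accepted with consent, and withdrawing consent erases the data while leaving a non-personal audit trail for accountability:

```python
# Hypothetical sketch of honouring consent withdrawal: when a user revokes
# consent, their personal records are erased and the erasure is logged.
class ConsentAwareStore:
    def __init__(self):
        self._records = {}    # user_id -> personal data
        self._audit_log = []  # accountability trail (contains no personal data)

    def store(self, user_id, data, consent_given):
        # Refuse to hold personal data without explicit consent.
        if not consent_given:
            raise ValueError("personal data may only be stored with consent")
        self._records[user_id] = data

    def withdraw_consent(self, user_id):
        """Erase the user's personal data and record that erasure happened."""
        self._records.pop(user_id, None)
        self._audit_log.append(f"erased records for {user_id}")

store = ConsentAwareStore()
store.store("learner-42", {"progress": [0.6, 0.8]}, consent_given=True)
store.withdraw_consent("learner-42")
```

Real systems must also propagate erasure to backups and third parties, which is exactly why these questions should be put to the AI provider.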
- The school must outline in its AI policy or code of conduct the specific rules that must be followed in relation to the ethical use of AI in learning and assessment.
- Students as well as teachers must know in which contexts using AI for learning is appropriate, whether they have to acknowledge AI use and in which contexts AI use is prohibited. Understanding the limits and ethical implications of using AI for learning is indispensable and can reinforce accountability and integrity.
- Beyond outlining the rules in an official document, providing training via workshops, learning pills or other resources is the best way to educate on the ethical use of AI for learning and assessment.
- If individuals are not using AI, either officially or unofficially, there will be more steps to take, including developing a strategy/policy but also training and awareness raising regarding the use of AI.
- Introducing the use of AI with its benefits and limitations and raising awareness about ethical concerns is always important but more so when individuals have not been exposed to its use previously.
- When a centre decides to use an AI tool, it is also responsible for the content that will be made available to its users. While in Adult Education and Vocational Education and Training the issue may be less complex, with most learners being adults, it is still important to consider whether content may be offensive, inappropriate or not fully adapted to the target group.
- Once again, discussing these concerns with the AI provider/developer is important. However, it is also important to consider implementing a process to report issues with inappropriate content.
- Once again, these are questions the educational centre can discuss with the AI provider/developer particularly focusing on whether this bias can lead to discrimination or unfairness.
- This is particularly relevant if the AI tool is involved in a decision-making process (e.g., admissions, assessments). If this is the case then there are a few things to consider:
- Firstly, the centre must understand the potential biases that may arise.
- Secondly, it must decide whether, despite these potential biases, the AI tool still complies with the centre’s AI strategy/policy and whether it wants to use it.
- If the centre decides to use the AI tool despite the potential biases, it is crucial to inform users of this bias and ensure that it is taken into account, thus ensuring human oversight in the process.
- For example, if the AI tool is used in assessment and could put students with special educational needs at a disadvantage, then alternative ways of assessment, or ways of correcting for this bias, may need to be considered. Biases often occur due to the dataset on which the AI tool was trained, so this is something that can be taken into account or improved. Similarly, if the potential bias could occur in admissions, it is important for the people supervising the process to correct it.
- If the AI tool is not involved in the aforementioned processes then it is still important to discuss the existence of biases and how these may influence interaction with the tool, the content it produces, etc. Teaching users including staff, teachers, trainers and learners about bias is essential. There are many resources that are already available to teach about bias in AI and the AI Pioneers Handbook on policy and ethics in the use of AI in Education is also a good starting point.
- The centre should also dedicate training sessions to ethical concerns regarding AI, in order to:
- raise awareness
- share/discuss concerns
- present the centre’s AI strategy/policy and discuss it with users
- This will support human agency and oversight but also contribute to democratic participation in educational policy planning and AI practices.
While questions on data collection and management and fairness and bias are key to ensure transparency and accountability there are some final questions that can be posed to tackle these important aspects. The issue of human agency is also tackled in the following section regarding operational considerations. The difference is that the questions presented in that section are more specifically focused on AI used for teaching and learning and focus on fostering human agency and oversight in the teaching/learning process.
Many situations in educational contexts require empathy. For all learners, these situations are linked to their emotional and mental states, and for younger learners they may also involve navigating complex family dynamics. This does not mean that AI cannot be part of certain teaching/learning processes, but it highlights that its role should be to complement or facilitate the teacher, who must provide the support and guidance students need to reach their learning goals.
Designing a plan to monitor/audit the AI tool’s performance and influence on educational and other outcomes in the long term is essential in order to guarantee that its use continues to adhere to the values taken into account during its initial implementation. Establishing benchmarks to evaluate, among other things, the tool’s impact on educational outcomes, potential biases, and deviations from its intended purpose is critical. There are different approaches to monitoring, and these may include collecting feedback from stakeholders or conducting performance audits. Once again, this approach strengthens democratic participation and human oversight.
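A benchmark-based audit of this kind can be sketched in a few lines. The following hypothetical example (the metric names, tolerance and structure are all illustrative assumptions) compares current measurements against benchmarks fixed at implementation time and flags any metric that has drifted too far:

```python
# Hypothetical sketch of a periodic monitoring check: compare current tool
# metrics against benchmarks set at implementation time and flag drift.
def audit(benchmarks, current, tolerance=0.10):
    """Flag metrics whose current value deviates from its benchmark by more
    than `tolerance` (as a fraction of the benchmark), or is missing."""
    flagged = {}
    for metric, baseline in benchmarks.items():
        value = current.get(metric)
        if value is None or abs(value - baseline) > tolerance * abs(baseline):
            flagged[metric] = (baseline, value)
    return flagged

# Benchmarks fixed when the tool was introduced vs. this term's measurements:
benchmarks = {"pass_rate": 0.80, "completion_rate": 0.90}
current = {"pass_rate": 0.65, "completion_rate": 0.91}
drift = audit(benchmarks, current)
```

A flagged metric is not a verdict, but a prompt to investigate — for instance, by gathering stakeholder feedback or running a fuller performance audit as described above.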
This section is more relevant to the centre’s senior management and IT staff or AI providers/developers collaborating with the centre to provide AI tools. Key issues such as adherence to relevant regional/national/European policies and legislation, privacy and data protection, technical robustness and safety, transparency and accountability, diversity, non-discrimination, fairness and equity are addressed here. As readers will note, some issues, such as adherence to relevant legislation and data protection laws, are straightforward to address when laws are already in place, but others, such as non-discrimination and fairness, can be more complex. However, the goal of this and the following sections is to simplify the main ideas behind these concepts and provide some indications as to what individuals and centres should consider in order to act in the best interest of their students and employees, based on the information they have and current policy.
This section focuses almost exclusively on teachers, trainers and students: those most involved in the pedagogical aspects of AI use in education. The guiding questions also support the development of policies that encourage the empowerment of teachers and learners in their respective tasks and that push for students to be better prepared for an AI-driven workforce (particularly important in the context of Adult Education and Vocational Education and Training; see also Attwell et al., 2021; UNESCO, 2019). Moreover, this section takes into consideration common ethical dilemmas related to the use of AI in assessment, academic misconduct, and following a balanced approach to AI use in teaching, training and learning. Lastly, the section contemplates whether and how the potential influences of AI on the development of competencies and societal well-being can be addressed.
When AI tools are open-source, users can access and modify them. This can contribute to better transparency, data protection, safety and oversight (Yan et al., 2023), as well as to flexibility and customisation.
- While many individuals may be comfortable using a new tool without specific training and troubleshooting issues independently, this will not be true for everyone. As a result, some individuals will be at a disadvantage if the centre does not provide training.
- There are free and paid versions of many AI tools. If the centre decides to incorporate an AI tool it is important to ensure access for everyone.
- Ensuring both access and training avoids the risk of widening the digital divide between users of AI.
- They also promote transparency, fairness and accountability, minimise discrimination, empower staff and students, and maintain human oversight. The following sections will provide some more information regarding these issues.
- If AI tools are in use but not officially endorsed/supported by the centre, it is important to identify the purposes for which AI is already being used (though unofficially) and consider whether the centre should support/endorse these uses.
- Depending on the situation, the centre could start by supporting AI use for the purposes for which it is already being used unofficially and plan to expand to other areas in which it could be useful.
- Once again, it will be important to train all individuals in order to ensure equal access to the tools, but there will be more support from those already using them.
- If the centre does not wish to support this use of AI it should be clearly stated in the AI strategy/policy and acceptable alternatives should be considered.
- Ethical design is related to the creation of AI tools that are inclusive, safe and support student well-being.
- It can be achieved, firstly, by following the principles of Universal Design for Learning (CAST, 2024). By incorporating these principles, educational AI tools can better serve diverse learners, including those with disabilities or learning differences, fostering an inclusive learning environment. For example, they can include features like adjustable font sizes, text-to-speech capabilities, and alternative content delivery methods.
- Another approach to designing AI tools ethically is to consider their potential to be addictive, encouraging excessive engagement or overuse. This is an essential consideration given the growing number of studies showing the effects of excessive screen time, social media or internet addiction on mental health (World Health Organisation, 2015).
- When a centre has decided to use AI for specific purposes, ensuring equal access to the AI tool is the centre’s responsibility. Otherwise, it could widen the digital divide and other inequalities between employees and learners.
- The Institute for Ethical AI in Education (2021) suggests a number of ways to ensure equity:
- ask the AI provider/developer to confirm measures are taken to mitigate biases in design and training
- consider as part of the centre’s AI strategy/policy how to reduce the digital divide
- consider whether the AI tool(s) are accessible to users with special education needs or disabilities and ask the AI provider/developer about this aspect
- If accessibility issues arise it is crucial to address them in order to ensure all users can fully benefit from the AI tool.
- Using AI in assessment allows educational centres and teachers to track progress more often, provide standardised feedback, and grade certain types of assignments. This can potentially improve the learning experience.
- Nevertheless, it is crucial that using these AI tools does not represent a step back in fairness and inclusivity. Institutions should ensure that the AI’s assessment methods do not lead to bias and that all students, regardless of background, are evaluated equitably.
- AI can be used to supplement evaluation, providing more ongoing assessment during the learning process and freeing teachers to attend to more students and cater to their specific needs while spending less time on certain tasks. However, fully automating grading and evaluation is not the goal: teachers should always maintain oversight and be responsible for final decision-making, since they can consider a student’s abilities across different contexts and types of tasks and understand their educational needs and progress in a way that is not possible based on AI alone.
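The human-in-the-loop principle described in this subsection can be illustrated with a minimal sketch. The scorer and all names below are invented placeholders, not any real grading tool: the AI proposes a grade, but the teacher’s review always produces the final one, and both values are kept for transparency:

```python
# Hypothetical sketch of human-in-the-loop grading: the AI proposes a grade,
# the teacher reviews each proposal, and the teacher's decision is final.
def ai_grade(answer):
    # Placeholder scorer based on keyword coverage; real tools vary widely.
    keywords = {"photosynthesis", "chlorophyll", "sunlight"}
    found = sum(1 for k in keywords if k in answer.lower())
    return round(10 * found / len(keywords), 1)

def final_grade(answer, teacher_review):
    """The AI suggestion is advisory; `teacher_review` may accept or override."""
    suggestion = ai_grade(answer)
    decision = teacher_review(answer, suggestion)
    # Keeping both values makes the AI's role transparent and auditable.
    return {"ai_suggestion": suggestion, "final": decision}

# A teacher who accepts the suggestion unless they judge it unfair:
result = final_grade(
    "Plants use sunlight and chlorophyll in photosynthesis.",
    teacher_review=lambda answer, suggestion: suggestion,
)
```

The design choice worth noting is that the AI output never becomes a grade by itself; it only ever enters the record through the teacher’s review step.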
This fourth and final section of the evaluation schema concentrates on a number of issues that are often less contemplated and admittedly hard to address (Şenocak et al., 2024): sustainability, ethical design and commercialisation. Şenocak et al. (2024) also include autonomy, but in this document questions related generally to user autonomy (the possibility to withdraw consent) and specifically to teacher and learner autonomy (the possibility to opt out of AI use or override certain AI-driven suggestions) have been included in the previous sections. The remaining issues could also be included in the AI governance and monitoring section, but given their complexity they are presented separately here: it may not always be possible to answer the pertinent questions, or to select AI tools that fully align with sustainability and ethical design values and can ensure that none of the data collected will be used for commercial purposes. The section aims to briefly explain these issues and provide guiding questions that can serve as first steps towards raising awareness around them and investigating better ways of addressing them in the future.
- If the centre does not yet include this type of training it is important to prioritise it in the curriculum.
- The centre could create a research team investigating what AI is being used for in the field of training and reach out to industry and organisations for insight and potentially even for support in providing this training (if it lacks the resources to do so independently).
- This can be done gradually for different courses/programs but will be a continuous process given the leaps in the development of AI and its increasing use in multiple fields.
- Practical AI training equips students with real-world skills for the job market, and it is crucial that it be provided consistently across courses/programs. While it is hard to predict which tools will be in use in a specific field in five years’ time, an individual with practical experience in AI will find it easier to adapt to new AI tools.
This section is more relevant to teaching, learning and IT staff (Chan, 2023). In this section, we consider ethical issues related to training and support for teachers, trainers, staff and students regarding AI, in order to ensure human agency and oversight, support AI literacy and ethical AI use, and foster democratic participation in educational policy planning and AI practices. The questions can also promote equity and accountability. They are separated into two subsections: ensuring human agency and oversight, and ensuring training/support for AI implementation/use. We present some guidelines to help interpret possible answers and indicate next steps that can be taken.
It is critical for teachers to clearly understand how they must interact with and oversee AI procedures, especially those related to decision-making, assessment, and student support. Teacher agency is paramount: AI’s role should be to facilitate tasks while the teacher maintains their teaching style and ensures AI outputs are adapted to both individual needs and overall instructional goals.