AI: Opportunities, risks, and safeguarding
Learning Objectives
- Understand how AI can provide opportunities and help
- Understand the risks and safeguarding concerns for children and young people
- Know how to respond to AI-related concerns
- Update your safeguarding passport
“I’m scared about the use of AI because I don’t fully understand it. I’d really like to understand though, as I want to keep up with technological progress. I’d like to understand the values, but also the dangers and how it could be used detrimentally.”
What is AI?
Artificial Intelligence (AI) is the field of computer science focused on creating systems that can perform tasks that normally require human intelligence. These tasks include things like:
- Natural language processing
Opportunities using AI
- Transportation and travel
- Entertainment and leisure
- Shopping and consumer services
- Work and professional tasks
- Communication and language
- Smart homes and daily lives
- Finance and money management
- Environment and sustainability
AI and the law
- UK GDPR and the Data Protection Act 2018
- Intellectual Property (IP) Laws
Extra information
- Online Safety Act 2023 (enforced 2025)
- Age-Appropriate Design Code (Children’s Code, ICO)
- Keeping Children Safe in Education (KCSIE)
- UK Safer Internet Centre Guidance
- NHS / NICE Guidance on Digital Mental Health Tools
Young people with neurodivergence and mental health needs using AI
- Personalised learning and support
- Emotional support and expression
- Engagement through gamification
- Structure and independence
- Accessibility and inclusion
Harms of AI
Direct Harms
- Privacy and surveillance
Cognitive and social harms
- Misinformation and disinformation
- Mental health impacts
Security and existential risks
- Cybersecurity risks
- Autonomous weaponization
- Existential risk
Neurodivergence and mental health risks
- Psychological & Emotional Risks
- Misinformation & Misinterpretation
- Social & Developmental Risks
- Inequity & Accessibility Concerns
- Exacerbated Vulnerability to Harmful Content
- Manipulation & Exploitation Risks
Synthetic Media
- Exposure to inappropriate or harmful content
- Manipulation and exploitation
- Psychological and emotional impact
Sextortion
Chat bots
- Exposure to inappropriate content
- Misinformation and misleading guidance
- Privacy and data security
- Manipulation and emotional impact
- Reinforcing bias and stereotypes
- Social and developmental concerns
- Cyberbullying and malicious users
Voice cloning
- Identity theft and impersonation
- Cyberbullying and harassment
- Emotional and psychological impact
- Exploitation in media or misinformation
Hallucinations
- Misinformation and disinformation
- Health and wellbeing risks
- Exposure to harmful or inappropriate content
- Safeguarding and grooming concerns
Deep Fakes
- Misinformation and manipulation
- Cyberbullying and harassment
- Exposure to inappropriate content
Scenarios
QR CODE FROM SLIDO
1. Exploring Online AI Use
Questions to ask:
- “Can you tell me about the apps or websites you spend time on?”
- “Do you ever use chatbots or AI tools to talk about your feelings?”
- “Are there times when something online has made you feel upset or worried?”
How to respond:
- Validate feelings: “It makes sense you’d feel upset if content online scares you.”
- Gently probe for specifics: “Can you give me an example of what happened?”
- Explore patterns: “Do you notice this happens with certain apps or times of day?”
2. Checking for Emotional Impact
Questions to ask:
- “Have you ever felt more anxious, sad, or angry after using an app or AI tool?”
- “Do you find yourself thinking about something you saw online even when you’re offline?”
- “Does it ever affect your sleep, schoolwork, or friendships?”
How to respond:
- Normalise emotions: “Many teens feel upset by what they see online; it’s okay to talk about it.”
- Reflect and clarify: “So it sounds like these videos make you feel anxious before school?”
- Connect to coping: “Let’s think about ways to feel safer or calmer online.”
3. Understanding Social and Peer Effects
Questions to ask:
- “Do you talk to friends online or just AI bots?”
- “Has anyone ever said something online that made you feel scared or pressured?”
- “Do you feel like AI content changes how you think about yourself or others?”
How to respond:
- Explore reality vs. AI content: “Sometimes AI content can look real but isn’t. How do you tell the difference?”
- Support social connection: “It can help to have friends or adults you trust to talk about this.”
4. Assessing Safety and Risk
Questions to ask:
- “Has any online content ever made you feel unsafe?”
- “Have you been asked to do things online you didn’t want to do?”
- “Do you ever feel pressure from AI content, like trying risky challenges?”
How to respond:
- Offer safety strategies: “It’s okay to stop, block, or tell an adult if something feels wrong.”
- Encourage reporting: “You can always share what happens online with someone you trust.”
- Check emotional support: “How do you feel after talking to someone about this?”
5. Encouraging Critical Thinking about AI
Questions to ask:
- “Do you think AI always tells the truth?”
- “How do you know if something online is real or made up?”
- “Do you notice AI showing you similar types of content over and over?”
How to respond:
- Provide gentle education: “AI can repeat ideas it thinks you’ll like, even if they’re not accurate.”
- Empower with strategies: “It helps to check facts and take breaks from apps that feel stressful.”
- Encourage reflection: “What kind of online content makes you feel good versus stressed?”
6. Supporting Coping and Regulation
Questions to ask:
- “What helps you feel calm if something online worries you?”
- “Are there activities or people that make you feel safe after seeing upsetting content?”
- “Would you like ideas for safe ways to use AI and social media?”
How to respond:
- Validate and collaborate: “It sounds like drawing or talking with your friend helps. Let’s build on that.”
- Offer practical strategies: “We can set limits, check content, and practice grounding techniques.”
- Reinforce autonomy: “You have a choice in what you interact with online, and it’s okay to say no.”
Reporting & removal
- CEOP Safety Centre - report online sexual abuse or grooming
- Internet Watch Foundation (IWF) - anonymously report child sexual abuse material, including AI‑generated images
- Report Harmful Content - report non‑criminal harmful content (trolling, doxxing, impersonation, some deepfakes)
- UK Safer Internet Centre - report online child sexual abuse images or videos
- Revenge Porn Helpline & StopNCII.org - report non‑consensual intimate images (adults 18+)
Fred Rogers
AI L&L
Healios L&D
Created on August 15, 2025