AI L&L

Healios L&D

Created on August 15, 2025


Transcript

AI: Opportunities, risks, and safeguarding

Learning Objectives

  • Understand what AI is
  • Learn how AI can provide opportunities and help
  • Understand the risks and safeguarding concerns for children and young people
  • Know the relevant laws
  • Learn how to respond to AI concerns
  • Work through scenarios
  • Review resources
  • Update your safeguarding passport

“I’m scared about the use of AI because I don’t fully understand it. I’d really like to understand though, as I want to keep up with technological progress. I’d like to understand the values, but also the dangers and how it could be used detrimentally.”

What is AI?

Artificial Intelligence (AI) is the field of computer science focused on creating systems that can perform tasks that normally require human intelligence. These tasks include:

  • Learning
  • Reasoning
  • Perception
  • Natural language processing
  • Decision-making

Opportunities using AI

  • Education and learning
  • Health care
  • Transportation and travel
  • Entertainment and leisure
  • Shopping and consumer services
  • Work and professional tasks
  • Communication and language
  • Smart homes and daily lives
  • Finance and money management
  • Environment and sustainability

AI and the law

  • UK GDPR and the Data Protection Act 2018
  • Equality Act 2010
  • Intellectual Property (IP) Laws
  • Administrative Law

Extra information

  • Online Safety Act 2023 (enforced 2025)
  • Age-Appropriate Design Code (Children’s Code, ICO)
  • Keeping Children Safe in Education (KCSIE)
  • UK Safer Internet Centre Guidance
  • NHS / NICE Guidance on Digital Mental Health Tools

How AI can help neurodivergent young people and those with mental health needs

  • Personalised learning and support
  • Emotional support and expression
  • Communication assistance
  • Engagement through gamification
  • Structure and independence
  • Accessibility and inclusion

Harms of AI

Direct harms

  • Physical harm
  • Economic harm
  • Privacy and surveillance

Cognitive and social harms

  • Misinformation and disinformation
  • Bias and discrimination
  • Mental health impacts

Security and existential risks

  • Cybersecurity risks
  • Autonomous weaponization
  • Existential risk

Risks for neurodivergent young people and those with mental health needs

  • Psychological and emotional risks
  • Misinformation and misinterpretation
  • Social and developmental risks
  • Privacy and safety risks
  • Inequity and accessibility concerns
  • Exacerbated vulnerability to harmful content
  • Manipulation and exploitation risks

Synthetic Media

  • Exposure to inappropriate or harmful content
  • Manipulation and exploitation
  • Psychological and emotional impact
  • Privacy concerns
  • Addiction and overuse
  • Legal and ethical issues

Sextortion

  • Online grooming
  • Threats and coercion
  • Extortion
  • Emotional manipulation

Chatbots

  • Exposure to inappropriate content
  • Misinformation and misleading guidance
  • Privacy and data security
  • Manipulation and emotional impact
  • Cybersecurity and scams
  • Reinforcing bias and stereotypes
  • Social and developmental concerns
  • Cyberbullying and malicious users
  • Mental health

Voice cloning

  • Identity theft and impersonation
  • Online grooming
  • Cyberbullying and harassment
  • Emotional and psychological impact
  • Consent and data privacy
  • Exploitation in media or misinformation
  • Security vulnerabilities

Hallucinations

  • Misinformation and disinformation
  • Health and wellbeing risks
  • Exposure to harmful or inappropriate content
  • Undermining education
  • Safeguarding and grooming concerns
  • Erosion of trust
  • Equity and vulnerability

Deepfakes

  • Misinformation and manipulation
  • Cyberbullying and harassment
  • Scams and fraud
  • Psychological impact
  • Exposure to inappropriate content

Scenarios

QR CODE FROM SLIDO

1. Exploring Online AI Use

Questions to Ask:

  • “Can you tell me about the apps or websites you spend time on?”
  • “Do you ever use chatbots or AI tools to talk about your feelings?”
  • “Are there times when something online has made you feel upset or worried?”

How to Respond:

  • Validate feelings: “It makes sense you’d feel upset if content online scares you.”
  • Gently probe for specifics: “Can you give me an example of what happened?”
  • Explore patterns: “Do you notice this happens with certain apps or times of day?”

2. Checking for Emotional Impact

Questions to Ask:

  • “Have you ever felt more anxious, sad, or angry after using an app or AI tool?”
  • “Do you find yourself thinking about something you saw online even when you’re offline?”
  • “Does it ever affect your sleep, schoolwork, or friendships?”

How to Respond:

  • Normalize emotions: “Many teens feel upset by what they see online; it’s okay to talk about it.”
  • Reflect and clarify: “So it sounds like these videos make you feel anxious before school?”
  • Connect to coping: “Let’s think about ways to feel safer or calmer online.”

3. Understanding Social and Peer Effects

Questions to Ask:

  • “Do you talk to friends online or just AI bots?”
  • “Has anyone ever said something online that made you feel scared or pressured?”
  • “Do you feel like AI content changes how you think about yourself or others?”

How to Respond:

  • Explore reality vs. AI content: “Sometimes AI content can look real but isn’t. How do you tell the difference?”
  • Support social connection: “It can help to have friends or adults you trust to talk about this.”

4. Assessing Safety and Risk

Questions to Ask:

  • “Has any online content ever made you feel unsafe?”
  • “Have you been asked to do things online you didn’t want to do?”
  • “Do you ever feel pressure from AI content, like trying risky challenges?”

How to Respond:

  • Offer safety strategies: “It’s okay to stop, block, or tell an adult if something feels wrong.”
  • Encourage reporting: “You can always share what happens online with someone you trust.”
  • Check emotional support: “How do you feel after talking to someone about this?”

5. Encouraging Critical Thinking about AI

Questions to Ask:

  • “Do you think AI always tells the truth?”
  • “How do you know if something online is real or made up?”
  • “Do you notice AI showing you similar types of content over and over?”

How to Respond:

  • Provide gentle education: “AI can repeat ideas it thinks you’ll like, even if they’re not accurate.”
  • Empower with strategies: “It helps to check facts and take breaks from apps that feel stressful.”
  • Encourage reflection: “What kind of online content makes you feel good versus stressed?”

6. Supporting Coping and Regulation

Questions to Ask:

  • “What helps you feel calm if something online worries you?”
  • “Are there activities or people that make you feel safe after seeing upsetting content?”
  • “Would you like ideas for safe ways to use AI and social media?”

How to Respond:

  • Validate and collaborate: “It sounds like drawing or talking with your friend helps. Let’s build on that.”
  • Offer practical strategies: “We can set limits, check content, and practice grounding techniques.”
  • Reinforce autonomy: “You have a choice in what you interact with online, and it’s okay to say no.”

Reporting & removal

  • CEOP Safety Centre - report online sexual abuse and grooming
  • Internet Watch Foundation (IWF) - anonymously report child sexual abuse material, including AI‑generated images
  • Report Harmful Content - report non‑criminal harmful content (trolling, doxxing, impersonation, some deepfakes)
  • UK Safer Internet Centre - report online child sexual abuse images or videos
  • Revenge Porn Helpline & StopNCII.org - report non‑consensual intimate images (adults 18+)

Fred Rogers