Professional Diploma in Digital Learning Design
Assignment Title: Dyslexia Screener for Educators
Author: Polly Marsh
Introduction
The Re-Design of the Dyslexia Screener training video
The GL Assessment Dyslexia Screener provides a detailed profile of strengths and challenges to support the investigation of a specific learning difficulty such as Dyslexia, so that you can identify and offer help to individuals. To support users to get the most out of the screener, there is an accompanying training video. However, this has now become outdated and needs a complete re-design.
Index
Part 1 - Core Assignment
Part 2 - Learning Design Pathway
Part 3 - Reflective Commentary
Part 1 Core Assignment (LX Design)
PART 1 - Index
The 'WHO' of Learning
The 'WHY' of Learning
The 'WHAT' of Learning
The 'WHAT IF' of Learning
The 'HOW' of Learning
01
The 'WHO' of Learning
The 'WHO' of learning
The Power of Personas
In digital learning design, the greatest risk is creating a "one-size-fits-all" solution that fails to resonate with the actual human beings behind the screen. Unlike a physical classroom where an instructor can read the room and pivot, digital content is static once launched. To ensure effectiveness, we can create Learning Personas. By defining a persona’s existing skills, professional goals, and daily pressures, we shift the focus from merely delivering information to solving real-world performance gaps.
Personas transform learners from a demographic statistic into a living, breathing person. They ensure that the final digital product isn't just a piece of software, but a meaningful learning experience that respects the user's time and cognitive load. To ensure the Dyslexia Screener training video is updated effectively, let's explore who will be accessing it...
Learning personas
DESIGN CONSIDERATIONS
The primary purpose of creating the personas is to transition from a generic "one-size-fits-all" training model to a user-centered design approach. By analysing different roles (Trust Inclusion Lead, newly qualified SENCo, and a Secondary English teacher), I have identified specific pain points and motivations.
Time & Pace
Technology
Design
Evidence: Kirsty needs time for learning to embed. Jo prefers bitesize content, and Neil is struggling with existing workload. Conclusion: Learning needs to be modular, self-paced and short to fit in with busy schedules. Consider a 'save and return' functionality.
Evidence: Kirsty struggled with accessing apps, and Jo deals with intermittent school Wi-Fi. Conclusion: Consider including a 'Getting Setup/Started' guide and minimising elements that may take up bandwidth, e.g. large and hi-res videos.
Evidence: Kirsty and Neil both would like to ask questions. Jo would like immediate practical application and Neil needs content to be relevant. Conclusion: Include FAQs, the option to submit questions/book further training with a human, and printable resources.
02
The 'Why' of Learning
The 'why' of learning
The "Why" is the bridge between a learner’s current reality and their future potential. It is the reason a learner chooses to engage rather than simply click through. Without a clearly defined purpose, even the most visually stunning multimedia or technologically advanced toolkit becomes content that exists but doesn't transform.
Why am I updating the current training video? The current branding looks dated and out of touch with today’s market, making it hard to project a modern image. Because the delivery is one-sided and lacks interactive features, the audience stays passive rather than getting involved. The content itself is too dry and technical, focusing on raw data without a clear story to help people understand why the information actually matters. Finally, there is a major gap in the guidance provided; users are being given reports without the necessary instruction on how to actually analyse the numbers or turn them into useful insights.
BLOOMS TAXONOMY
What does this look like in practice?
2. UNDERSTAND
1. REMEMBER
3. APPLY
Distinguish between the different subtests and what they are assessing.
Define Dyslexia and identify some of the common traits.
Interpret the data from the Dyslexia Screener.
EVALUATE
ANALYSE
CREATE
Prioritise support and resources.
Compare the results from the Screener to other attainment data and contextual knowledge of the student.
Produce targets, share with relevant stakeholders and plan reviews.
the WHY of learning
What will this look like for the Dyslexia Screener training video?
Learning Objectives
Learning Gap
Learning Outcomes
Learning Aim
A learning outcome is a clear, measurable statement that describes what a learner will be able to do, know, or value by the end. A learning outcome focuses on what the learner achieves.
The learning aim should be a broad statement that describes the overall purpose and intention of a course or lesson. It doesn't describe every tiny task, but rather the big-picture goal
A learning objective is a statement that describes what a learner should know/be able to do by the end of the course. It tells the learner where they are going and tells the instructor how to measure if they actually got there.
A learning gap identifies specific knowledge that needs to be acquired during the course. It's the difference between current and desired knowledge.
03
The 'What' of Learning
The 'WHAT' OF LEARNING
The course will be one standalone training video.
Course Structure
It will be divided into four components:
- Getting Started
- Delivery
- Post Testing
- Analysis
Relevant course materials will be available throughout the course to accompany the content.
module framework
This streamlined digital course features a single core module designed for completion within 20 to 30 minutes. Comprising six key topics, the curriculum was developed using a top-down instructional design approach, working back from the final goal to plan the specific details and learning pathway. By starting with defined learning outcomes, each multimedia-rich topic is strategically aligned to meet core objectives. The course leverages Bloom's Taxonomy to cultivate higher-order thinking and applies cognitive learning principles to ensure active, meaningful engagement.
04
The 'What if' of Learning
What If...
Budget
Technology
Delivery Team
Design Team
timeline
September 2025
May 2026
July 2026
August 2026
October - May
Go Live
Plan
Pilot & Feedback
Communicate
Design
The Digital Learning team meet to discuss the roadmap for the year ahead and share out workload and prioritise accordingly.
Pilot video on LMS. The Champion teacher group views and interacts with the training video on the platform. Testers complete a structured feedback template, which is reviewed by the Instructional Designer and Video Production Lead.
Soft launch to select user groups. Release video to targeted early-access group (e.g., existing Dyslexia Screener users, internal staff). Monitor for any last-minute issues. Gather initial engagement data.
Full launch and rollout. Video goes live to all target audiences. Launch communications deployed across all channels. Welcome emails sent to registered users. Monitor platform for any access or technical issues.
Using existing training materials, identify what needs updating and refining for an improved user experience. Analyse feedback themes and prioritise required updates.
05
The 'HOW' of LEARNING
MODE OF DELIVERY - ASYNCHRONOUS
Knowledge Retention
Inclusivity
Pacing
Flexibility
Asynchronous learning enhances knowledge retention by shifting the focus from passive listening to active engagement. Without the pressure to keep pace with a live speaker, learners can manage their own cognitive load. This flexibility also facilitates spaced repetition, allowing students to revisit complex modules at strategic intervals to overcome the "forgetting curve" and ensure long-term mastery of the material.
Asynchronous learning fosters inclusivity by levelling the playing field for diverse learners, providing essential processing time for deep thinkers and neurodivergent students to digest complex information at their own speed, while offering linguistic support, such as subtitles and translation tools, for non-native speakers. This mode ensures that all participants can access materials without barriers.
Unlike live learning where information can be easily missed, asynchronous learning lets you control the speed. You can pause and rewind complex sections for clarity or fast-forward through familiar basics to focus on new, essential content. There will be the option to contact the Customer Experience team for further support and guidance if needed.
Asynchronous learning provides essential flexibility by allowing participants to balance training with professional or childcare commitments on their own schedules. By removing the constraints of a fixed classroom, learners can choose the most productive environment, whether at home, a library, or the office to ensure a reliable internet connection and a space conducive to focus.
BUILDING BLOCKS & FORMATS
Assessment & Feedback
Content
Activities
Training Video - this will be the core of the content and will consist of interactive, self-guided learning. It will take around 30 minutes. Case Studies - case studies will highlight different types of reports and how they can be interpreted. Infographics - these will include hot-spotted images for learners to explore further.
Interactive Quizzes - the use of quizzes will help learners to consolidate knowledge at various points throughout the module. These will be presented as multiple-choice questions and matching activities. Webinars - additional webinars will be available to access to further consolidate knowledge.
Quizzes - Interactive quizzes will form part of the assessment process throughout the module. However, there will be a more formal assessment at the end so learners can ensure they have met each of the learning objectives.
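As a sketch of how that final assessment could confirm each learning objective, the snippet below maps quiz items to the objectives they evidence and reports per-objective mastery. The item IDs, objective labels, and 80% threshold are illustrative assumptions, not details from the actual course.

```python
# Map each quiz item to the learning objective it provides evidence for.
# Item IDs, objective labels and the 80% threshold are illustrative assumptions.
ITEM_OBJECTIVES = {
    "q1": "Run reports", "q2": "Run reports",
    "q3": "Interpret data", "q4": "Interpret data",
    "q5": "Prioritise support",
}

def objective_mastery(answers, threshold=0.8):
    """answers maps item ID -> correct (bool); returns objective -> met (bool)."""
    totals, correct = {}, {}
    for item, obj in ITEM_OBJECTIVES.items():
        totals[obj] = totals.get(obj, 0) + 1
        correct[obj] = correct.get(obj, 0) + int(answers.get(item, False))
    return {obj: correct[obj] / totals[obj] >= threshold for obj in totals}

# One learner's end-of-module attempt: q4 answered incorrectly,
# so "Interpret data" falls below the threshold and is flagged as not yet met.
result = objective_mastery({"q1": True, "q2": True, "q3": True, "q4": False, "q5": True})
```

A report like this would let the end-of-module assessment point learners back to the specific topic covering any objective they have not yet met.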
Learning flow
Core and Spoke Model
LEARNING FLOW IN ACTION
This concluding topic will include signposting to further support and a final assessment to cover the whole training module.
This topic focuses on interpretation and nuances rather than factual concepts so will include content, activities and quizzes
The introduction will feature pure content for the learner to engage with.
There will be activities for learners to engage with followed by a quiz to check understanding.
This topic will be mainly content led with some activities to engage learners.
This topic will be mainly content led with some activities to engage learners.
Topic 1 - Introduction to Dyslexia
Topic 2 - Preparing to Test
Topic 3 - Delivering the Assessment
Topic 4 - Running Reports
Topic 5 - Analysing the Data
Topic 6 - Data Triangulation & Next Steps
Assessment & Feedback
Content
Activities
BLOOMS TAXONOMY
ASSESSMENT STRATEGY
Part 2 Learning Design Pathway
PART 2 Index
Rollout Plan
Evaluation Plan
01
ROLLOUT PLAN
LEARNING PLATFORM - iSpring
CONSIDERATIONS FOR USING ISPRING
STAFFING AND RESOURCES ROLES
Staffing and Resources for rollout
Go Live Plan
learner engagement strategies
Targeted Email Newsletters
Blog Content
Visuals and Infographics
Testimonials
Social Media
Case Studies
02
Evaluation plan
LEARNING EVALUATION APPROACH
Evaluating the effectiveness of the revamped Dyslexia Screener training video is essential to ensure it achieves its intended learning outcomes, delivers value to educators, and supports improved dyslexia screening practice in schools. A robust evaluation approach enables us to demonstrate impact to stakeholders, identify areas for continuous improvement, and justify ongoing investment in professional development resources.
This evaluation plan employs Kirkpatrick's Four Levels of Evaluation as the framework for assessing training effectiveness across multiple dimensions, from initial learner reaction through to real-world impact on practice and outcomes.
RATIONALE
OVERVIEW OF MODEL
LEARNING EVALUATION APPROACH
Accessibility plan
Representation
Engagement
Action & Expression
User Experience EVALUATION
Learning Analytics and insights
Data
Insights
Action
Monitoring
PART 3 Index
Core Assignment Commentary
Learning Design Commentary
Personal Learning & Next Steps
CORE ASSESSMENT COMMENTARY
Reflective commentary on core deliverables: The primary thinking behind this LX design was to transition from a "one-size-fits-all" model to a user-centered experience. The original training was identified as having a dated visual identity and a "passive" delivery that left learners uninvested. My decisions were driven by the need to solve real-world performance gaps, specifically the "major gap" in instructional guidance regarding report analysis and data interpretation.

The design evolved through the creation of three distinct Learning Personas (Kirsty, Neil, and Jo), which allowed me to shift focus from merely delivering information to addressing specific daily pressures and professional goals. A major challenge encountered was the diverse range of technical confidence and time constraints among educators. To address this, I chose an asynchronous, "bitesize" format that respects the user's cognitive load and allows for spaced repetition. Constraints such as the requirement to use internal staff and specific tools like iSpring and Microsoft PowerPoint shaped the outcome into a cost-efficient, self-guided module.
CORE ASSESSMENT COMMENTARY
Application of Theory and Concepts to LX Design

Several key theories and frameworks were integral to the design:
- Bloom's Taxonomy: This framework was used to move beyond simple recall. The design explicitly links learning outcomes to higher-order thinking, such as a "Data Triage Task" where learners must analyse and prioritise support based on fictional student data.
- Paivio's Dual Coding Theory (1986): Recognising that 90% of information is absorbed through sight, I integrated infographics and animation to combine visual and verbal information, which is shown to significantly improve comprehension and recall.
- Kolb's Experiential Learning Cycle (1984): To ensure the training translates into practice, I included detailed case studies. This connects abstract concepts (like "spiky profiles" in reports) to practical application, answering the learner's need for tangible impact.
- Core and Spoke Model: This provides a flexible learning flow where a single core module acts as a "home base," while optional "spokes" like Data Spotlights and Webinars allow learners to personalise their journey based on specific needs.
CORE ASSESSMENT COMMENTARY
AI Use or Rationale for Non-Use

In this design, AI was utilised in two primary ways:
- Awesome Interactivity (Genially AI): I leveraged Genially's "AI" (Awesome Interactivity) features to transform "dry" content into an engaging, animated experience. This directly addressed the previously identified problem of passive delivery.
- Automated Assessment: The course utilises automated quizzes, which eliminates the need for manual assessors and ensures immediate feedback for learners. This was a practical decision to maintain cost-efficiency while allowing the module to scale across a large user base.

Rationale for Non-Use of Generative AI: While automation was used for interactivity and assessment, Generative AI (such as for content authoring) was intentionally avoided for core instructional material. The reasoning was a commitment to accuracy and clinical validity. Given the technical and sensitive nature of dyslexia screening, I collaborated with Subject Matter Experts (SMEs) to ensure all scripts and report interpretations were pedagogically sound and aligned with the modern brand voice. This practical and ethical consideration ensured that the instruction remained authoritative and reliable for educators.
PATHWAY ASSESSMENT COMMENTARY
Reflective Commentary on Pathway Deliverables

The pathway assessment work extended the initial LX design by shifting focus from the "what" (the content) to the "how" (the long-term success and adoption). While the core design focused on creating an engaging asynchronous module, the Rollout Plan refined this by ensuring the training actually reaches the diverse personas (Kirsty, Neil, and Jo) through targeted, multi-channel engagement.

A key redirection occurred when moving from design to the Evaluation Plan. I realised that a successful redesign isn't just about visual appeal but about measurable impact on practice. This led to the decision to move beyond simple satisfaction scores and implement a Kirkpatrick-based framework. This progression in approach ensures that the project accounts for behavioural changes in educators, such as improved accuracy in identifying "spiky profiles" in student data.
PATHWAY ASSESSMENT COMMENTARY
Application of Theory and Concepts to Pathway Work

The pathway work was heavily informed by behavioural and cognitive theories to maximise learner engagement and retention:
- Paivio's Dual Coding Theory (1986) & Medina's Brain Rules (2008): These theories shaped the decision to use infographics for the rollout. By combining visual and verbal information, the plan makes complex data interpretation available "at a glance," improving comprehension and driving traffic to the full training.
- Kolb's Experiential Learning Cycle (1984): This was applied through the inclusion of case studies in the engagement strategy. By connecting abstract screening concepts to concrete examples of school-wide impact, the design addresses the learners' need for practical application.
- Robert Cialdini's Social Proof (2006): To overcome potential resistance to new technology, the plan uses video testimonials from a "Champion" pilot group. Seeing peers endorse the training builds credibility and reduces the perceived risk for busy educators.
- Kirkpatrick's Four Levels of Evaluation: This framework provides the structure for the evaluation plan, ensuring we measure everything from initial learner reaction (Level 1) to the final impact on student outcomes (Level 4).
PATHWAY ASSESSMENT COMMENTARY
AI Use or Rationale for Non-Use

AI was strategically integrated into the pathway work to enhance engagement and efficiency. Some AI tools were utilised to create the interactive promotional materials and infographics. Their value lies in their ability to transform static information into animated, dynamic content, which is shown to increase information retention by 42%. This was particularly appropriate for educators like Neil, who value tech-savvy, accessible learning.

Automated Data Analytics: The pathway utilises iSpring's LMS analytics to monitor learner habits, completion trends, and "friction points". This practical use of automation allows for "Content Optimisation" without requiring constant manual oversight.

Rationale for Non-Use: While AI was used for interactivity and data tracking, I chose not to use AI for the "Qualitative Data" analysis phase, such as interpreting open-text feedback. The ethical and practical implication here was the need for human empathy and context. To truly understand why a SENCo like Jo might "drop off" or feel frustrated, a human-led review of feedback (during bi-weekly team meetings) was deemed more valuable for making nuanced structural adjustments than automated sentiment analysis.
PERSONAL LEARNING AND NEXT STEPS
This project revealed that my initial assumptions about digital learning often prioritised content delivery over learner experience. I previously assumed that a well-produced video was sufficient for professional development. However, identifying the "passive delivery" and "dry" narrative of the original screener training challenged this. The work demonstrated that my skills have evolved from being a content curator to a learner-centric designer. By creating Learning Personas (Kirsty, Neil, and Jo), I moved beyond a "one-size-fits-all" approach to a model that respects the user's "cognitive load" and "daily pressures". My approach now centres on Top-Down Instructional Design, where I start with high-level learning outcomes, such as "prioritising support" rather than just "knowing facts", to shape the entire digital architecture.

In my future professional practice, I will move away from static, linear training toward the "Core and Spoke" model implemented here. This allows me to provide a stable "home base" for all learners while offering optional "spokes" for those needing deeper specialisation. In a realistic work context, this means I can design a single course that serves both a time-poor English teacher and a highly specialised SENCo by allowing them to navigate their own learning flow. Furthermore, I will consistently use interactive infographics and animation, as the evidence shows this increases information retention by 42%.

While I have integrated iSpring's analytics for monitoring completion and drop-off points, I need to develop deeper skills in translating this data into "Level 4" Kirkpatrick impact reports. This will involve learning how to correlate training completion with actual school-wide student outcome data. Given that this project focuses on Dyslexia, and I will be working on further projects related to SEND, a logical next step is to gain a formal certification in web accessibility.
This will ensure that future "multimedia-rich" designs are not just engaging but fully inclusive for neurodivergent educators and students.
References
- Bandura, A. (1977) Social Learning Theory. Englewood Cliffs, NJ: Prentice Hall.
- Behavioural Insights Team (2014) EAST: Four Simple Ways to Apply Behavioural Insights. London: Behavioural Insights Team.
- Black, P. and Wiliam, D. (1998) 'Inside the Black Box: Raising Standards through Classroom Assessment', Phi Delta Kappan, 80(2), pp. 139–148.
- Bloom, B. S. (ed.) (1956) Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. New York: David McKay Company.
- Brown, P.C., Roediger, H.L. and McDaniel, M.A. (2014) Make It Stick: The Science of Successful Learning. Cambridge, MA: Harvard University Press.
- Chartered Institute of Marketing (2023) Email Marketing Best Practice Guide. Cookham: CIM.
- Chartered Institute of Public Relations (2022) Content Marketing and Thought Leadership Guide. London: CIPR.
- Cialdini, R.B. (2006) Influence: The Psychology of Persuasion. Revised edn. New York: Harper Business.
- Content Marketing Institute (2023) B2B Content Marketing: Benchmarks, Budgets, and Trends. Cleveland, OH: Content Marketing Institute.
- Deci, E.L. and Ryan, R.M. (1985) Intrinsic Motivation and Self-Determination in Human Behavior. New York: Plenum Press.
- Department for Education (2016) Standard for Teachers' Professional Development. London: Department for Education.
- Dunlosky, J., Rawson, K.A., Marsh, E.J., Nathan, M.J. and Willingham, D.T. (2013) 'Improving Students' Learning with Effective Learning Techniques', Psychological Science in the Public Interest, 14(1), pp. 4–58.
- Education Endowment Foundation (2018) Metacognition and Self-Regulated Learning: Guidance Report. London: Education Endowment Foundation.
- Education Endowment Foundation (2021) Effective Professional Development: Guidance Report. London: Education Endowment Foundation.
- Education and Training Foundation (2022) Digital Marketing for Learning Providers. London: ETF.
- Garrison, D.R., Anderson, T. and Archer, W. (2000) 'Critical Inquiry in a Text-Based Environment', The Internet and Higher Education, 2(2–3), pp. 87–105.
- Gibbs, G. (1988) Learning by Doing: A Guide to Teaching and Learning Methods. Oxford: Further Education Unit, Oxford Polytechnic.
- Gollwitzer, P.M. (1999) 'Implementation Intentions: Strong Effects of Simple Plans', American Psychologist, 54(7), pp. 493–503.
- JISC (2023) Digital Marketing Guide for Education. Bristol: JISC.
- Kirkpatrick, D. L. and Kirkpatrick, J. D. (2006) Evaluating Training Programs: The Four Levels. 3rd edn. San Francisco, CA: Berrett-Koehler.
- Kolb, D.A. (1984) Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice Hall.
- Lave, J. and Wenger, E. (1991) Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press.
- Laurillard, D. (2002) Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technologies. 2nd edn. London: RoutledgeFalmer.
- Locke, E.A. and Latham, G.P. (2002) 'Building a Practically Useful Theory of Goal Setting and Task Motivation', American Psychologist, 57(9), pp. 705–717.
- Mayer, R.E. (2009) Multimedia Learning. 2nd edn. New York: Cambridge University Press.
- Medina, J. (2008) Brain Rules: 12 Principles for Surviving and Thriving at Work, Home, and School. Seattle, WA: Pear Press.
- National Foundation for Educational Research (2022) Teacher Voice and Professional Decision-Making. Slough: NFER.
- Paivio, A. (1986) Mental Representations: A Dual Coding Approach. Oxford: Oxford University Press.
- Roediger, H.L. and Butler, A.C. (2011) 'The Critical Role of Retrieval Practice in Long-Term Retention', Trends in Cognitive Sciences, 15(1), pp. 20–27.
- Schön, D.A. (1983) The Reflective Practitioner: How Professionals Think in Action. New York: Basic Books.
- Sweller, J. (1988) 'Cognitive Load During Problem Solving: Effects on Learning', Cognitive Science, 12(2), pp. 257–285.
- Teacher Development Trust (2015) Developing Great Teaching: Lessons from the International Reviews into Effective Professional Development. London: Teacher Development Trust.
- Thaler, R.H. and Sunstein, C.R. (2008) Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale University Press.
While users possess a foundational understanding of SEND barriers and assessment administration, a critical knowledge gap exists in data interpretation. To bridge this, users must learn how to translate screener results into actionable, evidence-based interventions.
what's wrong with it?
- Brand Identity: The visual identity feels dated and no longer aligns with current market standards or our modern brand voice.
- Audience Engagement: The delivery is passive; it lacks the interactive elements necessary to keep the audience engaged and invested.
- Content Narrative: The material is overly technical and "dry," missing a compelling narrative to make the information resonate.
- Technical Guidance: There is a significant gap in instructional depth regarding the actual analysis and interpretation of the reports.
S - Improve educators' knowledge of how to set up and analyse reports.
M - Decrease the need for learners to seek additional customer support.
A - Apply learning immediately in their own school setting.
R - Improve outcomes for students by being more effective at identifying need.
T - Identify students with learning barriers in a more timely and effective way.
Webinars
Learners will be able to access Webinars to help them set up and navigate the platform that the Dyslexia Screener sits on.
Webinars
The final phase involves taking data-driven actions to refine the screener. These actions may include:
- Content Optimisation: revising layouts or text in modules where data shows low user engagement.
- Targeted Interventions: implementing additional support materials at identified points of friction.
- Structural Adjustments: refining the pacing and flow of the screener based on time-on-task metrics.
- SME Consultation: reviewing findings with Subject Matter Experts to ensure the tool remains relevant and accurate.
All modifications will be monitored in the subsequent evaluation cycle to measure their impact on the overall user experience.
Case Studies
Links will be available so learners can see how the product they are learning has been used in other settings, both nationally and internationally.
Case Studies
Data Spotlights
Learners will be able to access short videos to support them with their understanding of data terminology used within the Training Video.
Data Spotlights
Monitoring
When to Monitor:
Daily - login anomalies, system errors, new enrolments
Weekly - completion trends, assessment results, support tickets
Monthly - engagement patterns, content performance, learner feedback analysis
Quarterly - trend analysis, cohort comparisons
Key Patterns to Identify:
Timing - when do learners engage most?
Content Preferences - video vs text vs interactive
Learning Journeys - common pathways through content
Friction Points - where do learners abandon or struggle?
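To illustrate how friction points like these could be surfaced from routine completion data, here is a minimal sketch. It assumes a hypothetical export of (learner, topic, completed) rows; the field layout is illustrative, not iSpring's actual export schema.

```python
from collections import defaultdict

# Hypothetical LMS export rows: (learner_id, topic, completed).
# The layout is an assumption for illustration, not iSpring's real schema.
rows = [
    ("u1", "Topic 1", True), ("u1", "Topic 2", True), ("u1", "Topic 3", False),
    ("u2", "Topic 1", True), ("u2", "Topic 2", False),
    ("u3", "Topic 1", True), ("u3", "Topic 2", True), ("u3", "Topic 3", True),
]

def completion_rates(rows):
    """Completion rate per topic; unusually low rates flag likely friction points."""
    started = defaultdict(int)
    finished = defaultdict(int)
    for _, topic, completed in rows:
        started[topic] += 1
        finished[topic] += int(completed)
    return {topic: finished[topic] / started[topic] for topic in sorted(started)}

rates = completion_rates(rows)
for topic, rate in rates.items():
    flag = "  <- possible friction point" if rate < 0.75 else ""
    print(f"{topic}: {rate:.0%}{flag}")
```

Run weekly alongside the monitoring schedule above, a drop in a topic's completion rate between cohorts would be the trigger for the targeted interventions described in the action plan.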
Description: A blog post will be published on the Renaissance website explaining the updates to the training video, why they matter for effective dyslexia screening, and how the training supports teachers in early identification. The content will be optimised for search engines to attract organic traffic from educators searching for dyslexia screening guidance.
Rationale for Selection: Content marketing serves dual purposes: it provides valuable information to educators whilst positioning Renaissance as a thought leader in dyslexia screening. Additionally, search engine-optimised content ensures the training reaches educators beyond our existing database who are actively seeking dyslexia screening support, expanding our potential learner base.
Description: Short video testimonials (60–90 seconds) will be recorded with teachers from the Champion pilot group, sharing how the training video improved their confidence, knowledge, and practice in using the Dyslexia Screener. These testimonials will feature on the training landing page, in promotional emails, and across social media channels.
Rationale for Selection: Robert Cialdini's seminal work on influence (Influence: The Psychology of Persuasion, 2006) identifies social proof as one of the most powerful drivers of decision-making. When educators see peer (teachers like themselves) endorsing the training and describing tangible benefits, they are more likely to perceive the training as credible, relevant, and worth their time. Research from the National Foundation for Educational Research (NFER) confirms that teacher voice and peer recommendation are highly valued in educational decision-making. Video testimonials are particularly effective because they convey authenticity through facial expressions, tone, and body language that text testimonials cannot replicate.
Given the versatile nature of the Digital Learning team, I have assumed the dual responsibilities of Instructional Designer and In-House Learning Designer. To guarantee the learning experience is perfectly tailored to our target audience, we will increase the frequency of feedback collection during the design cycle. Troubleshooting and sharing of ideas will form part of our bi-weekly team meetings.
Description: A detailed case study will be developed featuring a school that has successfully used the Dyslexia Screener training to improve early identification processes and intervention outcomes. The case study will include specific examples of impact on pupils, staff confidence, and whole-school practice.
Rationale for Selection: Case studies provide concrete examples of how training translates into practice, making abstract concepts tangible and relatable. Kolb's Experiential Learning Cycle (1984) highlights the importance of connecting new learning to practical application. For educators considering the training, a case study answers the critical question: "What difference will this actually make in my school?" By demonstrating proven impact, case studies reduce perceived risk and increase commitment to engaging with the training.
- Proficiently set up, execute and run reports from the Dyslexia Screener.
- Analyse and interpret the reports.
- Prioritise support based on the analysis of the data.
- Effectively articulate results and next steps to other members of staff and parents.
Description: Shareable infographics will be created summarising key concepts from the training video, for example, "5 Steps to Effective Dyslexia Screening" or "Understanding Your Screener Results at a Glance." These will be distributed via social media, email, and partner channels.
Rationale for Selection: Paivio's Dual Coding Theory (1986) demonstrates that combining visual and verbal information significantly improves comprehension and recall. Complex information about dyslexia screening processes and score interpretation can be made more accessible through visual representation. Infographics also serve as effective promotional tools: they are highly shareable, provide immediate value to the viewer, and create curiosity that drives traffic to the full training video. John Medina's research in Brain Rules (2008) further supports the power of visual communication, showing that visual information is processed more efficiently than text alone.
For both the design and delivery phases, the primary technical requirements include Microsoft PowerPoint for content authoring and an iSpring Suite license to convert presentations into interactive, e-learning formats (such as SCORM or HTML5). During the design stage, stable internet access and hardware capable of media processing are necessary, while the delivery phase requires a browser-based environment compatible with the chosen output. A dedicated Learning Management System (LMS) is not strictly mandatory for hosting, but it is highly recommended if the project requires tracking learner progress, assessment scores, or completion data, which standard web hosting cannot provide.
The budget for this project primarily covers iSpring and Microsoft PowerPoint licenses, with development and facilitation handled entirely by internal staff to maximize cost-efficiency. While the design utilises internal Subject Matter Experts from within the business, a "full-scale" development path remains an option, which would require additional funding for external graphic designers and premium asset libraries. Delivery costs are expected to be negligible, leveraging existing internal hosting or LMS infrastructure.
Description: Paid advertising campaigns will run on LinkedIn, Facebook, and X, targeting educators, SENCOs, headteachers, and school leaders with an interest in SEND and assessment. Advertisements will feature short video teasers and compelling messaging about the training's practical benefits.
Rationale for Selection: Organic social media reach has declined significantly in recent years, making paid promotion essential for reaching new audiences. Targeted advertising allows precise audience segmentation, ensuring the training video is promoted to those most likely to benefit such as educators actively seeking professional development in SEND and dyslexia support.
The Core and Spoke model is a flexible learning design that balances a solid foundation with personal choice. At its heart is a single core component which acts as the home base for all learners. Surrounding this core are various spokes, which are optional activities or resources that learners can choose from based on their specific interests or needs.
Description: The revamped training video will be promoted through Renaissance's existing email newsletter channels, targeting current Dyslexia Screener users, SENCOs, assessment coordinators, and school leaders. Emails will include compelling preview content, clear benefits messaging, and direct calls-to-action to access the training.
Rationale for Selection: Email remains one of the most effective channels for reaching educators with professional development opportunities. Unlike social media, which relies on algorithmic visibility, email delivers content directly to the intended audience. Personalised, targeted communications demonstrate significantly higher engagement rates than generic messaging. By targeting existing Dyslexia Screener users, we reach an audience with established need and interest, increasing the likelihood of engagement and completion.
Delivery requires a Digital Learning team for end-to-end production, utilising iSpring as the primary hosting platform. Traditional trainers are not required for delivery. I will collaborate with Subject Matter Experts (SMEs) to transform core content into digital modules, providing guidance on adapting classroom expertise for asynchronous learning. As this is a self-guided module without login requirements, dedicated community managers and technical support teams are unnecessary. Any trainee queries will be managed by the existing Customer Experience team. The course will utilise automated quizzes, eliminating the need for manual assessors.
Data Consultations
Learners will have the option to book a 30-minute 1:1 Data Consultation with an Education Advisor to discuss their setting's own Dyslexia Screener data, if they feel they need more support.
Data Consultations
Insights
Learner Preferences - What format works best? When do people learn?
Blockages - Where do learners drop off? What causes frustration?
Habits - How often do people return? What's the learning cadence?
Results - Who completes? Who succeeds? What predicts performance?
Interest - Which topics are most engaging?
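As a rough sketch, these insight categories could be derived from raw learner records like so; every field name and record below is an invented assumption for illustration, not a real iSpring export:

```python
from collections import Counter

# Hypothetical learner records - all field names are illustrative assumptions.
records = [
    {"format": "video", "completed": True,  "minutes": 28},
    {"format": "video", "completed": False, "minutes": 9},
    {"format": "infographic", "completed": True, "minutes": 21},
]

def insights(records):
    """Summarise preferences, habits and results from raw records."""
    formats = Counter(r["format"] for r in records)
    completion_rate = sum(r["completed"] for r in records) / len(records)
    avg_minutes = sum(r["minutes"] for r in records) / len(records)
    return {
        "preferred_format": formats.most_common(1)[0][0],
        "completion_rate": round(completion_rate, 2),
        "avg_minutes": round(avg_minutes, 1),
    }

print(insights(records))
```

In a real deployment, the same aggregation would run over the LMS export rather than a hand-written list.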
The Dyslexia Screener training video will help users to set up the screener and analyse results.
Further Reading
Links to other associations will enable learners to read up on the latest research and news regarding Dyslexia.
British Dyslexia Association
Overview of Kirkpatrick's model
Developed by Donald Kirkpatrick in the 1950s and refined over subsequent decades, this remains the most widely recognised and applied framework for evaluating learning interventions. The model proposes four sequential levels of evaluation, each building upon the previous:
Level 1 - Reaction: how learners respond to the training.
Level 2 - Learning: the knowledge and skills acquired.
Level 3 - Behaviour: the extent to which learning transfers into practice.
Level 4 - Results: the impact on school and student outcomes.
RATIONALE FOR SELECTION
Kirkpatrick's model was selected for the following reasons:
Data Sources
- LMS (iSpring) - This will provide information on course completions, time spent, assessment scores and login frequency
- Intranet - Data from here will provide information on resource downloads, search queries and knowledge base usage
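To illustrate how these two sources might be combined for evaluation, here is a minimal Python sketch; the row structures and field names are invented assumptions, not a real iSpring or intranet schema:

```python
# Hypothetical rows from the two data sources named above.
lms_rows = [
    {"learner_id": "t001", "completed": True, "score": 85, "minutes": 27},
    {"learner_id": "t002", "completed": False, "score": None, "minutes": 8},
]
intranet_rows = [
    {"learner_id": "t001", "downloads": 3, "searches": ["spiky profile"]},
]

def learner_view(lms_rows, intranet_rows):
    """Join both sources into a single per-learner view for evaluation."""
    view = {row["learner_id"]: dict(row) for row in lms_rows}
    for row in intranet_rows:
        view.setdefault(row["learner_id"], {}).update(row)
    return view

combined = learner_view(lms_rows, intranet_rows)
print(combined["t001"]["score"], combined["t001"]["downloads"])
```

Joining on a learner identifier like this is what allows, for example, resource downloads to be read alongside course completion when judging impact.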
Professional Diploma in Digital Learning Design
Polly
Created on January 7, 2026
Time & Pace
Technology
Design
Evidence: Kirsty needs time for learning to embed. Jo prefers bitesize content and Neil is struggling with existing workload. Conclusion: Learning needs to be modular, self-paced and short to fit in with busy schedules. Consider a 'save and return' functionality.
Evidence: Kirsty struggled with accessing apps, and Jo deals with intermittent school Wi-Fi. Conclusion: Consider including a 'Getting Setup/Started' guide and minimising elements that may take up bandwidth, e.g. large and hi-res videos.
Evidence: Kirsty and Neil both would like to ask questions. Jo would like immediate practical application and Neil needs content to be relevant. Conclusion: Include FAQs, the option to submit questions or book further training with a human, and printable resources.
02
The 'Why' of Learning
The 'why' of learning
The "Why" is the bridge between a learner’s current reality and their future potential. It is the reason a learner chooses to engage rather than simply click through. Without a clearly defined purpose, even the most visually stunning multimedia or technologically advanced toolkit becomes content that exists but doesn't transform.
Why am I updating the current training video? The current branding looks dated and out of touch with today’s market, making it hard to project a modern image. Because the delivery is one-sided and lacks interactive features, the audience stays passive rather than getting involved. The content itself is too dry and technical, focusing on raw data without a clear story to help people understand why the information actually matters. Finally, there is a major gap in the guidance provided; users are being given reports without the necessary instruction on how to actually analyse the numbers or turn them into useful insights.
BLOOM'S TAXONOMY
What does this look like in practice?
2. UNDERSTAND
1. REMEMBER
3. APPLY
Distinguish between the different subtests and what they are assessing.
Define Dyslexia and identify some of the common traits.
Interpret the data from the Dyslexia Screener.
EVALUATE
ANALYSE
CREATE
Prioritise support and resources.
Compare the results from the Screener to other attainment data and contextual knowledge of the student.
Produce targets, share with relevant stakeholders and plan reviews.
the WHY of learning
What will this look like for the Dyslexia Screener training video?
Learning Objectives
Learning Gap
Learning Outcomes
Learning Aim
A learning outcome is a clear, measurable statement that describes what a learner will be able to do, know, or value by the end. A learning outcome focuses on what the learner achieves.
The learning aim should be a broad statement that describes the overall purpose and intention of a course or lesson. It doesn't describe every tiny task, but rather the big-picture goal.
A learning objective is a statement that describes what a learner should know/be able to do by the end of the course. It tells the learner where they are going and tells the instructor how to measure if they actually got there.
A learning gap identifies specific knowledge that needs to be acquired during the course. It's the difference between current and desired knowledge.
03
The 'What' of Learning
The 'WHAT' OF LEARNING
The course will be one standalone training video.
Course Structure
It will be divided into four components:
- Getting Started
- Delivery
- Post Testing
- Analysis
Relevant course materials will be available throughout the course to accompany the content.
module framework
This streamlined digital course features a single core module designed for completion within 20 to 30 minutes. Comprising six key topics, the curriculum was developed using a top-down instructional design approach, using the final goal to plan specific details and the learning pathway. By starting with defined learning outcomes, each multimedia-rich topic is strategically aligned to meet core objectives. The course leverages Bloom's Taxonomy to cultivate higher-order thinking and applies cognitive learning principles to ensure active, meaningful engagement.
04
The 'What if' of Learning
WHat If...
Budget
Technology
Delivery Team
Design Team
timeline
September 2025
May 2026
July 2026
August 2026
October - May
Go Live
Plan
Pilot & Feedback
Communicate
Design
The Digital Learning team meet to discuss the roadmap for the year ahead and share out workload and prioritise accordingly.
Pilot video on LMS. Champion teacher group views and interacts with the training video on the platform. Testers complete a structured feedback template, reviewed by the Instructional Designer and Video Production Lead.
Soft launch to select user groups. Release video to targeted early-access group (e.g., existing Dyslexia Screener users, internal staff). Monitor for any last-minute issues. Gather initial engagement data.
Full launch and rollout. Video goes live to all target audiences. Launch communications deployed across all channels. Welcome emails sent to registered users. Monitor platform for any access or technical issues.
Using existing training materials, identify what needs updating and refining for an improved user experience. Analyse feedback themes and prioritise required updates.
05
The 'HOW' of LEARNING
MODE OF DELIVERY: ASYNCHRONOUS
Knowledge Retention
Inclusivity
Pacing
Flexibility
Asynchronous learning enhances knowledge retention by shifting the focus from passive listening to active engagement. Without the pressure to keep pace with a live speaker, learners can manage their own cognitive load. This flexibility also facilitates spaced repetition, allowing students to revisit complex modules at strategic intervals to overcome the "forgetting curve" and ensure long-term mastery of the material.
Asynchronous learning fosters inclusivity by leveling the playing field for diverse learners, providing essential processing time for deep thinkers and neurodivergent students to digest complex information at their own speed, while offering linguistic support, such as subtitles and translation tools, for non-native speakers. This mode ensures that all participants can access materials without barriers.
Unlike live learning where information can be easily missed, asynchronous learning lets you control the speed. You can pause and rewind complex sections for clarity or fast-forward through familiar basics to focus on new, essential content. There will be the option to contact the Customer Experience team for further support and guidance if needed.
Asynchronous learning provides essential flexibility by allowing participants to balance training with professional or childcare commitments on their own schedules. By removing the constraints of a fixed classroom, learners can choose the most productive environment, whether at home, a library, or the office to ensure a reliable internet connection and a space conducive to focus.
BUILDING BLOCKS & FORMATS
Assessment & Feedback
Content
Activities
Training Video - this will be the core of the content and will consist of interactive self-guided learning. It will take around 30 minutes.
Case Studies - case studies will highlight different types of reports and how they can be interpreted.
Infographics - these will include hotspotted images for learners to explore further.
Interactive Quizzes - the use of quizzes will help learners to consolidate knowledge at various points throughout the module. These will be presented as multiple-choice questions and matching activities.
Webinars - additional webinars will be available to access to further consolidate knowledge.
Quizzes - Interactive quizzes will form part of the assessment process throughout the module. However, there will be a more formal assessment at the end so learners can ensure they have met each of the learning objectives.
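As an illustration of how such automated marking could map quiz items back to learning objectives, here is a minimal Python sketch; the question IDs, answers and objective labels are all hypothetical:

```python
# Hypothetical answer key: question id -> (correct option, objective tested).
ANSWER_KEY = {
    "q1": ("b", "Define dyslexia and common traits"),
    "q2": ("d", "Interpret screener data"),
    "q3": ("a", "Interpret screener data"),
}

def mark_quiz(responses):
    """Return an overall score plus per-objective pass/fail feedback."""
    per_objective = {}
    correct = 0
    for qid, (right_answer, objective) in ANSWER_KEY.items():
        is_correct = responses.get(qid) == right_answer
        correct += is_correct
        per_objective.setdefault(objective, []).append(is_correct)
    objectives_met = {obj: all(marks) for obj, marks in per_objective.items()}
    return {"score": correct, "out_of": len(ANSWER_KEY), "objectives_met": objectives_met}

result = mark_quiz({"q1": "b", "q2": "d", "q3": "c"})
print(result)
```

Because each question is tagged with an objective, the learner's feedback can name the specific objective still to be met rather than just an overall mark.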
Learning flow
Core and Spoke Model
LEARNING FLOW IN ACTION
This concluding topic will include signposting to further support and a final assessment to cover the whole training module.
This topic focuses on interpretation and nuances rather than factual concepts so will include content, activities and quizzes
The introduction will feature pure content for the learner to engage with.
There will be activities for learners to engage with followed by a quiz to check understanding.
This topic will be mainly content led with some activities to engage learners.
This topic will be mainly content led with some activities to engage learners.
Topic 1 - Introduction to Dyslexia
Topic 2 - Preparing to Test
Topic 3 - Delivering the Assessment
Topic 4 - Running Reports
Topic 5 - Analysing the Data
Topic 6 - Data Triangulation & Next Steps
Assessment & Feedback
Content
Activities
BLOOM'S TAXONOMY
ASSESSMENT STRATEGY
Part 2 Learning Design Pathway
PART 2 Index
Rollout Plan
Evaluation Plan
01
ROLLOUT PLAN
LEARNING PLATFORM - iSpring
CONSIDERATIONS FOR USING ISPRING
STAFFING AND RESOURCES ROLES
Staffing and Resources for rollout
Go Live Plan
learner engagement strategies
Targeted Email Newsletters
Blog Content
Visuals and Infographics
Testimonials
Social Media
Case Studies
02
Evaluation plan
LEARNING EVALUATION APPROACH
Evaluating the effectiveness of the revamped Dyslexia Screener training video is essential to ensure it achieves its intended learning outcomes, delivers value to educators, and supports improved dyslexia screening practice in schools. A robust evaluation approach enables us to demonstrate impact to stakeholders, identify areas for continuous improvement, and justify ongoing investment in professional development resources. This evaluation plan employs Kirkpatrick's Four Levels of Evaluation as the framework for assessing training effectiveness across multiple dimensions, from initial learner reaction through to real-world impact on practice and outcomes.
RATIONALE
OVERVIEW OF MODEL
LEARNING EVALUATION APPROACH
Accessibility plan
Representation
Engagement
Action & Expression
User Experience EVALUATION
Learning Analytics and insights
Data
Insights
Action
Monitoring
PART 3 Index
Core Assignment Commentary
Learning Design Commentary
Personal Learning & Next Steps
CORE ASSESSMENT COMMENTARY
Reflective commentary on core deliverables: The primary thinking behind this LX design was to transition from a "one-size-fits-all" model to a user-centered experience. The original training was identified as having a dated visual identity and a "passive" delivery that left learners uninvested. My decisions were driven by the need to solve real-world performance gaps, specifically the "major gap" in instructional guidance regarding report analysis and data interpretation.
The design evolved through the creation of three distinct Learning Personas (Kirsty, Neil, and Jo), which allowed me to shift focus from merely delivering information to addressing specific daily pressures and professional goals. A major challenge encountered was the diverse range of technical confidence and time constraints among educators. To address this, I chose an asynchronous, "bitesize" format that respects the user's cognitive load and allows for spaced repetition. Constraints such as the requirement to use internal staff and specific tools like iSpring and Microsoft PowerPoint shaped the outcome into a cost-efficient, self-guided module.
CORE ASSESSMENT COMMENTARY
Application of Theory and Concepts to LX Design
Several key theories and frameworks were integral to the design:
Bloom's Taxonomy: This framework was used to move beyond simple recall. The design explicitly links learning outcomes to higher-order thinking, such as a "Data Triage Task" where learners must analyse and prioritise support based on fictional student data.
Paivio's Dual Coding Theory (1986): Recognising that 90% of information is absorbed through sight, I integrated infographics and animation to combine visual and verbal information, which is shown to significantly improve comprehension and recall.
Kolb's Experiential Learning Cycle (1984): To ensure the training translates into practice, I included detailed case studies. This connects abstract concepts (like "spiky profiles" in reports) to practical application, answering the learner's need for tangible impact.
Core and Spoke Model: This provides a flexible learning flow where a single core module acts as a "home base," while optional "spokes" like Data Spotlights and Webinars allow learners to personalise their journey based on specific needs.
CORE ASSESSMENT COMMENTARY
AI Use or Rationale for Non-Use
In this design, AI was utilised in two primary ways:
Awesome Interactivity (Genially AI): I leveraged Genially's "AI" (Awesome Interactivity) features to transform "dry" content into an engaging, animated experience. This directly addressed the previously identified problem of passive delivery.
Automated Assessment: The course utilises automated quizzes, which eliminates the need for manual assessors and ensures immediate feedback for learners. This was a practical decision to maintain cost-efficiency while allowing the module to scale across a large user base.
Rationale for Non-Use of Generative AI: While automation was used for interactivity and assessment, Generative AI (such as for content authoring) was intentionally avoided for core instructional material. The reasoning was a commitment to accuracy and clinical validity. Given the technical and sensitive nature of dyslexia screening, I collaborated with Subject Matter Experts (SMEs) to ensure all scripts and report interpretations were pedagogically sound and aligned with modern brand voice. This practical and ethical consideration ensured that the instruction remained authoritative and reliable for educators.
PATHWAY ASSESSMENT COMMENTARY
Reflective Commentary on Pathway Deliverables
The pathway assessment work extended the initial LX design by shifting focus from the "what" (the content) to the "how" (the long-term success and adoption). While the core design focused on creating an engaging asynchronous module, the Rollout Plan refined this by ensuring the training actually reaches the diverse personas (Kirsty, Neil, and Jo) through targeted, multi-channel engagement.
A key redirection occurred when moving from design to the Evaluation Plan. I realised that a successful redesign isn't just about visual appeal but about measurable impact on practice. This led to the decision to move beyond simple satisfaction scores and implement a Kirkpatrick-based framework. This progression in approach ensures that the project accounts for behavioural changes in educators, such as improved accuracy in identifying "spiky profiles" in student data.
PATHWAY ASSESSMENT COMMENTARY
Application of Theory and Concepts to Pathway Work
The pathway work was heavily informed by behavioural and cognitive theories to maximise learner engagement and retention:
Paivio's Dual Coding Theory (1986) & Medina's Brain Rules (2008): These theories shaped the decision to use infographics for the rollout. By combining visual and verbal information, the plan makes complex data interpretation accessible "at a glance," improving comprehension and driving traffic to the full training.
Kolb's Experiential Learning Cycle (1984): This was applied through the inclusion of case studies in the engagement strategy. By connecting abstract screening concepts to concrete examples of school-wide impact, the design addresses the learners' need for practical application.
Robert Cialdini's Social Proof (2006): To overcome potential resistance to new technology, the plan uses video testimonials from a "Champion" pilot group. Seeing peers endorse the training builds credibility and reduces the perceived risk for busy educators.
Kirkpatrick's Four Levels of Evaluation: This framework provides the structure for the evaluation plan, ensuring we measure everything from initial learner reaction (Level 1) to the final impact on student outcomes (Level 4).
PATHWAY ASSESSMENT COMMENTARY
AI Use or Rationale for Non-Use
AI was strategically integrated into the pathway work to enhance engagement and efficiency. Some AI tools were utilised to create the interactive promotional materials and infographics. Their value lies in the ability to transform static information into animated, dynamic content, which is shown to increase information retention by 42%. This was particularly appropriate for educators like Neil, who value tech-savvy, accessible learning.
Automated Data Analytics: The pathway utilises iSpring's LMS analytics to monitor learner habits, completion trends, and "friction points". This practical use of automation allows for "Content Optimisation" without requiring constant manual oversight.
Rationale for Non-Use: While AI was used for interactivity and data tracking, I chose not to use AI for the "Qualitative Data" analysis phase, such as interpreting open-text feedback. The ethical and practical implication here was the need for human empathy and context. To truly understand why a SENCo like Jo might "drop off" or feel frustrated, a human-led review of feedback (during bi-weekly team meetings) was deemed more valuable for making nuanced structural adjustments than an automated sentiment analysis.
PERSONAL LEARNING AND NEXT STEPS
This project revealed that my initial assumptions about digital learning often prioritised content delivery over learner experience. I previously assumed that a well-produced video was sufficient for professional development. However, identifying the "passive delivery" and "dry" narrative of the original screener training challenged this. The work demonstrated that my skills have evolved from being a content curator to a learner-centric designer. By creating Learning Personas (Kirsty, Neil, and Jo), I moved beyond a "one-size-fits-all" approach to a model that respects the user's "cognitive load" and "daily pressures". My approach now centres on Top-Down Instructional Design, where I start with high-level learning outcomes, such as "prioritising support" rather than just "knowing facts", to shape the entire digital architecture.
In my future professional practice, I will move away from static, linear training toward the "Core and Spoke" model implemented here. This allows me to provide a stable "home base" for all learners while offering optional "spokes" for those needing deeper specialisation. In a realistic work context, this means I can design a single course that serves both a time-poor English teacher and a highly specialised SENCo by allowing them to navigate their own learning flow. Furthermore, I will consistently use interactive infographics and animation, as the evidence shows this increases information retention by 42%.
While I have integrated iSpring's analytics for monitoring completion and drop-off points, I need to develop deeper skills in translating this data into "Level 4" Kirkpatrick impact reports. This will involve learning how to correlate training completion with actual school-wide student outcome data. Given that this project focuses on Dyslexia, and I will be working on further projects related to SEND, a logical next step is to gain a formal certification in web accessibility. This will ensure that future "multimedia-rich" designs are not just engaging but fully inclusive for neurodivergent educators and students.
References
While users possess a foundational understanding of SEND barriers and assessment administration, a critical knowledge gap exists in data interpretation. To bridge this, users must learn how to translate screener results into actionable, evidence-based interventions.
What's wrong with it?
S - Improve educators' knowledge of how to set up and analyse reports.
M - Decrease the need for learners to seek additional customer support.
A - Apply learning immediately into own school setting.
R - Improve outcomes for students by being more effective at identifying need.
T - Identify students with learning barriers in a more timely and effective way.
Webinars
Learners will be able to access Webinars to help them set up and navigate the platform that the Dyslexia Screener sits on.
Webinars
The final phase involves taking data-driven actions to refine the screener. These actions may include:
Content Optimisation: Revising layouts or text in modules where data shows low user engagement.
Targeted Interventions: Implementing additional support materials at identified points of friction.
Structural Adjustments: Refining the pacing and flow of the screener based on time-on-task metrics.
SME Consultation: Reviewing findings with Subject Matter Experts to ensure the tool remains relevant and accurate.
All modifications will be monitored in the subsequent evaluation cycle to measure their impact on the overall user experience.
Case Studies
Links will be available so learners can see how the product they are learning has been used in other settings, both nationally and internationally.
Case Studies
Data Spotlights
Learners will be able to access short videos to support them with their understanding of data terminology used within the Training Video.
Data Spotlights
Monitoring
When to Monitor:
Daily - Login anomalies, system errors, new enrollments
Weekly - Completion trends, assessment results, support tickets
Monthly - Engagement patterns, content performance, learner feedback analysis
Quarterly - Trend analysis, cohort comparisons.
Key Patterns to Identify:
Timing - When do learners engage most?
Content Preferences - Video vs text vs interactive
Learning Journeys - Common pathways through content
Friction Points - Where do learners abandon or struggle?
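A minimal sketch of how a friction point might be surfaced from progress data; the records below are invented for illustration, and topic names simply mirror the module framework:

```python
from collections import Counter

# Hypothetical progress data: the last topic each non-completing learner
# reached before stopping.
last_topic_reached = [
    "Analysing the Data", "Running Reports", "Analysing the Data",
    "Data Triangulation & Next Steps", "Analysing the Data",
]

def friction_point(events):
    """The topic where most learners stop is a candidate friction point."""
    topic, dropped = Counter(events).most_common(1)[0]
    return topic, dropped

topic, dropped = friction_point(last_topic_reached)
print(f"Most learners stall at: {topic} ({dropped} of {len(last_topic_reached)})")
```

A spike like this would then feed the "Targeted Interventions" and "Structural Adjustments" actions described in the Action phase.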
Description: A blog post will be published on the Renaissance website explaining the updates to the training video, why they matter for effective dyslexia screening, and how the training supports teachers in early identification. The content will be optimised for search engines to attract organic traffic from educators searching for dyslexia screening guidance.
Rationale for Selection: Content marketing serves dual purposes: it provides valuable information to educators whilst positioning Renaissance as a thought leader in dyslexia screening. Additionally, search-engine-optimised content ensures the training reaches educators beyond our existing database who are actively seeking dyslexia screening support, expanding our potential learner base.
Description: Short video testimonials (60–90 seconds) will be recorded with teachers from the Champion pilot group, sharing how the training video improved their confidence, knowledge, and practice in using the Dyslexia Screener. These testimonials will feature on the training landing page, in promotional emails, and across social media channels.
Rationale for Selection: Robert Cialdini's seminal work on influence (Influence: The Psychology of Persuasion, 2006) identifies social proof as one of the most powerful drivers of decision-making. When educators see peers (teachers like themselves) endorsing the training and describing tangible benefits, they are more likely to perceive the training as credible, relevant, and worth their time. Research from the National Foundation for Educational Research (NFER) confirms that teacher voice and peer recommendation are highly valued in educational decision-making. Video testimonials are particularly effective because they convey authenticity through facial expressions, tone, and body language that text testimonials cannot replicate.
Given the versatile nature of the Digital Learning team, I have assumed the dual responsibilities of Instructional Designer and In-House Learning Designer. To guarantee the learning experience is perfectly tailored to our target audience, we will increase the frequency of feedback collection during the design cycle. Troubleshooting and sharing of ideas will form part of our bi-weekly team meetings.
Description: A detailed case study will be developed featuring a school that has successfully used the Dyslexia Screener training to improve early identification processes and intervention outcomes. The case study will include specific examples of impact on pupils, staff confidence, and whole-school practice.
Rationale for Selection: Case studies provide concrete examples of how training translates into practice, making abstract concepts tangible and relatable. Kolb's Experiential Learning Cycle (1984) highlights the importance of connecting new learning to practical application. For educators considering the training, a case study answers the critical question: "What difference will this actually make in my school?" By demonstrating proven impact, case studies reduce perceived risk and increase commitment to engaging with the training.
Description: Shareable infographics will be created summarising key concepts from the training video, for example, "5 Steps to Effective Dyslexia Screening" or "Understanding Your Screener Results at a Glance." These will be distributed via social media, email, and partner channels.
Rationale for Selection: Paivio's Dual Coding Theory (1986) demonstrates that combining visual and verbal information significantly improves comprehension and recall. Complex information about dyslexia screening processes and score interpretation can be made more accessible through visual representation. Infographics also serve as effective promotional tools: they are highly shareable, provide immediate value to the viewer, and create curiosity that drives traffic to the full training video. John Medina's research in Brain Rules (2008) further supports the power of visual communication, showing that visual information is processed more efficiently than text alone.
For both the design and delivery phases, the primary technical requirements include Microsoft PowerPoint for content authoring and an iSpring Suite license to convert presentations into interactive, e-learning formats (such as SCORM or HTML5). During the design stage, stable internet access and hardware capable of media processing are necessary, while the delivery phase requires a browser-based environment compatible with the chosen output. A dedicated Learning Management System (LMS) is not strictly mandatory for hosting, but it is highly recommended if the project requires tracking learner progress, assessment scores, or completion data, which standard web hosting cannot provide.
The budget for this project primarily covers iSpring and Microsoft PowerPoint licences, with development and facilitation handled entirely by internal staff to maximise cost-efficiency. While the design utilises internal Subject Matter Experts from within the business, a "full-scale" development path remains an option, which would require additional funding for external graphic designers and premium asset libraries. Delivery costs are expected to be negligible, leveraging existing internal hosting or LMS infrastructure.
Description: Paid advertising campaigns will run on LinkedIn, Facebook, and X, targeting educators, SENCOs, headteachers, and school leaders with an interest in SEND and assessment. Advertisements will feature short video teasers and compelling messaging about the training's practical benefits.
Rationale for Selection: Organic social media reach has declined significantly in recent years, making paid promotion essential for reaching new audiences. Targeted advertising allows precise audience segmentation, ensuring the training video is promoted to those most likely to benefit, such as educators actively seeking professional development in SEND and dyslexia support.
The Core and Spoke model is a flexible learning design that balances a solid foundation with personal choice. At its heart is a single core component which acts as the home base for all learners. Surrounding this core are various spokes, which are optional activities or resources that learners can choose from based on their specific interests or needs.
Description: The revamped training video will be promoted through Renaissance's existing email newsletter channels, targeting current Dyslexia Screener users, SENCOs, assessment coordinators, and school leaders. Emails will include compelling preview content, clear benefits messaging, and direct calls-to-action to access the training.
Rationale for Selection: Email remains one of the most effective channels for reaching educators with professional development opportunities. Unlike social media, which relies on algorithmic visibility, email delivers content directly to the intended audience. Personalised, targeted communications demonstrate significantly higher engagement rates than generic messaging. By targeting existing Dyslexia Screener users, we reach an audience with established need and interest, increasing the likelihood of engagement and completion.
Delivery requires a Digital Learning team for end-to-end production, utilising iSpring as the primary hosting platform. Traditional trainers are not required for delivery. I will collaborate with Subject Matter Experts (SMEs) to transform core content into digital modules, providing guidance on adapting classroom expertise for asynchronous learning. As this is a self-guided module without login requirements, dedicated community managers and technical support teams are unnecessary. Any trainee queries will be managed by the existing Customer Experience team. The course will utilise automated quizzes, eliminating the need for manual assessors.
Data Consultations
Learners will have the option to book a Data Consultation with an Education Advisor: a 30-minute 1:1 conversation to discuss their setting's own Dyslexia Screener data, if they feel they need more support.
Insights
Learner Preferences - What format works best? When do people learn?
Blockages - Where do learners drop off? What causes frustration?
Habits - How often do people return? What is the learning cadence?
Results - Who completes? Who succeeds? What predicts performance?
Interest - Which topics are most engaging?
The Dyslexia Screener training video will help users to set up the screener and analyse results.
Further Reading
Links to other associations will enable learners to read up on the latest research and news regarding Dyslexia.
British Dyslexia Association
Overview of Kirkpatrick's Model
Developed by Donald Kirkpatrick in the 1950s and refined over subsequent decades, this remains the most widely recognised and applied framework for evaluating learning interventions. The model proposes four sequential levels of evaluation, each building upon the previous:
RATIONALE FOR SELECTION
Kirkpatrick's model was selected for the following reasons:
Data Sources