Generative AI & Instructional Design
Thomas Thompson
Created on October 5, 2023
Transcript
Generative AI & Instructional Design
Definitions
Uses
Boundaries
Demo
In this module, you'll explore:
- Basic Terms for Discussing Generative AI
- Use Cases for Generative AI in Instructional Planning
- Boundary Conditions for the Utility of Generative AI in Education
- A Live Demo of Eduaide.Ai, a Tool for AI-Assisted Lesson Planning and Instructional Design
Thomas Thompson
Middle School Social Studies Teacher
Co-Founder & Chief Executive Officer @ Eduaide.AI
M.S., Educational Technology, Johns Hopkins University
Thesis: On Trends and Gaps in the Study of Open Educational Resources: A Systematic Literature Review
thomas.thompson@eduaide.ai
Large Language Models (LLMs)
Russell & Norvig (1995)
Generative AI
A class of AI models that are designed to understand and generate human language text on a massive scale. These models are typically based on deep learning techniques, specifically on transformer architectures, and they are trained on vast amounts of text data from the internet.
Natural Language Processing & Machine Learning
Generative AI systems have the ability to create novel output, whether it be text, images, audio, or other forms of data, by learning patterns and structures from a dataset during training.
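The definitions above can be illustrated with a toy sketch of how a generative language model produces text one token at a time: the model emits a score (logit) for every word in its vocabulary, the logits are converted to probabilities with a softmax, and the next token is sampled. The vocabulary and logit values below are invented for illustration; real models use vocabularies of tens of thousands of tokens.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Turn raw model scores (logits) into a probability distribution
    via softmax, then sample one token index from that distribution.
    Lower temperature makes the choice closer to the single best token."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

# Hypothetical vocabulary and logits for the prompt "The sky is":
vocab = ["blue", "green", "falling", "heavy"]
logits = [4.0, 1.0, 0.5, 0.1]               # made-up model output
print(vocab[sample_next_token(logits, temperature=0.7)])
```

Generating a whole sentence just repeats this step, appending each sampled token to the prompt and scoring again, which is why the output reflects patterns learned from the training data.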
Characteristics
Use Cases for Generative AI in Education
Effective Feedback
Intelligent Tutoring
Personalized Learning
Translation & Accessibility
Evaluating Education-Facing AI
- Pedagogical Quality
- Reliability & Replicability
- Transparency
- Data Privacy & Security
- Accessibility
- Adaptability
- Principles & Ethics
- Ease of Integration
- Cost-Effectiveness
Boundaries
- Limitations of Foundation Models
- Hallucinations
- Bias
Linguistic Challenges in Prompting & Natural Language Processing (NLP)
The Proliferation of AI Systems
As existing models improve and new ones emerge, a framework for evaluating their usefulness in education is paramount. How will education adapt to AI? How might AI, in turn, change education?
- What are the affordances of the technology?
- What are the boundary conditions?
AI hallucinations can occur when a model generates text, images, or other content that has no direct connection to reality or does not follow logical or coherent patterns. These hallucinations can manifest in various ways:
- Textual hallucinations: text that is disjointed, incoherent, or deviates significantly from the input or the expected context. In other words, the output is not reflected in the training set or the prompt.
- Visual hallucinations: for AI models that generate images or video, output that includes surreal or fantastical visual content corresponding to no real-world scene or object.
- Misinformation and falsehoods: AI systems can inadvertently generate false or misleading information, leading to the dissemination of inaccurate content.
An example of a hallucination in ChatGPT, where a prompter provided a fake URL containing meaningful keywords. The response seems valid at first glance; hallucinations can be both apparent and subtle.
- Scale: LLMs are characterized by their enormous size, often hundreds of millions or even billions of parameters. This scale enables them to reflect intricate patterns and nuances in language, though it also brings limitations.
- Pre-training: LLMs are typically pre-trained on a massive corpus of text, learning to predict the next word in a sentence (or to fill in missing words) across billions of sentences from varied sources. LLMs have no understanding of the embodied, physical world and can act only on the abstraction of language.
- Fine-tuning: After pre-training, models can be fine-tuned on specific tasks or domains, making them adaptable to a wide range of natural language processing tasks such as text classification, machine translation, question answering, text generation, or instructional design.
- Versatility: LLMs can be used for both generation tasks (e.g., producing human-like text) and understanding tasks (e.g., answering questions, summarizing text, sentiment analysis). This versatility has driven their adoption across industries.
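The pre-training objective described above, predicting the next word, can be sketched as a cross-entropy loss: the model is penalized by the negative log of the probability it assigned to the token that actually came next. The probability values below are invented for illustration; in practice this loss is averaged over billions of tokens.

```python
import math

def next_token_loss(probs, target_index):
    """Cross-entropy loss for a single prediction: -log P(correct next token).
    Pre-training adjusts the model's parameters to minimize this loss,
    averaged over the entire training corpus."""
    return -math.log(probs[target_index])

# Model is confident in the correct next word (80% probability): small loss.
low_loss = next_token_loss([0.8, 0.1, 0.1], target_index=0)
# Model assigns only 10% to the correct word: much larger loss.
high_loss = next_token_loss([0.1, 0.8, 0.1], target_index=0)
print(low_loss, high_loss)
```

Because the loss only rewards matching the training text, a model can score well while still producing confident falsehoods on inputs unlike its training data, which connects directly to the hallucination boundary discussed earlier.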