Chapter 2 - Day 9

Talent Growth Team

Created on October 27, 2025


Transcript

AI & the Product Lifecycle: Chapter 2, Day 9

Launching & Monitoring AI Products


You'll use interactive elements throughout the AI Ignition programme. Magnifying glass icon = more to explore!

Learning objectives

Building on Day 8

Why this matters


From Development to Deployment: Managing AI in Production

Launching an AI product is fundamentally different from launching traditional software. AI introduces complexities that demand careful planning and ongoing vigilance. While core Go-to-Market strategies still apply—setting pricing (usage-based, subscription), defining positioning, and optimising user onboarding—AI adds unique dependencies on data, trust, and continuous learning. AI products also face heightened scrutiny and market hype, which demands a clear, honest value proposition.

The challenge: Success requires managing expectations, preparing for the fact that AI systems can fail in unexpected ways, and rigorously addressing ethical and legal complexities like bias mitigation, data privacy, and societal implications.

The opportunity: By proactively defining and tracking a blend of quantitative and qualitative metrics, you gain a complete picture of your product's health and performance. AI allows you to build products that personalise experiences at massive scale, automate tedious tasks, and predict future trends with unprecedented accuracy. This commitment to responsible launch and monitoring gives you a significant competitive advantage.


The Three-Phase Launch Framework

Launching an AI product requires a systematic approach across three distinct phases. Each phase addresses specific risks and opportunities unique to AI-native products.

Phase 1: Pre-Launch Readiness

Focus area: Data, Privacy, & Legal Compliance

Key Actions & Deliverables

Focus area: Rigorous Testing and Failure Design

Key Actions & Deliverables

Focus area: Core Product Value

Key Actions & Deliverables


The Three-Phase Launch Framework

Phase 2: Launch Communication

Focus area: Hype vs. Trust

Key Actions & Deliverables

Phase 3: Post-Launch Monitoring

Focus area: Performance and Accuracy Metrics

Key Actions & Deliverables

Focus area: The Learning Loop

Key Actions & Deliverables

Let's take a look at the three-phase launch framework in more detail...


Phase 1: Pre-Launch Readiness

Privacy by Design: The GDPR Imperative

For AI products processing user data (especially in the EU), privacy cannot be an afterthought. The most effective approach is privacy by design—integrating data protection principles directly into the product architecture from day one.

GDPR-Specific requirements for AI Products

Practical Implementation: Instead of asking "How do we make this AI feature comply with GDPR after building it?", ask "How do we design data collection and processing to minimise PII, include clear consent mechanisms, and provide transparency from the outset?"

Guardrails and Fallback Plans

On Day 7, you learned about AI failure modes. Before launch, you must design explicit guardrails and fallback plans for when things go wrong.

Essential Pre-Launch Safeguards


Phase 2: Launch Communication—Managing Hype vs. Trust

The AI market is saturated with hype. Your product's success depends on transparent, honest communication that builds user trust rather than inflating expectations.

Transparent Messaging Framework

Instead of: "Our revolutionary AI will transform your workflow."

Try: "Our AI analyses your usage patterns to suggest time-saving shortcuts. It learns as you use it, becoming more helpful over time. Occasionally it may suggest something that doesn't fit your workflow—when that happens, just dismiss it and it will learn."

Why this works

Onboarding is Critical: Use the first-run experience to educate users on what the AI can and cannot do. This is your opportunity to set appropriate expectations and demonstrate transparency from the start.


Phase 3: Post-Launch Monitoring

Unlike traditional software where you track usage and performance, AI products require monitoring across three distinct dimensions. As Model Owner (from Day 6), you're responsible for the ongoing performance of the AI system.

Pillar 1: Model Health

Pillar 2: Business Outcomes

Pillar 3: Human Feedback

The Continuous Improvement Loop: These three pillars work together. Human Feedback reveals where users struggle → this informs Model Health investigations → improvements drive Business Outcomes → which generates more user engagement and feedback data → completing the loop.


🚀 AND in Action: Bristol Zoo Project—Gorilla Chat

Client: Bristol Zoo Project

🚀 The Challenge: Earlier this year, our ANDis at Club Wangari in Bristol collaborated with Bristol Zoo Project, a local conservation and education charity, to explore: "How might we bring the story of their new gorilla habitat to life and drive donations on the Bristol Zoo Project site?"

🌟 The Solution: Gorilla Chat, an AI-powered tool built on real insights from those who know the Bristol Zoo gorillas best. The tool creates a unique and authentic chat experience that allows people to learn about the daily life of the gorillas—from troop dynamics to favourite foods—all while learning about the campaign to give them a new home, one conversation at a time.

Next, let's take a look at the launch process and key lessons...


The Launch Process:

  • Rapid Research & Design: Followed a structured discovery process to bring a proof of concept to life
  • Strong Guardrails: Implemented robust safeguards against profanity and misinformation—critical concerns in an educational context with potential child users
  • Testing & Iteration: Conducted thorough testing to ensure the AI's responses were factually accurate, age-appropriate, and aligned with the zoo's educational mission
  • Soft Launch Strategy: Planned initial launch at Bristol Tech Festival to gather real-world feedback in a controlled environment before wider deployment
🎓 The key lesson: The Gorilla Chat example demonstrates all three launch phases in action:
  1. Pre-Launch—establishing guardrails for an educational AI tool;
  2. Communication—transparent messaging about learning from gorilla experts;
  3. Monitoring—soft launch allows for performance monitoring and iteration before full deployment.
The strong emphasis on content safety reflects the ethical responsibility required when building AI for educational contexts.


When monitoring a new AI-powered feature...


Instructions: Tap your screen and click the X button in the top right corner to exit Day 9 and save your progress. ⚠️ Do not use the top left X button unless you are finished, otherwise your progress will not be saved.

Day 9

You've launched your AI feature with appropriate safeguards and established monitoring to ensure ongoing performance. But AI products rarely exist in isolation—they require coordination across multiple systems, data sources, and stakeholders. On Day 10, you'll learn about orchestration patterns that enable complex AI workflows, and you'll synthesise your complete AI-Native Product Strategy—bringing together everything from Days 6-9 into a coherent strategic memo.

Launching & Monitoring AI Products completed


Pillar 1: Model Health

What you're measuring: The AI model's technical performance—data quality, prediction accuracy, and system reliability. Key metrics:

  • Accuracy/Precision/Recall: How often the model's predictions are correct
  • Latency: How quickly the AI responds to requests
  • Data Drift: Changes in input data distribution over time (from Day 7)
  • Model Drift: Degradation in prediction accuracy as the world changes (from Day 7)
Your responsibility as Model Owner: Track these metrics on a dashboard. When accuracy drops below a threshold or drift is detected, coordinate with the data team to investigate and potentially retrain the model.
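As a sketch of what that dashboard threshold logic might look like in practice, the check below flags any metric outside its bound. The metric names and threshold values are illustrative assumptions, not prescribed by the programme:

```python
from dataclasses import dataclass

@dataclass
class ModelHealthSnapshot:
    accuracy: float     # fraction of correct predictions in the window
    latency_ms: float   # median response time in milliseconds
    drift_score: float  # e.g. a population stability index on inputs

def health_alerts(snapshot, accuracy_floor=0.90,
                  latency_ceiling_ms=500.0, drift_ceiling=0.2):
    """Return a list of alert strings for any metric outside its threshold."""
    alerts = []
    if snapshot.accuracy < accuracy_floor:
        alerts.append(f"accuracy {snapshot.accuracy:.2f} below floor {accuracy_floor:.2f}")
    if snapshot.latency_ms > latency_ceiling_ms:
        alerts.append(f"latency {snapshot.latency_ms:.0f}ms above ceiling {latency_ceiling_ms:.0f}ms")
    if snapshot.drift_score > drift_ceiling:
        alerts.append(f"drift score {snapshot.drift_score:.2f} above ceiling {drift_ceiling:.2f}")
    return alerts
```

Running this on every monitoring window gives you a concrete trigger for the "investigate and retrain" conversation with the data team.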


Pillar 2: Business Outcomes

What you're measuring: The AI model's impact on business goals—whether it's delivering ROI and driving value. Key metrics:

  • Conversion Rate: Does the AI feature drive desired user actions?
  • Engagement Time: Do users spend more time with AI-powered features?
  • Retention/Churn: Does the AI improve user retention?
  • Cost-per-Query: For LLM-based products, track inference costs
Your responsibility as Model Owner: Link AI performance directly to business metrics. This allows you to demonstrate ROI and make data-driven decisions about where to invest in model improvements.
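For the Cost-per-Query metric, the arithmetic is simple enough to sketch directly. The per-1,000-token prices below are placeholder values, not real provider rates:

```python
def cost_per_query(total_input_tokens, total_output_tokens, query_count,
                   input_price_per_1k=0.0005, output_price_per_1k=0.0015):
    """Average inference cost per query over a reporting window.

    Prices are illustrative USD per 1,000 tokens; substitute your
    provider's actual rates.
    """
    total_cost = (total_input_tokens / 1000) * input_price_per_1k \
               + (total_output_tokens / 1000) * output_price_per_1k
    return total_cost / query_count
```

Tracking this per feature lets you compare inference spend against the conversion or retention lift that feature delivers.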


Pillar 3: Human Feedback

What you're measuring: The user's qualitative experience—trust, usefulness, and satisfaction with AI outputs. Key metrics:

  • Thumbs Up/Down ratings: Simple, immediate user feedback
  • Explicit feedback: "Was this helpful?" or "Report an issue"
  • Adoption rate: What percentage of users actually use the AI feature?
  • Prompt refinement rate: How often users have to re-prompt or rephrase?
Your responsibility as Model Owner: Design always-on feedback loops. Integrate discreet, one-click feedback mechanisms directly next to AI-generated outputs (e.g., a simple "Was this answer useful? Yes/No" button). This feedback becomes invaluable training data for continuous improvement.
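A minimal sketch of the always-on feedback loop: collect the one-click ratings and derive a running helpfulness rate. The class and method names are hypothetical, not from any specific framework:

```python
from collections import Counter

class FeedbackLog:
    """Collects one-click 'Was this helpful?' ratings shown next to AI outputs."""

    def __init__(self):
        self.ratings = Counter()  # counts of "up" and "down" ratings

    def record(self, helpful: bool):
        """Store a single thumbs up/down rating."""
        self.ratings["up" if helpful else "down"] += 1

    def helpful_rate(self):
        """Fraction of ratings that were positive; None until feedback arrives."""
        total = self.ratings["up"] + self.ratings["down"]
        return self.ratings["up"] / total if total else None
```

In a real product each rating would also carry the prompt and output it refers to, so the pairs can feed back into evaluation and retraining.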


  • Communicate a Clear, Honest Value Proposition, avoiding AI buzzwords
  • Focus messaging on Real-World Benefits (e.g., "Saves time") over technology
  • Use onboarding to Manage Expectations and build trust by being transparent about capabilities and limitations

  • Establish robust Guardrails and a detailed Test Plan for unexpected failures
  • Design a clear Fallback Plan (alternative answer, user prompt, human handoff)
  • Design for Explainability to show users why the AI made a decision

✅ Why this works

  • Explains the benefit in concrete terms ("time-saving shortcuts")
  • Sets expectation that it learns and improves
  • Acknowledges limitations ("occasionally may suggest something that doesn't fit")
  • Explains user control ("just dismiss it and it will learn")

Volume Generation

Define your problem statement clearly, then ask AI to generate 50 solution ideas. The quantity forces exploration beyond obvious answers.

Example prompt:
Problem: Remote workers struggle to submit expense reports, leading to delayed reimbursements and frustration.
Generate 50 distinct solution ideas. Include:
- 20 incremental improvements to existing processes
- 20 technology-enabled innovations
- 10 radical rethinks that eliminate the expense report entirely
For each idea, provide a one-sentence description.

Imagine the AI as a chef trying to cook a meal (the answer) using ingredients from a massive fridge (all the data in the world).

🪟 Context Window = Size of the countertop: The chef only has so much counter space for a fridge full of ingredients to cook the meal.
✂️ Chunking = Prepping ingredients: To make cooking manageable with limited space, the chef divides ingredients into small, organised bowls — veggies, spices, etc.
🔢 Tokens = The cost of ingredients: The more bowls (chunks) or ingredients (tokens) the chef uses, the more expensive and time-consuming it is.
🗺️ Embeddings = Labelling the bowls: To avoid rummaging through the fridge later, the chef labels each bowl. These labels are like embeddings — smart tags describing what's inside.
📚 RAG = The sous chef: When the chef wants to make a dish, the sous chef looks at the recipe, fetches the right bowls with matching labels, and lays them out on the counter. The chef then uses just those to cook the perfect meal — the final AI answer.
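The "prepping ingredients" step can be sketched in code. Below is a minimal paragraph-based chunker, assuming plain text with blank-line paragraph breaks; the size limit is arbitrary and real systems usually measure tokens rather than characters:

```python
def chunk_text(text, max_chars=1000):
    """Split text into chunks of at most max_chars, breaking on paragraph
    boundaries where possible (the 'small bowls' from the analogy).

    A single paragraph longer than max_chars is kept whole; a real
    chunker would split it further.
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        # Start a new chunk when adding this paragraph would overflow.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk would then be embedded (labelled) so a retriever can fetch only the relevant bowls at query time.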

🔨 Building on Day 8

Day 8: You established the data foundations—acquisition, governance, and quality infrastructure.
Today: You'll launch your AI feature with appropriate safeguards and establish monitoring to ensure ongoing performance.
Tomorrow: You'll synthesise your complete AI-Native Product Strategy and understand orchestration patterns.

🚀 Why this matters

Yesterday you established the data foundations for your AI feature—defining what data it needs, how to collect it ethically, and how to maintain quality at scale. Today, you'll take those foundations and launch your AI product into the real world, establishing the monitoring frameworks that ensure it performs reliably and improves continuously. This is where theoretical design meets practical operation.

  • Ensure Data Privacy and Consent strategy complies with regulations (e.g., GDPR)
  • Proactively identify and mitigate bias in training data and model evaluation
  • Conduct an Ethical Risk Assessment to address societal implications

🚀 Smart chunking saves thousands: With Tesco, we analysed 50 stakeholder interviews (90 minutes each). Each transcript was ~15,000 words—too large for efficient processing. By breaking them into thematic chunks and only processing relevant sections per query, we reduced token usage by 85%.

  • Track AI-specific metrics like Model Drift and Confidence Scores alongside standard KPIs
  • Continuously monitor live usage for signs of Unintended Bias or unfair outcomes


  • Confirm the AI solves a real market problem and adds unique value
  • Prioritise delightful UX and quality development to ensure ongoing value
  • Validate that the AI-driven outcome aligns with user needs

  • Ensure robust system for Monitoring, Evaluating, and Retraining models
  • Treat user interactions as primary source of new data for continuous adaptation

  • Confidence thresholds: Don't show outputs when the AI's confidence score falls below a defined threshold
  • Content filters: Block harmful, biased, or inappropriate responses before they reach users
  • Human handoff triggers: Define clear conditions that trigger "eject to human" (from the Thinqwin example, Day 7)
  • Graceful degradation: If the AI fails, show a useful fallback (e.g., popular items instead of personalised recommendations)
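As a sketch of how these safeguards might compose at serving time: check the content filter first, then the confidence threshold, then degrade gracefully. The blocklist, threshold, and fallback message are all placeholder values:

```python
FALLBACK_MESSAGE = "Here are this week's most popular items instead."
BLOCKED_TERMS = {"badword"}  # stand-in for a real content-safety filter

def gate_output(text: str, confidence: float, min_confidence: float = 0.7) -> dict:
    """Apply guardrails in order: content filter, confidence threshold,
    then graceful degradation to a non-personalised fallback."""
    if any(term in text.lower() for term in BLOCKED_TERMS):
        # Human handoff trigger: unsafe content is escalated, never shown.
        return {"action": "handoff", "output": None}
    if confidence < min_confidence:
        # Graceful degradation: low-confidence output replaced by a safe default.
        return {"action": "fallback", "output": FALLBACK_MESSAGE}
    return {"action": "show", "output": text}
```

The ordering matters: safety checks run before quality checks, so a harmful but high-confidence output can never slip through.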

  • Lawful basis: Establish consent, legitimate interest, or contractual necessity for processing personal data
  • Purpose limitation: Only use data for the stated purpose for which it was collected
  • Data minimisation: Collect only what's necessary for the AI feature to function
  • Right to explanation: Users can ask how automated decisions about them were made
  • Right to object: Users can opt out of automated decision-making
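Data minimisation in particular lends itself to a simple code sketch: keep an explicit allow-list of the fields the feature actually needs and drop everything else at ingestion. The field names here are hypothetical:

```python
ALLOWED_FIELDS = {"user_id", "event_type", "timestamp"}  # only what the feature needs

def minimise(record: dict) -> dict:
    """Drop every field not on the allow-list before the record is stored."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}
```

Because the allow-list is declared in one place, it doubles as documentation of exactly which personal data the feature processes, which helps with the lawful-basis and transparency requirements too.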

📚 Learning Objectives

  1. Apply a three-phase launch framework: Pre-Launch Readiness, Communication, and Post-Launch Monitoring
  2. Design transparent messaging that builds user trust while managing expectations
  3. Establish monitoring across three pillars: Model Health, Business Outcomes, and Human Feedback
  4. Recognise the continuous improvement loop essential to AI products