🎉

Congratulations!

You've completed all 3 scenarios!

Key Takeaways

📊

AI Learns From Data

AI models can only learn from the data they're trained on. If the training data is limited, the AI's knowledge will be limited too.

⚠️

Bias Comes From Training

If training data is biased or incomplete, the AI will make biased or incorrect predictions. Missing examples = missing knowledge.

🔍

You Can Spot AI Bias

By looking at an AI's outputs, you can often figure out what it was trained on — and what's missing. This is a critical skill in the AI age.

🎯

Diverse Data = Better AI

The best AI models are trained on diverse, representative data. This helps them handle new situations they haven't seen before.

What You Learned

Scenario 1: Cats vs Dogs

Missing training examples (like black cats or fluffy dogs) cause the AI to fail on those specific cases.

Scenario 2: Fruits vs Vegetables

Similar-looking items with different labels (red apple vs red tomato) can confuse AI when it relies on simple patterns like color.
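This kind of confusion is easy to see in a toy model. Below is a minimal sketch (with made-up RGB colors and labels, not data from the actual scenarios) of a 1-nearest-neighbor classifier that looks only at color. Because its nearest "red" training example is an apple, a red tomato gets labeled "apple":

```python
# Toy sketch with hypothetical data: a classifier that only looks at
# color, showing why a red tomato gets confused with a red apple.

TRAINING_DATA = [
    ((220, 30, 40), "apple"),    # red apple
    ((240, 200, 60), "banana"),  # yellow banana
    ((90, 160, 70), "lettuce"),  # green lettuce
]

def distance(c1, c2):
    # Squared distance between two RGB colors.
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def classify(color):
    # 1-nearest-neighbor: return the label of the closest training color.
    return min(TRAINING_DATA, key=lambda item: distance(color, item[0]))[1]

print(classify((230, 40, 50)))  # a red tomato -> "apple" (wrong!)
```

Real AI models use far richer features than a single color, but the failure mode is the same: if color is the easiest pattern in the training data, the model will lean on it.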

Scenario 3: Guess the Training Data

AI outputs reveal patterns from training data. You can "reverse engineer" what an AI learned by observing its behavior.
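You can see this "reverse engineering" idea in a small sketch. The function below is a stand-in for a black-box model (entirely hypothetical, not the game's actual model) that was only ever trained on orange cats and short-haired dogs. By probing it with inputs and watching where it fails, you can infer what was missing from its training data:

```python
# Hypothetical black-box model: pretend it was trained only on
# orange cats and short-haired dogs.
def toy_pet_classifier(fur_color, fluffiness):
    if fur_color == "orange":
        return "cat"
    if fluffiness == "short":
        return "dog"
    return "dog"  # anything unfamiliar falls through to "dog"

# Probe the model with combinations it may never have seen.
probes = [
    ("orange", "short"), ("orange", "fluffy"),
    ("black", "short"), ("black", "fluffy"),
]
for color, fluff in probes:
    print(color, fluff, "->", toy_pet_classifier(color, fluff))

# A black fluffy cat comes out as "dog" -- that gap suggests
# black cats were missing from the training data.
```

This is exactly the skill from Scenario 3: you never look inside the model, only at its answers, and the pattern of mistakes tells you what it learned from.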

Why This Matters in the Real World

The same principles apply to real AI systems like ChatGPT, image recognition, and recommendation algorithms: they can only be as good as the data they were trained on.
