You've completed all 3 scenarios
AI models can only learn from the data they're trained on. If the training data is limited, the AI's knowledge will be limited too.
If training data is biased or incomplete, the AI will make biased or incorrect predictions. Missing examples = missing knowledge.
By looking at an AI's outputs, you can often figure out what it was trained on — and what's missing. This is a critical skill in the AI age.
The best AI models are trained on diverse, representative data. This helps them handle new situations they haven't seen before.
Missing training examples (like black cats or fluffy dogs) cause the AI to fail on those specific cases.
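The failure mode above can be sketched with a toy classifier. This is a hypothetical example (the feature names and data are made up, not from the lesson): a 1-nearest-neighbour model trained only on light, fluffy cats and dark, short-haired dogs, which then misreads a black fluffy cat as a dog.

```python
# Toy training set (hypothetical): features are (darkness 0-1, fluffiness 0-1).
# Note what's missing: there are no dark cats at all.
training = [
    ((0.2, 0.8), "cat"),   # light, fluffy cats only
    ((0.3, 0.9), "cat"),
    ((0.8, 0.2), "dog"),   # dark, short-haired dogs only
    ((0.9, 0.3), "dog"),
]

def classify(sample):
    """Return the label of the closest training example (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda example: dist(example[0], sample))[1]

print(classify((0.25, 0.85)))  # a light fluffy cat  -> "cat" (correct)
print(classify((0.9, 0.8)))    # a black fluffy cat -> "dog" (wrong!)
```

The model isn't "broken": it does exactly what its training data taught it. The black cat's nearest neighbour happens to be a dark dog, because no training example ever showed it that cats can be dark too.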
Similar-looking items with different labels (red apple vs red tomato) can confuse AI when it relies on simple patterns like color.
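Here is a minimal sketch of that shortcut problem, again with made-up data: a classifier that latched onto colour alone during training, so a red tomato gets the same answer as a red apple.

```python
def classify(item):
    # The model learned the shortcut "red -> apple" and ignores shape entirely.
    return "apple" if item["color"] == "red" else "tomato"

apple  = {"color": "red", "shape": "round with stem"}
tomato = {"color": "red", "shape": "round with calyx"}

print(classify(apple))   # "apple"  -- correct
print(classify(tomato))  # "apple"  -- wrong: colour alone can't tell them apart
```

A feature like shape would separate the two, but a model only uses the patterns that worked on its training data; if colour was enough there, colour is all it relies on.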
AI outputs reveal patterns from training data. You can "reverse engineer" what an AI learned by observing its behavior.
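That kind of reverse engineering can be done systematically: treat the model as a black box, probe it with varied inputs, and look for patterns in the answers. A hypothetical sketch (the `black_box` function stands in for a model you can call but not inspect):

```python
from collections import Counter

def black_box(animal):
    # Stand-in for an opaque model; secretly, it was trained with no black cats,
    # so darkness alone decides the label.
    return "dog" if animal["darkness"] > 0.7 else "cat"

# Probe: vary one feature at a time and tally the predictions.
probes = [{"darkness": d / 10, "fluffy": True} for d in range(11)]
results = Counter(black_box(p) for p in probes)
print(results)
# Every animal darker than 0.7 comes back "dog" -- a hint that dark cats
# were missing from (or rare in) the training data.
```

You never saw the training set, yet the sharp flip in predictions at a single feature value tells you what the model did (and didn't) learn from.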
The same principles apply to real AI systems like ChatGPT, image recognition, and recommendation algorithms.