AI / ML & Autonomy
Discover what features and labels are, how training examples teach a model, and why data quality — not just algorithm choice — determines whether a classifier is trustworthy.
Students can define 'feature' and 'label', select useful features from telemetry, assign a correct label to a given example, and explain how a biased dataset degrades classifier performance.
Open Features, Labels and Training Data at `/twin/learn/activities/aiml_assisted_classification` — interactive feature selection, label assignment, and data quality feedback (teaching-grade; no real ML training pipeline).
Label five telemetry windows as nominal / warning / fault candidate; list one feature that most influenced each label decision.
Teaching-grade software activity slot — not a flight simulator or certified propagator.
Step 1 — Choose a telemetry example
Each row is one telemetry snapshot. Select an example to label.
| Feature | Value | Selected? |
|---|---|---|
| Battery Voltage | 7.4 V | |
| OBC Temperature | 43 °C | |
| Attitude Error | 0.25° | |
| Data Backlog | 8 MB | |
| Packet Age | 2 s | |
3 features selected. Select at least three features to form a meaningful model input.
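The selection step above can be sketched in code: one telemetry snapshot becomes a dictionary, and the learner's chosen features become an ordered vector a model could consume. A minimal sketch; the key names and ordering convention are illustrative assumptions, not part of the activity's API.

```python
# One telemetry snapshot, mirroring the table above.
# Key names are an assumption for illustration.
snapshot = {
    "battery_voltage_v": 7.4,
    "obc_temperature_c": 43.0,
    "attitude_error_deg": 0.25,
    "data_backlog_mb": 8.0,
    "packet_age_s": 2.0,
}

# The features the learner judged most informative (at least three).
selected = ["battery_voltage_v", "obc_temperature_c", "attitude_error_deg"]

# Model input: an ordered vector of the selected feature values.
feature_vector = [snapshot[name] for name in selected]
print(feature_vector)  # [7.4, 43.0, 0.25]
```

Note that the unselected features (data backlog, packet age) are simply dropped; part of the exercise is judging whether that loses signal a classifier would need.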
Step 2 — Assign a label
Based on the features you selected, what class does this example belong to?
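To make the labeling decision concrete, here is a toy rule-based labeler. The thresholds are illustrative assumptions, not mission limits; a real trained classifier would learn these boundaries from labeled examples rather than having them hand-coded.

```python
def assign_label(snapshot):
    """Toy labeler over the three selected features.
    Thresholds are hypothetical, chosen only to illustrate the
    nominal / warning / fault-candidate classes."""
    if snapshot["battery_voltage_v"] < 7.0 or snapshot["obc_temperature_c"] > 60:
        return "fault candidate"
    if snapshot["obc_temperature_c"] > 45 or snapshot["attitude_error_deg"] > 1.0:
        return "warning"
    return "nominal"

example = {
    "battery_voltage_v": 7.4,
    "obc_temperature_c": 43.0,
    "attitude_error_deg": 0.25,
}
print(assign_label(example))  # nominal
```

Asking learners to articulate which feature drove each label is essentially asking them to reverse-engineer rules like these from the data.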
Self-check · Local only
Local-only. No submission, no grade. Answers revealed here only.
What is a 'feature' in a machine learning context?
What does a 'label' tell a machine learning model?
Why does a biased training dataset harm a fault classifier?
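The bias question can be demonstrated in a few lines: on a hypothetical training set that is 95% nominal, a classifier that always predicts the majority class scores high accuracy while missing every fault. The dataset proportions here are invented for illustration.

```python
from collections import Counter

# Hypothetical biased training set: 95 nominal windows, 5 fault windows.
labels = ["nominal"] * 95 + ["fault"] * 5

# A degenerate classifier that always predicts the majority class.
majority = Counter(labels).most_common(1)[0][0]
predictions = [majority] * len(labels)

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
fault_recall = sum(p == y == "fault" for p, y in zip(predictions, labels)) / 5

print(f"accuracy={accuracy:.2f}, fault recall={fault_recall:.2f}")
# High overall accuracy, yet every real fault is missed.
```

This is why accuracy alone is a misleading metric for fault detection: the dataset gap (too few fault examples) hides exactly the mistakes that matter most.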
Evidence capture · Local only
Local-only. No submission, no backend, no grade. Copy or screenshot to share.
Expected outputs learners should be able to show after the lab (Phase 9 evidence engine preview available).
Select features from a telemetry example; assign a label (nominal / warning / fault candidate); review why each feature choice helps or hurts the model.
Responses are not persisted in this preview unless a specific activity component adds storage later.
Name two ways a biased or incomplete training dataset could cause a fault classifier to make dangerous mistakes.
Position as a data ethics conversation: what happens when training data is mostly nominal? Ask students to find a dataset gap.
Suggested progression from the mission learning path. Links avoid missing activity routes.