AI / ML & Autonomy

Anomaly Classifier

Run a deterministic teaching classifier — rule-based or preset ML — on a telemetry vector; inspect the predicted class, confidence score, and contributing features.

Level: High school
Time estimate: 25–30 min
Complexity: advanced
Maturity: pilot ready
Simulator readiness: implemented
Software available now
Implemented as dual-classifier teaching lab at `/twin/learn/activities/aiml_normal_vs_abnormal` — deterministic lookup tables, no external ML library, browser-local only.

Learning outcomes

Student can run a classifier on a telemetry example, interpret the predicted class and confidence score, identify the key contributing features, and explain the difference between rule-based and ML-based detection.

  • Explain the difference between a rule-based and a preset ML classifier.
  • Interpret a confidence score: what it means and what it does not guarantee.
  • Identify the top contributing features for a given prediction.
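The second outcome, what a confidence score does and does not guarantee, can be made concrete with a toy example. The formula below (margin-to-threshold squashed into [0.5, 1.0]) is a hypothetical illustration, not the lab's actual scoring:

```python
# Toy confidence score: how far a reading sits from its alarm threshold,
# squashed into [0.5, 1.0]. Hypothetical formula for illustration only.
def confidence(value: float, threshold: float, scale: float) -> float:
    margin = abs(value - threshold) / scale
    return min(1.0, 0.5 + 0.5 * min(margin, 1.0))

# A reading far from the threshold yields high confidence.
print(confidence(7.4, 6.5, 1.0))  # voltage well clear of a 6.5 V alarm line
```

High confidence only means the inputs sit far from the decision boundary. If the threshold itself is wrong for the current spacecraft mode, the classifier is confidently wrong, which is exactly the gap between confidence and correctness this lab highlights.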

Concept primer

A rule-based classifier applies explicit if-then thresholds to each telemetry feature and flags the state when a rule fires. A preset ML classifier instead scores how closely the whole telemetry vector matches patterns derived from labelled examples. Both report a predicted class, a confidence score, and the features that contributed most to the decision, so their outputs can be compared side by side.

Open Anomaly Classifier at `/twin/learn/activities/aiml_normal_vs_abnormal` — interactive scenario picker and dual-classifier comparison (deterministic teaching model; not a production ML system).

For one telemetry vector: write an if-then rule that would classify it correctly; then describe what a pattern-matching model would look for instead.
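The if-then exercise can be sketched in code against the scenario's telemetry vector. The thresholds below are illustrative stand-ins, not the lab's actual lookup tables:

```python
# One telemetry vector, matching the scenario table in the lab.
telemetry = {
    "voltage_v": 7.4,
    "temperature_c": 43.0,
    "pointing_error_deg": 0.25,
    "backlog_mb": 8.0,
    "packet_age_s": 2.0,
}

# Rule-based classification: explicit if-then thresholds (illustrative values).
def rule_based(t: dict) -> str:
    if t["voltage_v"] < 6.5:
        return "fault candidate"   # undervoltage rule
    if t["temperature_c"] > 60.0:
        return "fault candidate"   # overtemperature rule
    if t["pointing_error_deg"] > 1.0:
        return "fault candidate"   # pointing rule
    return "nominal"               # all threshold rules pass

print(rule_based(telemetry))  # -> nominal
```

A pattern-matching model, by contrast, would not check each feature against a fixed line; it would score how closely the whole vector resembles labelled examples of nominal and faulty passes.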

Interactive lab

Teaching-grade software activity slot — not a flight simulator or certified propagator.

Step 1 — Choose a telemetry scenario

Telemetry Scenario

Feature           Value
voltage           7.4 V
temperature       43 °C
pointing error    0.25°
backlog           8 MB
packet age        2 s

Step 2 — Select classifier type

Classifier output — Rule-Based

Prediction

Predicted class

Nominal

Confidence

91% confidence

Top contributing features

voltage · pointing error · packet age

Explanation

All threshold rules pass — state classified as nominal.

Classifiers agree ✓

Rule-based: nominal (91%) · Preset ML: nominal (87%)
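The agreement check above can be sketched as follows. The preset "ML" side here is a hand-set weighted score standing in for the lab's deterministic lookup tables; all thresholds, weights, and confidence values are hypothetical, so they will not reproduce the lab's 91%/87% figures:

```python
telemetry = {"voltage_v": 7.4, "temperature_c": 43.0,
             "pointing_error_deg": 0.25, "backlog_mb": 8.0, "packet_age_s": 2.0}

def rule_based(t):
    # All threshold rules pass -> nominal (illustrative thresholds).
    ok = (t["voltage_v"] >= 6.5 and t["temperature_c"] <= 60.0
          and t["pointing_error_deg"] <= 1.0)
    return ("nominal" if ok else "fault candidate", 0.91 if ok else 0.88)

def preset_ml(t):
    # Weighted anomaly score standing in for a pattern-matching model.
    score = (max(0.0, 6.5 - t["voltage_v"]) * 0.5          # undervoltage excess
             + max(0.0, t["temperature_c"] - 60.0) * 0.02  # overtemperature excess
             + t["pointing_error_deg"] * 0.1)              # pointing contribution
    label = "nominal" if score < 0.5 else "fault candidate"
    return label, round(1.0 - score, 2)

(r_label, r_conf), (m_label, m_conf) = rule_based(telemetry), preset_ml(telemetry)
print("Classifiers agree" if r_label == m_label else "Classifiers disagree")
print(f"Rule-based: {r_label} ({r_conf:.0%}) · Preset ML: {m_label} ({m_conf:.0%})")
```

Disagreements are the interesting cases: a vector that passes every threshold can still accumulate a high pattern score, and vice versa, which is the comparison students record in the evidence step.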

Self-check · Local only

3 questions

Local-only. No submission, no grade. Answers revealed here only.

What is the main difference between a rule-based and a preset ML classifier?

What does a 'confidence score' from a classifier mean?

A classifier predicts 'Fault Candidate' for a perfectly normal manoeuvre mode change. What is this called?

Evidence capture · Local only

Your evidence — Anomaly Classifier

Local-only. No submission, no backend, no grade. Copy or screenshot to share.

Scenario selected
Nominal orbit pass
Classifier type
Rule-Based
Predicted class
nominal
Confidence
91%
Top features
voltage, pointing_error, packet_age
Classifiers agree
Yes
Rule-based prediction
nominal
Preset ML prediction
nominal
Explanation
All threshold rules pass — state classified as nominal.

Evidence capture

Expected outputs learners should be able to show after the lab (Phase 9 evidence engine preview available).

  • Chosen scenario and classifier type recorded
  • Predicted class and confidence score
  • Top contributing features with brief explanation
  • One observation where rule-based and ML classifiers differ
  • Self-check summary and copied evidence text

Reflection

Choose a telemetry scenario; select rule-based or preset classifier; inspect prediction, confidence, and top features; note where the two classifiers agree or disagree.

Responses are not persisted in this preview unless a specific activity component adds storage later.

Assessment / quick check

Give one example of a telemetry pattern that looks abnormal but is actually expected during a planned mode change — and explain how a classifier might incorrectly flag it.

Teacher notes

Use think-aloud protocol: ask students to narrate what each top feature tells the classifier. Stress that confidence ≠ correctness.

Next activity

Suggested progression from the mission learning path. Links point only to activities that exist, so none lead to missing routes.