CubeSTEM Digital Twin · Track 6

Track 6 — Telemetry / Evidence

Complete five-session mini-course: read telemetry fields and units → interpret subsystem health thresholds → use replay timeline as evidence → diagnose anomalies from clues → connect evidence to mission success criteria.

Local-only mini-course — no account, no submissions, no gradebook. Evidence is browser-local only; copy or screenshot to share. Teaching-grade telemetry models — not real satellite telemetry, not a ground-station command interface, not certified anomaly diagnosis, not flight operations.

Track 6 — five-session mini-course

5 implemented sessions (canonical yardstick sequence)

Mini-course flow

Five sessions, one telemetry story: from reading fields to completing a mission evidence debrief

Start by decoding telemetry fields, units, and timestamps. Then interpret subsystem health thresholds (nominal / warning / critical). Replay a mission timeline to build before/after evidence. Diagnose an anomaly from multiple clues. Finally, link all evidence to success criteria in a mission debrief. Evidence and self-checks are local-only — copy, export, or screenshot to share. No account required.

Recommended pacing: one session per class. Use the Next → link at the bottom of each activity page to move through the sequence automatically. When you finish Session 5, the mini-course is complete and bridges to Track 7 (AI / ML Autonomy), with the Curriculum Map as a fallback. No account required.

Open Track 7 hub (AI / ML Autonomy) →
Session 1
Telemetry Dashboard Basics — Telemetry Stream Basics · interactive

Student can identify at least four telemetry channels and explain what each one measures.

  • Screenshot or description of five channels with correct meaning
  • Student identifies converging error visually
20–25 min
Session 2
Subsystem Interpretation Walkthrough — Health Status & Thresholds · interactive

Student can interpret each telemetry subsystem channel and explain its mission-level significance.

  • Subsystem health table with yellow/red examples
  • Risk observation per subsystem backed by a channel
35–40 min
Session 3
Replay and Mission Debrief — Replay Timeline Evidence · interactive

Student can replay a run, identify key events in the telemetry, and write a short mission debrief statement.

  • Replay artifact reference (run id or screenshot)
  • Three-sentence debrief citing at least two chart moments
25–30 min
Session 4
Telemetry Trust and Stale Data — Anomaly Detective · interactive

Student can identify stale telemetry, explain its risk, and describe a mitigation strategy.

  • Stale or flat-line segment identified on chart
  • Student proposes mitigation (watchdog, redundancy, operator procedure)
25–30 min
Session 5
Mission-Based STEM Capstone — Mission Evidence Debrief · interactive

Student can complete a full mission journey and produce an evidence-based report connecting all tracks.

  • Mission report with budget summary + control chart + debrief
  • Explicit pass/partial/fail on student-stated criteria
50–60 min

Teacher plan

Track 6 Teacher Pack — Telemetry / Evidence

Complete five-session mini-course covering telemetry stream literacy, subsystem health thresholds, replay timeline evidence, anomaly detective reasoning, and mission evidence debrief. All evidence is local-only — no student accounts, no roster visibility, no automatic grading. Plan a manual share routine (copy/paste or screenshot).

Bridge — Mission Realism Lab

Investigate checksum, stale packet, retry, and corrupted telemetry in Mission Realism Lab.

15-minute demo

Whole-class intro or workshop opener.

  • Hook (2 min): What can one telemetry field tell you that raw bytes cannot?
  • Activity (8–10 min): Run Session 1 (Stream Basics) together — decode one packet preset.
  • Debrief (3–5 min): Why is a missing unit dangerous?

45-minute class lesson

Full period — two sessions with debrief.

  • Warm-up + vocabulary (5 min): telemetry, threshold, nominal, anomaly, evidence.
  • Session 1 (15 min): Stream Basics — fields, units, timestamps.
  • Session 2 (15 min): Health Status — nominal/warning/critical thresholds.
  • Exit ticket (5–8 min): name one yellow-flag scenario and the first action.

90-minute workshop

All five sessions + full evidence debrief.

  • Orientation + boundaries (5 min).
  • Sessions 1–2 (25 min): Stream Basics + Health Thresholds.
  • Session 3 (15 min): Replay Timeline — before/after evidence.
  • Session 4 (15 min): Anomaly Detective — multi-clue diagnosis.
  • Session 5 (15 min): Mission Evidence Debrief — link evidence to criteria.
  • Evidence + debrief (15 min): copy evidence text, class debrief.

Half-day workshop (3 hours)

Pilot or intensive: all five sessions with extended labs and class debrief.

  • Opening + vocabulary warm-up (15 min): telemetry, threshold, nominal, anomaly, replay, evidence, success criterion.
  • Session 1 (25 min): Stream Basics — decode 2–3 packet presets, discuss missing-unit dangers.
  • Session 2 (30 min): Health Thresholds — classify power, thermal, ADCS, comms, payload channels; identify first action.
  • Session 3 (25 min): Replay Timeline — scrub before/after, write evidence quotes with channel + value + timestamp.
  • Session 4 (30 min): Anomaly Detective — classify clues (supporting / contradicting / neutral), agree on likely subsystem.
  • Session 5 (30 min): Mission Evidence Debrief — PASS/WARN/FAIL cards, write one-paragraph debrief per group.
  • Class debrief (25 min): each group presents one evidence card; discuss stale data and boundary reminders.

Facilitation prompts

  • "What is the difference between a number and a measurement? (Answer: a unit.)"
  • "Make one claim about what the spacecraft is doing right now. Which telemetry field proves it?"
  • "If the error is flat, is that good news or bad news? How would you know?"
  • "What would you need to see in the replay before you could say the mission partially failed?"

Common misconceptions

  • A flat error always means success. A flat-line at exactly zero with no residual oscillation is a classic stale-sensor signature, not a perfect controller.
  • Warning = failure. Warning thresholds trigger investigation, not shutdown. They provide time to act before a critical state.
  • Replay is the same as real-time monitoring. Replay lets you examine events after the fact — the evidence exists but was not acted on in real time.
  • Anomaly diagnosis is certain from one clue. Real anomaly diagnosis is probabilistic — always cite the supporting, neutral, and contradicting clues.
  • This is a certified mission operations tool. Track 6 activities are teaching-grade models. Not a ground-station command interface, not certified anomaly diagnosis, not flight operations.
  • Stale telemetry is always better than no data. Acting on stale data without checking the timestamp can be worse than having no data — operators must always verify the age of a reading before acting on it.
  • Success criteria can be decided after seeing the results. Criteria set after the run are prone to post-hoc rationalization. Good practice is to define the threshold before the mission and then compare evidence against it.
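The flat-line misconception above can be made concrete with a small sketch. This is an illustrative example only — the function name, window length, and tolerance are assumptions, not part of the Track 6 activities: a healthy closed-loop error signal keeps some residual oscillation, so an exactly-flat run is suspicious.

```python
# Hypothetical sketch: flag a suspiciously flat telemetry channel.
# Window length and tolerance below are illustrative, not activity settings.

def looks_stale(samples, window=30, tol=1e-6):
    """Return True if the last `window` samples are effectively identical.

    A flat-line at exactly zero with no residual oscillation is a classic
    stale-sensor signature, not proof of a perfect controller.
    """
    if len(samples) < window:
        return False  # not enough history to judge
    recent = samples[-window:]
    return max(recent) - min(recent) < tol

# A frozen channel: 45 identical readings of exactly 0.0
frozen = [0.0] * 45
# A healthy channel: small residual oscillation around 0.0
healthy = [0.001 * ((-1) ** i) for i in range(45)]

print(looks_stale(frozen))   # True  — flat-line, investigate the sensor
print(looks_stale(healthy))  # False — residual oscillation looks alive
```

In class, the point is the reasoning, not the code: "too perfect for too long" is itself a clue.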

No student accounts, no automatic grading, no roster visibility. Evidence is local-only — plan a manual share routine (copy/paste, export, or screenshot).

Student path

Track 6 Student Path — five sessions, evidence at every step

Work through five sessions in order. At the end of each session, copy your evidence text or screenshot the evidence panel before you move on. Your progress is stored locally in this browser — no account required.

Bridge — Mission Realism Lab

Investigate checksum, stale packet, retry, and corrupted telemetry in Mission Realism Lab.

  1. Telemetry Dashboard Basics

    Telemetry Stream Basics · 20–25 min

    Which channel would you watch first to know if pointing is improving, and why?

  2. Subsystem Interpretation Walkthrough

    Health Status & Thresholds · 35–40 min

    Pick one yellow flag: which subsystem owns it first, and what confirming channel would you check next?

  3. Replay and Mission Debrief

    Replay Timeline Evidence · 25–30 min

    What is one claim you would not make without replay evidence, and what chart proves it?

  4. Telemetry Trust and Stale Data

    Anomaly Detective · 25–30 min

    Why is acting on stale attitude data sometimes worse than having no data?

  5. Mission-Based STEM Capstone

    Mission Evidence Debrief · 50–60 min

    What single chart would you show a reviewer to prove your spacecraft met its pointing goal?

After Track 6

Track 6 complete — when you finish Session 5, the mini-course is done. You can now read telemetry fields and units, interpret subsystem health thresholds, use replay evidence, reason from anomaly clues, and write a mission debrief connecting evidence to success criteria. Continue to Track 7 — AI / ML Autonomy → or browse the curriculum map.

Local progress only — no account, no sync, not a grade. Copy or screenshot your evidence before closing the browser.

Evidence checklist

What to capture across all five sessions

Use this checklist to verify you have captured evidence from every session. Evidence is local-only — copy the evidence text or screenshot the evidence panel to share manually. Use the session-5 evidence package as a debrief anchor.

Session 1

Telemetry Stream Basics

  • Packet preset and field count identified
  • One field where the unit is essential to safe interpretation named
  • Explanation of why missing timestamps are dangerous
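The Session 1 checklist items above — units and timestamps — can be illustrated with a minimal sketch. The packet layout and channel names here are hypothetical, not the activity's actual packet format: the point is that a value is only a measurement once it carries a unit and a timestamp.

```python
# Hypothetical sketch: every field carries a value, a unit, and shares a
# packet timestamp. Channel names and values are illustrative.

packet = {
    "t": 120.5,  # seconds since mission start
    "fields": {
        "battery_v":  {"value": 7.9,  "unit": "V"},
        "wheel_rpm":  {"value": 2100, "unit": "rpm"},
        "att_err":    {"value": 0.35, "unit": "deg"},
        "temp_board": {"value": 21.4, "unit": "degC"},
    },
}

for name, f in packet["fields"].items():
    # Quoting channel + value + unit + timestamp makes a claim checkable.
    print(f"{name} = {f['value']} {f['unit']} at t={packet['t']} s")
```

A bare `7.9` could be volts, amps, or degrees; a reading without a timestamp could be seconds or hours old — both gaps make the number unsafe to act on.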

Session 2

Health Status & Thresholds

  • Subsystem, channel, and observed value recorded
  • Health band (nominal / warning / critical) identified
  • First action for the scenario described

Session 3

Replay Timeline Evidence

  • Replay preset identified and timeline reviewed
  • Before and after values for at least one event
  • One-sentence debrief quoting channel, value, and timestamp
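The before/after evidence in the checklist above can be sketched as a small helper. This is an assumed log shape (a list of `{"t": ..., "value": ...}` samples), not the replay tool's actual data format:

```python
# Hypothetical sketch: pull the last sample before and the first sample
# at/after an event timestamp from a replay log.

def before_after(log, event_t):
    """Return (sample just before event_t, sample at/after event_t)."""
    before = [s for s in log if s["t"] < event_t]
    after = [s for s in log if s["t"] >= event_t]
    return (before[-1] if before else None,
            after[0] if after else None)

# Illustrative battery-voltage log around a sun-pointing event at t = 55 s
battery_log = [
    {"t": 50, "value": 7.8},
    {"t": 55, "value": 8.1},
    {"t": 60, "value": 8.2},
]
b, a = before_after(battery_log, 55)
print(f"battery_v: {b['value']} V at t={b['t']} s -> {a['value']} V at t={a['t']} s")
```

The printed line is exactly the shape of a debrief quote: channel, value, and timestamp on both sides of the event.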

Session 4

Anomaly Detective

  • Anomaly scenario and supporting clues listed
  • Likely subsystem and diagnosis stated
  • Next check described to confirm or rule out diagnosis

Session 5

Mission Evidence Debrief

  • Mission objective and success criteria stated
  • Evidence cards selected with pass/warn/fail status
  • One-paragraph mission debrief connecting evidence to overall verdict

Classroom tip: use the session-5 evidence package as the class debrief anchor — each student presents one evidence card and defends their pass/warn/fail verdict.

Assessment map

Track 6 Assessment Map — 15 questions across five sessions

Each session has three local self-check questions. Use as a class discussion starter, exit-ticket prompt, or debrief anchor. No automatic grading — answers are revealed locally after the student chooses. Not a certified assessment instrument.

Telemetry Dashboard Basics (3 questions)

  • Why must every telemetry field carry a unit label?
  • What does a timestamp in a telemetry packet primarily tell you?
  • Which channel would you check first to know if the spacecraft is pointing correctly?

Subsystem Interpretation Walkthrough (3 questions)

  • What is the purpose of a warning threshold?
  • Battery voltage is 6.3 V. The critical threshold is below 6.8 V. What should the operator do first?
  • A reaction wheel is reading 3800 rpm and the warning threshold starts at 2500 rpm. What does this indicate?
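The threshold questions above follow a simple banding pattern that can be sketched in a few lines. The 6.8 V critical threshold comes from the question itself; the 7.2 V warning threshold is an illustrative assumption, not the activity's exact setting:

```python
# Hypothetical sketch: classify a battery reading into health bands.
# critical_below matches the question above; warn_below is illustrative.

def battery_band(voltage, warn_below=7.2, critical_below=6.8):
    """Return "nominal", "warning", or "critical" for a voltage reading."""
    if voltage < critical_below:
        return "critical"
    if voltage < warn_below:
        return "warning"
    return "nominal"

print(battery_band(6.3))  # critical — below the 6.8 V critical threshold
print(battery_band(7.0))  # warning  — time to investigate, not to panic
print(battery_band(7.5))  # nominal
```

The design point mirrors the misconception card: the warning band exists to buy time for investigation before the critical band is reached.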

Replay and Mission Debrief (3 questions)

  • What is the main value of replaying a completed mission run?
  • What should a mission debrief statement always include?
  • In a replay, battery voltage rises from 7.8 V to 8.1 V at t = 55 s. What does this suggest?

Telemetry Trust and Stale Data (3 questions)

  • The attitude error channel has read exactly 0.000° for 45 consecutive seconds. What is the most likely cause?
  • Which clue best CONTRADICTS the hypothesis that a comms fault caused elevated packet loss?
  • Why is acting on stale telemetry sometimes worse than having no data?
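The stale-data questions above hinge on checking a reading's age before acting on it. A minimal sketch, assuming an illustrative 5-second freshness budget (not a value from the activities):

```python
# Hypothetical sketch: refuse to act on a reading older than a freshness
# budget. The 5 s budget and field names are illustrative.

def usable(reading, now, max_age_s=5.0):
    """A reading is only actionable if its timestamp is recent enough."""
    return (now - reading["t"]) <= max_age_s

attitude = {"t": 100.0, "error_deg": 0.4}
print(usable(attitude, now=103.0))  # True  — 3 s old, safe to act on
print(usable(attitude, now=150.0))  # False — 50 s old: stale, do not act
```

This is why stale data can be worse than no data: without the age check, a 50-second-old "all nominal" reading looks identical to a live one.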

Mission-Based STEM Capstone (3 questions)

  • What connects a success criterion to telemetry evidence in a mission debrief?
  • A mission has three success criteria. Two PASS and one is PARTIAL. What is the correct debrief verdict?
  • Why should success criteria be written BEFORE a mission run, not after?
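The verdict question above can be sketched as a simple aggregation rule. The rule here (any FAIL wins, then any PARTIAL, else PASS) mirrors the pass/partial/fail language of Session 5, but the exact rubric is the class's choice, not a fixed answer:

```python
# Hypothetical sketch: combine per-criterion results into an overall verdict.
# The precedence rule (FAIL > PARTIAL > PASS) is one reasonable rubric.

def overall_verdict(results):
    """Reduce a list of "PASS"/"PARTIAL"/"FAIL" results to one verdict."""
    if "FAIL" in results:
        return "FAIL"
    if "PARTIAL" in results:
        return "PARTIAL"
    return "PASS"

print(overall_verdict(["PASS", "PASS", "PARTIAL"]))  # PARTIAL
print(overall_verdict(["PASS", "PASS", "PASS"]))     # PASS
```

Crucially, the criteria fed into this rule must be written before the run — a rubric applied to pre-stated criteria is evidence; one invented afterward is rationalization.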

Misconception cards in the Teacher Pack are designed as discussion and debrief anchors, not automated scoring items.