CubeSTEM Mission Twin — V3.4 Space Rover Twin v0

Lunar Rover Rescue Mission

A lunar rover is deployed near a damaged solar panel. A CubeSat relay provides a limited communication window. Command the rover to reach the panel, scan it, diagnose faults from telemetry, manage battery, and return safely before power runs out.

Interactive Prototype · Software-only · Teaching-grade model · Browser-local

Learning outcomes

  • Space robotics command and telemetry
  • CubeSat relay and communication windows
  • Power management and battery budgeting
  • Fault diagnosis from telemetry evidence

Connected tracks (0–7)

T0 Orientation · T1 Orbit & Contact · T2 Mission Design · T3 Power & Thermal · T4 Communication · T5 ADCS & Pointing · T6 Telemetry & Evidence · T7 AI & Autonomy

Mission map

Lunar Surface Grid

Status: Idle
[8×8 interactive grid, rows and columns 0–7, showing the BASE and SOLAR PANEL cells]
Legend: Base · Target · Obstacle · Visited · Rover
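The grid mission above can be sketched as a tiny state machine. This is a minimal teaching sketch; names like `Rover`, `move`, and the per-cell battery cost are illustrative assumptions, not the prototype's actual internals.

```python
# Teaching sketch of a grid rover (illustrative names, assumed battery cost).
DIRS = {"North": (0, -1), "East": (1, 0), "South": (0, 1), "West": (-1, 0)}

class Rover:
    def __init__(self):
        self.x, self.y = 0, 0        # start at BASE (0, 0)
        self.battery = 100.0         # percent
        self.visited = {(0, 0)}

    def move(self, heading, cost_per_cell=2.0):
        """Move one cell on the 8x8 grid; reject off-grid moves."""
        dx, dy = DIRS[heading]
        nx, ny = self.x + dx, self.y + dy
        if not (0 <= nx <= 7 and 0 <= ny <= 7):
            return False             # off-grid: command refused, no cost
        self.x, self.y = nx, ny
        self.battery -= cost_per_cell
        self.visited.add((nx, ny))
        return True

rover = Rover()
rover.move("East")
rover.move("South")
print((rover.x, rover.y), rover.battery)   # (1, 1) 96.0
```

The same pattern extends naturally to obstacle cells and the scan command.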

Telemetry

Rover Telemetry

  • Battery: 100.0%
  • Position: (0, 0)
  • Heading: 90° (East)
  • Mission time: 0s
  • Wheel current: 0.1 A
  • Temperature: 22°C
  • Obstacle dist.: Clear
  • Link status: Active
  • Signal delay: 2.0s
  • Scan: Pending

CubeSat relay

Communication Window

Relay Active
Window remaining: 60s

Cycle: 60s active / 30s blackout

Delay: 2.0s round-trip
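The 60s-active / 30s-blackout cycle reduces to modular arithmetic on mission time. A teaching sketch under the stated cycle (function names are illustrative, not the app's implementation):

```python
ACTIVE_S, BLACKOUT_S = 60, 30        # relay cycle from the panel above
CYCLE_S = ACTIVE_S + BLACKOUT_S      # 90 s full cycle

def relay_active(t_s):
    """True while the CubeSat relay pass is active at mission time t_s."""
    return (t_s % CYCLE_S) < ACTIVE_S

def window_remaining(t_s):
    """Seconds of active window left (0 during blackout)."""
    return max(0, ACTIVE_S - (t_s % CYCLE_S))

print(relay_active(10), window_remaining(10))   # True 50
print(relay_active(75), window_remaining(75))   # False 0
```

A command issued at t = 75 s would wait roughly 15 s for the next pass, plus the 2.0 s round-trip delay, before its acknowledgment returns.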

Connected tracks:

Track 1 (Orbit & Contact) — pass schedule constrains operations

Track 4 (Communication) — relay protocol and command delay

Command panel

Rover Commands

Fault injection

Fault Mode

A teacher or demo operator can inject a fault to test student diagnosis.

Status: All systems nominal.

Mission realism (V3.6A models)

Telemetry packets and relay link

Teaching-grade only — conceptual ADCS/pointing → link coupling, not a certified lunar comms model.

Relay pass active — stronger link (teaching model)

No injected fault — telemetry packets follow nominal teaching health.

Telemetry packet health (teaching protocol)

  • Protocol run (ACK/NACK + retry): 14.3% success after retries; 19 retry attempts used; 6 packets unrecoverable after max retries.
  • Valid / corrupted / dropped: 1 / 0 / 6
  • Stale (model): 0
  • Retries (sum): 19
  • Latest seq / status / checksum: #7 · dropped · 3D95
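The 14.3% figure above is consistent with 1 recovered packet out of the 7 attempted (1 valid + 0 corrupted + 6 dropped). A sketch of that bookkeeping, assuming the teaching protocol counts success as valid packets over total attempts:

```python
# Packet counts from the telemetry health panel above.
valid, corrupted, dropped = 1, 0, 6
total = valid + corrupted + dropped

success_after_retries = valid / total
print(f"{success_after_retries:.1%}")   # 14.3%
```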

Link quality

  • Margin: -8.0 dB
  • Label: unavailable
  • Packet success (model): 5.0%
  • Pass data volume: 2.48 Mbit raw → 124 kbit effective
  • Signal delay (mission): 2s

Good pointing: 4.35° error is well within the 30° beamwidth. Pointing loss: 0.6 dB — negligible link margin impact.
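The pass data volume line is consistent with scaling the raw volume by the modeled per-packet success rate. A sketch, assuming the teaching model simply multiplies the two:

```python
raw_bits = 2.48e6          # 2.48 Mbit raw over the pass (from the panel)
packet_success = 0.05      # 5.0% model success at -8.0 dB margin

effective_bits = raw_bits * packet_success
print(f"{effective_bits / 1e3:.0f} kbit effective")   # 124 kbit effective
```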

Mission challenge scorecard

Lunar Rover Rescue Mission

59/100 (Needs Review)

Mission Completion Checklist

  • Mission completed: scan target and return to base
  • Scan completed: damaged solar panel inspected
  • Returned to base: rover safely at starting position
  • Battery safe: 100.0% (threshold: 30%)
  • Time used: 0s mission time
  • Commands used: 0 commands issued
  • Faults diagnosed: no fault active
  • Safe status: battery above safe threshold throughout

Score Breakdown

  • Mission Success: 0 / 20. Mission objectives not fully met. Teaching realism: -8.0 dB margin (unavailable); Relay pass active — stronger link (teaching model)
  • Safety: 15 / 20. Battery at 100.0% — above safe threshold. ACK/NACK + retry: 14.3% success after retries. 19 retry attempts used. 6 packets unrecoverable after max retries.
  • Resource Management: 15 / 15. 0 commands in 0s, battery at 100.0%. Relay link (mission): relay_active.
  • Telemetry Reasoning: 12 / 15. No fault injected — nominal telemetry reasoning. Teaching realism: -8.0 dB margin (unavailable); Relay pass active — stronger link (teaching model)
  • Divergence Diagnosis: 15 / 15. No divergences — telemetry matches expected behavior. ACK/NACK + retry: 14.3% success after retries. 19 retry attempts used. 6 packets unrecoverable after max retries.
  • Evidence Quality: 2 / 10. 1 of 5 evidence criteria met.
  • Team Reflection: 0 / 5. No meaningful reflection written yet.
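The 59/100 headline score is just the sum of the rubric rows. As arithmetic, with the weights taken from the breakdown above:

```python
# (earned, maximum) for each rubric row in the score breakdown.
rubric = {
    "Mission Success":      (0, 20),
    "Safety":               (15, 20),
    "Resource Management":  (15, 15),
    "Telemetry Reasoning":  (12, 15),
    "Divergence Diagnosis": (15, 15),
    "Evidence Quality":     (2, 10),
    "Team Reflection":      (0, 5),
}
score = sum(earned for earned, _ in rubric.values())
out_of = sum(maximum for _, maximum in rubric.values())
print(f"{score}/{out_of}")   # 59/100
```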

Evidence Quality Checklist

1 of 5 criteria met

  • Telemetry clue included
  • Command / action evidence included
  • Divergence explanation included
  • Safety decision included
  • Reflection included

Strengths

  • Resource Management: 0 commands in 0s, battery at 100.0%. Relay link (mission): relay_active.
  • Telemetry Reasoning: No fault injected — nominal telemetry reasoning. Teaching realism: -8.0 dB margin (unavailable); Relay pass active — stronger link (teaching model)
  • Divergence Diagnosis: No divergences — telemetry matches expected behavior. ACK/NACK + retry: 14.3% success after retries. 19 retry attempts used. 6 packets unrecoverable after max retries.

Areas for Improvement

  • Mission Success: Complete the scan at the target and return the rover to base. Mention relay pass usage from the realism panel.
  • Evidence Quality: Include telemetry clues, command evidence, divergence explanation, safety decision, and reflection.
  • Team Reflection: Write a reflection on your mission approach, what you learned, and what you would change.
Share manually — local only

This scorecard is a formative classroom rubric computed locally in your browser. It is not an official grade, not a certified assessment, and is not submitted to any server. Share your report manually by copying or printing.

Telemetry divergence engine

Expected vs Observed Telemetry

Legend: Nominal · Watch · Warning · Critical
All nominal — no major divergence detected between expected and observed telemetry.
Connected tracks: Track 3 (Power & Thermal) · Track 4 (Communication) · Track 5 (ADCS & Pointing) · Track 6 (Telemetry & Evidence) · Track 7 (AI & Autonomy)

Divergence Summary

No significant divergence detected. Expected and observed telemetry are consistent.

Strongest clue: All telemetry nominal.
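Expected-vs-observed classification can be sketched as relative-deviation bands mapped to the four status labels above. The band edges here are illustrative assumptions, not the divergence engine's actual tuning:

```python
def classify(expected, observed, watch=0.05, warning=0.15, critical=0.30):
    """Map relative divergence to a status label (band edges assumed)."""
    if expected == 0:
        return "Nominal" if observed == 0 else "Critical"
    dev = abs(observed - expected) / abs(expected)
    if dev < watch:
        return "Nominal"
    if dev < warning:
        return "Watch"
    if dev < critical:
        return "Warning"
    return "Critical"

print(classify(expected=0.1, observed=0.1))    # Nominal  (wheel current as expected)
print(classify(expected=0.1, observed=0.35))   # Critical (a stuck wheel draws far more)
```

Chaining several such checks (wheel current, battery drain rate, position) is what lets multiple weak clues combine into a confident fault diagnosis.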

Evidence & report

Mission Evidence Report

What would you do differently if the communication window were shorter? Which track concept helped most: power, comm, telemetry, or autonomy?

Event log

No events yet. Issue commands to begin the mission.

Competition mode

Space Mission Challenge Day

Run this Lunar Rover Rescue Mission as a classroom challenge. Teams compete to achieve the highest scorecard while demonstrating safety, reasoning, and evidence quality. All scoring is local and manual — no online leaderboard or automatic submissions.

How to Run

  1. Divide the class into teams of 3–5 students. Assign team roles (see below).
  2. Each team opens the mission on their own device or shared screen.
  3. Teams complete the mission and fill in the diagnosis and reflection prompts.
  4. Each team copies their scorecard report and shares it with the teacher (paste, print, or screenshot).
  5. The teacher reviews reports, facilitates a class discussion on safety vs. speed, and announces results.

Suggested Team Roles

Mission Commander

Leads decision-making and sets mission priorities.

Rover Operator

Issues commands and manages the rover timeline.

Telemetry Analyst

Monitors telemetry, identifies anomalies and divergences.

Safety Officer

Watches battery, enforces safe thresholds, calls abort if needed.

Evidence Reporter

Documents mission evidence, writes the diagnosis and reflection.

Teacher Scoring Guide

  • The 100-point scorecard is a formative rubric — use it as a discussion starter, not a final grade.
  • Reward safety and reasoning over speed — the fastest run is not necessarily the best mission score.
  • Ask teams to justify their score with specific telemetry evidence from the report.
  • Divergence diagnosis shows engineering thinking — reward effort even if the answer is imperfect.
  • Evidence quality matters as much as mission completion.
  • More commands are not always better — efficient resource management earns more points.

Common Misconceptions

  • "Fastest team wins" — Speed is not the primary criterion; safety and reasoning are weighted equally.
  • "More commands = better" — Efficient command usage earns higher resource management points.
  • "This is an official grade" — The scorecard is a local formative rubric, not a certified assessment.
  • "Copying the report submits it" — Reports are shared manually (paste/print). No automatic submission.
  • "There is a leaderboard" — All scoring is local and manual. There is no online leaderboard.
  • "Score alone matters" — Evidence quality and reasoning justification matter as much as the number.

Local / manual only. No online leaderboard, no automatic submissions, no backend accounts, no official grades. Students share their report manually. The teacher reviews using the rubric notes above. This keeps the v0 simple and privacy-friendly.

Teacher guide

Facilitation Guide

Timing presets

  • 15-minute demo: Show the mission map, issue 3–4 commands, point out telemetry changes, inject one fault, discuss what changed. Good for assembly or open-day walkthroughs.
  • 45-minute classroom: Students work in pairs. Navigate to the target, scan, manage battery, diagnose one injected fault, complete the evidence report. Debrief: compare command counts and battery remaining across pairs.
  • 90-minute workshop: Full mission with multiple fault injections. Students write a complete diagnosis, discuss trade-offs (speed vs safety, short comm window vs battery), compare strategies. Extension: can they complete the mission under tighter battery or shorter relay windows?

Discussion prompts

  • What happens if you send commands during a relay blackout?
  • How does your battery budget change with the wheel-stuck fault?
  • Which telemetry value was the first clue to the fault?
  • What would you do differently with a shorter communication window?
  • Why does a rover need to return to base safely, not just complete the scan?
  • How is this different from controlling a toy car with a remote?
  • Which track concept helped most: power, comm, telemetry, or autonomy?
  • Compare what the digital twin predicted with what telemetry showed — where did they diverge?
  • Can one telemetry reading alone confirm a fault, or do you need multiple clues?

Common misconceptions

  • Misconception: A rover is just a remote-controlled toy car.
    Reality: A rover operates with delayed commands, limited power, and constrained communication windows.
  • Misconception: One telemetry number tells the whole story.
    Reality: Multiple telemetry values together reveal faults — wheel current, battery drain rate, and position all matter.
  • Misconception: Speed is always better than safety.
    Reality: Rushing drains battery faster and may trigger unsafe conditions.
  • Misconception: Communication windows do not affect the mission.
    Reality: Commands can only execute reliably during a relay pass. Planning around blackouts is essential.
  • Misconception: Faults should be guessed, not diagnosed with evidence.
    Reality: Real spacecraft engineers use telemetry trends and expected-vs-observed comparisons.
  • Misconception: Expected telemetry means guaranteed telemetry.
    Reality: Expected values are predictions from the digital twin model. Divergence is normal and requires diagnosis, not panic.
  • Misconception: This is a real rover simulator.
    Reality: This is a teaching-grade model. Real rover operations involve far more complexity.

Boundary reminder: This is a software-only, browser-local teaching prototype. It does not connect to real hardware, does not use certified rover physics, and should not be presented as a professional rover simulator. It teaches space robotics concepts through an interactive mission narrative.

Scorecard boundaries

What This Scorecard Is — and Is Not

  • Formative rubric — helps students reflect on mission performance across safety, reasoning, and evidence.
  • Local-only — computed in your browser. Not stored, not submitted, not synced to any server.
  • Teaching-grade — designed for classroom discussion, not certified engineering assessment.
  • Not an official grade — should not be used as a formal academic grade without teacher judgment.
  • Not a leaderboard — there is no online ranking, no automatic competition backend.
  • Manual sharing — students copy/print their report and share it with the teacher manually.
  • No AI scoring — all scoring is deterministic rules, not LLM or ML-based grading.
  • No backend — no accounts, no submissions, no cloud storage, no teacher dashboard.

Capability boundary

What this is — and is not

  • Teaching-grade model: Deterministic 2D grid simulation for education, not a certified rover/flight simulator.
  • Software-only: No real robot, CubeSat, or ground station hardware is connected.
  • Browser-local: All state runs in the browser. No backend, no accounts, no submissions saved.
  • No 3D physics: No terrain mesh, gravity model, or dynamics simulation. Movement is grid-based.
  • Simplified relay: Communication window is a time-based cycle, not real RF link budget modeling.
  • Pre-scripted faults: Faults are deterministic toggles, not random or AI-generated.
  • No AI diagnosis: Students diagnose faults themselves from telemetry evidence.
  • Prototype v0: First interactive mission challenge. Hardware-ready interfaces and deeper simulation planned for future phases.