CubeSTEM Digital Twin — V3.4 Mission Pack
Space Mission Challenge Pack
A browser-based CubeSat + space robotics mission-control challenge pack for classrooms, workshops, STEM labs, and pilot demos. Students operate virtual rover and satellite assets through realistic mission scenarios, diagnose telemetry divergence, and produce evidence-based mission reports.
Recommended flow
Full challenge sequence (classroom or workshop)
Follow these steps in order for a coherent Space Mission Challenge story. Step 4 is the advanced technical layer — optional for younger cohorts, expected for technical reviewers and university-style demos.
1. Learn mission context: open the challenge pack hero and teacher runbook for roles, constraints, and how scoring works (formative, local-only).
2. Run Lunar Rover Rescue: interactive rover mission with telemetry divergence, scorecard, and in-mission realism panel (V3.6B).
3. Run CubeSat–Rover Relay: pass-window relay mission; compare command queue and download behavior with the rover mission.
4. Mission Realism Lab (advanced technical layer): parameter-driven teaching models for contact windows, link budget, packet protocol, and ADCS coupling — the same narrative as the in-mission panels, with deeper knobs. Not certified RF/orbit/ADCS.
5. Diagnose telemetry divergence: use expected-vs-observed comparison inside either mission and tie findings to the scorecard's evidence dimensions.
6. Generate a local mission report: paste highlights manually from the missions or the Realism Lab into the optional realism field — no auto-fill, no backend.
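As a taste of what step 4's knobs compute, here is a minimal teaching-grade link-budget sketch. The free-space path loss formula is standard; every numeric value (frequency, gains, thresholds) is an illustrative assumption, not the pack's actual model.

```typescript
// Teaching-grade link-budget sketch (hypothetical numbers, not the pack's
// actual model): free-space path loss plus a simple margin calculation.

/** Free-space path loss in dB for distance in km and frequency in MHz. */
function freeSpacePathLossDb(distanceKm: number, freqMhz: number): number {
  return 20 * Math.log10(distanceKm) + 20 * Math.log10(freqMhz) + 32.44;
}

/** Link margin in dB: what is left after losses against the required SNR. */
function linkMarginDb(
  eirpDbm: number,       // transmit power plus antenna gain
  rxGainDb: number,      // receive antenna gain
  pathLossDb: number,    // free-space (plus rain/fade) loss
  noiseFloorDbm: number, // receiver noise floor
  requiredSnrDb: number  // demodulator threshold
): number {
  const receivedDbm = eirpDbm + rxGainDb - pathLossDb;
  return receivedDbm - noiseFloorDbm - requiredSnrDb;
}

// Example: a UHF CubeSat downlink at 437 MHz over an 800 km slant range.
const fspl = freeSpacePathLossDb(800, 437);          // ≈ 143.3 dB
const margin = linkMarginDb(30, 12, fspl, -120, 10); // positive ⇒ link closes
```

A positive margin means the link closes; students can watch the margin flip sign as they turn the distance or rain-fade knobs.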
Two Mission Challenges
Both missions include the Telemetry Divergence Engine v0 for expected-vs-observed telemetry comparison, and a unified 100-point Mission Challenge Scorecard across 7 dimensions.
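The expected-vs-observed comparison at the heart of the Telemetry Divergence Engine v0 can be sketched as a simple threshold check. The sample shape, field names, and threshold below are assumptions for illustration, not the engine's real interface.

```typescript
// Minimal sketch of expected-vs-observed divergence flagging (names and
// thresholds are assumptions, not the Divergence Engine's actual API).

interface TelemetrySample {
  time: number;     // mission elapsed seconds
  expected: number; // model-predicted value (e.g. battery %)
  observed: number; // simulated "downlinked" value
}

interface DivergenceEvent {
  time: number;
  delta: number; // observed minus expected
}

/** Flag samples whose |observed - expected| exceeds the threshold. */
function findDivergences(
  samples: TelemetrySample[],
  threshold: number
): DivergenceEvent[] {
  return samples
    .filter((s) => Math.abs(s.observed - s.expected) > threshold)
    .map((s) => ({ time: s.time, delta: s.observed - s.expected }));
}

// Example: battery telemetry drifting low after t = 120 s.
const events = findDivergences(
  [
    { time: 60, expected: 92, observed: 91.5 },
    { time: 120, expected: 88, observed: 83 },
    { time: 180, expected: 84, observed: 76 },
  ],
  3 // flag deltas larger than 3 percentage points
);
// events → two flagged samples, at t = 120 and t = 180
```

In class, the flagged timestamps are exactly the "pause and discuss" moments the facilitation tips recommend.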
Lunar Rover Rescue Mission
Command a virtual rover through a CubeSat relay window. Manage battery life, diagnose faults from telemetry, and produce an evidence-based mission report. Includes the Telemetry Divergence Engine v0, the 100-point Mission Challenge Scorecard, and an in-mission Mission Realism panel (V3.6B) covering packet health, link quality, and teaching-grade ADCS/link-coupling explanations.
CubeSat–Rover Relay Mission
Manage command uploads and telemetry downloads around satellite pass windows. Balance packet priority, queue management, and time-limited contacts. Includes the Telemetry Divergence Engine v0, the 100-point Mission Challenge Scorecard, and an in-mission Mission Realism panel (V3.6B) using the same contact-window, link-budget, and packet-protocol teaching models as the Mission Realism Lab.
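The queue-versus-contact trade-off this mission teaches can be sketched as a priority-ordered fit into a fixed pass window. Command names, priorities, and upload durations below are hypothetical, not the mission's actual command set.

```typescript
// Sketch of pass-window command queueing (hypothetical model): commands have
// priorities and upload costs; only what fits in the contact window is sent.

interface Command {
  name: string;
  priority: number;   // lower number = more urgent
  uploadSecs: number; // link time needed to upload
}

/** Return the commands that fit in the window, highest priority first. */
function planUploads(queue: Command[], windowSecs: number): Command[] {
  const sorted = [...queue].sort((a, b) => a.priority - b.priority);
  const sent: Command[] = [];
  let used = 0;
  for (const cmd of sorted) {
    if (used + cmd.uploadSecs <= windowSecs) {
      sent.push(cmd);
      used += cmd.uploadSecs;
    }
  }
  return sent;
}

const sent = planUploads(
  [
    { name: "science-capture", priority: 3, uploadSecs: 40 },
    { name: "abort-safe-mode", priority: 1, uploadSecs: 10 },
    { name: "drive-waypoint", priority: 2, uploadSecs: 30 },
  ],
  60 // a short 60-second contact
);
// abort-safe-mode and drive-waypoint fit (40 s used); science-capture waits
```

The design point for discussion: a greedy priority fill is simple and predictable, which is why safety commands get the lowest (most urgent) priority numbers.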
Advanced layer — Mission Realism Lab (after missions)
The challenge pack is now paired with a parameter-driven teaching layer: students adjust orbit, pass, link, and packet knobs in the lab, then see applied realism summaries inside each mission. This is still browser-local and teaching-grade — not certified RF or orbital analysis.
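One way a contact-window model like the lab's can be derived is by grouping elevation samples above a minimum elevation mask into passes. The mask angle and sample values below are illustrative assumptions, not the lab's actual orbit model.

```typescript
// Contact-window sketch: given elevation samples over time, find intervals
// where the satellite is above a minimum elevation mask (values assumed).

interface ElevSample { time: number; elevDeg: number; }
interface PassWindow { start: number; end: number; }

/** Group consecutive above-mask samples into pass windows. */
function findPassWindows(samples: ElevSample[], maskDeg: number): PassWindow[] {
  const windows: PassWindow[] = [];
  let open: PassWindow | null = null;
  for (const s of samples) {
    if (s.elevDeg >= maskDeg) {
      if (!open) open = { start: s.time, end: s.time };
      else open.end = s.time;
    } else if (open) {
      windows.push(open);
      open = null;
    }
  }
  if (open) windows.push(open); // pass still in progress at end of data
  return windows;
}

// Example: one pass above a 10° mask between t = 300 s and t = 540 s.
const passes = findPassWindows(
  [
    { time: 240, elevDeg: 4 },
    { time: 300, elevDeg: 12 },
    { time: 420, elevDeg: 38 },
    { time: 540, elevDeg: 11 },
    { time: 600, elevDeg: 3 },
  ],
  10
);
// passes → [{ start: 300, end: 540 }]
```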
Suggested Classroom Flows
45-Minute Class Flow
- 5 min: Briefing — introduce mission story and team roles
- 5 min: Planning — review constraints and assign roles
- 25 min: Mission execution — run Lunar Rover Rescue or CubeSat–Rover Relay
- 10 min: Debrief — review scorecard, discuss decisions, capture evidence
90-Minute Workshop Flow
- 10 min: Space robotics context — why relay windows matter
- 15 min: Team formation + role assignment
- 20 min: Mission 1 — complete Lunar Rover Rescue with full scorecard
- 10 min: Break + interim debrief
- 25 min: Mission 2 — complete CubeSat–Rover Relay
- 10 min: Final debrief — compare missions, lessons learned
Half-Day Challenge Day Flow
- 30 min: Opening — context, rules, team formation (4–6 teams)
- 45 min: Mission Sprint 1 — all teams run Lunar Rover Rescue
- 15 min: Inter-team sharing + leaderboard snapshot (manual)
- 45 min: Mission Sprint 2 — all teams run CubeSat–Rover Relay
- 30 min: Evidence report completion
- 45 min: Final presentations + teacher assessment
Teacher Runbook
Facilitation guide for running the Space Mission Challenge Pack in classrooms, workshops, and challenge days. All scoring is local and formative — share manually via screenshots or exported evidence text.
Team Roles
Assign one role per student (teams of 2–4 can combine roles)
- Mission Commander
Final decision authority, timeline management, go/no-go calls
- Rover Operator
Execute movement and science commands, monitor power state
- Telemetry Analyst
Watch expected vs observed divergence, flag anomalies
- Communications Officer
Track pass windows, manage command/telemetry queues
- Safety Officer
Monitor battery critical thresholds, abort triggers
- Evidence Reporter
Capture key decisions, screenshot evidence, draft report
Materials Needed
- Devices with modern browsers (Chrome/Edge/Firefox/Safari)
- Projector or screen for mission timeline visibility
- Printed role cards (optional — can use on-screen references)
- Student worksheets for planning and evidence logging
- Timer for pass windows and mission phases
Facilitation Tips
- Set expectations upfront — this is a teaching-grade simulation, not a certified flight system
- Emphasize team communication over individual speed
- Pause at divergence alerts — use them as teaching moments
- Let students fail safely — better learning from recoverable mistakes
- Require evidence capture before showing final scorecard
- Debrief on decision process, not just final score
Debrief Questions
- What was your most critical decision? What data supported it?
- Where did expected vs observed telemetry diverge? How did you spot it?
- What would you do differently with 20/20 hindsight?
- How did team roles help or hinder mission execution?
- What does this simulation capture well? Where are its limits?
Student Evidence & Report Flow
Students capture evidence during missions and produce a final mission report. Evidence is browser-local — copy/export text or screenshot to share.
During the Mission
- Mission Objective
State the mission goal in your own words. What defines success?
- Planning Questions
What constraints matter most? What trade-offs did you anticipate?
- Command Log
List your key commands and the reasoning behind each.
- Telemetry Clue Log
What did expected vs observed telemetry tell you?
- Divergence Diagnosis
Where did telemetry diverge? What was the root cause?
- Scorecard Reflection
Which dimensions did you score well on? Where could you improve?
Final Evidence Report Structure
- Mission name and team members
- Date and duration
- Executive summary (2–3 sentences)
- Key decisions made
- Divergence events and diagnosis
- Scorecard results (self-assessment)
- Lessons for a future mission
- Appendix: Screenshots or exported evidence text
Evidence Export
Each mission includes a "Copy Evidence" button that exports mission decisions, telemetry log, and scorecard results as formatted text. Paste into a document or learning management system.
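The pack's actual export format is not specified here, so the sketch below is a hypothetical assembly of such a plain-text export; every field name is an assumption for illustration.

```typescript
// Hypothetical sketch of a "Copy Evidence" style export: mission decisions,
// divergence notes, and score joined into paste-ready plain text.

interface Evidence {
  mission: string;
  team: string[];
  commands: string[];
  divergences: string[];
  score: number; // out of 100
}

function formatEvidence(e: Evidence): string {
  return [
    `MISSION: ${e.mission}`,
    `TEAM: ${e.team.join(", ")}`,
    "COMMANDS:",
    ...e.commands.map((c) => `  - ${c}`),
    "DIVERGENCES:",
    ...e.divergences.map((d) => `  - ${d}`),
    `SCORE: ${e.score}/100`,
  ].join("\n");
}

const text = formatEvidence({
  mission: "Lunar Rover Rescue",
  team: ["Commander", "Telemetry Analyst"],
  commands: ["drive 10m N", "capture image"],
  divergences: ["battery drain 5% above model at t=120s"],
  score: 82,
});
// text can be pasted into a document or LMS as-is
```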
Local Mission Report Generator
After completing your mission, generate a clean printable report from your evidence. Choose a template, paste your evidence, and save as PDF locally.
Competition Mode
Space Mission Challenge Day guide for classroom competition. Teams complete missions, receive 100-point formative scoring across 7 dimensions, and present evidence-based mission reports. All scoring is local-only — share manually via screenshots or exported text.
100-Point Mission Challenge Scorecard
Unified rubric across both missions — formative assessment only
| Dimension | Points | Focus Area |
|---|---|---|
| Mission Success | 30 pts | Objective completion, waypoint achievement |
| Safety | 20 pts | Battery protection, abort discipline, fault response |
| Resource Management | 15 pts | Power-aware commanding, queue efficiency |
| Telemetry Reasoning | 15 pts | Correct interpretation of telemetry streams |
| Divergence Diagnosis | 10 pts | Identifying expected vs observed mismatches |
| Evidence Quality | 5 pts | Clear command logs and reasoning |
| Team Reflection | 5 pts | Post-mission debrief participation |
| Total | 100 pts | |
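The rubric's arithmetic is worth making explicit for score-keepers. Dimension caps below come from the table; the clamping behavior is an assumption about how a formative scorecard might guard against over-awarding a dimension.

```typescript
// Sketch of the 100-point scorecard arithmetic: each dimension is clamped to
// its rubric cap, then summed. Caps are taken from the rubric table.

const MAX_POINTS: Record<string, number> = {
  missionSuccess: 30,
  safety: 20,
  resourceManagement: 15,
  telemetryReasoning: 15,
  divergenceDiagnosis: 10,
  evidenceQuality: 5,
  teamReflection: 5,
};

/** Clamp each awarded score to [0, cap] and return the total (0–100). */
function totalScore(awarded: Record<string, number>): number {
  let total = 0;
  for (const [dim, cap] of Object.entries(MAX_POINTS)) {
    const raw = awarded[dim] ?? 0;
    total += Math.min(Math.max(raw, 0), cap);
  }
  return total;
}

const score = totalScore({
  missionSuccess: 24,
  safety: 18,
  resourceManagement: 12,
  telemetryReasoning: 13,
  divergenceDiagnosis: 8,
  evidenceQuality: 4,
  teamReflection: 5,
});
// score → 84; a perfect run totals exactly 100
```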
Competition Guidelines
- All scoring is local and formative — not official grades
- Teams manually report scores via screenshot or exported evidence
- Teacher uses rubric notes to assess team presentations
- No online leaderboard — manual tracking only
- Encourage learning over winning — debrief matters more than score
- Multiple runs allowed — best score or improvement story counts
Pilot & Commercial Value
Space Mission Challenge Day for pilot validation, grant proposals, CSR programs, and school outreach. Software-first approach enables immediate deployment without hardware procurement.
Who It Serves
- Schools & Colleges
STEM enrichment without hardware investment. Runs in computer labs with standard browsers.
- STEM Labs & Makerspaces
Space robotics concepts before physical rover builds. Software-first, hardware-ready later.
- Space Clubs & Competitions
Mission-control practice for CanSat, CubeSat, and rover challenge teams.
- NAVTTC-Style Training
Technical vocational training in satellite operations and telemetry analysis.
- CSR & Grant Programs
Demonstrable STEM outreach with clear learning outcomes and evidence reporting.
Demo Timing Options
- 5-minute demo — show one mission card, highlight scorecard, tease divergence
- 15-minute demo — run one mission phase, show telemetry comparison, discuss team roles
- 45-minute session — complete one full mission with debrief and evidence export
- Half-day challenge — run both missions with multiple teams and competition mode
Pricing Note
Pricing will be defined after pilot validation. The current release is for demonstration and evaluation use only.
Advanced technical layer — Mission Realism Lab
The Mission Realism Lab upgrades the challenge pack from lesson/demo to parameter-driven mission reasoning. Students explore ground track coverage, contact windows, link budget, rain/fade attenuation, packet protocol errors, and ADCS pointing coupling — all in one coherent simulation story. Teaching-grade, browser-local.
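The ADCS/link coupling mentioned above is often taught with the standard Gaussian-beam pointing-loss approximation, 12·(θ/θ₃dB)² dB. The beamwidths and error angles below are illustrative, not the lab's actual parameters.

```typescript
// Teaching-grade sketch of ADCS/link coupling: pointing error reduces antenna
// gain, which eats link margin. Formula is the standard Gaussian-beam
// approximation; the numbers are illustrative assumptions.

/** Pointing loss in dB for error angle vs half-power beamwidth. */
function pointingLossDb(errorDeg: number, beamwidthDeg: number): number {
  return 12 * (errorDeg / beamwidthDeg) ** 2;
}

// A 5° pointing error costs 0.75 dB on a 20° beam, but 3 dB on a 10° beam.
const wideBeamLoss = pointingLossDb(5, 20);   // 0.75 dB
const narrowBeamLoss = pointingLossDb(5, 10); // 3 dB
```

This is the coupling students should discover: a tighter beam buys gain but punishes the same attitude error much harder.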
What This Is (and Is Not)
Boundaries — Not Included
- Software-only simulation
Browser-based teaching-grade prototype. Not a certified flight simulator.
- No remote hardware control
Does not control real rovers, satellites, or ground stations.
- No real RF or SDR
Simulated communications only. No actual radio transmission.
- Local-only scoring
No backend, no accounts, no online leaderboard. Share manually.
- Formative assessment only
Not official grades or certifications. Teacher discretion.
- Deterministic teaching models
Telemetry divergence uses rule-based scenarios, not AI/ML diagnosis.
What It Is
- Classroom-ready space robotics mission simulation
- Telemetry divergence detection and diagnosis practice
- Team-based mission control roleplay
- Evidence-based reporting and reflection
- Browser-local formative assessment with 100-point scorecard
Ready to explore? Start with a mission or browse the demo pack.