CubeSTEM Mission Twin — V3.4 CubeSat–Rover Relay Mission

A rover operates beyond direct ground-station range. A CubeSat periodically passes overhead and acts as a relay. You must decide which commands to uplink and which telemetry packets to downlink during short contact windows. Send too much low-priority data and critical telemetry may be missed; command during a blackout and execution is delayed until the next pass.

Interactive Prototype · Software-only · Teaching-grade relay model · Browser-local

Learning outcomes

  • CubeSat relay operations
  • Contact windows and pass planning
  • Uplink/downlink constraints
  • Packet prioritization strategy

Connected tracks (0–7)

T0 Orientation · T1 Orbit & Contact · T2 Mission Design · T3 Power & Thermal · T4 Communication · T5 ADCS & Pointing · T6 Telemetry & Evidence · T7 AI & Autonomy

Pass window status

  • Mission Time: 0s
  • Pass Window: ACTIVE (60s remaining)
  • Active Window: 60s
  • Blackout: 45s
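
The status above follows a simple repeating schedule: 60 s of active contact, then 45 s of blackout (the capability boundary below notes that the pass schedule is a time-based cycle, not orbital propagation). A minimal sketch of that readout in TypeScript, with `passStatus` as a hypothetical helper name:

```typescript
// Teaching-grade pass cycle taken from the panel above:
// a repeating 60 s active window followed by a 45 s blackout.
const ACTIVE_S = 60;
const BLACKOUT_S = 45;
const CYCLE_S = ACTIVE_S + BLACKOUT_S;

interface PassStatus {
  active: boolean;
  remainingS: number; // seconds left in the current phase
}

// Hypothetical helper: derive the readout from mission time alone.
function passStatus(missionTimeS: number): PassStatus {
  const t = missionTimeS % CYCLE_S;
  return t < ACTIVE_S
    ? { active: true, remainingS: ACTIVE_S - t }
    : { active: false, remainingS: CYCLE_S - t };
}

console.log(passStatus(0)); // { active: true, remainingS: 60 }, matching the panel
```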

Communication Flow

Ground → CubeSat → Rover

  • 📡 Ground Station (Mission Control) — uplink ACTIVE
  • 🛰️ CubeSat Relay (pass window ACTIVE) — downlink ACTIVE
  • 🤖 Surface Rover (Safe Point Alpha)

Commands Pending: 0
Packets Pending: 4

Command Queue (Uplink)

Command Center

Pending (0) — No pending commands

Delivered (0) — No commands delivered yet

Rover Status

Surface Rover

  • Battery: 100.0%
  • Position: Safe Point Alpha
  • Scan Status: Not complete
  • Link Quality: 85%
  • Safe Point: Alpha (0,0)
  • Command Execution: All commands executed

Downlink Strategy

Packet Priority

Choose which telemetry packets to prioritize during the limited pass window.

Next window delivery preview

  • Health Status — Deliver
  • Battery Status — Deliver
  • Position Report — Deliver
  • Thermal Data — Deliver

Why prioritization matters

Large packets (like images) can fill your downlink budget and block critical telemetry. Critical health and battery status should usually come first.
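
As a concrete illustration, here is a minimal sketch of a critical-first selection under the 6-unit window capacity shown in the queue below. The packet list mirrors the pending queue; the oversized "Panel Image" entry and the greedy rule are assumptions for illustration, not the prototype's actual algorithm:

```typescript
// Hypothetical packet model mirroring the pending queue below.
interface Packet {
  name: string;
  priority: "critical" | "routine";
  units: number; // downlink budget cost
}

const pending: Packet[] = [
  { name: "Health Status", priority: "critical", units: 1 },
  { name: "Battery Status", priority: "critical", units: 1 },
  { name: "Position Report", priority: "routine", units: 1 },
  { name: "Thermal Data", priority: "routine", units: 1 },
  { name: "Panel Image", priority: "routine", units: 4 }, // hypothetical large packet
];

// Greedy rule: critical packets first, then routine, until capacity runs out.
function planDownlink(packets: Packet[], capacityUnits: number): Packet[] {
  const ordered = [...packets].sort((a, b) =>
    a.priority === b.priority ? 0 : a.priority === "critical" ? -1 : 1
  );
  const plan: Packet[] = [];
  let used = 0;
  for (const p of ordered) {
    if (used + p.units <= capacityUnits) {
      plan.push(p);
      used += p.units;
    }
  }
  return plan;
}

// With the 6-unit window, the four 1-unit packets fit (4 units used) and the
// 4-unit image is skipped: large packets crowd out the remaining budget.
console.log(planDownlink(pending, 6).map((p) => p.name));
```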

Telemetry Queue (Downlink)

Data Downlink

Window Capacity: 0 / 6 units

Pending (4)

  • Health Status — Critical, 1u
  • Battery Status — Critical, 1u
  • Position Report — Routine, 1u
  • Thermal Data — Routine, 1u

Received (0)

No packets received yet

Mission realism (V3.6A models)

Relay link, contact window, and packets

Teaching-grade only — not certified RF or orbital analysis. Same formulas as Mission Realism Lab.

Contact window

Contact active — usable pass time

  • AOS / TCA / LOS (acquisition of signal / time of closest approach / loss of signal; model): 0s / 30s / 60s
  • Usable time: 52.8s
  • Downlink status: excellent
  • Volume (raw → effective): 3.38 Mbit → 3.35 Mbit
  • Backlog warning: No
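
These volume figures are consistent with a simple model: raw volume equals data rate times usable time, and effective volume applies the packet success rate. A minimal sketch, assuming a 64 kbit/s teaching data rate (a value chosen to reproduce the panel's 3.38 Mbit over 52.8 s; the prototype does not state its rate):

```typescript
// Assumed contact-window volume model (teaching grade).
const DATA_RATE_BPS = 64_000;  // assumption: 64 kbit/s UHF-class downlink
const USABLE_TIME_S = 52.8;    // from the panel above
const PACKET_SUCCESS = 0.99;   // 99.0% from the link-budget panel below

const rawBits = DATA_RATE_BPS * USABLE_TIME_S;  // 3,379,200 bit ≈ 3.38 Mbit
const effectiveBits = rawBits * PACKET_SUCCESS; // ≈ 3.35 Mbit

console.log((rawBits / 1e6).toFixed(2), (effectiveBits / 1e6).toFixed(2));
```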

Link budget (teaching)

  • Margin: 22.3 dB
  • Quality: strong
  • Packet success (model): 99.0%
  • Rx -95.7 dBm · FSPL 139.6 dB (UHF teaching pass)

Good pointing: 6.1° error is well within the 36° beamwidth. Pointing loss: 0.7 dB — negligible link margin impact.
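
The FSPL value matches the standard free-space path loss formula. A minimal sketch, assuming a 437 MHz UHF carrier and a roughly 521 km slant range (both are assumptions chosen to reproduce the panel's 139.6 dB; the receiver sensitivity is likewise assumed):

```typescript
// Standard free-space path loss:
// FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.45
function fsplDb(distanceKm: number, freqMhz: number): number {
  return 20 * Math.log10(distanceKm) + 20 * Math.log10(freqMhz) + 32.45;
}

const fspl = fsplDb(521, 437); // ≈ 139.6 dB, matching the panel

// Link margin as received power minus an assumed receiver sensitivity.
const rxDbm = -95.7;            // from the panel
const sensitivityDbm = -118.0;  // assumption: receiver threshold
const marginDb = rxDbm - sensitivityDbm; // 22.3 dB, matching the panel

console.log(fspl.toFixed(1), marginDb.toFixed(1));
```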

Packet protocol (ACK/NACK + retry)

ACK/NACK + retry: 100.0% success after retries. 0 retry attempts used. 0 packets unrecoverable after max retries.

Success / corrupted / dropped: 10 / 0 / 0

Retries (sum): 0

Latest seq / status / checksum: #10 · valid · E9E5
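
A minimal sketch of the ACK/NACK-plus-retry idea with a 16-bit checksum. Fletcher-16 and the retry limit are illustrative assumptions; the prototype does not specify its checksum algorithm:

```typescript
// Hypothetical 16-bit checksum; Fletcher-16 is one common teaching choice.
function fletcher16(data: Uint8Array): number {
  let sum1 = 0;
  let sum2 = 0;
  for (const byte of data) {
    sum1 = (sum1 + byte) % 255;
    sum2 = (sum2 + sum1) % 255;
  }
  return (sum2 << 8) | sum1;
}

// ACK/NACK + retry: resend until the checksum verifies or the retry
// budget is exhausted, at which point the packet is unrecoverable.
function sendWithRetry(
  data: Uint8Array,
  transmit: (d: Uint8Array) => Uint8Array, // channel model; may corrupt bytes
  maxRetries = 3
): { delivered: boolean; retries: number } {
  const expected = fletcher16(data);
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const received = transmit(data);
    if (fletcher16(received) === expected) {
      return { delivered: true, retries: attempt }; // ACK
    }
    // NACK: checksum mismatch, try again
  }
  return { delivered: false, retries: maxRetries };
}
```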

The Balanced Mix priority mode determines which mission packets are sent first when link margin is marginal.

Consequence: relay realism is nominal; monitor pass time and packet success as conditions change.

Mission challenge scorecard

CubeSat–Rover Relay Mission

37/100 — Needs Review

Mission Completion Checklist

  • Mission complete: Scan delivered and critical telemetry received
  • Critical telemetry received: Essential health/status data downlinked
  • Scan completed: Panel scan command delivered and result received
  • Delay managed: Command delay within acceptable limits
  • Battery safe: 100.0% (threshold: 30%)
  • Prioritization used: Mode balanced
  • Evidence generated: Reflection written with reasoning

Score Breakdown

  • Mission Success (0 / 20): Mission objectives not fully met. Teaching realism: 22.3 dB margin (strong); contact active with usable pass time.
  • Safety (20 / 20): Battery at 100.0%, above the safe threshold. Packet integrity: 0 dropped (teaching protocol), 0 retries.
  • Resource Management (0 / 15): 0 commands delivered, 0 packets received, priority mode: balanced. Usable pass time (model): 52.8s.
  • Telemetry Reasoning (0 / 15): Critical telemetry not yet received. Nominal relay realism: monitor pass time and packet success as conditions change.
  • Divergence Diagnosis (15 / 15): No divergences; relay behavior matches expectations.
  • Evidence Quality (2 / 10): 1 of 5 evidence criteria met.
  • Team Reflection (0 / 5): No meaningful reflection written yet.
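
The seven category maxima sum to 100 (20 + 20 + 15 + 15 + 15 + 10 + 5), and the earned scores above sum to the 37/100 shown in the header. A minimal sketch of that deterministic aggregation, using the category names from the breakdown:

```typescript
// Category scores from the breakdown above: [earned, max].
const breakdown: Record<string, [number, number]> = {
  "Mission Success": [0, 20],
  "Safety": [20, 20],
  "Resource Management": [0, 15],
  "Telemetry Reasoning": [0, 15],
  "Divergence Diagnosis": [15, 15],
  "Evidence Quality": [2, 10],
  "Team Reflection": [0, 5],
};

const total = Object.values(breakdown).reduce((s, [earned]) => s + earned, 0);
const max = Object.values(breakdown).reduce((s, [, m]) => s + m, 0);

console.log(`${total}/${max}`); // "37/100", matching the scorecard header
```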

Evidence Quality Checklist

1 of 5 criteria met

  • Telemetry clue included
  • Command / action evidence included
  • Divergence explanation included
  • Safety decision included
  • Reflection included

Strengths

  • Safety: Battery at 100.0% — above safe threshold. Packet integrity: 0 dropped (teaching protocol), 0 retries.
  • Divergence Diagnosis: No divergences — relay behavior matches expectations.

Areas for Improvement

  • Mission Success: Deliver scan command, receive scan result, and receive critical telemetry. Cite contact-window usage and link margin from the realism panel.
  • Resource Management: Deliver commands efficiently, receive key telemetry, and use prioritization wisely. Relate command timing to contact window (AOS/LOS) in the realism panel.
  • Telemetry Reasoning: Receive critical telemetry, review scan results, and explain your reasoning. Reference checksum/retry behavior from the realism panel.
  • Evidence Quality: Include telemetry clues, command evidence, divergence explanation, safety decision, and reflection.
  • Team Reflection: Write a reflection on your relay strategy, what worked, and what you would change.

Share manually — local only

This scorecard is a formative classroom rubric computed locally in your browser. It is not an official grade, not a certified assessment, and is not submitted to any server. Share your report manually by copying or printing.

Telemetry divergence engine

Expected vs Observed Telemetry

Severity levels: Nominal · Watch · Warning · Critical

All nominal — no major divergence detected between expected and observed telemetry.

Connected tracks: Track 1 (Orbit & Contact), Track 2 (Mission Design), Track 4 (Communication), Track 6 (Telemetry & Evidence), Track 7 (AI & Autonomy)

Divergence Summary

No significant divergence detected. Expected and observed telemetry are consistent.

Strongest clue: All telemetry nominal.
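
A minimal sketch of how an expected-vs-observed comparison can be binned into the four severity levels above. The relative-error thresholds are illustrative assumptions, not the engine's actual rules:

```typescript
type Severity = "nominal" | "watch" | "warning" | "critical";

// Illustrative relative-error thresholds; the engine's real cutoffs
// are not exposed by the prototype.
function classifyDivergence(expected: number, observed: number): Severity {
  const rel = Math.abs(observed - expected) / Math.max(Math.abs(expected), 1e-9);
  if (rel < 0.05) return "nominal";
  if (rel < 0.15) return "watch";
  if (rel < 0.3) return "warning";
  return "critical";
}

console.log(classifyDivergence(100, 100)); // "nominal": battery as expected
```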

Evidence & report

Mission Evidence Report

Event Log

  • Mission initialized. CubeSat relay established.

Competition mode

Space Mission Challenge Day

Run this CubeSat–Rover Relay Mission as a classroom challenge. Teams compete to achieve the highest scorecard while demonstrating safety, reasoning, and evidence quality. All scoring is local and manual — no online leaderboard or automatic submissions.

How to Run

  1. Divide the class into teams of 3–5 students. Assign team roles (see below).
  2. Each team opens the mission on their own device or shared screen.
  3. Teams complete the mission and fill in the diagnosis and reflection prompts.
  4. Each team copies their scorecard report and shares it with the teacher (paste, print, or screenshot).
  5. The teacher reviews reports, facilitates a class discussion on safety vs. speed, and announces results.

Suggested Team Roles

Mission Commander

Leads decision-making and sets mission priorities.

Rover Operator

Issues commands and manages the rover timeline.

Telemetry Analyst

Monitors telemetry, identifies anomalies and divergences.

Safety Officer

Watches battery, enforces safe thresholds, calls abort if needed.

Evidence Reporter

Documents mission evidence, writes the diagnosis and reflection.

Teacher Scoring Guide

  • The 100-point scorecard is a formative rubric — use it as a discussion starter, not a final grade.
  • Reward safety and reasoning over speed — the fastest run is not necessarily the best mission score.
  • Ask teams to justify their score with specific telemetry evidence from the report.
  • Divergence diagnosis shows engineering thinking — reward effort even if the answer is imperfect.
  • Evidence quality matters as much as mission completion.
  • More commands is not always better — efficient resource management earns more points.

Common Misconceptions

  • • "Fastest team wins" — Speed is not the primary criterion; safety and reasoning are weighted equally.
  • • "More commands = better" — Efficient command usage earns higher resource management points.
  • • "This is an official grade" — The scorecard is a local formative rubric, not a certified assessment.
  • • "Copying the report submits it" — Reports are shared manually (paste/print). No automatic submission.
  • • "There is a leaderboard" — All scoring is local and manual. There is no online leaderboard.
  • • "Score alone matters" — Evidence quality and reasoning justification matter as much as the number.

Local / manual only. No online leaderboard, no automatic submissions, no backend accounts, no official grades. Students share their report manually. The teacher reviews using the rubric notes above. This keeps the v0 simple and privacy-friendly.

Teacher guide

Facilitation Guide

Timing presets

  • 15-minute demo: Show the pass window cycle, queue 2-3 commands, demonstrate priority selection, show telemetry delivery during the window. Discuss what happens in blackout.
  • 45-minute classroom: Students work in pairs. Queue commands, select priority mode, complete scan, receive critical telemetry, manage battery. Debrief: compare strategies and packet drop rates.
  • 90-minute workshop: Full mission with multiple pass cycles. Students experiment with different priority strategies, discuss trade-offs, write reflection answers. Extension: what if the window was only 30 seconds?

Discussion prompts

  • Which packet did you prioritize first and why?
  • What happened when the relay window closed?
  • What telemetry did you need before making the next command?
  • How would your plan change with a shorter pass window?
  • Why can't images and critical telemetry always both be sent?
  • What does "command delay" mean during blackout?
  • Which track concept helped most: orbit, comm, power, or telemetry?
  • The digital twin expected critical packets to arrive first — what actually happened and why?
  • How does changing priority mode affect what the ground station receives vs what was expected?

Common misconceptions

  • Misconception: Satellites are always connected.
    Reality: CubeSats pass overhead for limited windows. Between passes, there is blackout.
  • Misconception: More data is always better.
    Reality: Large packets (images) can crowd out critical health/battery telemetry.
  • Misconception: Images should always be prioritized.
    Reality: Images can block critical telemetry. Health and safety data usually comes first.
  • Misconception: Command timing doesn't matter.
    Reality: Commands sent during blackout are delayed until the next pass window.
  • Misconception: Routine telemetry is just as urgent as safety packets.
    Reality: Critical health and battery packets should be prioritized over routine data.
  • Misconception: This is real CubeSat relay software.
    Reality: This is a teaching-grade model. Real relay operations involve certified RF analysis.
  • Misconception: If telemetry diverges from expected, something is always broken.
    Reality: Divergence can be caused by timing, queue order, or capacity limits — not only hardware faults. Multiple clues are needed.

Boundary reminder: This is a software-only, browser-local teaching prototype. It does not connect to real CubeSat or rover hardware, does not use certified orbital mechanics or RF link budgets, and should not be presented as professional relay operations software.

Scorecard boundaries

What This Scorecard Is — and Is Not

  • Formative rubric — helps students reflect on mission performance across safety, reasoning, and evidence.
  • Local-only — computed in your browser. Not stored, not submitted, not synced to any server.
  • Teaching-grade — designed for classroom discussion, not certified engineering assessment.
  • Not an official grade — should not be used as a formal academic grade without teacher judgment.
  • Not a leaderboard — there is no online ranking, no automatic competition backend.
  • Manual sharing — students copy/print their report and share it with the teacher manually.
  • No AI scoring — all scoring is deterministic rules, not LLM or ML-based grading.
  • No backend — no accounts, no submissions, no cloud storage, no teacher dashboard.

Capability boundary

What this is — and is not

  • Teaching-grade model: Deterministic pass-window simulation for education, not certified relay/flight software.
  • Software-only: No real satellite, rover, or ground station hardware is connected.
  • Browser-local: All state runs in the browser. No backend, no accounts, no submissions saved.
  • Simplified orbital model: The pass schedule is a fixed time-based cycle, not real-time orbital propagation.
  • No real RF analysis: Communication is modeled conceptually, not with certified link budgets.
  • Packet prioritization demo: Priority modes illustrate concepts, not production algorithms.
  • No real telemetry: Telemetry packets are simulated, not from actual spacecraft subsystems.
  • Prototype v0: Second interactive mission challenge. Deeper simulation planned for future phases.