CubeSTEM Digital Twin · Track 0

Orientation: From Mission Idea to Digital Twin

A four-session mini-course that builds mission vocabulary, subsystem reasoning, trade-off thinking, and an honest software-first test mindset.

Local-only mini-course: no account, no submissions, no gradebook, and no public remote hardware control.

What this mini-course teaches

Understand what a CubeSat mission is and why digital simulation helps before hardware.

Mini-course flow

Four activities, in order

Each activity is implemented as browser-based interactive software. Evidence and self-checks are local-only — copy, export, or screenshot if you want to share.

Implemented · Local evidence · Local self-check

Session 1

What is a CubeSat Mission?

15–20 min

Implemented · Local evidence · Local self-check

Learning goal: Student can explain what a CubeSat is, why missions need planning, and what a mission objective means in plain language.

Expected evidence (local)

  • Selected mission objective recorded in the activity
  • Top three subsystems identified for that objective
  • One-sentence mission objective stated in the mission brief panel

Session 2

Subsystem Detective

35–45 min

Implemented · Local evidence · Local self-check

Learning goal: Student can identify major CubeSat subsystems, explain each subsystem’s role, and justify which subsystem is involved when a mission clue or symptom appears.

Expected evidence (local)

  • Clue match summary (which clues mapped to which subsystem)
  • Selected mission symptom and subsystem diagnosis with reasoning
  • Reflection on subsystem vocabulary

Session 3

Mission / Subsystem Trade-off

40–50 min

Implemented · Local evidence · Local self-check

Learning goal: Student can explain how a mission objective changes subsystem priorities and justify at least one engineering trade-off using evidence.

Expected evidence (local)

  • Mission type and example objective recorded
  • Subsystem priorities and budget used / remaining
  • Trade-off warning explanation and selected design strategy

Session 4

Digital Twin Before Hardware

15–20 min

Implemented · Local evidence · Local self-check

Learning goal: Student can explain what a digital twin is, give one learning benefit, and name one honest limit of today’s CubeSTEM twin.

Expected evidence (local)

  • Selected test area recorded in the planner
  • Comparison notes: software digital twin vs physical hardware / classroom validation path
  • Evidence checklist selections captured in the exported or copied test plan summary

Teacher plan

Track 0 Orientation mini-course (facilitated, local-only)

Use this pack to run Track 0 as a coherent mini-course. Evidence is captured locally (copy/export/screenshot) — no submissions, rosters, or gradebook.

45-minute option

45 min

  • 5 min: frame mission objective vs payload; set local-only boundary.
  • 20 min: run Activity 00.1 as a whole-class guided walkthrough.
  • 10 min: pairs defend top subsystems using on-screen cards.
  • 8 min: quick bridge to “software-first testing” and one exit-ticket prompt.

90-minute option

90 min

  • 10 min: warm-up + vocabulary; boundary note (no flight sim, no remote hardware control).
  • 30–35 min: Activity 00.2A Subsystem Detective (teams).
  • 25–30 min: Activity 00.3A Trade-off (teams) with a structured compare-out.
  • 10–15 min: evidence capture + assessment prompt discussion.

Half-day workshop

3–4 hours

  • Run all four activities in order with short debriefs between each.
  • Add a “gallery walk” where teams compare evidence artifacts (manual share).
  • End with Activity 00.4 to set expectations before Track 1 (orbit) labs.
  • Optional: supervised local bench discussion only (no remote lab claims).

Common misconceptions (Track 0)

  • A digital twin is the same as real hardware.

    A digital twin is a software representation used to practice, plan, or explain. It can support learning, but it does not replace hardware integration, measurement noise, or operational constraints.

  • Teaching-grade simulation equals a flight simulator.

    These activities are designed for conceptual understanding and bounded estimates. They are not flight-certified simulators or operational mission design tools.

  • Mission objective and payload are the same thing.

    The objective is the mission goal (what you must accomplish). The payload is the instrument/service used to meet that goal, which then drives subsystem requirements.

  • If it’s in the browser, the teacher can see it later.

    Progress, assessment, and evidence are local to the learner’s browser only. There is no account, sync, roster, or submission workflow in this product phase.
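The gap named in the first misconception can be made concrete with a toy sketch. The numbers, the linear drain model, and the noise figures below are invented for illustration; they are not CubeSTEM's actual models:

```python
# Illustrative only: a clean twin prediction vs a noisy bench "measurement".
# All values here are invented; real sensor noise depends on the hardware.
import random

def model_battery_voltage(t_minutes: float) -> float:
    """Idealized twin: battery drains linearly from 8.2 V."""
    return 8.2 - 0.01 * t_minutes

def measured_battery_voltage(t_minutes: float) -> float:
    """Bench measurement: same trend, plus sensor noise and a small offset."""
    return model_battery_voltage(t_minutes) + random.gauss(0.05, 0.02)

t = 30.0
print(f"twin predicts {model_battery_voltage(t):.2f} V, "
      f"bench reads {measured_battery_voltage(t):.2f} V")
```

The twin's answer is the same every run; the bench reading is not, which is exactly the difference students should be able to articulate.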

Facilitation prompts (use across activities)

  • Ask: “Who benefits from this mission, and how would they know it worked?”
  • Ask: “What changes when the mission objective changes?”
  • Ask: “What would you test in software first, and what needs a supervised bench later?”
  • Prompt: “Make one claim, then point to the on-screen evidence that supports it.”

Expected evidence (by activity)

What is a CubeSat Mission?

15–20 min


Assessment prompt: In one sentence, state what your mission is trying to accomplish, and name one design choice that follows from it.

  • Selected mission objective recorded in the activity
  • Top three subsystems identified for that objective
  • One-sentence mission objective stated in the mission brief panel
  • Self-check: payload, power, data/communication, and pointing considerations addressed

Subsystem Detective

35–45 min


Assessment prompt: Identify the subsystem from a clue or symptom and justify your reasoning with one piece of evidence.

  • Clue match summary (which clues mapped to which subsystem)
  • Selected mission symptom and subsystem diagnosis with reasoning
  • Reflection on subsystem vocabulary
  • Self-check summary and copied evidence artifact text

Mission / Subsystem Trade-off

40–50 min


Assessment prompt: Justify one trade-off created by your mission objective: which subsystem went up, which went down, and why is that defensible?

  • Mission type and example objective recorded
  • Subsystem priorities and budget used / remaining
  • Trade-off warning explanation and selected design strategy
  • Reflection on the accepted trade-off
  • Self-check summary and copied evidence artifact text

Digital Twin Before Hardware

15–20 min


Assessment prompt: What is one question you would ask an engineer to check whether a result came from a real simulator run vs a teaching estimate?

  • Selected test area recorded in the planner
  • Comparison notes: software digital twin vs physical hardware / classroom validation path
  • Evidence checklist selections captured in the exported or copied test plan summary
  • Reflection on what stays software-first vs optional classroom hardware

Boundary reminder: local-only learning experience (no accounts, no submissions, no grades), no public remote hardware control, and teaching-grade models (not a flight simulator).

Student path

Start here, then follow the four steps

Track 0 progress is stored locally in this browser only. There’s no login, sync, or submission system — you share evidence by copying or screenshotting what you create.

Step 1

What is a CubeSat Mission? (15–20 min)

What to do: Pick a mission objective and use the builder to connect objective → payload → subsystems. Write a one-sentence mission objective you can defend.

What to copy as evidence (local)

  • Selected mission objective recorded in the activity
  • Top three subsystems identified for that objective
  • One-sentence mission objective stated in the mission brief panel
  • Self-check: payload, power, data/communication, and pointing considerations addressed
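The objective → payload → subsystems chain from this step can be sketched as a small lookup table. The objectives, payloads, and subsystem lists below are invented examples, not the activity's actual content:

```python
# Hypothetical illustration of the objective -> payload -> subsystems chain.
# Names are examples only; the activity defines its own objectives.
MISSIONS = {
    "wildfire imaging": {
        "payload": "multispectral camera",
        "top_subsystems": ["power", "attitude control", "communications"],
    },
    "store-and-forward messaging": {
        "payload": "UHF radio transponder",
        "top_subsystems": ["communications", "power", "command & data handling"],
    },
}

def mission_brief(objective: str) -> str:
    """One-sentence brief: objective drives payload, payload drives subsystems."""
    m = MISSIONS[objective]
    subs = ", ".join(m["top_subsystems"])
    return f"Objective: {objective}. Payload: {m['payload']}. Priority subsystems: {subs}."

print(mission_brief("wildfire imaging"))
```

The point of the sketch is the direction of the arrows: changing the objective changes the payload, and the payload in turn changes which subsystems matter most.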

Step 2

Subsystem Detective (35–45 min)

What to do: Match clues to subsystems, then diagnose one mission symptom. Write your reasoning in plain language, not just subsystem names.

What to copy as evidence (local)

  • Clue match summary (which clues mapped to which subsystem)
  • Selected mission symptom and subsystem diagnosis with reasoning
  • Reflection on subsystem vocabulary
  • Self-check summary and copied evidence artifact text

Step 3

Mission / Subsystem Trade-off (40–50 min)

What to do: Choose a mission type, allocate a limited priority budget, and read the trade-off warnings. Pick a strategy and explain one trade-off you accepted.

What to copy as evidence (local)

  • Mission type and example objective recorded
  • Subsystem priorities and budget used / remaining
  • Trade-off warning explanation and selected design strategy
  • Reflection on the accepted trade-off

Step 4

Digital Twin Before Hardware (15–20 min)

What to do: Choose a test area and compare what software can help with vs what needs a supervised bench. Build a simple test plan and boundary statement.

What to copy as evidence (local)

  • Selected test area recorded in the planner
  • Comparison notes: software digital twin vs physical hardware / classroom validation path
  • Evidence checklist selections captured in the exported or copied test plan summary
  • Reflection on what stays software-first vs optional classroom hardware

Local-only note: your evidence is visible to a teacher only if you choose to share it manually (copy/paste, export, or screenshot).

Evidence checklist

What “counts” as evidence in Track 0

Evidence is local-only: capture it by copying/exporting text or taking a screenshot of the evidence panels. This is not a submission system and not a gradebook.

What is a CubeSat Mission?

  • Mission brief (objective + who benefits)
  • Payload and top subsystems (objective → subsystem reasoning)
  • One-sentence objective in student’s own words
  • Reflection: why the objective changes which subsystems matter most

Subsystem Detective

  • Clue match summary (which clues mapped to which subsystem)
  • Symptom diagnosis (chosen subsystem + reasoning)
  • One “check first” step (what you would inspect next and why)
  • Reflection on subsystem vocabulary

Mission / Subsystem Trade-off

  • Selected mission type / example objective
  • Subsystem priorities and remaining budget (what went up/down)
  • Trade-off warning explanation + chosen design strategy
  • Reflection: one trade-off you accepted and why it’s defensible

Digital Twin Before Hardware

  • Chosen test focus area
  • Software-first vs supervised hardware comparison notes
  • Test plan checklist selections (what evidence you plan to collect)
  • Boundary statement: what today’s twin does not simulate/replace

Reminder: teachers can’t “see” evidence automatically — learners share it manually (copy/paste, export, screenshot).

Next step after Track 0

Track 1 — Launch, Gravity & Orbit Basics

When you’re ready, continue into Track 1 to learn orbit as free-fall, contact windows, and early orbit trade-offs.
