CubeSTEM Digital Twin · Track 2
Track 2 — Mission Design / Payload
A four-activity mini-course: choose an objective, see how payload drives subsystem needs, estimate payload data vs downlink capacity, and define measurable success criteria tied to evidence.
Local-only mini-course: no account, no submissions, no gradebook. Teaching-grade mission design — not a requirements database, not CAD, not automated mission verification.
What this mini-course teaches
Choose a mission objective, connect payload choices to subsystem needs, estimate payload data against downlink capacity, and define measurable success criteria tied to evidence.
Mini-course flow
Four sessions, one mission story
Follow the sequence below to move from a mission objective → payload-driven subsystem needs → payload data constraints → measurable mission success criteria. Evidence and self-checks are local-only — copy, export, or screenshot if you want to share.
Recommended pacing: treat each activity as one session. Use the “Next” link inside the activity pages to continue in order.
Session 1
Choose a Mission Objective
Time estimate: 20–25 min
Learning goal: Student can state a clear mission objective and explain how it drives system requirements.
Expected evidence (local)
- Written objective + three derived needs
- Student explains why vague objectives fail in design reviews
Session 2
Payload Drives the Mission
Time estimate: 25–30 min
Learning goal: Student can explain how a chosen payload determines power, pointing, data, and thermal requirements.
Expected evidence (local)
- Dependency diagram: payload → ADCS/power/comms/thermal
- Student names one new requirement when swapping payload types
Session 3
Payload Data Generation
Time estimate: 25–30 min
Learning goal: Student can estimate data volume from a payload and relate it to downlink constraints.
Expected evidence (local)
- Rate × time calculation checked against template utilization
- Student explains backlog if generation > downlink
Session 4
Mission Success Criteria
Time estimate: 30–40 min
Learning goal: Student can write a set of mission success criteria and explain how they connect to telemetry evidence.
Expected evidence (local)
- Success criteria table with measurable thresholds
- Replay quote: chart snippet or metric tied to pass/fail
Teacher plan
Track 2 mini-course (facilitated, local-only)
Use this pack to teach mission design / payload thinking as a coherent four-activity mini-course. Evidence and self-check are local-only (copy/export/screenshot) — no submissions, rosters, or gradebook.
45-minute mission design demo
45 min
- 5 min: name the boundary (teaching model; local-only; not a requirements DB).
- 15–18 min: Activity 2.1 — students rewrite vague → testable objective (user, measure, threshold).
- 10 min: Activity 2.2 — show payload → subsystem dependency map and one trade-off sentence.
- 10 min: exit ticket — one measurable success criterion + what evidence would prove it.
90-minute class/lab
90 min
- 10 min: vocabulary + boundary (teaching-grade; no CAD; no automated verification).
- 20 min: Activity 2.1 + capture mission brief evidence.
- 20 min: Activity 2.2 + dependency map evidence.
- 20 min: Activity 2.3 Payload Data Generation — backlog Y/N and one mitigation.
- 20 min: Activity 2.4 Mission Success Criteria — bind each criterion to telemetry/evidence source.
Half-day workshop
3–4 hours
- Run all four activities in order with short debriefs after each.
- Anchor on Mission Design Lab budgets to compare student decisions against template flags.
- Add a gallery walk: teams compare exported evidence artifacts (manual share).
- End with the bridge to Track 3 (Power / Thermal / Budgets) framing only — no Track 3 implementation in this phase.
Common misconceptions (Track 2)
A mission objective is the same thing as a payload.
The objective is the goal (what success means for users on Earth). The payload is the instrument or service that helps achieve it.
The payload is just one component — it doesn’t affect anything else.
Payload choice drives power, ADCS pointing, comms data rate, OBC/data storage, and thermal needs. Swap the payload and other subsystems shift too.
If the camera generates more data, you just store it — no problem.
If average generation exceeds total downlink across contact windows, a backlog grows. Mitigate with lower duty cycle, more stations, higher rate, or prioritization.
Success criteria are vibes — ‘the mission worked.’
Useful criteria are measurable: who benefits, what is measured, with a threshold. Each criterion needs an evidence source (telemetry, run summary, or budget field).
Mission Design Lab is a flight-grade requirements tool.
It is a teaching estimate. No CAD, no requirements database, no automated mission verification, no certified payload simulation.
Facilitation prompts (use across activities)
- Ask: “Who is the user on Earth, and what changes for them when this mission succeeds?”
- Ask: “If you swap the payload, which subsystem is affected first? Why?”
- Ask: “Where does the data go after it is generated, and how much fits through the contacts you have?”
- Prompt: “Pick one criterion. What channel proves it passed?”
- Prompt: “Make one claim, then point to the evidence artifact that supports it.”
Expected evidence (by activity)
Choose a Mission Objective
Time estimate: 20–25 min
Assessment prompt: Rewrite a vague objective (“take pictures”) into a testable objective with who, what, and why it matters.
- Written objective + three derived needs
- Student explains why vague objectives fail in design reviews
Payload Drives the Mission
Time estimate: 25–30 min
Assessment prompt: If you switch from a low-rate beacon to a high-rate imager, name two subsystems that change and why.
- Dependency diagram: payload → ADCS/power/comms/thermal
- Student names one new requirement when swapping payload types
Payload Data Generation
Time estimate: 25–30 min
Assessment prompt: What happens to onboard storage if you collect data faster than you can downlink? Name one mitigation.
- Rate × time calculation checked against template utilization
- Student explains backlog if generation > downlink
Mission Success Criteria
Time estimate: 30–40 min
Assessment prompt: Pick one criterion and state exactly which telemetry or budget field would prove it passed.
- Success criteria table with measurable thresholds
- Replay quote: chart snippet or metric tied to pass/fail
Boundary reminder: local-only learning experience (no accounts, no submissions, no grades). Mission Design Lab is a teaching model — not a requirements database, not CAD, not automated mission verification.
After Track 2
Next is Track 3: Mass, Power, Energy, Data & Thermal Budgets. If there is no Track 3 hub route yet, use the Curriculum Map as the safe next step.
Student path
What to do, step by step
This is a guided path, not a submission system. Your evidence and self-checks stay in your browser unless you copy, export, or screenshot them to share manually.
Tip: after each activity, open the Evidence panel and copy/export your artifact (text or JSON) before moving on.
Step 1
Choose a Mission Objective
Time estimate: 20–25 min
- Pick one of three mission scenarios (imaging / communication / climate monitoring).
- Rewrite the vague objective into a testable one — name the user, the measure, and one threshold (one illustrative rewrite is sketched after this list).
- List three derived needs (what the satellite must do).
- Capture evidence: scenario + rewritten objective + three needs + mission brief.
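One illustrative way to structure the rewrite is sketched below; the scenario, user, measure, threshold, and derived needs are invented for this example, not drawn from the activity templates:

    # Illustrative only: every value below is an assumption for discussion.
    vague_objective = "Take pictures of Earth"

    testable_objective = {
        "user":      "regional wildfire responders",
        "measure":   "cloud-free images of tasked hotspots delivered within 6 hours",
        "threshold": ">= 80% of tasked hotspots imaged per week",
    }
    derived_needs = [
        "point the imager at a tasked ground target",
        "store images onboard until the next ground-station contact",
        "downlink enough data per day to keep up with tasking",
    ]

    print(f"Vague: {vague_objective}")
    print(f"Testable: serve {testable_objective['user']}, measured by {testable_objective['measure']}, "
          f"pass if {testable_objective['threshold']}")
    for need in derived_needs:
        print(f"Derived need: {need}")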
Step 2
Payload Drives the Mission
Time estimate: 25–30 min
- Choose a payload (imager / beacon / sensor / radio).
- Inspect the dependency map to see which subsystems take on heavier requirements (a small sketch of such a map follows this list).
- Identify the two most strongly affected subsystems and write one trade-off sentence.
- Capture evidence: payload + changed subsystems + trade-off explanation.
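A minimal sketch of a dependency map captured as a data structure; the payload names, affected subsystems, and reasons are assumptions for illustration, not Mission Design Lab values:

    # Illustrative only: payload -> subsystems most affected, with a one-line reason each.
    PAYLOAD_DEPENDENCIES = {
        "high-rate imager": {
            "ADCS":    "tight pointing keeps targets in frame",
            "Comms":   "high downlink rate needed for large image volumes",
            "Power":   "imaging duty cycle raises orbit-average power draw",
            "Thermal": "detector may need a narrow temperature range",
        },
        "low-rate beacon": {
            "Comms":   "low data rate; a simple omni antenna may suffice",
            "Power":   "small, steady transmit load",
        },
    }

    payload = "high-rate imager"
    print(f"{payload} most affects: {', '.join(PAYLOAD_DEPENDENCIES[payload])}")

Swapping the payload key is one quick way to show that the list of affected subsystems, and therefore the derived requirements, shifts with it.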
Step 3
Payload Data Generation
Time estimate: 25–30 min
- Adjust resolution, frame rate, and active duty per orbit.
- Read estimated MB/orbit and MB/day.
- Compare to total contact-window downlink capacity per day (a worked sketch of this check follows the list).
- Capture evidence: settings + MB generated vs MB downlinkable + backlog Y/N + one mitigation.
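A minimal worked sketch of the rate × time check follows; every number is an assumption for illustration and will differ from your template values:

    # Illustrative numbers only, not Mission Design Lab template values.
    frames_per_orbit = 40        # images captured during the active duty window
    mb_per_frame     = 12.0      # MB per image at the chosen resolution
    orbits_per_day   = 15        # rough LEO ballpark

    generated_mb_day = frames_per_orbit * mb_per_frame * orbits_per_day      # 7200 MB/day

    contacts_per_day = 6         # usable ground-station passes per day
    contact_minutes  = 8         # average usable minutes per pass
    downlink_mbit_s  = 2.0       # radio data rate in megabits per second

    # Convert bits to bytes before comparing (8 bits = 1 byte).
    downlink_mb_day = contacts_per_day * contact_minutes * 60 * downlink_mbit_s / 8   # 720 MB/day

    backlog_mb_day = generated_mb_day - downlink_mb_day
    print(f"Generated {generated_mb_day:.0f} MB/day vs downlinkable {downlink_mb_day:.0f} MB/day")
    print("Backlog grows" if backlog_mb_day > 0 else "Downlink keeps up")

With these assumed numbers the payload generates roughly ten times what the contacts can carry, so backlog = Y and the evidence should name one mitigation (lower duty cycle, more stations, higher rate, or prioritization).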
Step 4
Mission Success Criteria
Time estimate: 30–40 min
- Pick a mission scenario.
- Write minimum-success and full-success criteria with measurable thresholds.
- Bind each criterion to a telemetry channel, run summary, or budget field (one way to record this is sketched after the list).
- Capture evidence: criteria table + thresholds + evidence sources + reflection.
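One possible way to record the criteria table so each criterion carries a measurable threshold and an evidence source is sketched below; the criteria, thresholds, and channel names are invented for illustration, not fields from the Mission Design Lab:

    # Hypothetical criteria for an imaging mission; all names and numbers are assumptions.
    success_criteria = [
        {"level": "minimum",
         "claim": "Downlink at least 50 cloud-free images in the first 30 days",
         "threshold": ">= 50 images",
         "evidence": "run summary: images_downlinked"},
        {"level": "full",
         "claim": "Battery never drops below the safe depth of discharge",
         "threshold": "state_of_charge >= 40%",
         "evidence": "telemetry channel: eps.state_of_charge"},
    ]

    for c in success_criteria:
        print(f"[{c['level']}] {c['claim']} | pass if {c['threshold']} | proof: {c['evidence']}")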
After Track 2
Next, continue toward Power / Thermal / Budgets. The Track 3 yardstick is not yet ready in this phase, so use the Curriculum Map and Mission Design Lab to decide what to look at next.
Evidence checklist
What to capture (local-only)
Evidence artifacts are local-only. There is no submission or teacher visibility workflow — copy/export (text+JSON) or screenshot to share manually.
Classroom routine tip: after each activity, copy/export one artifact per team and paste into a shared doc.
Choose a Mission Objective
⏱ 20–25 min
- Written objective + three derived needs
- Student explains why vague objectives fail in design reviews
Reflection prompt: Rewrite a vague objective (“take pictures”) into a testable objective with who, what, and why it matters.
Payload Drives the Mission
⏱ 25–30 min
- Dependency diagram: payload → ADCS/power/comms/thermal
- Student names one new requirement when swapping payload types
Reflection prompt: If you switch from a low-rate beacon to a high-rate imager, name two subsystems that change and why.
Payload Data Generation
⏱ 25–30 min
- Rate × time calculation checked against template utilization
- Student explains backlog if generation > downlink
Reflection prompt: What happens to onboard storage if you collect data faster than you can downlink? Name one mitigation.
Mission Success Criteria
⏱ 30–40 min
- Success criteria table with measurable thresholds
- Replay quote: chart snippet or metric tied to pass/fail
Reflection prompt: Pick one criterion and state exactly which telemetry or budget field would prove it passed.
Boundary reminder: Mission Design / Payload is a teaching model (not a requirements database, not CAD, not automated mission verification) and the experience is local-only (no accounts, no submissions, no grades).
Assessment map
Self-check prompts (not a grade)
Use these prompts as a discussion guide or local self-check after each activity. There is no submission pipeline, no gradebook, and no teacher visibility unless learners share evidence manually.
Practical assessment move: ask teams to answer the prompt, then point to one exported evidence line that supports their answer.
Choose a Mission Objective
Time estimate: 20–25 min
Discussion / self-check prompt: Rewrite a vague objective (“take pictures”) into a testable objective with who, what, and why it matters.
Misconceptions to watch for
- “Objective = payload” (correct: objective is the goal; payload is the instrument that helps achieve it).
- “‘Take pictures’ is a mission objective” (correct: needs user, measure, threshold).
Look for (local evidence)
- Written objective + three derived needs
- Student explains why vague objectives fail in design reviews
Payload Drives the Mission
Time estimate: 25–30 min
Discussion / self-check prompt: If you switch from a low-rate beacon to a high-rate imager, name two subsystems that change and why.
Misconceptions to watch for
- “Payload swaps don’t affect other subsystems” (correct: payload drives power, ADCS, comms, OBC, thermal).
- “Pointing is the same for any payload” (correct: imagers usually demand tighter pointing than beacons).
Look for (local evidence)
- Dependency diagram: payload → ADCS/power/comms/thermal
- Student names one new requirement when swapping payload types
Payload Data Generation
Time estimate: 25–30 min
Discussion / self-check prompt: What happens to onboard storage if you collect data faster than you can downlink? Name one mitigation.
Misconceptions to watch for
- “If we generate more, we just store it” (correct: average generation > average downlink → backlog).
- “Bits and bytes are interchangeable” (correct: keep one unit per lesson and convert before comparing; a small conversion sketch follows this activity block).
Look for (local evidence)
- Rate × time calculation checked against template utilization
- Student explains backlog if generation > downlink
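A small conversion sketch for that unit discipline, using assumed numbers: convert the radio's megabits per second into megabytes before comparing against MB generated.

    # Assumed values for illustration only.
    downlink_mbit_s = 2.0                 # radio rate, megabits per second
    pass_seconds    = 480                 # one 8-minute contact

    mb_per_pass = downlink_mbit_s * pass_seconds / 8   # 8 bits per byte -> about 120 MB per pass
    print(f"One pass moves about {mb_per_pass:.0f} MB")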
Mission Success Criteria
Time estimate: 30–40 min
Discussion / self-check prompt: Pick one criterion and state exactly which telemetry or budget field would prove it passed.
Misconceptions to watch for
- “The mission worked” is a criterion (correct: criteria need who/what/threshold).
- “We can verify in our heads” (correct: each criterion needs an evidence source — telemetry, summary, or budget field).
Look for (local evidence)
- Success criteria table with measurable thresholds
- Replay quote: chart snippet or metric tied to pass/fail
Capability boundary
Track 2 is a teaching mini-course, not a flight-grade tool
- No requirements database — students write text and self-check.
- No CAD or mass-properties solver.
- No automated mission verification — pass/fail is student-judged.
- No certified payload simulator — payload behavior is taught with bounded estimates.
- Mission Design Lab budgets, orbit/contact labels, and risk flags are teaching estimates.
- Local-only experience: no accounts, no submissions, no gradebook, no teacher visibility.
- No public remote hardware control.
Four activities (quick view)
- Choose a Mission Objective — Select a mission objective and understand how it shapes all other decisions.
- Payload Drives the Mission — Understand how payload type and power/data needs shape every other subsystem decision.
- Payload Data Generation — Estimate how much data a payload generates and what that means for onboard storage and downlink.
- Mission Success Criteria — Define measurable mission success criteria and understand how they drive test and verification.
Next step after Track 2
Track 3 — Power / Thermal / Budgets
Continue into the Track 3 yardstick mini-course (power budgeting, eclipse energy balance, thermal hot/cold screening, and resource trade-offs). If the Track 3 hub route is not yet available, the Curriculum Map remains the safe bridge.