CubeSTEM Digital Twin

Teacher Mode

Plan sessions across the eight-track CubeSat journey (Tracks 0–7) with lesson packs, timing presets, and honest capability boundaries — browser-local; no login required.

Teacher Mode here is a local planning aid. It helps you choose activities, estimate timing, surface outcomes and misconceptions, and generate copyable session plan text. It does not introduce a backend, teacher accounts, class rosters, student submissions, or auto-grading.

Timing

Lesson timing presets

Pick a pacing preset, then use a lesson pack or the activity planning table to build a session.

15-minute demo

15 min

Preset

Whole-class introduction, club demo, or a fast workshop opener.

Suggested flow

  • Hook (2 min): What is a mission objective?
  • Activity (8–10 min): run one interactive route together
  • Debrief (3–5 min): one misconception + one evidence prompt

Boundary: Use the interactive activities as teaching-grade models; avoid claims of flight-certified simulation or automatic grading.

45-minute class lesson

45 min

Preset

A full period with short discussion, activity time, and reflection/evidence capture.

Suggested flow

  • Warm-up + vocabulary (5–8 min)
  • Interactive activity (18–22 min)
  • Pair discussion + evidence prompt (10–12 min)
  • Exit ticket (3–5 min): one assessment prompt

Boundary: No student accounts or roster engine in MVP-final; evidence collection is teacher-facilitated and local.

90-minute lab/workshop

90 min

Preset

Deeper exploration with two activities, comparisons, and a structured debrief.

Suggested flow

  • Orientation + boundaries (10 min)
  • Activity A (25 min) + mini-debrief (10 min)
  • Activity B (25 min) + comparison (10 min)
  • Evidence + assessment prompts (10 min)

Boundary: Hardware work remains optional and local; no public remote hardware control is introduced in this phase.

University seminar extension

2–3 hours (extendable)

Preset

Seminar format: reasoning, assumptions, and model limitations with richer discussion.

Suggested flow

  • Model limitations discussion (10–15 min)
  • Run + critique (45–60 min)
  • Evidence-focused debrief (30–40 min)
  • Optional extensions and next steps (20–30 min)

Boundary: Keep language precise: these are teaching models and estimates, not STK/GMAT-grade orbit propagation and not a flight simulator.

Lesson packs

Recommended lesson packs

Each pack links only to routes that exist today, with honest notes when a dedicated activity page is not available yet.

Orientation starter

Start with Activity 00.1, then use the curriculum map to pick your next path.

All Levels

Outcomes

  • Students can explain what a CubeSat is and why missions start with objectives.
  • Students can name basic subsystems (power, comms, pointing, payload).

Available now

  • Activity 00.1 interactive route
  • Curriculum map planning surface

Boundary: Local planning aid only. No roster, no student accounts, and no automated assessment workflow in MVPF6.

Track 0 Orientation mini-course

A coherent four-activity onboarding sequence: mission → subsystems → trade-offs → digital twin boundaries (local-only).

All Levels

Outcomes

  • Students can explain mission objective vs payload in plain language.
  • Students can identify major subsystems and diagnose a clue/symptom with reasoning.
  • Students can justify one trade-off created by a mission objective using on-screen evidence.
  • Students can explain why teams practice in software before hardware and state one honest limitation of today’s twin.

Available now

  • All four Track 0 activity routes
  • Track 0 overview page with teacher/student/evidence surfaces
  • Student Mode foundation (local progress) and Teacher planning tools

Boundary: Local-only: no login, no roster, no submissions, and no official grading. Evidence is shared manually. No public remote hardware control.

Digital twin before hardware

Set expectations: what the twin helps with today, and what it does not replace.

All Levels

Outcomes

  • Students can define “digital twin” in plain language.
  • Students can state one benefit and one honest limit of the current CubeSTEM twin.

Available now

  • Interactive planning activity
  • Clear boundary routes

Boundary: This pack is about honest capability framing. Do not promise remote labs, accounts, or automatic grading.

Orbit/contact starter

Teach orbit as free-fall + why contact windows are brief and scheduled.

Middle School

Outcomes

  • Students explain orbit as continuous free fall (not “no gravity”).
  • Students explain line-of-sight and why contacts are not continuous.

Available now

  • Track 1 interactive routes

Boundary: Orbit/contact routes are teaching-grade models. Avoid claims of operational orbit propagation or RF link-budget truth.

Track 1 mini-course — Launch, Gravity, and Orbit

A coherent five-activity sequence: free-fall → altitude vs period → orbit-class trade-offs → ground track/coverage → contact windows (local-only).

Middle School

Outcomes

  • Students explain orbit as continuous free fall (not “no gravity”).
  • Students explain how altitude changes period and speed (estimates).
  • Students compare LEO vs higher orbits as a trade-off discussion (qualitative).
  • Students explain how inclination controls latitude coverage and ground track behavior.
  • Students explain why contact windows are brief and how they limit downlink/ops.

Available now

  • All five Track 1 interactive activity routes
  • Track 1 overview page with mini-course packaging
  • Local evidence capture with copy/export (text+JSON) on each activity
  • Local assessment self-check blocks on each activity

Boundary: Local-only: no login, no roster, no submissions, and no official grading. Evidence is shared manually. Teaching-grade orbit models only (not STK/GMAT, not certified propagator).

Track 2 mini-course — Mission Design and Payload Thinking

A coherent four-activity sequence: choose objective → payload drives mission → payload data generation → measurable success criteria (local-only).

High School

Outcomes

  • Students rewrite a vague mission idea into a testable objective with a user, a measurable outcome, and a threshold.
  • Students explain how payload choice drives power, ADCS pointing, communications, OBC/data, and thermal needs.
  • Students estimate payload data per orbit/day and compare against contact-window downlink capacity.
  • Students decide a mitigation if generation exceeds downlink (lower duty / more contacts / higher rate / prioritize).
  • Students write minimum and full success criteria with measurable thresholds bound to evidence sources.

Available now

  • All four Track 2 interactive activity routes
  • Track 2 overview page with mini-course packaging, teacher pack, student path, evidence checklist, assessment map, capability boundary
  • Local evidence capture with copy/export (text+JSON) on each activity
  • Local assessment self-check blocks on each activity
  • Mission Design Lab Track 2 learning-path bridge card

Boundary: Local-only: no login, no roster, no submissions, no official grading. Evidence is shared manually. Teaching-grade mission design only — not a requirements database, not CAD, not automated mission verification, not certified payload simulation.

Track 3 mini-course — Power / Thermal / Budgets

Four-session mini-course: power budgeting → eclipse energy balance → thermal hot/cold screening → resource trade-offs (local-only). Three extension items remain honest path-only entries.

High School

Outcomes

  • Students explain average vs peak power with duty-cycled loads.
  • Students relate sunlight/eclipse durations to stored energy using a teaching Wh estimate.
  • Students interpret simplified thermal risk language without claiming flight thermal analysis.
  • Students allocate finite resource points and defend an explicit trade-off.

Available now

  • Track 3 overview page with mini-course packaging, teacher pack, student path, evidence checklist, assessment map, capability boundary
  • Four interactive Track 3 core activity routes (power budget, day/night energy, thermal hot/cold, resource trade-off)
  • Local assessment self-check + evidence copy/export (text+JSON) on each activity
  • Three extension items preserved as honest path-only entries (no dedicated activity route)

Boundary: Teaching-grade models only — not certified power/thermal analysis, not battery safety certification, no remote hardware. Local-only evidence — no submissions or roster visibility.

Track 4 mini-course — Communication / Ground Link

Four-session mini-course: line-of-sight basics → data rate × contact time → link margin trade-off → command/telemetry flow (local-only).

High School

Outcomes

  • Students explain line of sight and minimum elevation as the gates for ground-station contact.
  • Students compute a teaching-grade daily downlink budget (data rate × contact time × passes) and identify backlog.
  • Students read a qualitative link-margin badge and explain one trade-off and one improvement.
  • Students prioritize uplink commands, telemetry, and payload data for short, lossy passes.

Available now

  • Track 4 overview page — complete mini-course packaging (teacher plan, student path, evidence checklist, assessment map, boundary note)
  • Four interactive activity routes: Line-of-Sight Communication, Data Rate × Contact Time, Link Margin Trade-off, Command / Telemetry Flow
  • Local Assessment Engine v0 self-check on each activity (2 quick checks + reflection + checklist; not a grade)
  • Local Evidence Engine v0 copy/export on each activity (text + JSON; no submission, no backend)
  • After-Track-4 bridge to Track 5 hub (attitude_control)

Boundary: Teaching-grade models only — not a certified RF link budget, not ITU/regulatory or licensed-radio analysis, no real satellite command, no SDR or remote hardware. Local-only evidence — no submissions or roster visibility.

Track 5 mini-course — ADCS / Attitude Control

Seven-session mini-course: why pointing matters → sensor estimation → step response → contact-window pointing → PID tuning → power-aware control → daylight vs eclipse evidence (local-only).

High School

Outcomes

  • Students distinguish orbit (trajectory) from attitude (pointing direction).
  • Students explain how sensor noise and drift affect attitude estimation quality.
  • Students read a step-response chart and identify overshoot, settling time, and wheel effort.
  • Students describe the acquire → track → handoff sequence for a contact-window pass.
  • Students compare gentle vs aggressive PID tuning and justify a choice for a mission scenario.
  • Students explain why power-aware control is used during eclipse or low-battery phases.
  • Students identify at least one telemetry difference between daylight and eclipse from evidence.

Available now

  • Track 5 overview page — complete mini-course packaging (teacher plan with 45/90/half-day presets, student path, evidence checklist, assessment map, boundary note)
  • Seven interactive activity routes: Why Pointing Matters, Attitude Hold Basics, Step Response, Contact Window Pointing, Gentle vs Aggressive, Power-Aware, Daylight vs Eclipse
  • Local Assessment Engine v0 self-check on each activity (2–3 MCQs + short-answer reflection + checklist; not a grade)
  • Local Evidence Engine v0 copy/export on each activity (text + JSON; no submission, no backend, no roster visibility)
  • After-Track-5 bridge to Track 6 hub (telemetry_evidence) when registered; safely falls back to Curriculum Map (no 404)

Boundary: Teaching-grade one-axis models only — not full 3-axis flight ADCS, not a reaction-wheel safety certification, not remote hardware control, not official attitude determination software, not a certified ADCS simulation tool. Local-only evidence — no submissions or roster visibility.

Track 6 mini-course — Telemetry / Evidence

Five-session mini-course: telemetry stream basics → subsystem health thresholds → replay timeline evidence → anomaly detective → mission evidence debrief (local-only).

High School

Outcomes

  • Students decode a telemetry packet: name, value, unit, and timestamp for each field.
  • Students interpret subsystem health bands (nominal / warning / critical) and state the first action per band.
  • Students use a replay timeline to build an evidence-backed debrief statement (channel + value + timestamp).
  • Students reason from multiple anomaly clues to a likely subsystem and a next check.
  • Students link telemetry evidence to pre-defined success criteria in a mission debrief.

Available now

  • Track 6 overview page — complete mini-course packaging (teacher plan with 15-min/45-min/90-min/half-day presets, student path, evidence checklist, assessment map, boundary note)
  • Five interactive activity routes in correct teaching order: Stream Basics → Health Thresholds → Replay Timeline → Anomaly Detective → Mission Debrief
  • Local Assessment Engine v0 self-check on each activity (3 MCQs per session; not a grade)
  • Local Evidence Engine v0 copy-text on each activity (no submission, no backend, no roster visibility)
  • After-Track-6 bridge to Track 7 hub (ai_ml_autonomy) when registered; safely falls back to Curriculum Map (no 404)

Boundary: Teaching-grade telemetry models only — not real satellite telemetry, not a ground-station command interface, not certified anomaly diagnosis, not flight operations. Local-only evidence — no submissions, no roster visibility, no official grading.
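
The first outcome above (decoding a telemetry packet) can be previewed with a tiny sketch; the packet format and field names below are hypothetical and are not the activity's real stream.

  # Decode one telemetry packet into name, value, unit, and timestamp (teaching sketch).
  packet = {"t": "2025-01-01T12:00:05Z", "ch": "battery_voltage", "value": 7.9, "unit": "V"}

  name, value, unit, timestamp = packet["ch"], packet["value"], packet["unit"], packet["t"]
  print(f"{timestamp}: {name} = {value} {unit}")   # 2025-01-01T12:00:05Z: battery_voltage = 7.9 V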

Track 7 mini-course — AI / ML Autonomy

Five-session mini-course: autonomy levels → features, labels & data quality → anomaly classifier → confidence and false alarms → human-in-the-loop decision review (local-only, teaching-grade).

High School

Outcomes

  • Students can describe three levels of spacecraft autonomy and explain human oversight at each level.
  • Students can define feature and label, select useful features from telemetry, and assign a correct label.
  • Students can run a teaching classifier, interpret confidence and contributing features, and compare rule-based vs preset ML.
  • Students can adjust sensitivity, read a confusion matrix, and justify a threshold setting by mission risk profile.
  • Students can review evidence cards, apply a safety rule check, choose an action, and write a decision debrief.

Available now

  • Track 7 hub: /twin/learn/tracks/ai_ml_autonomy
  • Session 1: /twin/learn/activities/ai_autonomy_basics
  • Session 2: /twin/learn/activities/aiml_assisted_classification
  • Session 3: /twin/learn/activities/aiml_normal_vs_abnormal
  • Session 4: /twin/learn/activities/aiml_simple_fault_rules
  • Session 5: /twin/learn/activities/aiml_autonomous_safe_mode
  • Local Assessment Engine v0 (self-check, 3 questions per session)
  • Local Evidence Engine v0 (copy/export text or JSON, browser-local)
  • Teacher delivery options map to 45-minute demo, 90-minute lab, and half-day workshop/seminar pacing

Boundary: Teaching-grade AI/ML models only — not certified AI, not flight software, not real onboard autonomy, not real satellite commands, not certified anomaly diagnosis. Local-only evidence — no submissions, no roster visibility, no LMS integration, no official grading.

Shell preview planning

Preview the standard activity shell used for future activity standardization.

All Levels

Outcomes

  • Teachers understand the standard activity structure (outcomes, vocab, primer, evidence, assessment preview).

Available now

  • Shell preview route
  • Reusable shell components for future migrations

Boundary: Preview-only: do not use the shell preview as the assignment route. Use Activity 00.1 as the yardstick route for a full end-to-end example, and use shell-preview only to inspect the reusable layout blocks.

Hardware boundary briefing

Set classroom expectations for hardware vs software, safely and honestly.

All Levels

Outcomes

  • Students can describe what is software-first today and what is hardware extension later.
  • Students can explain why “no public remote hardware control” is a safety boundary.

Available now

  • Hardware boundary route
  • Software-first learning tracks

Boundary: No remote hardware lab control or public hardware access is provided. Hardware use remains local and supervised.

Planning rows

Activity planning table

Derived from the mission learning path. Use it to choose what to teach today, estimate timing, and keep readiness honest.

What is a CubeSat Mission?

Orientation · All Levels · 15–20 min

Available

Outcome: Student can explain what a CubeSat is, why missions need planning, and what a mission objective means in plain language.

Teacher use: 15-minute whole-class orientation before opening Mission Design; emphasize objective-first thinking.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

In one sentence, state what your mission is trying to accomplish and name one design choice that follows from it.

Expected evidence

  • Selected mission objective recorded in the activity
  • Top three subsystems identified for that objective
  • One-sentence mission objective stated in the mission brief panel
  • Self-check: payload, power, data/communication, and pointing considerations addressed

Subsystem Detective

Orientation · All Levels · 35–45 min

Available

Outcome: Student can identify major CubeSat subsystems, explain each subsystem’s role, and justify which subsystem is involved when a mission clue or symptom appears.

Teacher use: After Activity 00.1, bridge mission objectives to architecture: students practice naming subsystems from clues before Digital Twin Before Hardware.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Identify the subsystem from a clue or symptom and justify your reasoning with one piece of evidence.

Expected evidence

  • Clue match summary (which clues mapped to which subsystem)
  • Selected mission symptom and subsystem diagnosis with reasoning
  • Reflection on subsystem vocabulary
  • Self-check summary and copied evidence artifact text

Mission / Subsystem Trade-off

Orientation · All Levels · 40–50 min

Available

Outcome: Student can explain how a mission objective changes subsystem priorities and justify at least one engineering trade-off using evidence.

Teacher use: Run after Subsystem Detective to teach subsystem prioritization and how mission objectives drive engineering trade-offs.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Justify one trade-off created by your mission objective: which subsystem went up, which went down, and why is that defensible?

Expected evidence

  • Mission type and example objective recorded
  • Subsystem priorities and budget used / remaining
  • Trade-off warning explanation and selected design strategy
  • Reflection on the accepted trade-off

+ 1 more evidence item in the activity.

Digital Twin Before Hardware

Orientation · All Levels · 15–20 min

Available

Outcome: Student can explain what a digital twin is, give one learning benefit, and name one honest limit of today’s CubeSTEM twin.

Teacher use: Frame as risk reduction: cheaper to fail in simulation; connect to classroom lab safety and iteration.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

What is one question you would ask an engineer to check whether a result came from a real simulator run vs a teaching estimate?

Expected evidence

  • Selected test area recorded in the planner
  • Comparison notes: software digital twin vs physical hardware / classroom validation path
  • Evidence checklist selections captured in the exported or copied test plan summary
  • Reflection on what stays software-first vs optional classroom hardware

From Launch to Orbit

Launch, Gravity & Orbit Basics · Middle School · 20–25 min

Available

Outcome: Student explains orbit as continuous free fall: gravity plus sideways velocity, not “no gravity.”

Teacher use: Use the ball thought experiment before any numbers; stress language precision (microgravity vs no gravity).

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Why doesn’t the satellite fall straight down to Earth even though gravity pulls it toward Earth?

Expected evidence

  • Speed factor selected in the visualizer
  • Observed path class (falls back / near orbit / escape-like)
  • Evidence summary copied with gravity-inward, velocity-sideways, free-fall checklist

Orbit Speed and Altitude

Launch, Gravity & Orbit Basics · High School · 25–30 min

Available

Outcome: Student can estimate how period and speed change when altitude changes and compare two LEO cases.

Teacher use: Emphasize “estimate and reason” over memorizing exact km/s; disclose simplifying assumptions.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

If you raise orbit altitude, does period usually increase or decrease? Why, in one sentence tied to path length and speed?

Expected evidence

  • Altitude selected and recorded
  • Speed (km/s) and period (min) from the calculator
  • One-sentence explanation of altitude vs period trend
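
If you want to pre-compute values before class, the sketch below reproduces the teaching-grade estimate with a plain two-body circular-orbit formula; the constants and numbers are illustrative and may differ slightly from the activity's calculator.

  # Teaching-grade circular-orbit estimate (two-body, no drag, no J2);
  # an illustrative sketch, not the activity calculator's exact implementation.
  import math

  MU_EARTH = 3.986e14        # m^3/s^2, Earth's gravitational parameter
  R_EARTH = 6_371_000        # m, mean Earth radius

  def circular_orbit(altitude_km):
      """Return (speed in km/s, period in minutes) for a circular orbit."""
      r = R_EARTH + altitude_km * 1000
      speed = math.sqrt(MU_EARTH / r)       # v = sqrt(mu / r)
      period = 2 * math.pi * r / speed      # T = 2 * pi * r / v
      return speed / 1000, period / 60

  print(circular_orbit(400))   # ~7.7 km/s, ~92 min
  print(circular_orbit(800))   # ~7.5 km/s, ~101 min  (higher orbit -> longer period)

Raising altitude lengthens the path and lowers the speed, so the period grows; that is the trend the assessment prompt asks students to explain.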

Low Earth Orbit vs Higher Orbit

Launch, Gravity & Orbit Basics · High School · 20–25 min

Available

Outcome: Student can describe at least two trade-offs between LEO and GEO (or MEO) for a CubeSat-class mission.

Teacher use: Anchor on student-built CubeSat realism: GEO is uncommon; focus on why LEO is the default classroom story.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Name one mission goal that favors LEO and one that might favor a higher orbit — and what cost or constraint comes with the higher orbit.

Expected evidence

  • Orbit class selected
  • Two advantages and two disadvantages stated
  • Short mission justification for a stated goal

Ground Track and Coverage

Launch, Gravity & Orbit Basics · High School · 20–25 min

Available

Outcome: Student can explain ground track, inclination, and why revisits happen faster in LEO than in distant orbits.

Teacher use: Pair with Earth science: revisit time, storms, or imaging cadence narratives without claiming GIS product fidelity.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

What does orbital inclination tell you about which parts of Earth the mission can see over time?

Expected evidence

  • Inclination and max latitude coverage recorded
  • Observation of ground track on the map (screenshot or description)
  • Short explanation of why inclination matters for coverage

Contact Window Basics

Launch, Gravity & Orbit Basics · Middle School · 20 min

Available

Outcome: Student can explain line-of-sight, why passes are brief, and how that limits downlink time.

Teacher use: Bridge to “operations realism”: short passes mean planning, queues, and sometimes partial downloads.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Why can’t the ground station talk to the satellite all the time, even if both are powered on?

Expected evidence

  • Ground station selected
  • Passes and total contact time recorded
  • Whether data backlog occurs in the toy model
  • One mitigation (lower data rate, higher downlink, more stations, or prioritize data)

Choose a Mission Objective

Mission Design & Payload Thinking · Middle School · 20–25 min

Preview

Outcome: Student can state a clear mission objective and explain how it drives system requirements.

Teacher use: Use three contrasting scenario cards; force students to pick one and defend trade-offs aloud.

Readiness

Partial / teaching

Maturity: Pilot Ready

Assessment prompt

Rewrite a vague objective (“take pictures”) into a testable objective with who, what, and why it matters.

Expected evidence

  • Written objective + three derived needs
  • Student explains why vague objectives fail in design reviews

Payload Drives the Mission

Mission Design & Payload Thinking · High School · 25–30 min

Preview

Outcome: Student can explain how a chosen payload determines power, pointing, data, and thermal requirements.

Teacher use: Give a concrete payload spec (e.g., 5 MP, 1 image/min); ask what breaks first in budgets.

Readiness

Partial / teaching

Maturity: Concept Ready

Assessment prompt

If you switch from a low-rate beacon to a high-rate imager, name two subsystems that change and why.

Expected evidence

  • Dependency diagram: payload → ADCS/power/comms/thermal
  • Student names one new requirement when swapping payload types

Payload Data Generation

Mission Design & Payload Thinking · High School · 25–30 min

Preview

Outcome: Student can estimate data volume from a payload and relate it to downlink constraints.

Teacher use: Use round numbers first (MB/orbit) before introducing Mbps; keep RF link abstract.

Readiness

Partial / teaching

Maturity: Concept Ready

Assessment prompt

What happens to onboard storage if you collect data faster than you can downlink? Name one mitigation.

Expected evidence

  • Rate × time calculation checked against template utilization
  • Student explains backlog if generation > downlink
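
The round-numbers-first approach the teacher note suggests can be pre-worked as below; every figure is hypothetical and exists only to show the generation-vs-downlink comparison.

  # Rough payload data volume vs downlink capacity (teaching sketch, hypothetical values).
  image_size_mb = 5            # one compressed image
  images_per_orbit = 4
  orbits_per_day = 15

  generated_mb_day = image_size_mb * images_per_orbit * orbits_per_day   # 300 MB/day
  downlink_mb_day = 30         # from a separate downlink-budget estimate

  backlog_mb_day = max(0, generated_mb_day - downlink_mb_day)
  if backlog_mb_day:
      print(f"Backlog grows ~{backlog_mb_day} MB/day: lower the duty cycle, "
            "add passes, raise the rate, or prioritize data.")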

Mission Success Criteria

Mission Design & Payload Thinking · University · 30–40 min

Preview

Outcome: Student can write a set of mission success criteria and explain how they connect to telemetry evidence.

Teacher use: Capstone prep: require one criterion tied to ADCS chart evidence and one to Mission Design risk flag.

Readiness

Partial / teaching

Maturity: Concept Ready

Assessment prompt

Pick one criterion and state exactly which telemetry or budget field would prove it passed.

Expected evidence

  • Success criteria table with measurable thresholds
  • Replay quote: chart snippet or metric tied to pass/fail

Power Budget Basics

Power / Thermal / Budgets · High School · 25–35 min

Available

Outcome: Student can compare average and peak bus power, explain duty-cycle effects, and read a simple margin result (safe / warning / overloaded).

Teacher use: Contrast average vs peak before numbers: payload and radio are often duty-cycled; OBC/ADCS may be closer to always-on.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Why is average power not the same as peak power, and why do teams still plan for peak?

Expected evidence

  • Selected mission preset and load / duty settings
  • Average power, peak power, and generation vs consumption summary
  • Margin status and one-sentence largest consumer or driver
  • Local self-check summary and copied evidence text
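
A minimal sketch of the average-vs-peak idea, with hypothetical classroom loads and duty cycles; it mirrors the activity's reasoning but not its exact numbers.

  # Average vs peak bus power with duty-cycled loads (teaching estimate only).
  loads_w = {"obc": 0.4, "adcs": 1.2, "radio": 4.0, "payload": 2.5}    # watts when ON
  duty = {"obc": 1.0, "adcs": 0.9, "radio": 0.15, "payload": 0.25}     # fraction of orbit ON

  average_w = sum(loads_w[k] * duty[k] for k in loads_w)
  peak_w = sum(loads_w.values())            # everything on at once (worst case)

  generation_w = 2.8                        # orbit-average solar generation estimate
  margin_w = generation_w - average_w
  status = "safe" if margin_w > 0.3 else "warning" if margin_w > 0 else "overloaded"
  print(f"average {average_w:.2f} W, peak {peak_w:.1f} W, margin {margin_w:.2f} W -> {status}")

Teams still plan for peak because the bus and battery must survive the moment when the radio and payload overlap, even if the orbit average looks comfortable.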

Day / Night Energy Balance

Power / Thermal / Budgets · High School · 25–35 min

Available

Outcome: Student can relate sunlight vs eclipse time, average load, and stored energy to a remaining-reserve warning (teaching estimate).

Teacher use: Emphasize energy = power × time; eclipse is the “night” problem even if sun charging looks strong.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Why can a mission look power-positive in sunlight but still fail in eclipse?

Expected evidence

  • Sun / eclipse / orbit settings and average load
  • Energy generated in sun and energy drawn in eclipse (teaching Wh estimate)
  • Battery remaining or reserve status and one mitigation if low
  • Local self-check summary and copied evidence text
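
The energy = power × time reasoning can be pre-worked as below; the sun/eclipse durations, loads, and battery size are hypothetical teaching numbers.

  # Teaching Wh estimate for one orbit: sunlight charging vs eclipse draw.
  sun_min, eclipse_min = 60, 35      # minutes of sunlight / eclipse per orbit
  generation_w = 4.0                 # average generation while in sunlight
  load_w = 2.5                       # average bus load across the whole orbit
  battery_wh = 20.0                  # usable stored energy

  energy_in = generation_w * sun_min / 60                # Wh gained in sunlight
  energy_out = load_w * (sun_min + eclipse_min) / 60     # Wh consumed per orbit
  eclipse_draw = load_w * eclipse_min / 60               # Wh pulled from the battery in eclipse

  print(f"net per orbit {energy_in - energy_out:+.2f} Wh, "
        f"eclipse draw {eclipse_draw:.2f} Wh ({eclipse_draw / battery_wh:.0%} of battery)")

A mission can look power-positive in sunlight and still fail in eclipse if the eclipse draw repeatedly exceeds what the sunlit portion puts back.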

Thermal Hot / Cold Case

Power / Thermal / Budgets · High School · 20–30 min

Available

Outcome: Student can explain when a simplified model flags hot or cold risk and what an engineer would check first (teaching-grade).

Teacher use: Stress that both overheating and overcooling can happen; vacuum changes how heat leaves the spacecraft.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Why can both overheating and overcooling be mission risks for the same spacecraft?

Expected evidence

  • Selected environment and duty/heater settings
  • Hot/cold/marginal risk flag and component limit focus
  • First engineering check statement
  • Local self-check summary and copied evidence text

Mass / Volume / Resource Trade-off

Power / Thermal / Budgets · High School · 25–35 min

Available

Outcome: Student can allocate limited resources, interpret warnings, and justify a chosen strategy with evidence (teaching exercise).

Teacher use: Make “margin is not waste” explicit: margin buys resilience against unknowns.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Why can’t payload, battery, radio, and engineering margin all be maximized at once?

Expected evidence

  • Allocation table and remaining budget
  • Warnings triggered and accepted trade-off
  • Selected strategy and one-sentence justification
  • Local self-check summary and copied evidence text

Solar Power Generation

Power / Thermal / Budgets · Middle School · 20–25 min

Preview

Outcome: Student can explain what determines how much solar power a CubeSat receives and why eclipse is a problem.

Teacher use: Tie to climate/energy: same physics as rooftop solar, different environment.

Readiness

Partial / teaching

Maturity: Pilot Ready

Assessment prompt

Name two reasons generated solar power drops during eclipse even if payloads are off.

Expected evidence

  • Hand calculation vs template comparison
  • Student explains eclipse impact on bus power

Data Budget and Downlink Limit

Power / Thermal / Budgets · High School · 20–25 min

Preview

Outcome: Student can explain what data utilization means and what happens when data generation exceeds downlink capacity.

Teacher use: Emphasize bits vs bytes; use one consistent unit for the lesson.

Readiness

Partial / teaching

Maturity: Pilot Ready

Assessment prompt

If utilization stays above 100% for a week, what operational symptom would operators likely see?

Expected evidence

  • Data utilization label interpretation
  • Student compares generation vs downloadable data over a day

Risk Flags and Engineering Decisions

Power / Thermal / Budgets · University · 30–40 min

Preview

Outcome: Student can read mission risk flags, explain their severity, and propose at least one mitigation per flag.

Teacher use: Role-play review board: each student defends one mitigation under time pressure.

Readiness

Partial / teaching

Maturity: Pilot Ready

Assessment prompt

Pick the highest-severity flag and explain whether it is a mass, power, data, thermal, or ops issue first.

Expected evidence

  • Ranked risk list with mitigations
  • Before/after template comparison showing flag clearance

Line-of-Sight Communication

Communication / Ground Link · High School · 20–25 min

Available

Outcome: Student can explain why ground-station contact depends on satellite visibility above the horizon and on a minimum elevation angle.

Teacher use: Stress that contact is not continuous — a CubeSat sees most ground stations only during short passes when the satellite is above the horizon at sufficient elevation.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Why does a ground station only see a CubeSat for a short window each pass, and why does minimum elevation matter?

Expected evidence

  • Ground station + pass scenario + minimum elevation chosen
  • Visible / not visible result with reason (below horizon / low elevation / good pass)
  • Approximate contact-duration label
  • Local self-check summary and copied evidence text

Data Rate × Contact Time

Communication / Ground Link · High School · 20–25 min

Available

Outcome: Student can compute a teaching-grade data budget (data rate × contact time × passes) and identify when payload data exceeds available downlink.

Teacher use: Reinforce that data rate alone is not enough — contact time and number of passes per day are the real bottleneck for many CubeSat missions.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Why can a CubeSat with a fast radio still fail to downlink all its payload data in a day?

Expected evidence

  • Data rate, contact duration, passes, and payload volume chosen
  • Total downlinkable data and backlog status (within capacity / over capacity)
  • One mitigation if backlog exists (lower capture rate, schedule more passes, prioritize data, etc.)
  • Local self-check summary and copied evidence text
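
The data rate × contact time × passes budget can be sketched as below; the rate, pass length, and volumes are hypothetical, not the activity's presets.

  # Teaching-grade daily downlink budget (hypothetical values).
  data_rate_kbps = 100               # downlink rate
  contact_min_per_pass = 8           # usable contact time per pass
  passes_per_day = 5
  payload_generated_mb = 60          # payload data produced per day

  downlink_mb = (data_rate_kbps * 1000 / 8        # bytes per second
                 * contact_min_per_pass * 60      # seconds per pass
                 * passes_per_day) / 1_000_000    # -> megabytes per day

  backlog_mb = max(0.0, payload_generated_mb - downlink_mb)
  print(f"downlinkable {downlink_mb:.0f} MB/day, backlog {backlog_mb:.0f} MB/day")
  # 100 kbps for 8 min x 5 passes is only ~30 MB/day, so half the payload data backlogs.

This is why a fast radio alone does not solve the problem: contact time and pass count cap the daily total.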

Link Margin Trade-off

Communication / Ground Link · High School · 25–30 min

Available

Outcome: Student can read a teaching-grade margin badge (safe / weak / failed) and explain one trade-off and one improvement (teaching-grade only).

Teacher use: Use the lab to surface that pushing one knob (e.g. data rate) too far breaks margin — link planning is a balance, not a single optimization.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

If the teaching margin is weak, name two changes (one operational, one design) that could push it back to safe — and a cost or downside of each.

Expected evidence

  • Selected distance, transmit power, antenna gain, data rate, and noise preset
  • Teaching margin score with safe / weak / failed badge
  • One trade-off explanation in plain language
  • One suggested improvement and the local self-check summary
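
For background, a very simplified free-space link budget looks like the sketch below; this is not the activity's scoring formula and not a certified RF link budget, and every value is hypothetical.

  # Simplified teaching link budget in dB (illustrative only).
  import math

  tx_power_dbm = 30                  # ~1 W transmitter
  tx_antenna_gain_db = 2
  rx_antenna_gain_db = 18            # ground-station antenna, rough figure
  frequency_mhz = 437                # UHF example
  distance_km = 800                  # slant range for this pass geometry
  receiver_sensitivity_dbm = -96     # lumped sensitivity + required margin (illustrative)

  # Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
  fspl_db = 20 * math.log10(distance_km) + 20 * math.log10(frequency_mhz) + 32.44

  received_dbm = tx_power_dbm + tx_antenna_gain_db + rx_antenna_gain_db - fspl_db
  margin_db = received_dbm - receiver_sensitivity_dbm
  badge = "safe" if margin_db > 6 else "weak" if margin_db > 0 else "failed"
  print(f"FSPL {fspl_db:.1f} dB, received {received_dbm:.1f} dBm, margin {margin_db:.1f} dB -> {badge}")

Pushing the data rate up typically raises the received-power threshold the link has to clear, which is one way a single knob breaks the margin.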

Command / Telemetry Flow

Communication / Ground Link · High School · 20–30 min

Available

Outcome: Student can explain what gets sent first when contact time is short and what happens to lost packets in a teaching priority queue (no real radio).

Teacher use: Stress that uplink and downlink are different — small command bytes go up; big science data goes down — and short passes force prioritization.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

When contact time is short and packet loss is non-zero, why must operators set priorities for what gets sent first?

Expected evidence

  • Selected command type, telemetry priority, packet loss / retry, and payload queue
  • Command-response timeline and ordered priority queue result
  • What gets sent first and what is dropped or deferred
  • Local self-check summary and copied evidence text

Why Pointing Matters

Attitude Control & Pointing · Middle School · 15–20 min

Preview

Outcome: Student can explain why attitude control is needed and what pointing error means.

Teacher use: Tie beam-on-target demo to antenna gain, camera framing, and solar wing illumination.

Readiness

Partial / teaching

Maturity: Concept Ready

Assessment prompt

Name two mission functions that fail or degrade if pointing error stays large for minutes.

Expected evidence

  • Live chart: target vs actual angle
  • Student names mission harm from large pointing error (comms, power, science)

Attitude Hold Basics

Attitude Control & Pointing · High School · 25–30 min

Available

Outcome: Student can describe the target angle, actual angle, and error trend from a real simulator run.

Teacher use: Have students predict overshoot before showing chart; compare prediction to evidence.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

From your run, how do you know the spacecraft reached the target within acceptable error?

Expected evidence

  • Telemetry chart showing the error shrinking toward zero
  • Replay artifact with a timestamped settling point
  • Optional 3D scene showing body vs target ghost

Step Response to +10 Degrees

Attitude Control & Pointing · High School · 25–30 min

Available

Outcome: Student can measure overshoot and settling time from a step response chart and relate them to controller tuning.

Teacher use: Bridge to tuning ethics: fast but gentle on actuators and power.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

If you increase proportional gain, what usually happens to overshoot and why might operators care?

Expected evidence

  • Chart with overshoot peak marked
  • Numeric or estimated settling time
  • Wheel effort trace if shown
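
If students export or copy the run data, overshoot and settling time can be estimated with a short post-processing sketch like this (hypothetical samples, not the simulator's output format).

  # Estimate overshoot and settling time from a recorded step response (teaching sketch).
  def step_metrics(times, angles, target_deg=10.0, band_deg=0.5):
      overshoot = max(angles) - target_deg               # degrees past the target
      settling_time = None
      for i in range(len(angles)):
          # settled once every later sample stays within +/- band_deg of the target
          if all(abs(a - target_deg) <= band_deg for a in angles[i:]):
              settling_time = times[i]
              break
      return overshoot, settling_time

  t = [0, 2, 4, 6, 8, 10, 12, 14]                        # seconds
  a = [0.0, 6.5, 11.8, 10.9, 9.7, 10.2, 10.1, 10.0]      # degrees, example run
  print(step_metrics(t, a))    # ~1.8 deg overshoot, settles around t = 8 s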

Contact Window Pointing

Attitude Control & Pointing · High School · 25–30 min

Available

Outcome: Student can explain the pointing requirement for a contact window and observe it in the simulator.

Teacher use: Emphasize ops story: acquire → track → handoff; relate to download planning.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Why might operators care about pointing error even if the radio is technically transmitting?

Expected evidence

  • Telemetry during window showing tracking error
  • Replay compare of good vs poor pointing during pass segment

Gentle vs Aggressive Control

Attitude Control & Pointing · University · 30–35 min

Available

Outcome: Student can compare settling time, overshoot, and wheel effort for gentle and aggressive control settings.

Teacher use: Use A/B replay to teach evidence-based tuning debates, not guesswork.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

When would you accept slower settling to protect power and mechanical wear?

Expected evidence

  • Two replays with overshoot and wheel effort contrasted
  • Student-written ops recommendation for contact prep

Power-Aware Attitude Control

Attitude Control & Pointing · University · 30–35 min

Available

Outcome: Student can explain how a power-limited scenario changes controller behavior and mission safety.

Teacher use: Explicitly separate wheel torque limits from true bus voltage collapse physics.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

What is one observable telemetry sign that the spacecraft is being gentler on actuators in power-aware mode?

Expected evidence

  • Chart showing reduced wheel effort or longer settle under power-aware rules
  • Mission narrative line in replay if present

Daylight vs Eclipse Response

Attitude Control & Pointing · University · 30–35 min

Available

Outcome: Student can explain why eclipse changes power availability for control and what the system must do differently.

Teacher use: Connect to energy logistics: less solar → less aggressive ADCS unless mission-critical.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Why might operators schedule non-critical maneuvers outside eclipse if power margin is thin?

Expected evidence

  • Phase markers or battery trend differences daylight vs eclipse
  • Control effort comparison across phases in replay

Telemetry Dashboard Basics

Telemetry, Evidence & Operations · Middle School · 20–25 min

Available

Outcome: Student can identify at least four telemetry channels and explain what each one measures.

Teacher use: Pair with vocabulary wall: angle, error, rate, wheel, power indicators.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Which channel would you watch first to know if pointing is improving, and why?

Expected evidence

  • Screenshot or description of five channels with correct meaning
  • Student identifies converging error visually

Subsystem Interpretation Walkthrough

Telemetry, Evidence & Operations · University · 35–40 min

Available

Outcome: Student can interpret each telemetry subsystem channel and explain its mission-level significance.

Teacher use: Use jigsaw groups: each group masters one subsystem, then teaches the class.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Pick one yellow flag: which subsystem owns it first, and what confirming channel would you check next?

Expected evidence

  • Subsystem health table with yellow/red examples
  • Risk observation per subsystem backed by a channel
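
The yellow/red flag reasoning can be made concrete with a tiny band classifier; the thresholds below are hypothetical, not the dashboard's real limits.

  # Minimal health-band classification of one telemetry value (teaching sketch only).
  def health_band(value, warn_low, warn_high, crit_low, crit_high):
      if value < crit_low or value > crit_high:
          return "critical"   # first action: stop and check safe-mode rules
      if value < warn_low or value > warn_high:
          return "warning"    # first action: watch the trend, check a confirming channel
      return "nominal"        # first action: keep monitoring

  # Battery-voltage example: nominal 7.4-8.2 V, warning outside, critical below 7.0 V
  print(health_band(7.1, warn_low=7.4, warn_high=8.2, crit_low=7.0, crit_high=8.4))   # warning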

Replay and Mission Debrief

Telemetry, Evidence & Operations · High School · 25–30 min

Available

Outcome: Student can replay a run, identify key events in the telemetry, and write a short mission debrief statement.

Teacher use: Require evidence quotes: timestamp + channel + value trend.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

What is one claim you would not make without replay evidence, and what chart proves it?

Expected evidence

  • Replay artifact reference (run id or screenshot)
  • Three-sentence debrief citing at least two chart moments

Telemetry Trust and Stale Data

Telemetry, Evidence & Operations · University · 25–30 min

Available

Outcome: Student can identify stale telemetry, explain its risk, and describe a mitigation strategy.

Teacher use: Discuss cyber-physical trust: stale data is an ops problem, not only a math problem.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Why is acting on stale attitude data sometimes worse than having no data?

Expected evidence

  • Stale or flat-line segment identified on chart
  • Student proposes mitigation (watchdog, redundancy, operator procedure)

Mission-Based STEM Capstone

Telemetry, Evidence & Operations · University · 50–60 min

Preview

Outcome: Student can complete a full mission journey and produce an evidence-based report connecting all tracks.

Teacher use: Schedule as a two-session block: design + run, then debrief + revision.

Readiness

Partial / teaching

Maturity: Pilot Ready

Assessment prompt

What single chart would you show a reviewer to prove your spacecraft met its pointing goal?

Expected evidence

  • Mission report with budget summary + control chart + debrief
  • Explicit pass/partial/fail on student-stated criteria

What Does Autonomy Mean?

AI / ML & Autonomy · High School · 20–25 min

Available

Outcome: Student can describe three levels of spacecraft autonomy, explain what each level is allowed to do, and state why human-in-the-loop review matters even in the highest autonomy mode.

Teacher use: Use as entry point to the AI/ML mini-course. Ask: 'What would go wrong if a satellite acted on every alert without a human check?'

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Describe one situation where 'recommend action' autonomy is safer than 'execute' autonomy, and explain why.

Expected evidence

  • Chosen autonomy level and scenario recorded
  • List of allowed vs disallowed actions at the selected level
  • One-sentence reflection on why human oversight matters
  • Self-check summary and copied evidence text

Features, Labels and Training Data

AI / ML & Autonomy · High School · 30–35 min

Available

Outcome: Student can define feature and label, select useful features from telemetry, assign a correct label to a given example, and explain how a biased dataset degrades classifier performance.

Teacher use: Position as a data ethics conversation: what happens when training data is mostly nominal? Ask students to find a dataset gap.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Name two ways a biased or incomplete training dataset could cause a fault classifier to make dangerous mistakes.

Expected evidence

  • Selected features and justification for each choice
  • Assigned label and explanation for a given telemetry example
  • Note on one way a missing or biased feature would harm the model
  • Self-check summary and copied evidence text

Anomaly Classifier

AI / ML & Autonomy · High School · 25–30 min

Available

Outcome: Student can run a classifier on a telemetry example, interpret the predicted class and confidence score, identify the key contributing features, and explain the difference between rule-based and ML-based detection.

Teacher use: Use a think-aloud protocol: ask students to narrate what each top feature tells the classifier. Stress that confidence ≠ correctness.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Give one example of a telemetry pattern that looks abnormal but is actually expected during a planned mode change — and explain how a classifier might incorrectly flag it.

Expected evidence

  • Chosen scenario and classifier type recorded
  • Predicted class and confidence score
  • Top contributing features with brief explanation
  • One observation where rule-based and ML classifiers differ

+ 1 more evidence item in the activity.
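
One quick way to contrast rule-based and ML-style detection in discussion is a hand-written rule check like the sketch below; the field names and thresholds are hypothetical and are not the activity's classifier.

  # Hand-written rules are transparent but only catch what someone anticipated.
  example = {"battery_v": 7.1, "wheel_rpm": 5200, "temp_c": 38, "mode": "imaging"}

  def rule_based(t):
      if t["battery_v"] < 7.0:
          return "power_fault"
      if t["wheel_rpm"] > 6000:
          return "adcs_fault"
      return "nominal"

  # A trained classifier instead returns a class plus a learned confidence score;
  # remind students that confidence is not the same thing as correctness.
  print(rule_based(example))   # -> "nominal", even though the battery is trending low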

Confidence and False Alarms

AI / ML & Autonomy · High School · 30–35 min

Available

Outcome: Student can explain the trade-off between sensitivity and false alarm rate, read a confusion matrix, and justify a sensitivity setting based on mission risk tolerance.

Teacher use: Ask: 'Would you rather miss one real fault, or get seven false alarms per day?' Use student answers to surface the mission-specific trade-off.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

A detector has TP=14, FP=7, TN=13, FN=1. Calculate precision and recall, and state which setting would cause alarm fatigue and why.

Expected evidence

  • TP/FP/TN/FN counts at chosen sensitivity level
  • Chosen sensitivity setting with operational justification
  • One-sentence explanation of alarm fatigue risk
  • Self-check summary and copied evidence text
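
The worked numbers for the assessment prompt above, if you want them on hand:

  # TP=14, FP=7, TN=13, FN=1 from the assessment prompt.
  tp, fp, tn, fn = 14, 7, 13, 1

  precision = tp / (tp + fp)          # of the alarms raised, how many were real faults
  recall = tp / (tp + fn)             # of the real faults, how many were caught
  false_alarm_rate = fp / (fp + tn)

  print(f"precision {precision:.2f}, recall {recall:.2f}, false-alarm rate {false_alarm_rate:.2f}")
  # precision ~0.67, recall ~0.93: a sensitive setting catches nearly every fault,
  # but the 7 false positives are what drive alarm fatigue.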

Human-in-the-Loop Decision

AI / ML & Autonomy · High School · 35–40 min

Available

Outcome: Student can review telemetry evidence cards, apply a safety rule check to a proposed action, choose an appropriate response, and write a one-paragraph decision debrief.

Teacher use: Run as a structured debate: two students choose different actions for the same anomaly, then each explains their evidence-card reasoning. Stress that asking for human review is always valid.

Readiness

Implemented

Maturity: Pilot Ready

Assessment prompt

Give one scenario where entering safe mode too early wastes science, and one where waiting too long risks the spacecraft bus — explain how evidence cards would help you decide.

Expected evidence

  • Chosen predicted anomaly and confidence level
  • Evidence card review — supporting / neutral / contradicting classification
  • Chosen action and safety rule check result
  • One-paragraph decision debrief

+ 1 more evidence item in the activity.

Guidance

Misconception guide

A small set of high-frequency misconceptions to watch for in early CubeSat lessons.

Misconception

Orbit means there is no gravity.

Correction (teacher-friendly)

In orbit, gravity is still strong. The spacecraft is in continuous free-fall with high sideways velocity, so it keeps “missing” Earth.

Misconception

A digital twin is the same as real hardware.

Correction (teacher-friendly)

A digital twin is a software representation used to practice, plan, or explain. It can support learning, but it does not replace hardware integration, measurement noise, or operational constraints.

Misconception

Teaching-grade simulation equals a flight simulator.

Correction (teacher-friendly)

These routes are designed for conceptual understanding and bounded estimates. They are not flight-certified simulators or operational mission design tools.

Misconception

Mission objective and payload are the same thing.

Correction (teacher-friendly)

The objective is the mission goal (what you must accomplish). The payload is the instrument/service used to meet that goal, which then drives subsystem requirements.

Misconception

A contact window means you can communicate continuously.

Correction (teacher-friendly)

Contact windows are brief periods of line-of-sight. Operations require planning, prioritization, and sometimes partial downloads.

Misconception

Satellites are floating — not falling.

Correction (teacher-friendly)

Satellites are falling the entire time. They keep falling around Earth because their sideways velocity carries them forward as gravity pulls inward.

Facilitation

Class flow guide

A simple structure to help you pace a session without relying on rosters, submissions, or backend tools.

Before class

  • Choose a lesson pack and timing preset.
  • Open the route(s) in advance and confirm they load on your classroom network.
  • Pick one misconception to foreground and one piece of evidence to collect.
  • Decide whether students work individually, in pairs, or as a guided whole class.

During class

  • Start with the boundary statement: teaching-grade model, not operational simulation.
  • Run the interactive route with a shared vocabulary wall (2–5 terms).
  • Pause for one think-pair-share prompt to turn observation into reasoning.
  • Collect one evidence artifact (sentence, checklist, or screenshot).

Debrief

  • Ask one assessment prompt as an exit ticket (written or verbal).
  • Revisit the misconception and ask students to correct it in their own words.
  • Connect to the curriculum map: what track or next activity would build on today’s idea?

Optional extension

  • Run a second activity as comparison (same concept, different framing).
  • Add a “model limitations” discussion: what assumptions did the activity make?
  • Optional: connect to supervised local hardware later — without implying remote control or student submissions.

Copyable plan

Session plan builder

Choose a lesson pack and timing preset to generate a plan you can copy into slides, a doc, or your lesson notes. This stays local in your browser (no backend).

Orientation starter — Teacher session plan

Audience / level: All Levels
Duration: 15 min (15-minute demo)

Learning outcomes:
- Students can explain what a CubeSat is and why missions start with objectives.
- Students can name basic subsystems (power, comms, pointing, payload).

Activities and routes:
- What is a CubeSat Mission?: /twin/learn/activities/orientation_what_is_cubesat
- Curriculum map: /twin/learn/curriculum
- Choose mode: /twin/modes

Before class:
- Pick one misconception to foreground (see Misconception guide).
- Decide what evidence you want students to produce (one artifact, one sentence, or one screenshot).
- Open the route(s) on the projector and confirm the page loads in your classroom network.

During class (suggested flow):
- Hook (2 min): What is a mission objective?
- Activity (8–10 min): run one interactive route together
- Debrief (3–5 min): one misconception + one evidence prompt

Facilitation prompts:
- Ask: “Who benefits from your mission, and how would they know it worked?”
- Prompt: “Objective first — hardware later. What changes once the objective changes?”

Expected evidence:
- One-sentence mission objective (student or group)
- Top three subsystems identified for the objective
- One trade-off stated (e.g., power vs data, payload vs pointing)

Assessment prompts:
- In one sentence, state what your mission is trying to accomplish and name one design choice that follows from it.

Boundary note:
Local planning aid only. No roster, no student accounts, and no automated assessment workflow in MVPF6.

Capability boundary (product):
No backend, no login, no roster, no automatic assessment submission in MVP-final Phase 6. Assessment engine (Phase 8) and evidence engine (Phase 9) arrive later.

Quick links

Jump to existing foundations

Teacher Mode pairs with the curriculum map, Demo Pack, and yardstick activity routes across Tracks 0–7.

Student route note: Student Mode foundation is now available on /twin/student (local progress only; no accounts).

Capability boundary

What Teacher Mode is — and is not

  • Local only: planning aid and copyable session plan text. No account system, no login, and no server persistence.
  • No classroom backend: no roster engine, no student submissions, and no gradebook.
  • Assessment / evidence scope: Assessment Engine v0 and Evidence Engine v0 run locally on yardstick activities as practice-only self-checks and copy/export artifacts. They are not a gradebook and give teachers no visibility unless learners share work manually.
  • No official alignment claims: suggested pathways and outcomes are planning aids, not accreditation or standards certification.