AI / ML & Autonomy
Explore what autonomy means for a CubeSat — from alert-only to recommended actions to simulated execution — and why human oversight remains essential at every level.
Student can describe three levels of spacecraft autonomy, explain what each level is allowed to do, and state why human-in-the-loop review matters even in the highest autonomy mode.
Open What Does Autonomy Mean? at `/twin/learn/activities/ai_autonomy_basics` — autonomy level selector and mission scenario picker with boundary explanations (teaching-grade; not flight software, not real satellite command).
Draw a three-row table with columns: autonomy level, what the system can do, what the human must still do. Fill in one row per level.
Teaching-grade software activity slot — not a flight simulator or certified propagator.
Step 1 — Choose an autonomy level
Select a level to see what the system is and is not allowed to do.
Description
The system detects a condition and raises an alert. A human operator decides what to do next. Nothing changes automatically.
Allowed system actions
Detect the condition, raise an alert, and log the event. No commands are issued and no configuration changes are made.
Human oversight
Human makes every decision. The system is purely a sensor and notification tool.
Boundary: Safest mode — zero autonomous action. Always requires human acknowledgement.
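The boundary between the three levels can be sketched as a simple lookup. This is a teaching-grade illustration only; the level names, action lists, and function names are assumptions drawn from the level descriptions above, not the activity's actual data model:

```python
from enum import Enum

class AutonomyLevel(Enum):
    ALERT_ONLY = "alert_only"          # detect and notify; nothing changes automatically
    RECOMMEND_ACTION = "recommend"     # propose a response; human approves or rejects
    EXECUTE_SIMULATED = "execute_sim"  # run the response in simulation only

# What the system may do at each level vs. what the human must still do.
BOUNDARIES = {
    AutonomyLevel.ALERT_ONLY: {
        "system_may": ["raise alert", "log event"],
        "human_must": ["acknowledge alert", "decide and perform every action"],
    },
    AutonomyLevel.RECOMMEND_ACTION: {
        "system_may": ["raise alert", "log event", "propose a response"],
        "human_must": ["review recommendation", "approve or reject before any action"],
    },
    AutonomyLevel.EXECUTE_SIMULATED: {
        "system_may": ["raise alert", "log event", "propose a response",
                       "execute it in simulation"],
        "human_must": ["review simulated outcome", "authorise any real change"],
    },
}

def allowed(level: AutonomyLevel, action: str) -> bool:
    """Return True only if the action lies inside the level's boundary."""
    return action in BOUNDARIES[level]["system_may"]
```

For example, `allowed(AutonomyLevel.ALERT_ONLY, "propose a response")` is False: at the safest level the system may only detect and notify.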
Step 2 — Choose a mission scenario
Description
Battery state-of-charge drops below the warning threshold during eclipse. Solar generation is insufficient to recover before the next pass.
Affected subsystem
EPS (Electrical Power System)
Telemetry signal
battery_soc < 30% for > 60 s
Rule-based response at Alert Only level
Alert: reduce payload duty cycle; defer non-critical operations.
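The telemetry rule above (state of charge below 30% persisting for more than 60 s) can be written as a small persistence check. A minimal sketch; the function name, sample format, and alert text layout are illustrative, not the activity's actual code:

```python
SOC_WARNING = 30.0   # percent; warning threshold from the scenario
PERSIST_S = 60.0     # seconds the condition must persist before alerting

def check_low_soc(samples):
    """samples: time-ordered list of (time_s, battery_soc_percent).
    Return an alert string once battery_soc has stayed below the
    threshold for longer than the persistence window, else None."""
    below_since = None
    for t, soc in samples:
        if soc < SOC_WARNING:
            if below_since is None:
                below_since = t                # condition just started
            elif t - below_since > PERSIST_S:  # persisted long enough
                return ("ALERT: reduce payload duty cycle; "
                        "defer non-critical operations.")
        else:
            below_since = None                 # condition cleared; reset timer
    return None
```

Note that at the Alert Only level this function may only return a message; acting on it remains the operator's job.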
ML-assisted note
An ML model trained on previous eclipse profiles could predict low-SoC risk earlier — but requires labelled historical data.
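To make the "predict low-SoC risk earlier" idea concrete, even a plain linear extrapolation of the discharge trend can flag risk before the threshold rule fires. A deliberately simple sketch, not a trained ML model; all names and the 30% threshold default are taken from the scenario above, everything else is illustrative:

```python
def predict_soc_at(samples, t_future):
    """Least-squares linear fit of SoC over time, evaluated at t_future.
    samples: list of (time_s, battery_soc_percent) taken during eclipse."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_s = sum(s for _, s in samples) / n
    num = sum((t - mean_t) * (s - mean_s) for t, s in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = num / den                      # percent SoC per second
    return mean_s + slope * (t_future - mean_t)

def early_warning(samples, eclipse_end_s, threshold=30.0):
    """Flag low-SoC risk before the 'below 30% for 60 s' rule would fire."""
    return predict_soc_at(samples, eclipse_end_s) < threshold
```

A real ML approach would replace the linear fit with a model trained on labelled historical eclipse profiles, which is exactly the data requirement the note points out.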
Self-check · Local only
Local-only. No submission, no grade. Answers revealed here only.
What does 'autonomy level' describe in a spacecraft operations context?
Why is human oversight important even in 'execute' autonomy mode?
In the teaching model, what does 'Execute (Simulated)' autonomy mode mean?
Evidence capture · Local only
Local-only. No submission, no backend, no grade. Copy or screenshot to share.
Expected outputs learners should be able to show after the lab (Phase 9 evidence engine preview available).
Choose an autonomy level and a mission scenario; read what the system is and is not allowed to do; reflect on why boundaries matter.
Responses are not persisted in this preview unless a specific activity component adds storage later.
Describe one situation where 'recommend action' autonomy is safer than 'execute' autonomy, and explain why.
Use as an entry point to the AI/ML mini-course. Ask: 'What would go wrong if a satellite acted on every alert without a human check?'
Follows the suggested progression from the mission learning path; links point only to activity routes that exist.