AI / ML & Autonomy

What Does Autonomy Mean?

Explore what autonomy means for a CubeSat — from alert-only to recommended actions to simulated execution — and why human oversight remains essential at every level.

Audience
High school
Time estimate
20–25 min
Complexity
developing
Maturity
pilot ready
Simulator readiness
implemented
Software available now
Implemented as an interactive autonomy-level and scenario explorer at `/twin/learn/activities/ai_autonomy_basics` — teaching-grade; no real satellite commands, no certified onboard AI.

Learning outcomes

Student can describe three levels of spacecraft autonomy, explain what each level is allowed to do, and state why human-in-the-loop review matters even in the highest autonomy mode.

  • Define three levels of spacecraft autonomy: alert only, recommend, execute (simulated).
  • Explain what each level is allowed to do without waiting for human approval.
  • State why human oversight remains important even in the highest autonomy mode.
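The three levels above can be sketched as a small data model. This is a hypothetical illustration of the teaching taxonomy, not the activity's actual code; all names and fields are invented for clarity.

```python
# Teaching sketch of the three autonomy levels and their boundaries.
# Names ("alert_only", "system_may", etc.) are illustrative assumptions.
AUTONOMY_LEVELS = {
    "alert_only": {
        "system_may": ["raise alert", "log anomaly", "increment counter"],
        "human_must": "decide and perform every action",
    },
    "recommend": {
        "system_may": ["raise alert", "propose a response"],
        "human_must": "approve any proposed action before it runs",
    },
    "execute_simulated": {
        "system_may": ["raise alert", "propose a response", "run it in simulation"],
        "human_must": "review simulated outcomes; no real command is ever sent",
    },
}

def allowed_without_approval(level: str) -> list:
    """What the system may do at this level without waiting for a human."""
    return AUTONOMY_LEVELS[level]["system_may"]
```

Note that even `execute_simulated` keeps a human in the loop: the extra permission is only to act inside the simulation.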

Concept primer

A spacecraft's autonomy level sets how much the system may do without waiting for a human. In this activity there are three teaching-grade levels: alert only (detect a condition and raise an alert), recommend (propose an action for human approval), and execute (simulated) (carry out the action, but only inside the simulation). At every level, a human stays in the loop.

Open What Does Autonomy Mean? at `/twin/learn/activities/ai_autonomy_basics` — an autonomy-level selector and mission-scenario picker with boundary explanations (teaching-grade; not flight software, no real satellite commands).

Draw a three-row table with columns for autonomy level, what the system can do, and what the human must still do. Fill in one row per level.

Interactive lab

Teaching-grade software activity slot — not a flight simulator or certified propagator.

Step 1 — Choose an autonomy level

Autonomy Level

Select a level to see what the system is and is not allowed to do.

Description

The system detects a condition and raises an alert. A human operator decides what to do next. Nothing changes automatically.

Allowed system actions

  • Generate alert message
  • Log anomaly flag
  • Increment counter

Human oversight

Human makes every decision. The system is purely a sensor and notification tool.

Boundary: Safest mode — zero autonomous action. Always requires human acknowledgement.

Step 2 — Choose a mission scenario

Mission Scenario

Description

Battery state-of-charge drops below the warning threshold during eclipse. Solar generation is insufficient to recover before the next pass.

Affected subsystem

EPS (Electrical Power System)

Telemetry signal

battery_soc < 30% for > 60 s

Rule-based response at Alert Only level

Alert: reduce payload duty cycle; defer non-critical operations.
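A rule like the one above (`battery_soc < 30% for > 60 s` at the Alert Only level) can be sketched in a few lines. This is a hedged teaching sketch, not EPS flight logic; the function name, sample period, and return string are assumptions, and a real rule would add debounce and hysteresis.

```python
def low_power_alert(soc_samples, threshold=0.30, min_duration_s=60, sample_period_s=10):
    """Alert-only rule: fire when battery SoC stays below threshold for > 60 s.

    soc_samples: most-recent-last list of state-of-charge fractions (0.0-1.0).
    Returns an alert string, or None. Nothing changes automatically:
    at this level the system only notifies; a human decides what to do.
    """
    needed = min_duration_s // sample_period_s + 1  # samples spanning > 60 s
    recent = soc_samples[-needed:]
    if len(recent) >= needed and all(s < threshold for s in recent):
        return "ALERT: reduce payload duty cycle; defer non-critical operations"
    return None
```

Notice the function returns a message rather than commanding the EPS: that return-only shape is exactly the Alert Only boundary.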

ML-assisted note

An ML model trained on previous eclipse profiles could predict low-SoC risk earlier — but requires labelled historical data.
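The ML-assisted idea can be illustrated with a minimal nearest-neighbour comparison against labelled historical discharge profiles. This is purely a classroom sketch under invented data shapes — no real mission data, model, or library is implied.

```python
# Hypothetical sketch: predict low-SoC risk by finding the most similar
# labelled historical eclipse discharge profile (1-nearest-neighbour).
def profile_distance(a, b):
    """Squared Euclidean distance between two equal-length SoC profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict_low_soc_risk(current_profile, labelled_history):
    """labelled_history: list of (discharge_profile, ended_low_soc: bool).

    Returns the label of the nearest historical profile. This is why the
    approach needs labelled data: without known outcomes, there is
    nothing to match against.
    """
    nearest = min(labelled_history,
                  key=lambda item: profile_distance(current_profile, item[0]))
    return nearest[1]
```

Even a toy predictor like this would only feed the alert earlier; under the Alert Only level, the response still waits for a human.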

Self-check · Local only

3 questions

Local-only. No submission, no grade. Answers revealed here only.

What does 'autonomy level' describe in a spacecraft operations context?

Why is human oversight important even in 'execute' autonomy mode?

In the teaching model, what does 'Execute (Simulated)' autonomy mode mean?

Evidence capture · Local only

Your evidence — What Does Autonomy Mean?

Local-only. No submission, no backend, no grade. Copy or screenshot to share.

Autonomy level selected
Alert Only
Scenario
Power Low
Allowed system actions
Generate alert message; Log anomaly flag; Increment counter
Human oversight note
Human makes every decision. The system is purely a sensor and notification tool.
Boundary note
Safest mode — zero autonomous action. Always requires human acknowledgement.
Rule-based response for scenario
Alert: reduce payload duty cycle; defer non-critical operations.

Evidence capture

Expected outputs learners should be able to show after the lab (Phase 9 evidence engine preview available).

  • Chosen autonomy level and scenario recorded
  • List of allowed vs disallowed actions at the selected level
  • One-sentence reflection on why human oversight matters
  • Self-check summary and copied evidence text

Reflection

Choose an autonomy level and a mission scenario; read what the system is and is not allowed to do; reflect on why boundaries matter.

Responses are not persisted in this preview unless a specific activity component adds storage later.

Assessment / quick check

Describe one situation where 'recommend action' autonomy is safer than 'execute' autonomy, and explain why.

Teacher notes

Use as entry point to the AI/ML mini-course. Ask: 'What would go wrong if a satellite acted on every alert without a human check?'

Next activity

Suggested progression from the mission learning path; links point only to activity routes that exist.