Mission Design & Payload Thinking

Mission Success Criteria

Define measurable mission success criteria and understand how they drive test and verification.

University intro
  • Time estimate: 30–40 min
  • Complexity: advanced
  • Maturity: concept ready
  • Simulator readiness: partial
  • Software: available now as a replay/evidence workflow. Link criteria to catalog experiments and run summaries; automated requirement verification is not included.

Student flow

1) Pick a mission scenario

Choose imaging, communication, or climate. The scenario pre-fills sample criteria that you can edit.

2) Write minimum + full criteria

Each criterion needs a measurable threshold and an evidence source.

3) Self-review

Use the pass/fail-style checklist to confirm your criteria are verifiable.

Evidence and self-check are local-only. Copy/export or screenshot if you want to share.

Learning outcomes

Student can write a set of mission success criteria and explain how they connect to telemetry evidence.

  • Distinguish minimum success from full success criteria.
  • Connect each criterion to a telemetry channel or evidence source.
  • Explain how the Digital Twin can partially verify criteria and where it falls short.

Concept primer

Define measurable mission success criteria and understand how they drive test and verification.

Review mission debrief outputs; assess whether the telemetry evidence satisfies the stated criteria.

Fill in a success criteria table: criterion, measurement method, threshold, pass/fail.
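
To make the table concrete, here is a minimal sketch of one table row as a typed record. The names (CriterionRow, measurementMethod, and so on) are illustrative assumptions, not the lab's actual schema:

```ts
// One row of the success criteria table. All names here are teaching
// assumptions; the lab's real internal schema is not documented on this page.
type SuccessTier = "minimum" | "full";

interface CriterionRow {
  criterion: string;         // plain-language statement of what must happen
  tier: SuccessTier;         // minimum keeps the mission worthwhile; full is the goal
  measurementMethod: string; // how you would measure it
  threshold: string;         // numeric threshold with units
  evidenceSource: string;    // telemetry channel, run summary, or budget field
  pass?: boolean;            // left unset until you judge it against the evidence
}

// Example row taken from the coastal flood imaging scenario in this activity:
const weeklyImaging: CriterionRow = {
  criterion: "Deliver at least 1 usable image per pilot city per week.",
  tier: "minimum",
  measurementMethod: "Count usable images per city per week in the image catalog.",
  threshold: ">= 1 usable image / city / week",
  evidenceSource: "Image catalog summary + pointing telemetry during the relevant pass",
};
```

Keeping threshold and evidenceSource as required fields mirrors the self-review below: a row without both cannot be marked ready.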

Interactive lab

Teaching-grade software activity slot — not a flight simulator or certified propagator.

Pick a scenario

Switching scenarios reloads sample criteria; you can still edit each row.

Success criteria table

  • Criterion 1: ready
  • Criterion 2: ready
  • Criterion 3: ready

Self-review summary: 3 of 3 criteria are measurable and bound to evidence.

Pass/fail-style review checklist

  • ✅ Minimum vs full distinction stated: Yes
  • ✅ Criteria with measurable thresholds: 3 / 3
  • ✅ Each criterion bound to an evidence source: 3 / 3
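
Counts like "3 / 3" above are mechanical: each row either names a measurable threshold and an evidence source or it does not. A minimal sketch with a deliberately naive measurability test (the lab's actual check, if it has one, is not shown in this preview):

```ts
// Readiness checks behind the "3 / 3" style counts. "Measurable" is a naive
// stand-in here: the threshold must contain a number and a comparator.
interface RowLike {
  threshold: string;
  evidenceSource: string;
}

function isMeasurable(row: RowLike): boolean {
  return /\d/.test(row.threshold) && /(>=|<=|≥|≤|>|<)/.test(row.threshold);
}

function isEvidenceBound(row: RowLike): boolean {
  return row.evidenceSource.trim().length > 0;
}

function readinessSummary(rows: RowLike[]): string {
  const measurable = rows.filter(isMeasurable).length;
  const bound = rows.filter(isEvidenceBound).length;
  return `Measurable thresholds: ${measurable} / ${rows.length}; ` +
         `evidence-bound: ${bound} / ${rows.length}`;
}
```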

Local self-check

Assessment (practice only)

Use this as a self-check and discussion starter. It is local-only and not a grade.

Optional: attach a local summary (sections completed, quick-check results, checklist count).

Quick check

Multiple choice self-check

This is a local self-check to support discussion. It is not a grade.

Quick check: Which is the most measurable success criterion?

Quick check: Each criterion should connect to…

Discussion prompt

Short answer (local only)

Write notes for yourself or your group. Nothing is submitted.

Short answer: Pick one of your criteria and state exactly which telemetry, run output, or budget field would prove it passed.

Checklist

Local checklist self-check

Use this to verify you covered key ideas. Nothing is submitted.

Checklist: I can write usable success criteria

0 / 4 checked

Local summary

Assessment summary (practice only)

  • Completion: 0 / 4 sections complete
  • Quick checks: 0 / 2 correct (shown only to support self-check)
  • Checklist: 0 / 4 items checked

Reminder

Local-only practice summary. Not a grade and not submitted anywhere.

What this preview is / is not

Assessment engine v0 boundary note

  • Student view (local practice): use this as a self-check and discussion starter.
  • Local-only preview/practice: your answers are not submitted.
  • No backend, no accounts, no roster, and no LMS integration.
  • Not a grade. No credential or official scoring is implied.
  • Teacher visibility into student answers is not implemented in MVPF8.
  • Evidence runtime engine arrives in Phase 9 (not in this preview).

Capture

Evidence capture (local-only)

Capture what you did, what changed, what you observed, and how you explain it. This stays in your browser unless you copy/share it manually.

Selected inputs

  • Scenario: Coastal flood imaging
  • One-line description: Image a small set of coastal cities at flood-relevant cadence and resolution.

Generated outputs

  • Total criteria: 3
  • Minimum-success count: 1
  • Full-success count: 2
  • Ready (measurable + evidence-bound): 3 / 3
  • Criterion 1 (minimum): Deliver at least 1 usable image per pilot city per week. | threshold: ≥1 usable image / city / week | evidence: Image catalog summary + pointing telemetry during the relevant pass.
  • Criterion 2 (full): Pointing error stays within ±2° during contact windows for 80% of imaging passes. | threshold: P80 pointing error ≤ 2° | evidence: Attitude error chart from contact-window simulator runs.
  • Criterion 3 (full): Daily generation does not exceed daily downlink capacity for 95% of days. | threshold: Utilization ≤ 100% on ≥95% of days | evidence: Mission Design data budget label + per-day utilization log.
  • Reflection: (empty)
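
Criteria 2 and 3 above both reduce to aggregate checks over logged samples: a percentile comparison and a day-fraction comparison. A sketch with invented sample data (not real telemetry) shows how each pass/fail judgment could be computed:

```ts
// Criterion 2: P80 pointing error <= 2 deg. Nearest-rank percentile over
// invented per-pass pointing error samples (degrees).
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const pointingErrorsDeg = [0.4, 1.1, 0.8, 2.6, 1.7, 0.9, 1.3, 3.1, 0.6, 1.9];
const c2Pass = percentile(pointingErrorsDeg, 80) <= 2.0; // P80 = 1.9 -> pass

// Criterion 3: utilization <= 100% on >= 95% of days, over invented daily data.
const dailyUtilizationPct = [82, 96, 101, 88, 74, 93, 99, 85, 90, 97];
const okDays = dailyUtilizationPct.filter((u) => u <= 100).length;
const c3Pass = okDays / dailyUtilizationPct.length >= 0.95; // 9/10 = 90% -> fail

console.log(`Criterion 2 pass: ${c2Pass}; Criterion 3 pass: ${c3Pass}`);
```

With this sample data criterion 2 passes and criterion 3 fails, which is exactly the kind of evidence-driven judgment the self-review asks for.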

Checklist

Evidence checklist

0 / 4 checked

Evidence artifact (local-only)

Mission Success Criteria

Captured: 2026-05-16T07:38:32.777Z · Level: middle_school · Track: mission_design_payload

Summary

Copyable class summary

Copy a readable summary for class notes, or copy JSON for a structured record (a sketch of a possible JSON shape follows the artifact below). Local-only: nothing is submitted.

Evidence artifact (v1)
Activity: Mission Success Criteria
Track: mission_design_payload
Learner level: middle_school
Captured: 2026-05-16T07:38:32.777Z

Mission brief:
Define minimum and full success criteria for a mission scenario. Bind each criterion to an evidence source and self-judge whether it is verifiable.

Selected inputs:
- Scenario: Coastal flood imaging
- One-line description: Image a small set of coastal cities at flood-relevant cadence and resolution.

Generated outputs:
- Total criteria: 3
- Minimum-success count: 1
- Full-success count: 2
- Ready (measurable + evidence-bound): 3 / 3
- Criterion 1 (minimum): Deliver at least 1 usable image per pilot city per week. | threshold: ≥1 usable image / city / week | evidence: Image catalog summary + pointing telemetry during the relevant pass.
- Criterion 2 (full): Pointing error stays within ±2° during contact windows for 80% of imaging passes. | threshold: P80 pointing error ≤ 2° | evidence: Attitude error chart from contact-window simulator runs.
- Criterion 3 (full): Daily generation does not exceed daily downlink capacity for 95% of days. | threshold: Utilization ≤ 100% on ≥95% of days | evidence: Mission Design data budget label + per-day utilization log.
- Reflection: (empty)

Checklist:
- [ ] I distinguished minimum success from full success.
- [ ] Each criterion has a measurable threshold.
- [ ] Each criterion is bound to a telemetry/evidence source.
- [ ] I treated this as local self-check (no automated requirement verification, no submission, not a grade).

Observations:
(not provided)

Reflection:
(not provided)

Model boundary note:
Local-only teaching model. Not a requirements database, not CAD, not automated mission verification. Mission Design Lab is a teaching estimate. Evidence is not submitted anywhere and is not a grade.

Policy reminder:
- Local-only capture. Not submitted anywhere. Not a grade.
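
The "copy JSON" option produces a structured version of the same record. The preview shows only the readable rendering above, so the shape below is an assumption; every key name is illustrative:

```ts
// Plausible JSON shape for the v1 evidence artifact, written as a TypeScript
// literal. Keys are assumptions inferred from the readable summary above.
const evidenceArtifact = {
  version: "v1",
  activity: "Mission Success Criteria",
  track: "mission_design_payload",
  learnerLevel: "middle_school",
  capturedAt: "2026-05-16T07:38:32.777Z",
  inputs: {
    scenario: "Coastal flood imaging",
    description:
      "Image a small set of coastal cities at flood-relevant cadence and resolution.",
  },
  criteria: [
    {
      tier: "minimum",
      criterion: "Deliver at least 1 usable image per pilot city per week.",
      threshold: ">= 1 usable image / city / week",
      evidence:
        "Image catalog summary + pointing telemetry during the relevant pass",
    },
    // ...criteria 2 and 3 follow the same shape
  ],
  checklist: { checked: 0, total: 4 },
  reflection: null,
};
```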

Boundary note

Local-only teaching model. Not a requirements database, not CAD, not automated mission verification. Mission Design Lab is a teaching estimate. Evidence is not submitted anywhere and is not a grade.

Evidence capture

Expected outputs learners should be able to show after the lab (Phase 9 evidence engine preview available).

  • Success criteria table with measurable thresholds
  • Replay excerpt: a chart snippet or metric tied to a pass/fail judgment

Reflection

Write three success criteria for a chosen mission scenario, each with a measurable threshold.

Responses are not persisted in this preview unless a specific activity component adds storage later.

Assessment / quick check

Pick one criterion and state exactly which telemetry or budget field would prove it passed.

Teacher notes

Capstone prep: require one criterion tied to ADCS chart evidence and one tied to a Mission Design risk flag.

Teacher guide

Mission Success Criteria

Use this block as facilitation guidance for the Mission Design / Payload mini-course. There is no roster, submission, or teacher visibility workflow in this phase — evidence is shared manually.

Facilitation moves

  • Anchor on minimum vs full: minimum keeps the mission worthwhile; full is what you hope for.
  • Require each criterion to point at a real evidence source (telemetry, run summary, budget field).
  • Connect to capstone-style work: criteria become the rubric for the final mission report.

Misconceptions to watch for

  • ‘The mission worked’ is a success criterion.

    A criterion needs a measurable threshold (numbers + units) and a way to check it (telemetry, summary, or budget field).

  • Verification is automatic in this lab.

    There is no automated verification. Pass/fail is judged locally by the student against the stated evidence.

Boundary reminder: Mission Design / Payload is a teaching model (not a requirements database, not CAD, not automated verification) and the experience is local-only (no accounts, no submissions, not a grade).

Next activity

Suggested progression from the mission learning path. Links are chosen to avoid missing activity routes.