Signal · January 5, 2026 · 7 min read

Confidence Scoring: A Better Way to Track Risk

Percentage progress answers the wrong question. Leaders don't need to know "how much have we done?" — they need to know "how likely are we to succeed?" Confidence scoring answers that directly.

"We're 70% done."

You hear this in every weekly review. But what does it actually mean? In most cases, it means the visible work is complete but the risky work hasn't started. It means early progress looked fast, but integration and edge cases are about to arrive. It means the number got inflated to signal momentum.

Percentage progress is the default way we track goals, but it answers the wrong question. Leaders don't need to know "how much have we done?" They need to know "how likely are we to succeed?"

Confidence scoring answers that directly.

The core problem with % progress

Percentage progress implicitly assumes work is linear, predictable, and evenly distributed. This breaks down in software and knowledge work.

Common failure modes:

  • False precision. "70% done" often means the visible work is complete, not the risky work.
  • Front-loaded optimism. Early progress looks fast; integration and edge cases arrive late.
  • Gaming behavior. Percentages get inflated to signal momentum or safety.
  • Misleading aggregation. Averaging percentages across teams or KRs is largely meaningless.

Percentage progress answers: "How much have we done?"
Leaders usually need: "How likely are we to succeed?"

What confidence scoring gets right

Confidence reframes progress as probability: "Given what we know right now, how likely are we to hit this goal?"

Why this works better:

  • Naturally incorporates risk. Dependencies, unknowns, and quality all affect confidence.
  • Forces real conversations. A drop from 80% → 60% confidence demands explanation.
  • Tracks trajectory, not activity. Direction matters more than raw completion.
  • Harder to fake. High confidence requires resolved risks.

Confidence is not vibes

Confidence scoring fails when it becomes emotional or political. It requires shared calibration.

A practical confidence scale:

Confidence   Meaning
90%+         Highly likely; only exceptional events stop delivery
70-85%       Likely, but dependent on a few known variables
50-65%       Genuinely uncertain; meaningful risks unresolved
<50%         Unlikely without intervention, re-scope, or resources
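
If you record confidence as a number, the scale above reduces to a simple lookup. Here is a minimal sketch in Python; the band text mirrors the table, and how the gaps between bands (66-69%, 86-89%) round is an assumption, not part of the scale:

    def confidence_band(score: float) -> str:
        """Map a 0-100 confidence score to the bands in the scale above."""
        if score >= 90:
            return "Highly likely; only exceptional events stop delivery"
        if score >= 70:  # 86-89% rounds down into this band by assumption
            return "Likely, but dependent on a few known variables"
        if score >= 50:  # likewise 66-69%
            return "Genuinely uncertain; meaningful risks unresolved"
        return "Unlikely without intervention, re-scope, or resources"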

Key rule

Confidence must be justified in terms of risks and assumptions, not feelings. When someone says "I'm at 60% confidence," they should be able to explain: "The main risk is X. If we resolve it by Y date, confidence goes up. If not, we need to re-scope."
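
One way to enforce this is to make risks and assumptions part of the update itself, so a bare number can't be submitted. A rough sketch, with illustrative field names rather than any prescribed schema:

    from dataclasses import dataclass, field

    @dataclass
    class ConfidenceUpdate:
        goal: str
        confidence: int                    # 0-100: "how likely are we to succeed?"
        main_risks: list[str] = field(default_factory=list)
        assumptions: list[str] = field(default_factory=list)
        what_changes_it: str = ""          # e.g. "if X is resolved by Y date, confidence goes up"

    update = ConfidenceUpdate(
        goal="Launch self-serve billing",
        confidence=60,
        main_risks=["Payment-provider migration still unvalidated"],
        what_changes_it="Finish the migration spike by Friday, or re-scope",
    )

The tooling matters less than the habit: a score with no risks attached shouldn't be accepted.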

Where % progress is still useful

Percentage progress works best for:

  • Intra-team execution tracking
  • Checklists and milestones
  • Genuinely linear work (migrations, content pipelines, infrastructure rollouts)

Percentage progress should rarely be used:

  • In executive reviews
  • Across teams
  • As a proxy for success

It answers a different question.

The strongest operating pattern

Confidence externally. % progress internally.

  • Teams use % progress to plan and execute.
  • Leaders review confidence to decide when to intervene.
  • Narrative explains changes in confidence: "Confidence dropped due to X" or "Confidence increased after Y was de-risked."

This avoids false certainty and hand-wavy optimism.

Use case                                          Metric
Team needs to know what to work on tomorrow       % progress
Leadership needs to decide whether to intervene   Confidence

Confidence improves accountability

With percentage progress:

  • Teams look "on track" until late
  • Failure feels sudden

With confidence:

  • Risk surfaces early
  • Intervention becomes normal
  • Misses are rarely surprises

Confidence enables course correction instead of postmortems.

A simple litmus test

Use this rule:

If leadership needs to decide whether to intervene, re-scope, or add resources, use confidence.
If a team needs to know what to work on tomorrow, use % progress.

Most companies invert this and pay for it.

How to implement confidence scoring

  1. Add confidence to check-ins. Every OKR update includes a confidence score (0-10 or 0-100%) alongside any notes.
  2. Calibrate as a team. In your first few reviews, discuss what different confidence levels mean. Build shared understanding.
  3. Track confidence over time. A goal that drops from 80% → 50% over three weeks is a different story than one that was always at 50%.
  4. Require explanation for changes. When confidence moves significantly, ask why. What changed? What's the risk?
  5. Focus reviews on low confidence. Items at 70%+ confidence don't need airtime. Items below 60% do.
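
As a rough illustration of steps 3-5, here is a minimal sketch that keeps a confidence history per key result, refuses large unexplained moves, and surfaces the items that deserve review airtime. The 15-point threshold and all names are assumptions, not how Runsheet works:

    from dataclasses import dataclass, field

    @dataclass
    class KeyResult:
        name: str
        history: list[int] = field(default_factory=list)  # confidence per check-in, 0-100
        notes: list[str] = field(default_factory=list)

        def check_in(self, confidence: int, note: str = "") -> None:
            previous = self.history[-1] if self.history else None
            # Step 4: a significant move with no explanation should not pass silently.
            if previous is not None and abs(confidence - previous) >= 15 and not note:
                raise ValueError(
                    f"{self.name}: confidence moved {previous}% -> {confidence}% with no explanation"
                )
            self.history.append(confidence)
            self.notes.append(note)

    def review_agenda(krs: list[KeyResult], threshold: int = 60) -> list[KeyResult]:
        # Step 5: spend review time on items below the confidence threshold.
        return [kr for kr in krs if kr.history and kr.history[-1] < threshold]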

The bottom line

  • % progress is a local execution tool
  • Confidence scoring is a strategic signaling tool
  • % progress creates false certainty at leadership level
  • Confidence without calibration devolves into vibes

Done well, confidence scoring doesn't reduce rigor. It exposes where rigor is missing.

Learn more

Runsheet tracks both confidence and progress for every key result. See how check-ins work, or explore more in our Signal: Credibility vs Intent series.

This article is part of our Signal series.
