Forcing Functions

#system-architecture #practical-application

What It Is

A forcing function is a physical constraint that makes undesired behavior impossible or drastically more expensive, altering the probability space directly rather than relying on mental reminders or intention. When your boss asks "what are you going to do next time?" after a failure, the wrong answer is "I'll remember to check" or "I'll be more careful" (mental todo list, costs willpower, fails under load). The right answer is "I'll implement forcing function X that makes this error physically impossible" (alters system architecture, zero ongoing cost, cannot fail through forgetting).

The distinction: mental reminders operate in user space (subject to depletion, attention limits, competing priorities). Forcing functions operate in the environment (physical reality, no cognitive load, always active). Mental commitment: "I will do X" (probability ≈ 0.3-0.6 depending on willpower state). Forcing function: "System prevents not-X" (probability ≈ 0.95-0.99, limited only by physical bypass).

This is prevention architecture made precise: don't make failure less likely through better intentions—make failure physically impossible through architectural constraints that alter the state space.

Mental Todo List vs Physical Forcing Function

The failure pattern:

```mermaid
graph TD
    A[Mistake occurs] --> B[Boss: What next time?]
    B --> C[Answer: I'll remember]
    C --> D[Mental note added]
    D --> E[Next deployment:<br/>Stressed, WM full]
    E --> F[Forget to check]
    F --> G[Same mistake]
    G --> A
    style G fill:#ff9999
```

The forcing function pattern:

```mermaid
graph TD
    A[Mistake occurs] --> B[Boss: What next time?]
    B --> C[Answer: I'll add<br/>forcing function]
    C --> D[Checklist implemented<br/>in CI/CD]
    D --> E[Next deployment]
    E --> F{Logs verified?}
    F -->|No| G[Deploy blocked]
    F -->|Yes| H[Deploy succeeds]
    G --> E
    style H fill:#99ff99
    style D fill:#99ccff
```

Comparison table:

| Approach | Cognitive Load | Probability of Success | Failure Mode | Sustainability |
|---|---|---|---|---|
| Mental reminder | High (occupies working memory) | 30-60% (depends on attention/stress) | Forget under load | Degrades over time |
| Habit/training | Medium (requires 30 reps) | 70-85% (after installation) | Stress reverts to old patterns | Good if maintained |
| Forcing function | Zero (externalized) | 95-99% (limited by physical bypass) | Deliberate circumvention only | Permanent until removed |

Altering Probability Space

Forcing functions don't make you "more likely" to do the right thing—they remove wrong options from the action space entirely, or make them so expensive that the Boltzmann distribution selects against them.

The probability transformation:

Without forcing function:

P(correct_action) = willpower_available × attention_on_task × time_not_stressed
                  ≈ 0.6 × 0.7 × 0.5 = 0.21 (21% success rate)

With forcing function:

P(correct_action) = 1 - P(deliberate_bypass)
                  ≈ 1 - 0.02 = 0.98 (98% success rate)
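As a quick sanity check, both success rates above can be reproduced in a few lines (the individual probabilities are the illustrative values from the text, not measurements):

```python
# Illustrative probabilities from the text above, not empirical values.
willpower_available = 0.6
attention_on_task = 0.7
time_not_stressed = 0.5

# Without a forcing function: success requires every factor to hold at once,
# so the probabilities multiply down.
p_mental = willpower_available * attention_on_task * time_not_stressed

# With a forcing function: failure requires a deliberate bypass.
p_bypass = 0.02
p_forcing = 1 - p_bypass

print(f"mental commitment: {p_mental:.2f}")   # mental commitment: 0.21
print(f"forcing function:  {p_forcing:.2f}")  # forcing function:  0.98
```

The multiplicative structure is the point: every factor the mental approach depends on is another way to fail.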

The mechanism:

| State Space | Available Actions | Selection |
|---|---|---|
| Before forcing function | [correct_action, 5 incorrect_actions] | Depends on willpower/attention (probabilistic) |
| After forcing function | [correct_action] or [correct_action, very_expensive_bypass] | Deterministic (correct) or very unlikely (bypass) |

The forcing function removed incorrect actions from possibility space. Not "made you more likely to choose correctly" but "made choosing incorrectly physically impossible or prohibitively expensive."

Categories of Forcing Functions

1. Physical Removal (Strongest)

Remove the problematic option from environment entirely.

Examples:

| Problem | Mental Solution (Fails) | Forcing Function (Works) |
|---|---|---|
| Oversleeping | "Set alarm, get up when it rings" | Phone charges across room (must stand to silence) |
| Checking phone during work | "I'll resist the urge" | Phone locked in drawer (4-unit cost to access) |
| Late-night eating | "I'll use willpower after 8pm" | Kitchen closed, eating window ended at 2pm (temporal constraint) |
| Deploying buggy code | "I'll remember to run tests" | CI/CD blocks merge until tests pass (cannot merge untested) |

The mechanism: Wrong action has infinite cost (literally impossible) or very high cost (>>5 willpower units). Thermodynamics selects correct action automatically.
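The CI/CD row can be sketched as a minimal gate. This is a sketch under assumptions, not a real CI API: `run_tests` and `gated_deploy` are hypothetical names standing in for a project's actual pipeline steps.

```python
# Minimal sketch of a merge/deploy gate. run_tests is a hypothetical
# stand-in for the project's real test-suite invocation.

def run_tests() -> bool:
    """Stand-in: replace with the actual test-suite invocation."""
    return True

def gated_deploy() -> str:
    # The gate removes "deploy untested" from the state space entirely:
    # no branch of this function reaches the deploy step without a
    # passing suite.
    if not run_tests():
        raise RuntimeError("Deploy blocked: tests failed")
    return "deployed"
```

The point is structural: the wrong action is not resisted at deploy time, it is unreachable.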

2. Temporal Constraints (Strong)

Make undesired action impossible during specific time windows.

Examples:

| Problem | Forcing Function | Mechanism |
|---|---|---|
| All-day grazing | Eating window closes at 2pm | After 2pm, eating = breaking system (high psychological cost) |
| Endless meetings | No meetings after 2pm | Calendar constraint makes deep work possible |
| Late deployments | Deploy freeze after 3pm | Prevents rushed evening deploys with degraded willpower |

The mechanism: Time-based prevention removes option from certain periods, channels behavior into constrained windows.

3. Procedural Locks (Strong)

Require specific sequence before action possible.

Examples:

| Problem | Forcing Function | Implementation |
|---|---|---|
| Hasty decisions | Mandatory 24-hour waiting period for purchases >$500 | Cannot execute immediately, cooling-off prevents impulse |
| Incomplete deployments | Pre-deploy checklist (CI/CD gates) | Each gate must pass: tests → review → staging → logs → prod |
| Skipping gym | Clothes laid out, gym bag by door, calendar block | Sequential cues make skipping require active dismantling |
| Forgetting medication | Pill organizer + phone alarm + visible placement | Must actively ignore multiple cues |

The mechanism: Correct action requires lower activation energy (path prepared) or incorrect action requires higher activation energy (must bypass multiple gates).
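A procedural lock like the pre-deploy checklist can be sketched as ordered gates, each of which must pass before anything downstream is even evaluated. The gate names follow the pipeline in the table above; the lambda checks are placeholders, not real tooling.

```python
# Ordered gates: the final action is unreachable until every earlier gate
# has passed, in sequence. Checks are placeholder lambdas.
GATES = [
    ("tests",   lambda: True),
    ("review",  lambda: True),
    ("staging", lambda: True),
    ("logs",    lambda: True),
]

def run_gates() -> str:
    for name, check in GATES:
        if not check():
            # Failing any gate blocks everything after it.
            raise RuntimeError(f"Blocked at gate: {name}")
    return "prod deploy unlocked"
```

Skipping a step isn't a lapse of memory here; it would require editing the gate list itself, which is exactly the kind of deliberate, expensive bypass the design intends.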

4. Social Commitment Devices (Moderate)

External accountability creates cost for violation.

Examples:

| Problem | Forcing Function | Cost Type |
|---|---|---|
| Skipping workouts | Prepaid trainer (money lost if miss) | Financial + social shame |
| Not shipping | Public ship date announcement | Reputation cost |
| Abandoning goal | Accountability partner with stakes | Relational cost |

The mechanism: Violation creates external cost (money, reputation, relationship) beyond internal willpower cost. Works but weaker than physical forcing functions (can still violate, just expensive).

5. Information Forcing Functions (Moderate)

Make failure visible immediately, creating feedback loop.

Examples:

| Problem | Forcing Function | Feedback |
|---|---|---|
| Weight gain | Daily weigh-in posted publicly | Immediate visibility prevents drift |
| Low productivity | Public tracking (GitHub commits, word count) | Cannot hide low output |
| Budget overspend | Real-time spending tracker on home screen | Every purchase visible immediately |

The mechanism: Feedback loop tightened from monthly/yearly to daily/immediate. Harder to ignore data when it's in your face. Works through information not physical constraint (weaker—can ignore data).

Forcing Function Design Protocol

When failure occurs, diagnose and externalize:

Step 1: Diagnose the Mechanism

Not "why did I fail?" (moral) but "what mechanism allowed failure?"

Question sequence:

  • What was the failure mode? (Be specific: "deployed without checking logs")
  • What made this action possible? (Logs check was optional, relied on memory)
  • What would make it impossible? (System requirement: verify logs before deploy button active)

Step 2: Design Physical Constraint

The best answer moves up this hierarchy:

Weakest → Strongest:
1. Mental reminder ("I'll remember next time")
2. Checklist (can skip if stressed)
3. Social commitment (expensive to violate but possible)
4. Information forcing (make failure visible)
5. Procedural lock (requires sequence)
6. Temporal constraint (impossible during window)
7. Physical removal (option doesn't exist)

Aim for level 5-7. Levels 1-2 fail under load. Levels 3-4 rely on social/psychological costs that deplete over time.

Step 3: Implement and Test

  • Install the forcing function
  • Attempt to bypass it (red team your own system)
  • If bypass is easy (<3 units activation cost), strengthen the constraint
  • If bypass requires >5 units or is physically impossible, forcing function is adequate

Step 4: Maintain

Most forcing functions require zero maintenance (physical constraints persist). But check:

  • Are apps staying deleted? (Could reinstall)
  • Are procedural locks still enforced? (Could disable)
  • Are temporal constraints respected? (Could ignore)

If you're bypassing your own forcing functions, you need stronger constraints or you're fighting the wrong battle (maybe the constrained behavior actually serves a function—debug that first).

The Boss Question: Proper Answers

Scenario: Made mistake at work, boss asks "What will you do next time?"

Bad answers (mental commitments):

| Answer | Why It Fails | Probability of Repeat |
|---|---|---|
| "I'll be more careful" | Vague, no mechanism | 80% (will forget) |
| "I'll remember to check" | Mental load, competes with other priorities | 60% (fails under stress) |
| "I'll try harder" | Moralistic, no architecture change | 90% (trying ≠ system change) |
| "I won't let it happen again" | Promise without mechanism | 70% (good intentions insufficient) |

Good answers (forcing functions):

| Answer | Forcing Function Type | Probability of Repeat |
|---|---|---|
| "I'll add an automated test that catches this" | Physical (cannot merge if test fails) | <5% (bypass requires deliberate override) |
| "I'll create a pre-deploy checklist in CI/CD" | Procedural lock | <10% (must actively skip steps) |
| "I'll pair program for these changes" | Social commitment | <20% (partner catches errors) |
| "I'll add this to code review requirements" | Information forcing | <15% (reviewer sees issue) |

The template:

"I'll implement [specific forcing function] that makes [error type]
physically impossible/very expensive by [mechanism]."

Not "I will do better" but "I will change the system so I cannot make this error even if I try."

Externalizing vs Mental Load

The fundamental shift: externalize constraints into environment, don't internalize them as cognitive burden.

Internal (fails):

Your brain's job:
├─ Remember to check logs
├─ Remember to run tests
├─ Remember to update docs
├─ Remember to notify team
├─ Remember to backup data
├─ Remember to verify staging
└─ [10+ other items competing for <WikiLink href="/wiki/working-memory">4-7 slots</WikiLink>]

Result: Items get dropped, especially under stress

External (succeeds):

Environment's job:
├─ CI/CD: Cannot deploy without passing tests
├─ Pre-commit hook: Cannot commit without formatted code
├─ Calendar: Meeting reminder pops up automatically
├─ Linter: Cannot merge with warnings
├─ Script: Backs up before destructive operations
└─ Checklist: Visibly incomplete until all items done

Your brain's job: Execute what system requires
Result: Cannot forget because system enforces
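The last item, a checklist that stays visibly incomplete, can be sketched as state the environment holds so your working memory doesn't have to. The item names are illustrative.

```python
# The environment, not your memory, tracks what remains before deploy.
# Item names are illustrative.
CHECKLIST = {
    "tests pass": False,
    "logs verified": False,
    "team notified": False,
}

def mark_done(item: str) -> None:
    CHECKLIST[item] = True

def can_deploy() -> bool:
    # Visibly incomplete until every item is checked off;
    # there is nothing to remember and nothing to drop under stress.
    return all(CHECKLIST.values())
```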

The resource equation:

Cognitive_load = Internal_constraints × Stress_multiplier

Where:
  Internal_constraints = things you must remember
  Stress_multiplier = 1.5-3× under pressure

External constraints have:
  Cognitive_load = 0 (environment handles it)

This is why prevention architecture works: move the constraint from your head into the world. Your brain has enough to do without maintaining a list of "things I should remember not to do."

Forcing Functions for Common Failure Modes

Failure: Skipping important routine

| Mental Approach | Forcing Function |
|---|---|
| "I'll remember to do morning routine" | Calendar alarm + physical cues (journal visible, coffee preset) + braindump template opened automatically |
| "I'll remember to work out" | Prepaid trainer (money lost if skip) or gym bag blocks doorway (cannot leave without confronting) |

Failure: Making decision when depleted

| Mental Approach | Forcing Function |
|---|---|
| "I'll avoid big decisions when tired" | Rule: No decisions after 6pm or below Tier 3 willpower. Calendar blocks "decision time" to morning only. |
| "I'll sleep on it" | Mandatory 24-hour delay for decisions >$X or impact >Y. Enforce through requiring manager approval or automated delay. |

Failure: Consuming time-wasting content

| Mental Approach | Forcing Function |
|---|---|
| "I'll limit social media to 30 min/day" | Apps deleted entirely (must reinstall = 6 units activation cost) |
| "I'll only check during breaks" | Website blockers active during work hours (physically cannot access) |

Failure: Breaking diet/eating window

| Mental Approach | Forcing Function |
|---|---|
| "I won't eat after 2pm" | Kitchen closed (food put away, lights off) + eating window tracked publicly |
| "I'll eat healthy" | Don't buy junk food (not present = cannot eat, prevention costs 0 units) |

Failure: Procrastinating on important task

| Mental Approach | Forcing Function |
|---|---|
| "I'll work on this first thing" | Calendar: 9-11am blocked, notifications off, single tab open with task, phone in drawer |
| "I'll finish by Friday" | Public commitment + pairing session scheduled + deliverable required for next meeting |

The Pre-Mortem Protocol

Before implementing solution, run pre-mortem: "How could this forcing function fail?"

Example - Phone in drawer:

Potential bypasses:

  • Walk to drawer and retrieve phone (costs 4 units, but possible)
  • Use computer for same distractions (alternative route)
  • Forget to put phone in drawer initially (forcing function not activated)

Strengthening:

  • Drawer → locked drawer (higher cost)
  • Block social media on computer too (close alternative route)
  • Morning checklist: "Phone in locked drawer" (gate on work session)

The test: Try to bypass your own forcing function. If bypass costs <3 willpower units, strengthen. If bypass costs >5 units or is physically impossible, forcing function is adequate.

Meta-Causality: Be Causal Once About the Thing That Will Be Causal Forever

Forcing functions represent the deepest move in agency as the ability to be causal: being causal at the architectural level rather than the instance level.

Consider the difference:

| Level | Description | Cost Structure | Example |
|---|---|---|---|
| Instance-level causality | Force each correct action individually | Ongoing, depleting, scales linearly | Resist checking phone 50× daily (50 × 2 units = 100 units/day) |
| Architectural causality | Install forcing function once | One-time, then zero | Lock phone in drawer (10 units once, 0 thereafter) |

The math is stark:

Instance strategy:
  Day 1: 100 willpower units
  Day 30: 3000 willpower units cumulative
  Day 365: 36,500 willpower units cumulative
  → Exhaustion, failure, reversion

Architecture strategy:
  Day 1: 10 willpower units (install forcing function)
  Day 30: 10 willpower units total
  Day 365: 10 willpower units total
  → Sustainable indefinitely
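The comparison is just arithmetic; using the illustrative unit costs from the text:

```python
# Cumulative willpower cost under each strategy, using the illustrative
# figures above (100 units/day to resist, 10 units to install).

def instance_cost(days: int, units_per_day: int = 100) -> int:
    # Instance strategy: pay the full resistance cost again every day.
    return days * units_per_day

def architecture_cost(days: int, install_units: int = 10) -> int:
    # Architecture strategy: pay once at install, zero thereafter.
    return install_units

print(instance_cost(365))      # 36500
print(architecture_cost(365))  # 10
```

One cost grows linearly without bound; the other is a constant. Any nonzero daily cost eventually loses to a one-time cost.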

This is why forcing functions embody Level 4 agency: you're not predicting behavior or even influencing individual decisions—you're engineering the probability distribution itself. The forcing function removes certain states from possibility space entirely.

The principle generalizes beyond forcing functions to all architectural interventions:

| Instance Causality (Expensive) | Architectural Causality (Cheap) |
|---|---|
| Resist bad food daily | Don't buy bad food (prevention) |
| Remember morning routine | Automatic trigger sequence |
| Stay focused on work | Environment blocks distractions |
| Choose healthy response | Default script loads healthy response |

The shift: from "I must be causal at every decision point" to "I was causal once about the architecture, now the architecture is causal for me."

This is the mechanism behind the 30x30 pattern. You're causal for 30 days to install triggers, defaults, and prevention. Then the architecture handles ongoing execution. Your one-time causal investment compounds into permanent probability shift.

The meta-question when facing any behavioral challenge: "What forcing function, installed once, would make this behavior automatic forever?"

Integration with Existing Frameworks

Prevention Architecture: Forcing functions ARE prevention architecture—specific implementations of the general principle (remove option vs resist option).

Superconsciousness: Forcing functions minimize need for kernel mode. If system prevents error automatically, you don't need conscious override. Reserve kernel mode for installation and emergencies, let forcing functions handle routine constraints.

Activation Energy: Forcing functions work by altering energy landscape. Correct action: low activation energy (easy path). Incorrect action: high activation energy (blocked or expensive) or removed entirely (infinite cost).

Boltzmann Distribution: P(behavior) ∝ e^(-E/kT). Forcing functions increase E for undesired behavior dramatically. Even with high available energy (T), the exponential makes P(undesired) ≈ 0.
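A small sketch of that exponential suppression, with energies in illustrative "willpower units" (the 0.1-unit and 4-unit phone-checking costs come from the examples elsewhere in this note):

```python
import math

def boltzmann_p(energies: dict, kT: float = 1.0) -> dict:
    """Normalized P(behavior) proportional to e^(-E/kT) over actions."""
    weights = {a: math.exp(-e / kT) for a, e in energies.items()}
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}

# Before: checking the phone costs ~0.1 units, so it dominates.
before = boltzmann_p({"work": 1.0, "check_phone": 0.1})
# After: phone locked in drawer raises the cost to ~4 units.
after = boltzmann_p({"work": 1.0, "check_phone": 4.0})

print(round(before["check_phone"], 2))  # 0.71
print(round(after["check_phone"], 2))   # 0.05
```

Raising the energy of the undesired action by a few units drops its probability by more than an order of magnitude, which is the "exponential makes P(undesired) ≈ 0" claim in numbers.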

Cybernetics: Forcing functions are control system constraints that bound actuator behavior. Cannot execute action outside constrained space regardless of control signal.

Common Objections and Responses

Objection: "This seems rigid/controlling"

Response: Mental self-control is MORE rigid—requires constant vigilance, depletes willpower, fails under stress. Forcing functions free you from self-monitoring. The phone in drawer means you DON'T have to resist checking it 50 times per day. That's liberation, not constraint.

Objection: "What if I need to bypass for emergency?"

Response: Good forcing functions have emergency escape at high cost (>5 units). Locked drawer can be unlocked if truly urgent. But "truly urgent" rarely happens. If you're bypassing weekly, it's not emergency protocol—it's inadequate forcing function. Strengthen or redesign.

Objection: "I should be able to control myself without external crutches"

Response: Moralistic thinking. You ARE the system designer. Installing forcing functions IS control—architectural control that's sustainable. "Raw willpower" control depletes and fails. Which produces better outcomes: working system or exhausted human?

Examples from Will's Systems

DoorDash deletion (not "limited use"):

  • Problem: Wasting money and time on delivery food
  • Mental solution: "I'll only use it on weekends" (failed)
  • Forcing function: App deleted, redirect installed if attempt reinstall
  • Result: Zero DoorDash spending for 70+ days, zero willpower spent resisting

Eating window closure at 2pm:

  • Problem: Late-night snacking while tired (low willpower)
  • Mental solution: "I'll resist evening cravings" (failed repeatedly)
  • Forcing function: Kitchen closed, last meal 1:30pm, 12-hour fast to next meal
  • Result: Cannot eat after 2pm without breaking system (high psychological cost), evening resistance eliminated

Phone locked during work:

  • Problem: Constant distraction, fragmented focus
  • Mental solution: "I'll check phone only during breaks" (failed)
  • Forcing function: Phone physically locked in drawer, requires standing/walking/unlocking to access
  • Result: Checking cost increased from 0.1 units to 4 units, thermodynamics selects for continued work

Morning mantra as gate:

  • Problem: Skipping morning sequence, going directly to reactive mode
  • Mental solution: "I'll remember to do sequence" (failed)
  • Forcing function: Work session cannot begin until mantra logged (200+ day streak)
  • Result: Mantra becomes prerequisite, work naturally follows

Designing Your Own Forcing Functions

The design process:

  1. Identify the failure mode precisely

    • Not "I'm undisciplined" but "I check phone 50× daily during work"
    • Specific behavior, specific context, specific frequency
  2. Determine what currently makes it possible

    • Phone is visible and accessible (0.1 unit cost to check)
    • No barrier between impulse and action
    • Low activation energy
  3. Design constraint that removes or dramatically increases cost

    • Options: a) Remove phone from environment (infinite cost) b) Lock phone in drawer (4 unit cost) c) Install app blocker (moderate cost) d) Social commitment (reputational cost)
  4. Implement strongest feasible forcing function

    • Strongest: Phone off and locked away
    • Feasible compromise: Phone in drawer during work blocks
    • Weakest acceptable: App time limits (still bypassable but creates friction)
  5. Test bypass difficulty

    • Actually try to bypass it
    • If bypass costs <3 units, strengthen
    • If bypass costs >5 units or is impossible, adequate
  6. Monitor and iterate

    • Are you bypassing? (Forcing function too weak)
    • Is it causing problems? (Forcing function too rigid for actual needs)
    • Adjust until it's strong enough to prevent failure but flexible enough for legitimate use

The Forcing Function Mindset

When failure occurs, the automatic question shift:

Old mindset:

  • "What's wrong with me?" (moral)
  • "How do I be better next time?" (vague)
  • "I'll try harder" (no mechanism)

Forcing function mindset:

  • "What system allowed this?" (diagnostic)
  • "What forcing function makes this impossible?" (design)
  • "I'll implement constraint X that alters the probability space" (engineering)

This is mechanistic thinking applied to failure analysis. Not character improvement but architectural improvement. The system that fails is the system that allows failure to be possible. Redesign the system.

Key Principle

Design systems that make failure impossible, not willpower that makes success likely - Mental reminders occupy working memory, cost willpower, fail under stress. Forcing functions externalize constraints into environment, operate automatically, succeed regardless of cognitive state. When boss asks "what next time?" answer with forcing function that alters probability space, not promise to try harder. Physical removal strongest (option doesn't exist), temporal constraints strong (impossible during window), procedural locks strong (sequence required), social/information moderate (bypassable but costly). The mechanism: remove incorrect actions from state space or increase activation energy so thermodynamics selects correct action automatically. Don't fight yourself with conscious override repeatedly—engineer environment so correct behavior is path of least resistance. Test by attempting bypass: <3 units = too weak, >5 units = adequate. Forcing functions free you from self-monitoring burden, enabling sustained attention on priorities that matter.


The best self-control is designing systems where you don't need self-control. Make the right action automatic and the wrong action impossible.