
The Update Protocol

Turns belief revision from a vague aspiration into a structured, pre-committed practice.


Onramp · Foundation · Core Method

01 // The Codex Lens

Understanding bias does not fix bias. Wanting truth does not produce truth.

This is the gap where most intellectual frameworks die. People who understand confirmation bias still do not change their minds when the evidence demands it. People who value honest inquiry still hold positions that the evidence no longer supports. The knowledge is there. The behavior does not follow. The reason is simple and uncomfortable: belief revision happens under pressure, in real time, with identity and social cost attached. Without a method, the default wins. And the default is to stay where you are.

Control recruits here. If you cannot revise your positions, you are calcifying, regardless of how intelligent or well-intentioned you are. The certainty feels like clarity. The refusal to update feels like consistency. But consistency without responsiveness to evidence is calcification. It just does not feel that way from the inside. This is how institutions become rigid, how leaders become blind, how entire fields maintain conclusions that the evidence abandoned years ago.

Decay also recruits here, from the opposite direction. If you refuse to hold any position firmly enough that updating would cost something, you have given up on the project of knowing anything at all. If you never commit, you never have to update. That is not intellectual flexibility. That is dissolution dressed as sophistication.

The Update Protocol holds the range between these failures. It is the practice of committing to beliefs honestly and revising them honestly, with a method that survives the pressure you put on yourself when revision becomes uncomfortable.

Knowing you should change your mind and actually changing it are separated by everything in your psychology. The Update Protocol closes that gap with structure: specific conditions, written down in advance, that hold you to your own standard of honesty when the pressure to stay comfortable is strongest.

02 // The Concept

The Update Protocol is the practice of pre-committing to specific conditions under which you will revise a belief, recording those conditions before encountering the evidence, and following through when the conditions are met.

Three things make this more than common sense.

First, the pre-commitment. You decide what counts as disconfirming evidence before the motivated reasoning activates. This matters because in the moment, the mind will find reasons to dismiss anything that threatens its current position. The evidence will feel weaker than it is. The methodology will seem flawed. The source will appear biased. These reactions are automatic, fast, and convincing. The protocol bypasses them by having the conditions already locked in. The question is no longer "Is this evidence strong enough to change my mind?" The question becomes "Have the conditions I already specified been met?"

Figure — Update Protocol timeline, three stages: conditions are written in calm, before pressure; evidence arrives and the mind resists; the written record decides whether the conditions are met. The protocol bypasses motivated reasoning by locking the decision point in advance.

Second, the written record. Writing it down is the mechanism, not a nice-to-have. Verbal commitments to openness evaporate under pressure. Memory edits itself to protect existing beliefs. A written statement ("I will revise my position on X if I observe Y") creates a fixed point that the mind cannot quietly rewrite. It creates accountability to yourself, and if shared, to others.

Third, graduated updating. The protocol is not binary. It does not ask you to believe something completely or abandon it entirely. It asks: what evidence would move you from high confidence to moderate confidence? From moderate to uncertain? From uncertain to "I was probably wrong"? This matters because the false choice between total commitment and total abandonment is one of the reasons people resist updating in the first place. If the only options are "I was right all along" and "Everything I believed was wrong," most minds will choose the first regardless of the evidence. The protocol creates a middle path: honest, incremental adjustment.

Figure — Confidence gradient: evidence moves belief through graduated states, from 90% (highly confident) to 70% (moderate) to 50% (uncertain) to 30% (probably wrong). Updating is not all-or-nothing. It is incremental, honest adjustment.
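The graduated ladder can be sketched in code. This is a minimal illustration, not a formal method: the band names, cutoffs, and the 20-point step are assumptions chosen to mirror the gradient above.

```python
# Illustrative sketch of graduated updating: confidence moves in steps
# between named bands instead of flipping between "right" and "wrong".
# Band labels and cutoffs are assumptions, chosen for illustration.

BANDS = [
    (90, "highly confident"),
    (70, "moderately confident"),
    (50, "uncertain"),
    (30, "probably wrong"),
]

def label(confidence: int) -> str:
    """Map a rough confidence percentage to a named band."""
    for cutoff, name in BANDS:
        if confidence >= cutoff:
            return name
    return "probably wrong"

def step_down(confidence: int, points: int = 20) -> int:
    """One honest, incremental adjustment: move the needle, not the world."""
    return max(confidence - points, 0)

belief = 90
print(label(belief))        # highly confident
belief = step_down(belief)  # disconfirming evidence arrives
print(belief, label(belief))  # 70 moderately confident
```

The point of the sketch is the middle path: disconfirming evidence calls `step_down`, never a jump straight to zero.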

There is one more distinction worth making. "Being open-minded" is a self-description. The Update Protocol is a procedure. Self-descriptions are aspirational. They depend on willpower, mood, and social context. Procedures work even when willpower fails, because they operate on structure rather than character. This is the same distinction the Codex draws between a philosophy you agree with and an operating system you run.

03 // The Practice

The diagnostic question is this: "What would change my mind about this? Can I write it down, specifically, right now?"

If you cannot answer that question for a belief you hold with conviction, you have found a belief that is no longer connected to evidence. It may still be correct. But it is no longer responsive to reality, and that is a problem regardless of whether the conclusion happens to be right.

Three practices build the protocol into a working method:

The pre-mortem commitment. Before entering a debate, a decision, or an investigation, write down what evidence would cause you to revise your position. Be specific. Not "strong evidence against" but concrete, observable conditions: "If the retention data shows less than a 5% improvement after eight weeks, this feature is not working." "If three independent sources I trust report the opposite, I will downgrade my confidence to uncertain." The specificity is the safeguard. Vague commitments ("I would change my mind if the evidence was really compelling") are indistinguishable from no commitment at all, because "really compelling" will always mean "more compelling than whatever I just saw."
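A pre-mortem commitment is specific enough to be checked mechanically. Here is a hypothetical sketch using the retention example above; the class, field names, and numbers are illustrative, not a real API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the record cannot be quietly rewritten
class UpdateCommitment:
    belief: str        # the position being held
    condition: str     # human-readable, specific, observable
    threshold: float   # the number that decides
    window_weeks: int  # when the check happens

    def met(self, observed: float) -> bool:
        """Not 'is this evidence strong enough to change my mind?' but
        'was the condition I already specified met?'"""
        return observed < self.threshold

# The retention example, written down before launch.
commitment = UpdateCommitment(
    belief="The new feature improves 30-day retention",
    condition="retention improvement below 5% after eight weeks",
    threshold=5.0,
    window_weeks=8,
)

print(commitment.met(observed=2.0))  # True: conditions met, kill the feature
```

The `frozen=True` flag is the code version of the written record: once the commitment exists, nothing can edit it in place to match whatever the evidence turned out to be.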

The confidence check. Assign a rough confidence level to the belief. Not formal probability. Just honest self-assessment: Am I 90% sure about this? 70%? 50%? Then ask what it would take to move that number down by 20 points. This turns the abstract aspiration of "being open to being wrong" into a concrete target. You are not asking yourself to abandon your position. You are asking yourself what would make you slightly less certain. That is a question the mind can answer without triggering an identity crisis.

The public record. When possible, share your update conditions with someone else. This inverts the default social incentive. Normally, changing your mind in public looks like weakness. It signals inconsistency. People avoid it even when the evidence is overwhelming. But if you pre-commit to conditions publicly, following through on an update becomes evidence of integrity rather than evidence of failure. You said what would change your mind. The conditions were met. You followed through. That earns trust rather than losing it. This is where the protocol connects to cooperation: it turns private honesty into a public signal that others can rely on.

The hardest part is following through when the conditions are met. This needs to be said plainly. Everything in your psychology will resist. The evidence will feel weaker than it looked when you wrote down your conditions. You will find reasons why this case is special, why the data needs more time, why the threshold should have been set differently. The protocol does not eliminate this resistance. It makes the resistance visible. You wrote down what would change your mind. The conditions were met. You are now choosing not to follow through. That visibility is the intervention. You can still refuse to update. But you can no longer pretend that the evidence was insufficient.

04 // In the Wild

A startup team believed their new feature would increase user retention. Before launch, the product manager wrote down a commitment: "If 30-day retention does not improve by at least 5% within eight weeks, we kill the feature and reallocate the engineering team." Eight weeks later, retention was up 2%. The team spent three meetings arguing the data was preliminary, the sample size was too small, the timing coincided with a holiday. The product manager pointed to the written commitment. They killed the feature. Six months later, the engineers they freed built the feature that actually moved the number. The protocol cost them a project they loved. It saved them six months of drift on a project that was not working.

A person who held strong views on a specific policy question asked themselves: "What would I need to see to change my mind about this?" They struggled to answer. They could not name a single piece of evidence that would shift their position. That inability was the diagnostic. It did not mean the position was wrong. It meant the position had stopped being a conclusion from evidence and had become a piece of identity. They did not update that day. But they noticed something about how they were holding the belief, and that noticing changed what happened the next time evidence arrived.

A researcher hypothesized that a particular compound would show an effect in a specific assay. Before running the experiment, she wrote: "If the effect size is below 0.3 at p < 0.05 with n > 100, this hypothesis is wrong and I move to the alternative." The result came back at 0.22. Her collaborator wanted to adjust the threshold, run a subgroup analysis, try a different assay. She pointed to the pre-registration. They moved to the alternative hypothesis. The alternative turned out to be the one worth pursuing. The protocol did not just save them from a wrong answer. It saved them from the months they would have spent defending a wrong answer because they had already invested in it.

Each of these situations is ordinary. None requires unusual intelligence or unusual courage. What they require is a method that operates when intelligence and courage are not enough.

05 // Closing

The next time you find yourself certain about something that matters, try this: write down what would change your mind. Be specific enough that a stranger could check whether your conditions were met. Then keep that piece of paper where you will see it again. When the evidence arrives, and it will, you will discover whether you were practicing honesty or just describing yourself as honest. That discovery, uncomfortable as it may be, is where the real work starts.

ROOTS
Where This Comes From

The Codex did not invent the Update Protocol. It assembled and framed a practice that others built over decades. What follows is the intellectual history: where these ideas originated, who developed them, and where to go if you want to study them beyond what this page covers.

Karl Popper's falsificationism, developed in The Logic of Scientific Discovery (1934), established the foundational principle: meaningful claims must specify what would falsify them. A belief that no evidence could contradict is not a strong belief. It is an unfalsifiable one. The Update Protocol takes this out of the philosophy of science and applies it to personal belief. If you cannot say what would change your mind, your belief is not functioning as a claim about reality. Popper remains the starting point for anyone interested in the epistemology underneath the protocol.

Eliezer Yudkowsky and the rationalist community on Less Wrong (beginning around 2007) translated Popper's principle into personal practice. Yudkowsky's framing of "making beliefs pay rent" captured the operational point: beliefs should generate predictions, and when predictions fail, the beliefs should update. The Center for Applied Rationality (CFAR), founded in 2012, developed "Trigger Action Plans," a formalized structure for pre-committing to specific responses triggered by specific conditions. The Update Protocol adapts this format directly. For readers who want the practical rationalist framework in depth, Yudkowsky's Rationality: From AI to Zombies and CFAR's workshop materials are the primary sources.

Pre-registration in scientific methodology formalized the same logic at the institutional level. Researchers declare their hypotheses and methods before collecting data, specifically to prevent post-hoc rationalization. The Update Protocol applies pre-registration to personal belief. The replication crisis in science (intensifying through the 2010s) demonstrated at institutional scale what happens when the update mechanism fails: entire fields maintained conclusions that the evidence no longer supported because the social, career, and publishing costs of revision were too high. The crisis was not primarily a failure of method. It was a failure of willingness to follow through on what the methods revealed.

Philip Tetlock's research, published in Expert Political Judgment (2005) and Superforecasting (2015), provided the empirical proof. Tetlock tracked thousands of forecasters over years and found that the strongest predictor of accuracy was not domain expertise or intelligence. It was the frequency and honesty of updating. Superforecasters treated every new piece of information as an occasion to adjust. The worst forecasters treated their initial judgment as something to defend. For anyone who wants to see the evidence that update discipline produces measurably better results, Tetlock's work is the place to go.

Two limitations are worth naming. The protocol is strongest for empirical questions where evidence can be specified in advance. It handles "What would change my mind about whether this drug is effective?" well. It handles "What would change my mind about whether this painting is beautiful?" poorly. The protocol is not universal, but its domain covers more territory than most people assume. And the protocol requires honest initial commitment. A person who sets deliberately impossible conditions ("I would change my mind if every physicist in the world signed a letter") is performing the protocol without practicing it. If your conditions would embarrass you when read by someone reasonable, they are probably not honest.