Most physical theories begin by choosing mathematics — equations, symmetry groups, particles — and then asking whether nature agrees. If predictions match experiment, the theory survives. If not, it’s revised.
But what if we flipped that logic?
Instead of starting with equations, suppose we begin with the most unavoidable requirement in science: experiments must produce repeatable outcomes. If a result measured today is to be compared with one measured tomorrow, there must be a physical record of the earlier outcome. That single demand — that experiments leave stable records — turns out to have deep consequences.
If records exist, physics must contain stable physical structures capable of storing information. Stability requires tolerance to microscopic fluctuations, which means many microscopic states must correspond to the same macroscopic record — a many-to-one mapping called coarse-graining. And if the microscopic dynamics is reversible (as in both classical mechanics and quantum mechanics), then coarse-graining inevitably produces macroscopic irreversibility. In plain terms: entropy isn’t an added rule — it is forced by the existence of records.
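The step from reversible micro-dynamics to macroscopic irreversibility can be made concrete with a toy numerical sketch (not part of the BCB formalism; all names and parameters here are illustrative). A probability distribution over microstates is evolved by a fixed permutation — an exactly reversible map — while we track the Shannon entropy of its coarse-grained image.

```python
import math
import random

random.seed(0)

N_MICRO = 1024                  # number of microstates
N_CELLS = 16                    # number of coarse-grained macro-cells
BLOCK = N_MICRO // N_CELLS      # microstates per cell

def cell(s):
    """Coarse-graining map: many microstates -> one macroscopic cell."""
    return s // BLOCK

# Reversible microscopic dynamics: a fixed permutation of the microstates.
perm = list(range(N_MICRO))
random.shuffle(perm)

# Low-entropy initial macrostate: all probability uniform inside cell 0.
p = [1.0 / BLOCK if cell(s) == 0 else 0.0 for s in range(N_MICRO)]

def coarse_entropy(p):
    """Shannon entropy (bits) of the coarse-grained distribution."""
    q = [0.0] * N_CELLS
    for s, ps in enumerate(p):
        q[cell(s)] += ps
    return -sum(x * math.log2(x) for x in q if x > 0.0)

history = [coarse_entropy(p)]
for _ in range(3):
    p = [p[perm[s]] for s in range(N_MICRO)]   # reversible (bijective) step
    history.append(coarse_entropy(p))

# history starts at 0 bits and rises once the permutation scatters
# the support of p across the coarse cells.
print(history)
```

Because each update is a permutation, the fine-grained distribution is merely relabelled and its entropy is exactly conserved; only the coarse-grained description loses information. That is the sense in which entropy growth is forced by coarse-graining rather than postulated.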
From there, the structure tightens. If microscopic evolution is reversible but records are irreversible, there must be a conversion between the two: a ratio linking microscopic progression to irreversible information commitment. In the BCB framework, this appears as Ticks-Per-Bit (TPB) — the amount of substrate evolution required to lock in one bit of physical record. Any theory that allows repeatable experiments must contain something equivalent to this conversion. It isn’t a modelling choice; it’s structurally necessary.
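As a minimal formalisation (the symbols below are illustrative, not taken from the BCB papers), the conversion can be written as a ratio of substrate updates to committed record bits, and Landauer's bound then ties the record-commitment rate to a minimum entropy cost:

```latex
\mathrm{TPB} \;\equiv\; \frac{\Delta n_{\mathrm{ticks}}}{\Delta n_{\mathrm{bits}}},
\qquad
\Delta S \;\ge\; k_B \ln 2 \,\cdot\, \frac{\Delta n_{\mathrm{ticks}}}{\mathrm{TPB}} .
```

Any theory with reversible micro-dynamics and irreversible records must contain some such ratio, whatever symbol it carries.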
Finite observational resolution then pushes the theory toward an effective field theory description at accessible scales. And if the particle-like structures we observe are stable, localised excitations in three spatial dimensions, mathematics forces the presence of a specific stabilising term — a Skyrme-type structure — to prevent collapse: by Derrick’s theorem, a scalar theory with only two-derivative and potential terms admits no stable localised solitons in three dimensions. Even gauge symmetry emerges not as an aesthetic assumption, but as a consequence of redundancy in the microscopic description.
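The collapse argument is the standard Derrick scaling analysis. For a static field configuration in three spatial dimensions, rescaling \(x \to \lambda x\) sends the gradient energy \(E_2\), the potential energy \(E_0\), and a quartic-derivative (Skyrme-type) contribution \(E_4\) to:

```latex
E(\lambda) \;=\; \lambda\, E_2 \;+\; \lambda^{3}\, E_0 \;+\; \lambda^{-1}\, E_4 .
```

With \(E_4 = 0\), the energy decreases monotonically as \(\lambda \to 0\) and the soliton shrinks to a point. A positive \(E_4\) diverges in that limit, so \(dE/d\lambda = 0\) fixes a finite equilibrium size — which is why a stabilising term of this type is forced rather than chosen.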
Why This Matters for BCB
In the earlier BCB Fold Lagrangian paper, the BCB framework was constructed explicitly as a working quantum field theory. That paper wrote down the action, computed particle properties, reduced the number of free parameters, and showed how masses and couplings could be derived from fold geometry and information-theoretic principles. It answered the question:
“What does BCB predict?”
This inevitability programme answers a different question:
“Why should a theory like BCB exist at all?”
The remarkable result is that the core structural features of BCB — irreversible record commitment, TPB, entropy-weighted time, stable fold excitations, compact gauge symmetry, and an EFT structure at accessible scales — are not arbitrary design choices. They are precisely the structures any empirically meaningful theory must contain if it allows repeatable experiments with stable records.
That doesn’t yet prove BCB is the only possible theory in all of physics. But it shows something powerful: BCB sits inside a tightly constrained region of theory space defined by unavoidable logical requirements. The earlier Lagrangian paper showed BCB works mathematically. This work shows why something with BCB’s architecture may be unavoidable in the first place.
In short:
- The Lagrangian paper demonstrated consistency and predictive power.
- The Inevitability Programme demonstrates structural necessity within the admissible theory class.
The overlap between the two is not accidental.
If physics must allow experiments to have outcomes, and if those outcomes must be stored physically, then much of the architecture we associate with modern field theory may not be optional at all.
And that is a very different starting point for fundamental physics.