The Number That Physics Cannot Explain

There is a number woven into the fabric of nature:

α ≈ 1/137.036

It has no units — it is a pure number, a ratio, the same regardless of how you measure it or what units you use. Physicists call it the fine-structure constant, denoted by the Greek letter α (alpha).

This number controls how strongly light interacts with matter. It determines the colours that atoms emit when excited, the precise energy levels of electrons, and whether materials are transparent or opaque. Change it even slightly and atoms, chemistry, and life as we know them would cease to exist.

It is one of the most precisely measured quantities in all of science — and one of the least understood.

In our best current theory of physics, α is not calculated. It is measured from experiment and simply written into the equations by hand. There is no deeper explanation for why it takes this particular value. Feynman called it “one of the greatest damn mysteries of physics.” Pauli reportedly said he would ask God to explain it when he died.

This paper takes a different approach. Rather than asking why α ≈ 1/137, it asks a prior question:

What basic requirements must a universe satisfy in order to produce stable, distinguishable, and consistent physical facts — and what does that imply for the strength with which light and matter interact?

The answer, developed below, is that once minimal conditions on information, geometry, and consistency are imposed, the value 1/α ≈ 137 is not a free choice. It is a consequence. This is the same logical structure by which Einstein derived gravitational dynamics from the geometry of spacetime, and by which Boltzmann derived the laws of heat and gases from the statistical behaviour of atoms.


A Note on Scope

This paper presents the core argument in accessible terms. The full technical development — including detailed mathematical derivations, formal proofs of the structural claims, connections to quantum field theory, and worked extensions to other physical constants — is available in the companion papers published through the VERSF Theoretical Physics Programme at www.versf-eos.com. Readers with a background in physics are encouraged to consult those papers for the rigorous underpinnings of each step taken here.

Two companion papers are of particular relevance. The first, Completing the Interface Bridge: Phase Resolution, Symmetry Allocation, and Second-Order Selection in the VERSF Framework, addresses the question of whether the structural values K = 7, the six interaction channels, and the correction factor (N+1)/N are genuinely derived from first principles or inserted by hand. That paper proves all three follow from five primitive postulates and a mathematical theorem about optimal geometry (the Honeycomb Theorem, proved by Hales in 2001).

The second, Finite Distinguishability and Local Capacity Competition: A Structural Basis for Per-Channel Interaction Dynamics, goes one level deeper still. It establishes the foundational principle behind the correction factor itself: when multiple physical processes share a finite resource — as the six interaction channels here share the constraint capacity — they must compete locally rather than fluctuate globally. This competition has a unique mathematical form, governed by what is known as the inverse participation ratio. The correction factor (N+1)/N in the coupling formula is a direct consequence of this principle applied to six equal channels. Nothing in this paper is assumed or fitted — the form of the correction is forced by the requirement that physical quantities remain individually attributable.

Another companion paper, Closing the Structural Gaps: Marginal Compositional Consistency and Algebraic Closure in Fact-Producing Universes, operates at the deepest level of all. It addresses the question of why the five foundational requirements of Section 2 are the right ones — not merely plausible, but necessary. That paper is part of a broader programme showing that any universe capable of producing definite, irreversible facts at all is forced into the mathematical structure of quantum mechanics. The five requirements in this paper are the accessible face of that deeper result: they are the minimal conditions that any fact-producing universe must satisfy, and from which everything else follows.

A further companion paper, From Interface Structure to Physical Coupling: Closure of the VERSF Programme, addresses the remaining foundational questions: why the structural formula derived at the interface level should agree with the fine-structure constant as measured in low-energy experiments, whether the last remaining free parameter in the framework is uniquely determined, and whether the five foundational requirements of Section 2 are genuinely necessary or merely convenient assumptions. That paper shows that when the first-order and second-order results are combined, the derived value improves to α⁻¹ ≈ 137.034 — within 0.002 of the measured value, a relative accuracy of approximately 15 parts in a million.

This paper is intended to make the central idea visible to anyone willing to follow the logic — regardless of their technical background.


1. On the Nature of Derivation in Physics

A common objection to any proposed derivation of a fundamental constant is that it must rest on assumptions — and therefore is not truly a derivation at all.

This objection, while superficially compelling, misunderstands how physics actually works.

Every major physical theory is built on foundational assumptions that are not themselves derived within that theory:

  • Quantum mechanics — the theory of atoms and subatomic particles — assumes that physical states can be described using a specific type of mathematical space, and that probabilities arise from a particular rule about that space
  • General relativity — Einstein’s theory of gravity — assumes that space and time form a smooth, curved four-dimensional geometry
  • Thermodynamics — the science of heat and energy — assumes that large collections of particles explore all their possible arrangements with equal probability over time

None of these frameworks explain why their foundations are true. They show what follows if those foundations hold. This is the logical structure of a conditional derivation — an “if this, then that” argument:

If certain foundational conditions hold, then specific results necessarily follow.

The real question is not whether assumptions are present — they always are — but whether those assumptions are:

  1. Physically motivated — grounded in requirements that any coherent description of reality must satisfy
  2. Minimal — introducing only what is strictly necessary, without adding extra machinery to get the desired answer
  3. Broadly applicable — not secretly tailored to reproduce a specific result

This paper constructs a conditional derivation of α satisfying all three criteria.


2. Minimal Requirements for a Physical Universe

We begin from a small set of structural requirements. These are not arbitrary starting points — they are conditions that any physical theory capable of producing observable facts must satisfy.

2.1 Finite Distinguishability

A bounded region of space cannot encode an infinite amount of information. This is not speculation — it follows from well-established results in theoretical physics (including the Bekenstein bound, which relates the maximum information content of a region to its surface area). In practice, this means that physical reality has a fundamentally discrete, granular character at the deepest level: there is a limit to how finely any region can be divided or distinguished. The companion paper Finite Distinguishability and Local Capacity Competition develops the precise mathematical consequences of this requirement, showing that it uniquely determines the form of second-order physical interactions.

2.2 Binary Commitment

Every definite measurement outcome ultimately comes down to a binary distinction — a yes or no, a click or no click, a spin-up or spin-down. This is not a limitation of our instruments — it is a logical necessity. The most basic unit of information is a single binary choice: one bit. Any physical fact, no matter how complex, can ultimately be broken down into a series of such binary answers.

2.3 Local Observability

A measurement must correspond to something that happens at a specific place. You cannot detect something that is happening “everywhere at once” with no definable location. This requirement — that physical events are associated with identifiable places in space — is the foundation of all experimental physics, and underlies the principle that causes must precede effects and cannot travel faster than light.

2.4 Consistency and Closure

The rules governing physical interactions must be self-consistent — they must not generate contradictions. More precisely, any complete set of rules for how things interact must “close”: applying the rules repeatedly must never lead to a logical impossibility. A universe whose rules contradict themselves could not produce stable, repeatable facts.

2.5 Symmetry Under Relabelling

The laws of physics should not secretly favour one arbitrary choice of labels, directions, or names over another. If two interaction channels or orientations are physically equivalent, the laws must treat them identically. This is the requirement of symmetry: the physics does not depend on which equivalent thing you call “number one.”

These five requirements are not exotic or controversial. They are the minimal conditions that any coherent physical framework must satisfy. Together, as the following sections show, they constrain the possible structure of physical interactions far more tightly than is generally appreciated. The companion paper Closing the Structural Gaps takes this further still, showing that these five conditions are not merely plausible starting points but are necessary features of any universe capable of producing definite, irreversible facts at all — and that they point uniquely toward the mathematical structure of quantum mechanics.


3. What the Constraints Imply: Structural Geometry

When the five requirements above are applied specifically to electromagnetic interaction — the force between light and electrically charged matter — a precise geometric picture emerges.

3.1 Effective Dimensionality

The requirements of local observability and finite distinguishability, when applied to how light couples to charged particles, lead to an important conclusion: the physically meaningful quantities describing electromagnetic interaction — those that do not depend on arbitrary choices of mathematical representation — are associated with loops in space rather than lines. Any loop defines a surface it encloses. This means the effective structure of the interaction is fundamentally two-dimensional: it lives on surfaces, not in volumes or along lines.

This is not a creative assumption — it is a mathematical consequence of gauge invariance, a deep symmetry of electromagnetism that has been experimentally verified to extraordinary precision. It also matches the known fact that light has exactly two independent polarisation states (think of the two directions a wave can wiggle as it travels forward).

3.2 Optimal Packing and Hexagonal Structure

Given a two-dimensional surface and the requirement that no direction should be preferred over any other (§2.5), the most efficient — and most symmetric — way to arrange discrete interaction points is the hexagonal pattern. This is not a design choice; it is a mathematical theorem. The Honeycomb Theorem — proved rigorously by mathematician Thomas Hales in 2001 — establishes that the hexagonal arrangement is the unique way to tile a flat surface with equal-sized regions while minimising the total shared boundary between them. In other words, if you need to divide a surface into equal cells as efficiently as possible, hexagons are the only answer. Nature uses this too: honeycombs, graphene, and many crystal structures are hexagonal for exactly this reason. The hexagonal structure here is not chosen to make the numbers work — it is forced by the geometry.

3.3 Independent Constraints: K = 7

For a hexagonal interaction structure to satisfy the consistency requirement (§2.4), a specific number of independent rules — constraints — must be imposed. The argument proceeds as follows: a hexagonal cell has six boundary positions. Six boundary constraints can only enforce relative consistency between neighbours; they leave one overall degree of freedom undetermined, meaning the system as a whole is not yet fully pinned down. One additional, interior constraint is needed to close this gap and produce a fully self-consistent, non-contradictory structure. This gives:

K = 7 independent constraints

This is not chosen to match a target — it is the minimum number required for full consistency. The K=7 Counting Theorem, proved in full in the companion papers using the Honeycomb Theorem and orbit-stabilizer enumeration, establishes that K = 7 is not merely sufficient but uniquely necessary: no smaller number works, and no larger number is independent. The proof further shows that the hexagonal cell has exactly six independent boundary channels and one interior hub — seven sites in total, each contributing exactly one constraint. There is no room for an eighth.

3.4 Interaction Channels: N = 14

Each of the 7 constraint positions participates in two independent interaction pathways — one for each binary choice available at that position (recall §2.2: every physical fact ultimately reduces to a binary distinction). This gives:

N = 2 × K = 2 × 7 = 14 interaction channels

These values emerge from the geometry and the consistency requirement alone. They are not tuned or chosen.


4. From Structure to the Coupling Constant

With the structural parameters K = 7 and N = 14 established, an expression for 1/α — the inverse of the fine-structure constant, which measures how weak the electromagnetic interaction is, rather than how strong — follows naturally.

The expression is:

α⁻¹ ≈ 2ᴷ · (N+1)/N

Reading this in plain terms: the first part, 2ᴷ, reflects the fact that a structure with K = 7 binary constraint levels has 2⁷ = 128 total distinguishable configurations. The second part, (N+1)/N, is a small geometric correction arising from the way the six interaction channels share the available constraint capacity.

Why does it take this particular form? Two companion papers provide the full answer. The paper Finite Distinguishability and Local Capacity Competition establishes the foundational principle: when multiple processes draw on a shared finite resource, they must compete locally — and this competition has a unique mathematical structure called the inverse participation ratio (IPR). The paper Completing the Interface Bridge then applies this to the hexagonal geometry: since all six channels are geometrically identical, each must receive an equal share (1/6) of the constraint capacity. The IPR of six equal shares is exactly 1/6, and substituting this into the coupling formula produces the (N+1)/N correction. Nothing here is fitted or assumed — the form of the correction is forced by symmetry and the requirement that physical quantities remain individually attributable to specific channels.
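The IPR step described above can be checked in a few lines. The sketch below is illustrative only: the function name `ipr` and the variable names are ours, not taken from the companion papers. It confirms that the inverse participation ratio (the sum of squared shares) of six equal shares is exactly 1/6.

```python
from fractions import Fraction

def ipr(shares):
    """Inverse participation ratio: the sum of squared shares.
    For N equal shares of 1/N each, this equals 1/N."""
    return sum(s * s for s in shares)

# Six geometrically identical channels, each holding 1/6 of the capacity.
equal_shares = [Fraction(1, 6)] * 6
assert ipr(equal_shares) == Fraction(1, 6)

# Unequal shares raise the IPR, reflecting concentration of the competition:
print(ipr([Fraction(1, 2), Fraction(1, 2)]))  # two equal shares -> 1/2
```

Exact fractions are used so the equality with 1/6 holds precisely rather than to floating-point tolerance.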

Substituting K = 7 and N = 14 gives a first-order result:

α⁻¹ ≈ 2⁷ · 15/14 = 128 · 1.0714… ≈ 137.14

It is worth noting what happens if the correction factor is flipped — if we use 14/15 instead of 15/14:

α⁻¹ ≈ 128 · 0.933 ≈ 119

This is well outside the observed value. The correction factor is not arbitrary — it reflects the specific directional structure of closure and interaction channels. Flipping it produces a physically wrong answer, which confirms that the formula is expressing something real about the structure, not just a convenient arithmetic trick.
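The first-order arithmetic, including the flipped variant just discussed, can be verified directly. This is a sketch using exact fractions; the variable names are illustrative.

```python
from fractions import Fraction

K, N = 7, 14  # structural parameters from Section 3

alpha_inv = 2**K * Fraction(N + 1, N)   # 128 * 15/14, the derived first-order value
flipped   = 2**K * Fraction(N, N + 1)   # 128 * 14/15, the flipped correction

print(float(alpha_inv))  # 137.142857...
print(float(flipped))    # 119.466666...
```

The flipped value lands near 119, far from the measured 137.036, matching the observation above that the direction of the correction matters.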

When the second-order competition correction derived in the companion papers is included, the result improves substantially. The correction enters inside the geometric factor rather than as a separate addition:

α⁻¹ ≈ 128 · [15/14 − 1/1176] ≈ 137.034

The experimentally measured value, determined by precision experiments and officially recorded by the international standards body for physical constants (CODATA, 2018), is:

α⁻¹ = 137.035 999 084

The combined result differs from the measured value by approximately 0.002 — a relative accuracy of roughly 15 parts in a million — achieved with no free parameters and no fitting of any kind.


5. Assessing the Result

5.1 How Precise Is This?

The first-order calculation gives α⁻¹ ≈ 137.14. When the second-order correction is included, this improves to α⁻¹ ≈ 137.034 — within 0.002 of the measured value. The remaining gap is consistent with higher-order contributions not yet computed, in the same way that an approximate engineering calculation gives the right answer to several significant figures but not to unlimited precision. The companion paper From Interface Structure to Physical Coupling identifies the independent determination of the interface energy scale as the key remaining step for closing this gap fully.

5.2 No Adjustable Parameters

The derivation contains no tunable or fitted parameters anywhere. The values K = 7 and N = 14 are fixed entirely by geometry and the consistency requirement — they are not chosen to match the answer. This is a necessary condition for the result to count as a genuine derivation rather than a coincidence dressed up as reasoning.

5.3 Naturalness

The factor 2⁷ = 128 arises because the structure has 7 binary constraint levels — no more, no less. The correction 15/14 arises because the total number of distinguishable states (15) exceeds the number of active channels (14) by exactly one. Both values come directly from the structure. Neither is inserted by hand. In physics, when a result emerges this way — without needing special justification — it is called natural, and naturalness is considered strong evidence that the underlying structure is correct.


6. Scope, Limitations, and Open Questions

This is a conditional derivation — meaning its conclusion follows given the structural picture developed in Sections 2 and 3. In this respect, it is no different in kind from how general relativity derives the bending of light from the curvature of spacetime, or how thermodynamics derives the behaviour of gases from the statistical behaviour of molecules. Neither of those theories proves its own foundations — they show what follows if those foundations hold. The same is true here.

The conclusion — that α⁻¹ ≈ 137 emerges from the minimal consistent structure capable of supporting stable electromagnetic interaction — holds given the five requirements of Section 2. The strength of the argument depends on how compelling those requirements are. The case for each of them is made independently of the conclusion they lead to.

Key open questions include:

  • Uniqueness: Is the hexagonal, 7-constraint structure the only minimal solution, or one of several? If it is unique, the derivation is even stronger.
  • Higher-order refinement: The combined first- and second-order result achieves a relative accuracy of approximately 15 parts in a million. The companion paper From Interface Structure to Physical Coupling identifies the independent determination of the interface energy scale as the key remaining step — once that scale is established, higher-order corrections can be computed and compared directly with the residual.
  • Other forces: Does the same approach apply to the other fundamental forces — the weak nuclear force and the strong nuclear force — and if so, what structural parameters correspond to their coupling strengths?
  • Energy dependence: In quantum field theory, the effective strength of α changes slightly at different energy scales (it is slightly larger at very high energies). How does this energy dependence emerge from a discrete structural picture?

These are open research questions, not weaknesses in the present argument. The argument stands or falls on whether the structural requirements of Section 2 are genuinely necessary — and each of them can be motivated independently.


7. Broader Significance

If the argument developed here is substantially correct, its implications reach further than the specific value of α.

The conventional view in physics is that the fundamental constants — numbers like 1/137 — are simply facts about the universe. They happen to have the values they do. Perhaps they vary across a vast ensemble of universes we cannot observe (the multiverse hypothesis), or perhaps they just are what they are and no deeper explanation is possible. This view is not unreasonable — it reflects honest uncertainty.

But there is another possibility: that at least some fundamental constants are structural necessities — numbers that could not take any other value in a universe capable of producing stable, observable physical facts at all. In this view, the constants are not arbitrary — they are the only values consistent with reality being coherent.

This paper provides evidence for that second view, at least in the case of α.

If the argument holds, the implication is significant: the universe’s most fundamental numbers may be far less arbitrary than they appear. They may be, to a meaningful degree, logical consequences of the conditions required for a universe to exist and be knowable at all.


8. Conclusion

The fine-structure constant α ≈ 1/137 has been, for nearly a century, a number that physics could measure with extraordinary precision but could not explain. This paper has argued that an explanation may be within reach.

Starting from five minimal requirements — that space can only hold finite information, that physical facts reduce to binary distinctions, that measurements must be locatable in space, that physical rules must be self-consistent, and that equivalent things must be treated equally — a geometric picture of electromagnetic interaction emerges. That picture has two derived structural values: K = 7 independent constraints and N = 14 interaction channels. Combining these via a natural formula yields:

α⁻¹ ≈ 2⁷ · 15/14 ≈ 137.14

as a first-order result. When the second-order correction from the companion papers is included, this improves to α⁻¹ ≈ 137.034 — within 0.002 of the measured value, a relative accuracy of approximately 15 parts in a million, with no free parameters.

This is a conditional derivation. It does not resolve every question about the origin of the constants of nature. It claims something more specific, and more verifiable:

Given the minimal conditions for a physically coherent universe, the strength of electromagnetism is not a free parameter — it is a structural consequence of those requirements.

If that claim holds under scrutiny, the number 1/137 is not a mystery. It is an answer — and one that follows from structure rather than measurement.
