Cracks Are Appearing in the Geometric Assumption Underlying All of Modern Cosmology

Feed the right numbers into the right formula and the universe should confess its geometry. That is the logic behind a class of tests cosmologists have been running for two decades: combine measurements of how fast the universe is expanding with how far away things appear to be, and you can check whether large-scale spacetime matches the model modern cosmology is built on. For a smooth, uniformly curved universe, those measurements slot together in a particular way. The test statistic capturing this relationship, known as C, should equal exactly zero at every distance. But something odd is happening. Across a trio of new papers, researchers find that C refuses to reach zero, deviating from expectation by somewhere between two and four standard deviations depending on which data you use and how you analyze it.

That range of sigmas might sound modest; cosmology is littered with two-sigma blips that quietly evaporated. What makes this different is what the deviations are actually testing, and what they would mean if they turn out to be real.

The assumption being tested is called FLRW geometry, after Friedmann, Lemaître, Robertson, and Walker, who developed it in the 1920s and 1930s. The FLRW metric describes a universe that, on the largest scales, looks the same in every direction from every location: no special places, no special directions, just smooth expanding space with averaged-out density. It is the bedrock on which the Lambda-CDM model sits, the same model that tells us the universe is 13.8 billion years old and consists largely of dark matter and dark energy. Nobody expects it to be exact at small scales, where galaxies and voids are obviously lumpy. The question is whether it holds across billions of light years. For decades the answer has been yes. The new results introduce a statistically meaningful reason to reconsider.

Asking the Question Properly

The test at the heart of all three papers was first proposed in 2008 by Chris Clarkson, Bruce Bassett, and Teresa Hui-Ching Lu. If spacetime really has FLRW geometry, a particular combination of the angular diameter distance (roughly, how large things appear) and the Hubble parameter (how fast space is expanding at a given redshift) must satisfy a specific relationship. No matter what dark energy is doing, no matter what the spatial curvature is, C equals zero. The test is independent of all of those unknowns. Which means that if C is not zero, something much more fundamental has gone wrong, not with the physics filling the universe but with the shape of spacetime itself.
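For readers who want the test spelled out: in the standard formulation, with D(z) the comoving distance (related to the angular diameter distance by D = (1 + z) D_A), primes denoting derivatives with respect to redshift, and units where c = 1, the statistic reads

```latex
C(z) = 1 + H^2\left(D D'' - D'^2\right) + H H'\, D D'
```

In any FLRW universe, D and H are tied together tightly enough that the three terms cancel identically, whatever the spatial curvature or the dark-energy history; a robust C ≠ 0 therefore indicts the geometry itself rather than the contents.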

The difficulty has always been applying this test without sneaking in the very assumptions you’re trying to test. Previous studies using Gaussian Process reconstruction have found only mild tensions with FLRW. But Gaussian Process reconstruction has a hidden conservatism: the choice of smoothing kernel tends to enforce regularity, nudging results toward the well-behaved curves FLRW predicts. Results look reassuring partly because the method is, in a subtle sense, pre-loaded to find reassurance.

Sofie Marie Koksbang at CP3-Origins, University of Southern Denmark, and Asta Heinesen at Queen Mary University of London and the Niels Bohr Institute set out to do the reconstruction differently. They used symbolic regression, a machine-learning technique that searches for mathematical expressions fitting the data without assuming any functional form in advance. Run 200 bootstrap samples of the supernova and baryon acoustic oscillation data through the algorithm, take the median and the spread, and you get a model-independent reconstruction of the angular diameter distance and the Hubble rate and, crucially, their derivatives. Those derivatives are the crux: C involves second derivatives of distance with respect to redshift, which Gaussian Processes notoriously struggle to pin down.
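To make the pipeline concrete, here is a minimal sketch of the bootstrap-plus-symbolic-regression idea. It assumes PySR as the regression engine and uses invented toy H(z) points; the papers' actual code, operator sets, and data handling are not specified in this article, so every setting below is illustrative.

```python
# Sketch: bootstrap resampling + symbolic regression + symbolic derivatives.
# PySR is a stand-in engine; data points and settings are invented for illustration.
import numpy as np
import sympy as sp
from pysr import PySRRegressor

rng = np.random.default_rng(0)

# Toy stand-ins for BAO Hubble-rate measurements: (z, H, sigma_H) in km/s/Mpc.
z_obs = np.array([0.38, 0.51, 0.70, 0.85, 1.48])
H_obs = np.array([81.2, 90.9, 97.9, 104.5, 153.8])
H_err = np.array([2.4, 2.3, 2.7, 4.9, 6.5])

x = sp.Symbol("x0")                  # PySR's default symbol for feature 0
z_grid = np.linspace(0.4, 1.4, 50)
n_boot = 20                          # the papers use 200; trimmed here for speed
curves = []

for _ in range(n_boot):
    # Parametric bootstrap: resample each point within its reported error.
    H_sample = H_obs + rng.normal(0.0, H_err)
    model = PySRRegressor(
        niterations=30,
        binary_operators=["+", "*", "^"],
        unary_operators=["sqrt"],
        model_selection="score",     # one accuracy-vs-complexity criterion
    )
    model.fit(z_obs.reshape(-1, 1), H_sample)
    expr = model.sympy()             # best-fit expression, as a SymPy object
    funcs = sp.lambdify(x, (expr, sp.diff(expr, x), sp.diff(expr, x, 2)))
    vals = [np.broadcast_to(np.asarray(v, float), z_grid.shape)
            for v in funcs(z_grid)]
    curves.append(vals)              # H, H', H'' evaluated on the grid

curves = np.asarray(curves)                      # (n_boot, 3, len(z_grid))
H_med = np.median(curves, axis=0)                # median reconstruction
H_lo, H_hi = np.percentile(curves, [16, 84], axis=0)
# H_med[0] is H(z); H_med[1] and H_med[2] are the derivatives that enter
# C(z), together with an analogous reconstruction of the distance data.
```

Differentiating the fitted expressions symbolically, rather than numerically differentiating a smoothed curve, is what lets the method reach the second derivatives C needs without the kernel-induced smoothing of a Gaussian Process.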

What the Data Are Saying

Applied to Pantheon+ supernova data and baryon acoustic oscillation measurements from BOSS, eBOSS, and DESI, the method shows persistent non-zero values of C across most of the redshift range examined, between roughly z = 0.4 and z = 1.4. The integrated version of the test, O, reaches just above four standard deviations at the lower end of the range with DESI Data Release 1. With the newer DESI DR2 the significance eases to three to four sigma, still substantial. Koksbang and Heinesen describe this as consistent with being the first significant detection of a violation of FLRW self-consistency, though they flag that the significance depends on choices made in the symbolic regression procedure, and different selection criteria shift it by roughly a sigma.

A companion paper by Heinesen and Timothy Clifton, also at Queen Mary University of London, adds a theoretical frame. The two standard explanations for why FLRW might appear to fail without being fundamentally wrong are the Dyer-Roeder effect and cosmological back-reaction. The first involves light traveling mostly through the voids between galaxies rather than through matter at the averaged cosmic density, making distant objects appear slightly farther away than standard calculations predict. The second is the idea that matter clumping into large-scale structures feeds back on the overall expansion rate in ways the FLRW equations don't capture. Heinesen and Clifton derive precise predictions for the violations each mechanism would produce, with signatures distinct enough to tell apart from each other and from modified gravity or exotic dark energy.
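The classic Dyer-Roeder setup gives a flavor of the first mechanism. In its textbook form (the generalised version derived in the new paper is not reproduced here), a smoothness parameter α, running from 0 for completely empty light beams to 1 for a homogeneous universe, modifies the equation obeyed by the angular diameter distance:

```latex
\frac{d^2 D_A}{dz^2}
  + \left(\frac{2}{1+z} + \frac{H'}{H}\right)\frac{dD_A}{dz}
  + \frac{3}{2}\,\alpha\,\Omega_m\,\frac{H_0^2}{H^2}\,(1+z)\,D_A = 0
```

With α = 1 this reproduces the usual FLRW distance; α < 1 weakens the focusing term, and sources at a fixed redshift come out farther away.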

Reading the Shape of Space

What makes the Koksbang and Heinesen framework genuinely new is that previous FLRW tests, when they found something anomalous, had no way to interpret it. A non-zero C was an abstract flag, pointing to trouble without identifying its source. The new approach derives the test statistics for completely general spacetimes, so if C deviates from zero in a particular way, you can read off which kinematic or curvature quantities are responsible. The papers also introduce a separate observable combination, M, which yields an estimate of total matter density without assuming FLRW and without fitting parameters to the Friedmann equations. With current data, M is not tightly constrained, but the framework is ready for when better data arrive.

There is a not-small caveat. Symbolic regression is still a young tool in cosmological analysis, and the specific criteria used to select which fitted expressions to retain introduce a different kind of subjectivity from the kernel choices of Gaussian Process methods. Koksbang and Heinesen test three different selection criteria on the same data and find that the qualitative picture, non-zero C and O across much of the redshift range, persists, but the precise significance slides around. An automated criterion weighting accuracy and complexity equally brings some violations within two sigma of zero, which is considerably less alarming than four.

The broader community will want to see this reproduced with different methods and, more importantly, with more data. The BAO datasets feeding the Hubble rate reconstruction are sparse; symbolic regression struggles with sparse data in ways that can inflate uncertainty. The Vera Rubin Observatory and the Euclid space telescope will eventually provide expansion-rate measurements precise enough to tighten the constraints on C and O by an order of magnitude. Whether the deviations harden or dissolve at that point is the question that will actually settle this.

The stakes, if the violations are real, go well beyond a technical correction. Almost every proposed solution to the Hubble tension assumes FLRW geometry as its starting point. If the framework itself is wrong, proposals involving new dark energy, interacting dark matter, or modifications to gravity are all addressing the wrong problem. A universe that genuinely departs from FLRW on cosmological scales would require not just a revision of the standard model but a rethink of its conceptual architecture, a shift from a smooth background spacetime to something more complicated, more textured, harder to calculate with. Whether that reading of the data is correct remains unclear. What is clear, looking at the numbers, is that the question is open in a way it wasn't a year ago.

Source: Koksbang & Heinesen, Diagnostic Consistency Tests of the Concordance Cosmology (2026), arXiv: 2604.05836; Koksbang & Heinesen, Model-independent constraints on generalized FLRW consistency relations with bootstrap-based symbolic regression (2026), arXiv: 2604.05822; Heinesen & Clifton, Observational Tests for Distinguishing Classes of Cosmological Models (2026), arXiv: 2604.07244


Frequently Asked Questions

Does this mean the Big Bang model is wrong?

Not necessarily, and not yet. These papers are testing a specific geometric assumption within modern cosmology, namely that the universe is smooth and uniform on the largest scales. The Big Bang itself refers to the early hot, dense phase of cosmic expansion and is supported by many independent lines of evidence, including the cosmic microwave background and the abundances of light elements. What is under scrutiny here is a much more specific claim about the shape of spacetime, and the deviations detected so far are preliminary, dependent on data selection, and not yet independently confirmed.

Why does it matter if spacetime geometry deviates from the standard model?

Because almost every current explanation for the so-called Hubble tension, a persistent disagreement between different ways of measuring the universe’s expansion rate, assumes the standard FLRW geometry as its starting point. If that geometry is actually wrong, all those proposed fixes (new forms of dark energy, interacting dark matter, tweaked gravity) are solving the wrong equation. A genuine departure from FLRW would redirect cosmological research toward an entirely different class of models, ones where the lumpy structure of the universe on large scales feeds back on the expansion rate in ways the standard math doesn’t capture.

How is symbolic regression different from the usual way of analyzing cosmological data?

The standard approach, called Gaussian Process reconstruction, smooths observational data using a predefined mathematical kernel that tends to favor well-behaved, regular functions. The problem is that this built-in smoothness can subtly bias results toward the smooth, regular predictions of FLRW. Symbolic regression instead searches openly for whatever mathematical expression best fits the data, without specifying any functional form in advance. The tradeoff is that it requires careful selection of which candidate expressions to retain, introducing a different kind of subjectivity, though the researchers tested multiple selection criteria and found the broad pattern held.
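As a toy illustration of the kernel point (not the papers' analysis, and using invented data), the snippet below fits the same noisy Hubble-rate curve with Gaussian Process regression at two different RBF length scales. The second derivative of the reconstruction, the very quantity the Clarkson test depends on, changes substantially with that single kernel choice:

```python
# Toy demo: how the GP kernel length scale shapes reconstructed derivatives.
# Data are synthetic (flat LambdaCDM H(z) plus noise); all numbers illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
z = np.linspace(0.1, 1.5, 25).reshape(-1, 1)
H_true = 70.0 * np.sqrt(0.3 * (1.0 + z.ravel())**3 + 0.7)
y = H_true + rng.normal(0.0, 2.0, size=25)          # 2 km/s/Mpc noise

z_fine = np.linspace(0.2, 1.4, 200).reshape(-1, 1)
for ell in (0.2, 1.0):                               # two fixed kernel widths
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=ell),
                                  alpha=4.0,         # noise variance (2^2)
                                  optimizer=None,    # keep the kernel fixed
                                  normalize_y=True)
    gp.fit(z, y)
    mu = gp.predict(z_fine)
    # Second derivative of the GP mean, by repeated numerical differentiation.
    d2 = np.gradient(np.gradient(mu, z_fine.ravel()), z_fine.ravel())
    print(f"RBF length scale {ell}: max |H''(z)| = {np.abs(d2).max():.1f}")
```

The shorter length scale lets the fit wiggle and the second derivative balloon; the longer one smooths the wiggles away. Neither choice is dictated by the data, which is precisely the hidden conservatism symbolic regression is meant to avoid.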

What would it actually look like if the universe isn’t FLRW?

Two specific alternatives are described in the companion theory paper. In one, light from distant supernovae travels mostly through emptier-than-average regions of space, making those objects appear farther away than they should in a perfectly uniform universe. In the other, the formation of large-scale cosmic structures feeds back on the average expansion rate of the universe in a way the standard smooth-background equations miss. Both mechanisms would leave distinct signatures in the curvature-consistency tests, and the new theoretical framework can, for the first time, distinguish between them using observational data rather than treating any deviation as an uninterpretable anomaly.
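To put a number on the first mechanism, here is a short numerical sketch, assuming a flat ΛCDM background with Ω_m = 0.3 and units where c = H0 = 1 (all illustrative choices, not the papers' generalised treatment). It integrates the classic Dyer-Roeder equation for two smoothness values and shows the distance at fixed redshift growing as the light beams get emptier:

```python
# Toy integration of the classic Dyer-Roeder distance equation.
# Flat LambdaCDM background, units c = H0 = 1; alpha values illustrative.
import numpy as np
from scipy.integrate import solve_ivp

Om = 0.3
H = lambda z: np.sqrt(Om * (1 + z)**3 + (1 - Om))          # H(z)/H0
dlnH = lambda z: 1.5 * Om * (1 + z)**2 / H(z)**2           # d(ln H)/dz

def rhs(z, y, alpha):
    D, Dp = y
    Dpp = -(2/(1 + z) + dlnH(z)) * Dp \
          - 1.5 * alpha * Om * (1 + z) / H(z)**2 * D
    return [Dp, Dpp]

for alpha in (1.0, 0.5):   # 1 = homogeneous beams, <1 = emptier-than-average
    sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 1.0],           # D_A(0)=0, D_A'(0)=1
                    args=(alpha,), dense_output=True)
    print(f"alpha={alpha}: D_A(z=1) = {sol.sol(1.0)[0]:.4f}")
# Smaller alpha weakens the beam focusing, so D_A at fixed z comes out
# larger: sources look farther away, as described above.
```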



