For any hospital chief executive watching the balance sheet erode, the pitch from a management consulting firm must be hard to resist. Here, the partners promise, are people who have seen every operational failure, every cost spiral, every revenue leak in the industry, people who can fix, in a matter of months, what internal staff have struggled with for years. The consultants arrive with laptops and frameworks. They run workshops. They deliver decks. They leave with very large cheques. And according to the most rigorous study ever conducted on the practice, they leave hospitals almost exactly as they found them.
A paper published this week in JAMA has done what a remarkable number of people apparently did not think to do: it actually looked at the numbers.
Joseph Dov Bruch, a health policy researcher at the University of Chicago, had a practical reason to want an answer. Students coming through his programme kept asking him whether a career in healthcare management consulting was a meaningful way to improve the system. He found, to his frustration, that he had no good evidence either way. So he and his colleagues went looking. They combed through IRS Form 990 filings, the detailed financial disclosures nonprofits are required to submit each year, and used machine learning to identify hospital contracts with management consulting firms across a twelve-year period. What they found was, depending on your perspective, either entirely unsurprising or quite extraordinary.
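The paper does not publish its classification pipeline, but the underlying task, flagging which of a hospital’s disclosed contractors are management consulting firms from free-text service descriptions, maps onto a standard text-classification setup. Here is a minimal sketch of what such a classifier might look like; every string, label, and training example below is invented for illustration and makes no claim to match the authors’ actual method:

```python
# Hypothetical sketch: classifying Form 990 contractor descriptions as
# management consulting (1) or something else (0). The study's real
# pipeline is not public; all data and labels here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

descriptions = [
    "strategic advisory and operational performance improvement",
    "revenue cycle transformation consulting",
    "turnaround management and cost reduction services",
    "roof repair and facilities maintenance",
    "travel nurse staffing agency",
    "electronic health record software licensing",
]
labels = [1, 1, 1, 0, 0, 0]

# Bag-of-words features plus a linear classifier: simple, auditable,
# and enough to triage large volumes of filing entries.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(descriptions, labels)

# Score an unseen contractor entry from a new filing.
print(model.predict_proba(["enterprise strategy and margin improvement consulting"]))
```

A real pipeline would need far more hand-labeled examples, plus some way of resolving firm names across filing years, but the shape of the problem is the same.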
A $7.8 Billion Shrug
More than one in five American nonprofit hospitals hired a management consultant at some point between 2010 and 2022. Across the sector, the total bill came to at least $7.8 billion over those twelve years, with the average hospital paying $15.7 million for its engagement. That is money that might otherwise have gone toward patient care, capital improvements, or the community health programmes that nonprofit status is nominally supposed to encourage.
The researchers compared 306 hospitals that initiated their first consulting contract during the study period against 513 carefully matched hospitals that did not, then tracked both groups across a battery of financial, operational, and clinical metrics. Net patient revenue. Operating margins. Days of cash on hand. Inpatient length of stay. Staffing levels. Executive compensation. Thirty-day mortality and readmission rates for heart attacks, pneumonia, and stroke.
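The comparison itself follows familiar quasi-experimental logic: match each hospital that hired consultants to similar hospitals that did not, then ask whether their outcomes diverge once the contract begins. For readers who want the mechanics, here is a toy difference-in-differences in Python, using synthetic data and assumed column names rather than the authors’ code, in which the interaction coefficient is the estimated consulting effect:

```python
# Toy difference-in-differences on a synthetic hospital-year panel.
# Everything here (column names, dates, the absence of a true effect)
# is an assumption for illustration, not a reproduction of the JAMA analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
panel = pd.DataFrame(
    [(h, y) for h in range(40) for y in range(2010, 2023)],
    columns=["hospital", "year"],
)
panel["treated"] = (panel["hospital"] < 20).astype(int)  # hired a consultant
panel["post"] = (panel["year"] >= 2016).astype(int)      # after first contract
# Simulated operating margins with no built-in consulting effect.
panel["op_margin"] = rng.normal(0.02, 0.03, len(panel))

# The 'treated:post' coefficient is the difference-in-differences estimate.
# With no true effect in the data it should be indistinguishable from zero,
# which is the shape of the paper's headline result.
fit = smf.ols("op_margin ~ treated * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["hospital"]}
)
print(fit.summary().tables[1])
```

The real study layers matching on hospital characteristics, many outcome variables, and robustness checks on top of this skeleton.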
Across virtually every measure, the result was the same: nothing. No statistically significant, systematic improvement attributable to the consulting engagement. Operating margins didn’t meaningfully shift. Revenue didn’t climb. Hospitals didn’t become leaner or more efficient. “It’s not necessarily a waste,” Bruch said, “but we don’t have evidence of meaningful improvements.”
The one exception was small and unwelcome: a modest increase in thirty-day readmissions among stroke patients. It was statistically significant, just barely, but the researchers note it was not robust when they tested alternative model specifications, so it is probably noise. Still. Not the direction you would hope for.
Why the Evidence Gap Lasted This Long
What makes the study genuinely odd, in retrospect, is how long it took for someone to do it. Management consultants have been a fixture of American healthcare for decades, wielding influence that Bruch’s paper notes is greater than in almost any other sector of the economy. Hospitals have been handing over billions in tax-subsidized dollars to these firms throughout a period when American healthcare was under intense political scrutiny for its costs and outcomes. And yet, as Bruch’s team documents, there was no prior large-scale empirical attempt to measure what hospitals actually got in return.
Part of the explanation is practical: the data didn’t exist in a usable form until someone went digging through IRS filings with machine learning. Part of it may be something a bit more awkward. Management consulting is a diffuse, relationship-dependent industry where firms rarely publicize detailed records of what they recommended or why, and hospitals are disinclined to trumpet the cases where advice didn’t pan out. The whole arrangement runs on reputation and trust rather than documented outcomes, which is an unusual posture for an industry advising organizations whose core mission is evidence-based medicine.
Bruch is measured in his conclusions. “This initial analysis suggests that consultants may deliver neither the dramatic efficiencies they promise nor the harms that critics sometimes fear,” he said. The framing matters: he is not claiming consultants are useless, only that the evidence of usefulness is, so far, absent. It is possible, he acknowledges, that consulting engagements affect things the study couldn’t capture, or that benefits take longer to materialize than the study window allows, or that some hospitals gain and others lose in ways that cancel out in aggregate. But the null result, across this many hospitals and this many metrics, is hard to dismiss.
What the Numbers Don’t Capture
The paper’s scope is also deliberately narrow: it looked only at management consultants, defined specifically, not the broader ecosystem of external expertise that hospitals buy. When the researchers widened the definition to include HR and IT consultants, total spending by nonprofit hospitals over the study period climbed past $25 billion. That figure raises the obvious question of whether similar analyses of those adjacent industries would yield similar shrugs, and Bruch thinks someone ought to find out.
There is also the question of what “meaningful improvement” would even look like in a system as complex as an American hospital. Consulting firms typically frame their value in terms of strategic alignment, organizational culture change, and positioning for future growth: outcomes almost definitionally resistant to measurement in Bruch’s study timeframe. Whether that resistance to measurement is a genuine feature of complex organizational change, or a convenient property for an industry whose outputs are hard to audit, is a question the data cannot answer.
What the data can say is that $7.8 billion bought no detectable improvement in any metric that hospital administrators, policymakers, and patients would normally care about. For Bruch, the more immediate hope is that the finding changes some individual calculations. His students, the ones considering consulting careers, kept asking whether the work could actually move the needle on healthcare’s deep inefficiencies. “Answering those questions has been difficult because the evidence has been so limited,” he said. Now the evidence exists. The answer is a little more specific, and a little less comfortable.
DOI / Source: 10.1001/jama.2026.5027
Frequently Asked Questions
Why haven’t hospitals been tracking whether consultants actually help?
Partly because the data required to do so wasn’t easily accessible until researchers started systematically mining IRS financial filings with machine learning. But there’s also a structural problem: consulting firms don’t publish outcome data, and hospitals that receive poor advice have little incentive to advertise that fact. The entire industry has operated on reputation rather than documented results, which is a curious arrangement for organizations meant to practice evidence-based medicine.
Is it possible the benefits just take longer to show up?
That’s one of the study’s acknowledged limitations. The researchers tracked hospitals for several years after their consulting engagement began, but some organizational changes take a decade or more to feed through into measurable outcomes. What the study can say is that no detectable benefit appeared across a wide range of financial, operational, and clinical metrics in the timeframe examined. Whether patience would eventually be rewarded remains an open question.
Could some hospitals benefit while others lose, and the effects cancel out?
Possibly, yes. The study measures average effects across hundreds of hospitals, and it’s conceivable that some engagements are highly effective while others are counterproductive, leaving aggregate results near zero. Breaking down which hospital types or which consulting firms might produce better outcomes is exactly the kind of follow-on research the authors say is needed, though it would require the consulting industry to share far more data than it currently does.
Does this mean consultants are a bad idea for all nonprofits, not just hospitals?
This study looked specifically at hospitals, which operate in a particularly regulated and outcome-tracked environment. Generalizing to other nonprofit sectors would require separate research. What the findings do suggest is that the default assumption that external expertise reliably translates into measurable improvement may deserve more scrutiny across sectors than it currently receives.
