Your smartphone knows how you drive. Every hard brake, every burst of speed, every glance at your screen whilst weaving through traffic: it’s all being tracked by the accelerometer and GPS tucked inside your pocket. And if you’re one of the nearly 40 million Americans now enrolled in usage-based car insurance, that data determines what you pay.
The promise is simple enough: drive safer, pay less. But does it actually work? Jeffrey Ebert at the University of Pennsylvania’s Perelman School of Medicine wanted to find out. “All of this should make usage-based insurance customers safer drivers—and earlier research found evidence that it does,” he says. “But we wanted to definitively test this and ways to strengthen programs.”
So Ebert and his colleagues recruited 1,449 drivers across the United States through Facebook and Instagram adverts, equipped them with a smartphone app that monitored their driving habits, and randomly assigned them to different feedback programmes. Some received weekly text messages about four risky behaviours: speeding, phone use, hard braking, and rapid acceleration. Others were told to focus on just one behaviour at a time, either assigned by an algorithm or chosen by the driver themselves. A control group simply had their driving monitored without any feedback at all. Everyone could earn up to $100 for safe driving over twelve weeks.
The results, published in Accident Analysis & Prevention, suggest these programmes genuinely work. Drivers who received feedback and financial incentives reduced their speeding by 11 to 13 per cent compared with the control group. Hard braking dropped by 16 to 21 per cent, and rapid acceleration fell by 16 to 25 per cent. The effects persisted even after the incentives ended and feedback stopped—suggesting people had actually changed their habits rather than just temporarily behaving for the reward.
If scaled up nationally, the numbers become striking. The US sees more than 6 million vehicle crashes each year, leading to roughly 2 million injuries. If everyone enrolled in a programme like this, Ebert estimates there could be 300,000 fewer crashes and 100,000 fewer injuries.
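Those national figures imply a roughly five per cent reduction applied across all crashes and injuries. A minimal sketch of that arithmetic (the 5 per cent rate is our inference from the numbers quoted above, not a figure stated by the study):

```python
# Back-of-envelope check of the national extrapolation.
# The reduction rate is inferred from the article's own figures
# (300,000 of 6 million crashes), not taken from the paper directly.
annual_crashes = 6_000_000
annual_injuries = 2_000_000

implied_reduction = 300_000 / annual_crashes  # = 0.05

fewer_crashes = annual_crashes * implied_reduction
fewer_injuries = annual_injuries * implied_reduction

print(f"{fewer_crashes:,.0f} fewer crashes")    # 300,000 fewer crashes
print(f"{fewer_injuries:,.0f} fewer injuries")  # 100,000 fewer injuries
```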
What surprised the researchers was that one common intervention—focusing drivers on a single behaviour at a time—didn’t seem to boost effectiveness much. “Behavior change is difficult and takes time,” Ebert says. “We were concerned that standard insurance apps overwhelm drivers with too much feedback, making it hard to even know where to start.” But in practice, giving people feedback on all four behaviours at once worked about as well as narrowing their focus. Roughly 95 per cent of participants completed an exit survey, and most reported trying to improve on multiple behaviours regardless of which group they’d been assigned to.
There was one glaring exception, though. Handheld phone use, arguably the most dangerous behaviour measured, didn’t budge. Participants were handling their phones an average of 6.3 per cent of their driving time at the study’s start, and that figure remained stubbornly high throughout. Ebert suspects the culprit was the scoring system itself. “The average participant was basically told by the program that they had an ‘A’ grade for phone use despite handling their phone 6 percent of the time,” he explains. “Given how dangerous this is, the average should have been a ‘C.’ Drivers likely ignored this behavior and focused on improving in the other areas they considered more important.”
This points to a broader tension in usage-based insurance: the programmes need to be strict enough to actually change behaviour, but lenient enough that people want to sign up. Insurance companies tread carefully here. They’re “happy to give discounts to customers who drive safer because it means they will have fewer crash claims later on,” Ebert notes. But if the app constantly criticises your driving, you might just delete it.
The research also revealed something more hopeful—that even without perfect programme design, people retain safer habits once they’ve formed them. Six weeks after the feedback and money stopped, participants were still speeding less and braking more gently than the control group. Previous research on usage-based insurance had found that drivers often backslide once their rating period ends and they’ve secured their discount. Not here.
Ebert reckons there’s still room for improvement, though. “For example, we have seen before that giving drivers small, weekly rewards can be much more motivating than one big reward at the end of a 12-week program,” he says. The structure of incentives matters as much as the amount.
About one in four US drivers now opts into these programmes, and every major insurer offers them. The technology feels unobtrusive—most people already carry their phones whilst driving anyway. What’s less clear is whether there are equity concerns lurking beneath the data. The study found that the interventions worked equally well across age, sex, and racial groups, which is reassuring. But drivers in rural areas responded less well to the feedback than those in urban or suburban settings, perhaps because rural driving involves fewer risky situations where habits can be retrained.
The bigger question is whether we’re comfortable with this level of surveillance in the first place. Your insurer knowing every detail of your daily commute might feel like a reasonable trade-off for lower premiums—or it might feel like one more intrusion in an already over-monitored life. Either way, the data suggests it’s making roads safer, one tracked journey at a time.
