Deep in the African night, lions are not just roaring; they are speaking in two distinct roaring voices that computers can now tell apart with remarkable precision.
In a new observational study published in Ecology and Evolution, researchers led by the University of Exeter used machine learning to show that African lions produce two distinct types of roars within a single roaring bout, recorded in Tanzania and Zimbabwe. Working with conservation partners and computer scientists, they identified a previously unclassified “intermediary roar” alongside the classic full-throated roar, and built an artificial intelligence system that can automatically sort these calls and identify individual lions with up to 95.4 percent accuracy.
This redefinition of what a lion’s roar actually is arrives at a critical time. Lions are listed as vulnerable on the IUCN Red List, with only an estimated 20,000 to 25,000 individuals left in the wild across Africa, and that number has fallen by roughly half over the last 25 years. Conservationists need better tools to track where lions are, how many remain, and whether hard-won protections are working. The new work suggests that sound, not just camera traps or footprint surveys, could become a powerful way to count and follow individual big cats across vast, hard-to-reach landscapes.
Two Roars Hidden Inside One Iconic Sound
To most human ears, a lion’s roar is a single, rolling blast of sound. Bioacousticians see something more complicated. Lions often produce a “roaring bout” made of several call types strung together, historically labeled as moans, full-throated roars, and grunts. By closely examining hundreds of high-quality recordings from Nyerere National Park in southern Tanzania, the team found that this standard three-part picture was incomplete.
When they looked at the sound as a spectrogram, a visual map of frequency over time, the researchers saw four distinct phases. The early moans build up, the classic full-throated roars erupt at high amplitude and longer duration, then a series of slightly shorter, lower frequency calls appears before the bout ends in short grunts. Those middle calls behaved differently enough that the team argued they deserve their own name, “intermediary roars,” and showed that they can be automatically separated from full-throated roars.
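The frequency-over-time map described above is straightforward to compute. The sketch below builds a spectrogram of a synthetic downward-sweeping tone (loosely mimicking a roar's falling pitch; the signal, sample rate, and window length are illustrative assumptions, not values from the study) and reads a peak frequency off the resulting grid:

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic 2-second signal sampled at 8 kHz: a tone that sweeps
# downward in pitch. Purely illustrative, not a real lion recording.
fs = 8000
t = np.linspace(0, 2, 2 * fs, endpoint=False)
signal = np.sin(2 * np.pi * (300 - 100 * t) * t)

# freqs: frequency bins (Hz); times: window centres (s);
# power: energy at each (frequency, time) cell -- the spectrogram.
freqs, times, power = spectrogram(signal, fs=fs, nperseg=512)

# A call's dominant frequency can be read off the grid, e.g. the
# bin holding the most energy in the first time window.
peak_hz = float(freqs[power[:, 0].argmax()])
print(power.shape, peak_hz)
```

Acoustic measurements such as a call's maximum frequency and duration, the two properties the study relies on, are derived from exactly this kind of time-frequency grid.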
Crucially, the full-throated roars carry the acoustic “signature” that identifies each lion. Earlier work had already shown that these roars are individually unique and could, in principle, underpin statistical methods to estimate population density using sound alone. The problem was that selecting which bits of a roaring bout counted as full-throated roars relied on expert judgment and sometimes arbitrary rules, such as only using the first three roars in a sequence. That opened the door to inconsistency and bias.
“Lion roars are not just iconic – they are unique signatures that can be used to estimate population sizes and monitor individual animals. Until now, identifying these roars relied heavily on expert judgment, introducing potential human bias. Our new approach using AI promises more accurate and less subjective monitoring, which is crucial for conservationists working to protect dwindling lion populations.”
Instead of handpicking features from a long list of acoustic descriptors, the team focused on two simple, measurable properties for each call in a roaring bout: its maximum frequency in hertz and its length in seconds. They then applied K-means clustering, an unsupervised machine learning method, to group the calls into categories. Once obvious moans were removed, those two variables were enough to sort full-throated roars, intermediary roars, and grunts with high accuracy, reaching 95.4 percent in the Tanzanian field recordings.
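The clustering step can be sketched in a few lines. The example below is a minimal illustration of K-means on two features per call, maximum frequency and duration; the data points are invented for demonstration and are not measurements from the study:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical calls as (max frequency in Hz, duration in seconds).
# Values are illustrative only, not taken from the field recordings.
calls = np.array([
    [210.0, 1.6], [195.0, 1.8], [220.0, 1.7],   # full-throated roars
    [160.0, 1.1], [150.0, 1.0], [170.0, 1.2],   # intermediary roars
    [ 90.0, 0.4], [ 85.0, 0.3], [100.0, 0.5],   # grunts
])

# Scale both features so neither dominates the Euclidean distance.
X = StandardScaler().fit_transform(calls)

# Three clusters: full-throated roars, intermediary roars, grunts.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)
```

Because K-means is unsupervised, the cluster boundaries come from the data rather than from expert-chosen thresholds, which is precisely the source of bias the authors set out to remove.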
To test whether this streamlined, data-driven approach could improve real conservation tasks, the researchers revisited a separate dataset from Zimbabwe, where biologgers attached to wild lions had previously been used to show that roars can identify individuals. When full-throated roars were selected manually, the model distinguishing five male lions achieved an F1 score of 0.80. When the same task was repeated with full-throated roars defined by the clustering algorithm, performance improved to an F1 score of 0.87, with more usable roars per lion and higher recall and precision across individuals.
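The F1 scores reported above combine precision (how many calls attributed to a lion were really its calls) and recall (how many of its calls were found). A minimal sketch of how such scores are computed for a five-lion identification task, using invented true and predicted labels rather than the Zimbabwe data:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Hypothetical identifications for calls from five male lions (A-E).
# These labels are illustrative, not results from the study.
true = ["A", "A", "B", "B", "C", "C", "D", "D", "E", "E"]
pred = ["A", "A", "B", "C", "C", "C", "D", "D", "E", "B"]

# Macro averaging scores each lion separately, then takes the mean,
# so rarely vocal individuals count as much as frequent roarers.
precision = precision_score(true, pred, average="macro")
recall = recall_score(true, pred, average="macro")
f1 = f1_score(true, pred, average="macro")
print(round(precision, 2), round(recall, 2), round(f1, 2))
```

Feeding the classifier more, and more consistently selected, full-throated roars per lion is what lifted this kind of score from 0.80 to 0.87 in the study.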
Turning Roars Into A Conservation Tool
The researchers emphasize that their system is “human in the loop” rather than fully automated. Moans are still manually labeled, and technicians must decide where each vocalization begins and ends in the recordings. But once those basic selections are made, the rest of the process, from clustering call types to feeding full-throated roars into individual identification models, runs on clear, reproducible rules.
That simplicity matters, because lion roars are not constant background noise. Lions are most vocal in the hours before dawn, may stay silent for long stretches, and nomadic males in particular may avoid roaring altogether to reduce detection risks. Deploying large networks of autonomous recording units across wild landscapes demands methods that can handle rare events efficiently and can be adopted by conservation teams that lack specialist acoustic training or high-powered computing clusters.
The study also hints that roars may carry traces of geography. In the Zimbabwe dataset, one male lion known to have dispersed from Botswana had full-throated roars that were harder to classify using the Tanzania-derived criteria, possibly because of differences in maximum frequency or duration. Historical observations that lions from different regions can have shorter or otherwise distinct roars suggest there may be something like “lion accents,” although the authors argue that much more work is needed to understand how local variation affects automated classification and population estimates.
Despite these complexities, the core message is clear: roars are rich data, and relatively straightforward machine learning methods are already good enough to unlock their conservation value. Compared with spoor surveys and camera traps, passive acoustic monitoring can cover larger areas, run continuously at night, and detect animals that stay out of sight. If combined with spatially explicit capture-recapture models, the individual identity embedded in full-throated roars could one day feed directly into robust density estimates for lions and other large carnivores that call across the savannah.
“We believe there needs to be a paradigm shift in wildlife monitoring and a large-scale change to using passive acoustic techniques. As bioacoustics improve, they’ll be vital for the effective conservation of lions and other threatened species.”
For now, the work offers a new way to listen to a familiar sound. What once seemed like a single, iconic roar is revealed as a structured sequence of four call types, with full-throated and intermediary roars playing distinct acoustic roles. By letting the data define those categories, rather than intuition alone, the authors argue that they have taken an important step toward making lion monitoring more objective, more scalable, and more inclusive for practitioners across Africa.
If that shift happens, future population counts may depend less on what we see on camera and more on what quietly stacks up on hard drives each night: hour after hour of lions announcing themselves to one another, their roars already carrying the information conservationists need to keep them from falling silent.
Ecology and Evolution: 10.1002/ece3.72474