
A clear-cut approach to autism screening and diagnosis

Approximately one in 68 people is on the autism spectrum. Experts are unanimous on this: Early intervention is critical for improving communication skills and addressing behavioral issues. But how can researchers expedite the identification of children in need of help and also provide a clearer map for intervention and support?

Researchers from the USC Signal Analysis and Interpretation Laboratory (SAIL) at the USC Viterbi School of Engineering’s Ming Hsieh Department of Electrical Engineering, along with autism research leaders Catherine Lord (of Weill Cornell Medical College) and Somer Bishop (of the University of California, San Francisco), are now exploring whether machine learning might play an important role in helping screen for autism and guide intervention by caregivers and practitioners.

Their most recent interdisciplinary collaboration and research is documented in a paper published in The Journal of Child Psychology and Psychiatry.

Established tests

Study authors Daniel Bone, Bishop, Matthew Black, Matthew Goodwin, Lord and Shrikanth Narayanan looked at two established industry tests, the Autism Diagnostic Interview-Revised (ADI-R) and the Social Responsiveness Scale (SRS), both of which involve interviewing parents about their children’s behaviors.

The scholars then applied machine-learning techniques to analyze how parents’ responses on individual items and combinations of items matched up with the child’s overall clinical diagnosis of ASD vs. non-ASD.

One of the fundamental questions that drove this research project, said co-author and SAIL Director Narayanan, was, “How can we support and enhance experts’ decision-making beyond human capability; how can we make sense of data and patterns not able to be detected by a single person?”

The researchers, eager to provide parents or caregivers and evaluators with “tools for better decision-making,” studied test scores of more than 1,500 individuals, comparing the results of those individuals with autism spectrum disorder to those with other non-ASD diagnoses.

Redundant questions

By using machine learning to analyze thousands of caregiver responses, the researchers were able to identify redundancies in the questions asked of caregivers.

By eliminating these redundancies, the authors identified five ADI-R questions that appeared capable of maintaining 95 percent of the instrument’s performance. While it is unclear how these questions would perform if administered outside the full interview, the finding suggests that certain parent-reported diagnostic constructs may be particularly important for predicting clinical diagnosis.
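
The paper's exact modeling pipeline isn't described here, but the general idea can be sketched as greedy forward selection: keep adding the single most informative question until a small subset reaches a set fraction of the full instrument's accuracy. Everything below (the item count, the sum-and-threshold scoring rule, and the synthetic caregiver responses) is illustrative, not the study's actual data or method.

```python
# Hypothetical sketch: greedy forward selection of interview items that
# preserve most of the full item set's diagnostic accuracy.
# All data are synthetic; the scoring rule is illustrative only.
import random

random.seed(0)
N_ITEMS = 20       # stand-in for a full item set
N_SUBJECTS = 400

def make_subject(asd):
    # ASD subjects tend to score higher on a few "informative" items (0-4);
    # the remaining items are noise, which makes them redundant.
    informative = range(5)
    return [random.randint(1, 3) if (asd and i in informative)
            else random.randint(0, 2) for i in range(N_ITEMS)]

subjects = [(make_subject(asd), asd)
            for asd in [True] * (N_SUBJECTS // 2) + [False] * (N_SUBJECTS // 2)]

def accuracy(item_idxs):
    # Simple sum-and-threshold classifier over the chosen items,
    # scored at the best threshold.
    sums = [(sum(x[i] for i in item_idxs), y) for x, y in subjects]
    best = 0.0
    for t in range(0, 3 * len(item_idxs) + 1):
        acc = sum((s >= t) == y for s, y in sums) / len(sums)
        best = max(best, acc)
    return best

full_acc = accuracy(list(range(N_ITEMS)))
chosen = []
while not chosen or accuracy(chosen) < 0.95 * full_acc:
    remaining = [i for i in range(N_ITEMS) if i not in chosen]
    best_item = max(remaining, key=lambda i: accuracy(chosen + [i]))
    chosen.append(best_item)

print(f"{len(chosen)} items retain >=95% of full accuracy: {sorted(chosen)}")
```

On this synthetic data, a handful of items suffice, because the uninformative items add only noise; the loop is guaranteed to terminate, since the full set trivially matches its own accuracy.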

Further clinical testing is needed to understand the practical utility of these particular results, but use of these types of techniques could ultimately serve to reduce administration time and to customize questions to identify the unique challenges that warrant intervention for a particular individual.

The authors also believe they can use machine learning to provide another lens on autism, offering a picture that is clearer, more distilled and overall more data-informed for caregivers and practitioners. This, the authors believe, could be revolutionary in that it “takes out the guesswork or subjectivity involved even in trusted, industry-wide instruments.”

“Machine learning can make a diagnosis more effective, more systematic,” said Bone, the study’s lead author from USC. Beyond enabling earlier intervention, increased detail could also reduce the frequency of misdiagnoses that deny individuals access to state or public-school services.

A holistic approach

Researchers at SAIL want to take a holistic approach to autism. Aside from targeting screening and diagnosis instruments with machine learning, the researchers are working to create quantitative measures of human behavior based on audio, video and physiological sensors through signal processing.

One primary target for the researchers has been to quantify what sounds atypical about the speech melody of many individuals with autism, since objective measures from a computer may assist clinicians with this difficult judgment. Eventually, the scholars would like to train specialists to use audio and signal-processing tools on a more regular basis to identify and monitor specific behavioral patterns and develop interventions to address them.
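
As an illustration of the kind of signal processing involved, the sketch below estimates a pitch (F0) contour, one basic correlate of speech melody, using simple autocorrelation on a synthetic tone. The sample rate, frame size, and frequency glide are hypothetical choices for this example; real prosody analysis uses far more robust pitch trackers and real speech recordings.

```python
# Hypothetical sketch: autocorrelation-based pitch (F0) tracking on a
# synthetic rising tone, a crude stand-in for a speech melody contour.
import math

SR = 8000          # sample rate (Hz), chosen for the example
FRAME = 400        # 50 ms analysis frames

def synth_glide(f_start=120.0, f_end=220.0, dur=1.0):
    # Sine wave whose frequency glides upward, mimicking rising pitch.
    n = int(SR * dur)
    phase, out = 0.0, []
    for i in range(n):
        f = f_start + (f_end - f_start) * i / n
        phase += 2 * math.pi * f / SR
        out.append(math.sin(phase))
    return out

def f0_autocorr(frame, fmin=80, fmax=300):
    # Pick the lag with the highest autocorrelation within a plausible
    # pitch range; the F0 estimate is SR / lag.
    best_lag, best_r = 0, -1.0
    for lag in range(SR // fmax, SR // fmin + 1):
        r = sum(frame[i] * frame[i - lag] for i in range(lag, len(frame)))
        if r > best_r:
            best_r, best_lag = r, lag
    return SR / best_lag

signal = synth_glide()
contour = [f0_autocorr(signal[s:s + FRAME])
           for s in range(0, len(signal) - FRAME, FRAME)]
# For a rising glide, the frame-by-frame F0 estimates should trend upward.
print([round(f, 1) for f in contour])
```

A contour like this, extracted frame by frame, is the kind of objective measurement that could let researchers compare pitch variability across speakers rather than relying solely on listener impressions.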

In addition, one of the projects this multidisciplinary team of engineering scholars from USC and psychologists from UCSF is planning (along with experts in adolescent social development from Cincinnati Children’s Hospital) will address social challenges that people with autism may experience.

The researchers will record an individual’s behavior to try to understand speech patterns or gestures that may unintentionally be off-putting to peers and strain friendships. They would like to provide data-driven insights that can be used for therapeutic interventions to improve the quality and quantity of friendships for children on the spectrum.

“We are building the science first, then translating science back into useful technology — all through interdisciplinary partnerships,” Narayanan said.




The material in this press release comes from the originating research organization. Content may be edited for style and length.