
It’s Not Magic, It’s Science: Predicting the Future

It could be argued that scientists create superpowers in their labs. If Aram Galstyan, director of the Artificial Intelligence Division at the USC Viterbi Information Sciences Institute (ISI), had to pick just one superpower, it would be the ability to predict the future. What will be the daily closing price of Japan’s Nikkei 225 index at the end of next week? How many 6.0 or stronger earthquakes will occur worldwide next month? Galstyan and a team of researchers at USC ISI are building a system to answer such questions.

For the past two years, Galstyan has led a group of researchers at ISI on a project named Synergistic Anticipation of Geopolitical Events, or SAGE, to attempt to predict the future using non-experts. The SAGE project relies on human participants who interact with machine learning tools to make predictions about future events. The goal is for forecasts born of the combination of humans and AI to be more accurate than those of humans alone.

Their research has proved quite useful, and people’s predictions have been largely on target. ISI’s Fred Morstatter, a USC Viterbi research assistant professor of computer science, said that in April, non-experts accurately predicted that North Korea would conduct a missile test before July; North Korea launched in May.

It was the country’s first missile launch in seven months, taking place just days after the question appeared on SAGE. “That was something I don’t think any of us thought was going to happen,” Morstatter said.

SAGE is funded by the Intelligence Advanced Research Projects Activity (IARPA), which invests in high-risk, high-payoff research projects to benefit the U.S. intelligence community.

IARPA is interested in developing forecasting technology that makes predictions, based on a large set of human users, that are more accurate and faster than those of a single human subject-matter expert. The ability to predict geopolitical events could help the intelligence community make better, more informed national security decisions.

The agency has hosted many competitions related to forecasting, including the Aggregative Contingent Estimation project, which crowdsourced humans to make predictions.

SAGE expands on this previous study, instead asking people to make predictions based on information provided by various machine learning methods.

In 2017, the ISI team received a four-year, multimillion-dollar grant under IARPA’s Hybrid Forecasting Competition, a new project encouraging researchers to combine human forecasting with machine learning models to generate more accurate predictions than either method could on its own. ISI and Raytheon’s BBN Technologies are the finalists.

Users, known as “forecasters,” self-select what they’d like to predict. Topics range from the geopolitical, “Will any G7 nation engage in an acknowledged national military attack against Syria before 1 December 2018?” to the economic, “How much crude oil will Venezuela produce in October 2019?” Users can also pose questions to fellow forecasters on discussion boards, comment on forecast results, and view the leaderboard rankings, which are decorated with digital badges users can earn by making accurate forecasts.

The non-expert forecasters recruited to participate on SAGE have accurately predicted real-life, geopolitical events, Morstatter said. “We believe that’s the case because the numbers we’re seeing indicate we are outpacing a system that uses only humans.”

Indeed, this was verified in a competition held last year to test the accuracy of forecasting systems. Throughout 2019, SAGE was tested against two competing systems. All systems were given the same set of over 400 forecasting questions, and SAGE generated forecasts for these questions that were more accurate than those from the competing systems.

The first word in SAGE’s acronym, “synergistic,” hints at how this human forecasting relates to machine learning. Synergy describes how two or more objects — in this case human and machine — come together to create something greater than the sum of its parts. The SAGE team is determined to find out how to combine crowdsourced predictions with machine learning tools to generate more accurate predictions.
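SAGE’s actual aggregation method isn’t detailed here, but a minimal sketch of the idea, assuming a generic weighted-pooling scheme, shows how a crowd of human probability forecasts could be blended with a machine forecast. The function name and weights below are illustrative, not SAGE’s:

```python
def combine(human_probs, machine_prob, machine_weight=0.5):
    """Pool a crowd's probability forecasts with one machine forecast.

    Illustrative only: real hybrid systems would tune machine_weight
    from each source's past accuracy rather than fix it at 0.5.
    """
    crowd = sum(human_probs) / len(human_probs)  # simple crowd average
    return (1 - machine_weight) * crowd + machine_weight * machine_prob

# Three forecasters say 70%, 60%, 80%; the machine model says 90%
print(combine([0.7, 0.6, 0.8], machine_prob=0.9))
```

In this sketch the hybrid forecast lands between the crowd average (0.7) and the machine’s estimate (0.9), which is the basic intuition behind pooling two imperfect but complementary sources.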

Teaching non-experts how to make accurate predictions with the help of machine learning is one of the project’s main goals, and it’s working.

“Thanks to the machine models we have in our system,” Morstatter said, “forecasters are doing better than the control system, which only has human forecasters.”

SAGE features several machine models on its site to help users make informed forecasts. These include time series charts (historical data points that show trends, along with a machine-made prediction) to help with quantitative predictions, such as the value of a stock over time. By combining human- and machine-generated forecasts on the SAGE platform, ISI researchers have demonstrated the benefits of hybridization, Galstyan said.
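The machine-made predictions on those charts could come from any number of models, none of which are specified in this article. As a hedged illustration of the simplest possible case, a moving-average extrapolation turns a series of historical data points into a single forecast of the next value (the prices below are hypothetical):

```python
def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` points.

    A deliberately simple baseline; a real forecasting model would
    account for trend, seasonality, and uncertainty.
    """
    recent = history[-window:]
    return sum(recent) / len(recent)

# Hypothetical daily closing prices for a stock
closes = [101.2, 102.5, 101.8, 103.1, 104.0]
print(round(moving_average_forecast(closes), 2))
```

A forecaster looking at the chart can then adjust this machine baseline up or down using context the model doesn’t have, which is exactly the human contribution the hybrid approach is after.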

In addition to ISI’s Galstyan and Morstatter, the team includes Pedro Szekely, a USC Viterbi research associate professor of computer science, who knows how to store all of SAGE’s data; Professors Emilio Ferrara and Ali Abbas; research programmer Gleb Satyukov, who develops the front-end, or what users see on the SAGE website; computer scientist Andres Abeliuk, whose expertise in bias and computer science complements the work of postdoc Daniel Benjamin; and project manager Lori Weiss, the team’s first line of defense when users have questions about the platform. The team also includes external members from the University of California, Irvine, Columbia University, Stanford University, and Fordham University.

So far, they’ve been able to show that mixing machine intelligence and human decision making generates lower Brier scores — a standard measure of forecast accuracy, where lower is better — than human forecasters achieve alone, Galstyan added. “We’re outperforming what has been done in the past.”
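The Brier score itself is straightforward to compute: it is the mean squared difference between the probabilities a forecaster assigned and the 0/1 outcomes that actually occurred. The numbers below are hypothetical, purely to show how a hybrid system could earn a lower (better) score than humans alone:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.

    Lower is better; a perfect forecaster scores 0.0 and a forecaster
    who always says 50% scores 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical: humans alone vs. a human+machine hybrid on three questions
human = brier_score([0.6, 0.3, 0.8], [1, 0, 1])
hybrid = brier_score([0.8, 0.1, 0.9], [1, 0, 1])
print(round(human, 3), round(hybrid, 3))
```

Here the hybrid’s forecasts sit closer to the true outcomes on every question, so its score is lower, which is the sense in which the quoted comparison is made.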

Said Morstatter: “SAGE works because humans have one side of the coin, and machines have the other side.”

But it isn’t just intelligence analysts who could find predictive technology useful. Who wouldn’t like to predict the future?




The material in this press release comes from the originating research organization. Content may be edited for style and length.

1 thought on “It’s Not Magic, It’s Science: Predicting the Future”

  1. Probability and statistics do not demonstrate cause and effect. One or two experiments or investigations don’t “prove” anything.

    Why not? Most such investigations are summarized by some sort of bell curve of percentile statistics — think bell curve or standard-deviation numbers. Keep in mind the difference between mode, median, and mean (average).

    Your bell curve could have 100 or 10,000 numbers. The most popular numbers will be at the peak of the curve. But the chart doesn’t tell you the sequence of the experiments, and the median applies to the entire set without telling you anything about any individual experiment.

    Say you are taking a simple opinion poll of 100 people with only yes and no answers. If 80% say “yes,” the first five and last five answers have an equal chance of being a “no” answer.

    That’s why a professional poll taker would rather have 10 runs of 100 ballots than 1 run of 1,000 ballots.
