You might have heard the stories about the oracles of Greek antiquity, who were said to foretell the future. You might be familiar with the name Nostradamus, whose prophecies are still picked up today, most famously by those who claimed that 2012 would be the end of the world. You have probably made fun of such beliefs in prophets, prophecies, or anything that claims to infer “facts” about the future because, as educated people, we know that we live in a chaotic world where the future cannot be predicted accurately.

Still, our brains do not seem to have the same reaction when we talk about predictions. You can find article after article on “How to predict the NBA with a Machine Learning system written in Python,” and paper after paper on an “improved model for predicting presidential election outcomes.” Swap gods for numbers and variables, change the word “prophecy” to “prediction” or, even better, “forecasting,” and you suddenly have a “fact” about the future that is worthy of academic papers and headlines. There is something about the idea of certainty that makes us shiver with delight. That need for certainty has brought us back to the same mindset that made people believe, centuries ago, in Pythia, the priestess of the Temple of Apollo at Delphi.

The problem with this mindset is not inherent in predictions. After all, scientists use models to make predictions that ease the process of understanding the world. Even though these techniques aren’t always accurate, they model natural patterns, not the unpredictable behavior of humans and societies. Our fascination with peeking into the future, though, pushes us to use predictive models in psychology, trading, and even politics. One algorithm was built to predict revolutions; while it managed to predict an insurgency in Paraguay, it didn’t foresee the uprising in Ukraine. The people who built the model argued that the project’s end result wasn’t the prediction itself but the testing of geopolitical theories. It was a way for researchers to rethink their theories, to reanalyze their dogmas.

Predictive algorithms, though, have a way of creating new dogmas. The complexity of the models and the often untraceable prediction process, combined with people’s greater trust in numbers than in fellow humans, make us take a prediction’s result for granted.

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm predicts a defendant’s risk of committing another crime. After the tool had been used for years by courts across the U.S., a 2018 study showed that COMPAS was no better at predicting a person’s likelihood of recidivism than volunteers recruited at random from the internet. Yet back in 2013, when Paul Zilly was convicted of stealing a push lawnmower, the prosecutor recommended a year in county jail. Instead, the judge overturned the plea deal and imposed two years in a state prison. His reasoning? He had seen Zilly’s high-risk score from COMPAS. He had allowed his judgment to be guided by a predictive model that was no better at its predictions than a random group of volunteers.

Today’s predictive models are probably better equipped to make sense of the future than the prophets of past centuries. After all, these kinds of algorithms are fed immense amounts of historical data in which they detect patterns. But what if particular human behaviors, economic fluctuations, or revolutions unfold outside of those historical patterns? That is where these algorithms fail.
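
To make that failure mode concrete, here is a toy sketch with made-up numbers, not any real model mentioned above: a simple model fit to a decade of “history” keeps extrapolating its learned trend even after the underlying behavior abruptly breaks.

```python
import numpy as np

rng = np.random.default_rng(0)

# "History": ten years of a steady upward trend plus noise (hypothetical data).
years = np.arange(2000, 2010)
history = 2.0 * (years - 2000) + rng.normal(0, 0.5, size=years.size)

# Learn the pattern from the past: an ordinary linear fit.
slope, intercept = np.polyfit(years, history, deg=1)

# The future breaks the pattern: a sudden "revolution" reverses the trend.
future_years = np.arange(2010, 2015)
future_actual = history[-1] - 5.0 * (future_years - 2009)

predicted = slope * future_years + intercept
print("predicted:", np.round(predicted, 1))
print("actual:   ", np.round(future_actual, 1))
# The model confidently extrapolates the old trend; the break never
# appears in its training data, so it cannot foresee it.
```

The point is not that linear fits are naive while deep models are wise; any model trained only on the past inherits the same blind spot when the future refuses to repeat it.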

The problem arises when we forget that these predictions are bound to fail at times. We forget because we rejoice in the idea that algorithms give us certainty in a world of uncertainties. We allow ourselves to get trapped in the same idea that made people seek out an oracle’s prophecies, the same idea that made them analyze Nostradamus’s writings to predict the 2012 apocalypse.

If we are so afraid of a coming revolution in which artificial intelligence and algorithms take over our jobs, we should take a moment to reflect on the inherent imperfection of these systems, and on our own imperfections as humans. After all, adding these highly complex models to our daily lives only introduces one more element of uncertainty, the same uncertainty that we have been dealing with since the beginning of humanity. Once we understand that technology will most likely add a layer of uncertainty to our lives, instead of playing into the false sense of certainty we’ve craved for centuries, we can start to grasp the way our humanity will merge with algorithms.
