This piece is inspired by Westworld, one of the most thought-provoking shows about what the future might look like. Whether you’ve seen it or not, I think you’ll enjoy this look into some of the scarier possibilities of AI and algorithms.
Warning — this article contains no spoilers for the plot of Westworld Season 3, but does discuss some of the season’s themes and setting. If you prefer to avoid all coverage before you watch a show, best to close this post and get streaming.
Though the sci-fi blockbuster Westworld is set in 2058, it’s not easy at first to spot the difference between the futuristic cities depicted on screen and our own. Season 3 of the show portrays a society glued to their smartphones, trying to climb corporate ladders and popping pills when it all gets too much. The biggest difference appears to be that driverless cars are seen on actual roads, not just empty deserts.
But all is not as it seems. In the first episode of the season, we see Caleb (Westworld’s strapping new male lead) informed by robo-call that his job application has been denied — his ‘score’ was not high enough. Later, the former soldier is warned not to drop out of mandatory counselling; doing so would lower his score. It turns out that Caleb’s score dictates everything in his life: his occupation, where he can live, even who he can date and whether he can have children. This is a degree of control even China can only dream about.
Westworld depicts an entire society moulded into order by these overlapping layers of app-based incentives. A single superintelligent AI, fed masses of data, is responsible for creating this new social order, ruling not through force of arms but through overwhelming network effects. Westworld poses a grim question: what is stopping today’s society from descending into a technological dystopia, ruled by algorithms?
I’ll pose one more — what if we are there already?
Totalitarianism starts where free will ends
Way before the advent of such delightful modern inventions as Facebook (described by an insider as ‘behavioural cocaine’) and Tinder (whose use is ironically linked to increasing loneliness), some philosophers were already turning their minds to what the end of free society would look like.
Philosopher Hannah Arendt wrote in ‘The Origins of Totalitarianism’ that:
The last stage of the laboring society, the society of jobholders, demands of its members a sheer automatic functioning, as though individual life had actually been submerged in the overall life process of the species…
In Westworld, the superintelligent AI named Rehoboam is obsessed with order. The society in Westworld is so unavoidably trapped under the yoke of Rehoboam’s app-based incentives that free will has effectively ceased to exist. Even if nobody is technically holding a gun to Caleb’s head to make him raise his score, what choice does he really have? If he doesn’t follow the whims of Rehoboam’s algorithms, he has no prospect of a career, a relationship, or a place to live. That is no real choice.
Rehoboam, like any self-respecting autocrat, thinks that the greater good of humanity justifies its manipulation and control. But Westworld makes very clear that Rehoboam represents a fundamentally different kind of threat from history’s most perverse dictators like Hitler and Stalin. Rehoboam creates the score-based platform, but humanity’s blind addiction to that platform is a product of its own social instincts. Rehoboam has no need for a Gestapo or secret police in order to keep its population in line. Arendt got it right: we humans are happy to automate away our own free will.
Think about just how much basic decision making in the modern era has been delegated (often at great expense) to algorithms. Algorithms already run financial markets, welfare systems and chunks of the criminal justice system. But they touch countless aspects of our personal lives as well, including the areas we supposedly hold sacred. For instance, Tinder’s algorithms help decide who you date and fall in love with; Facebook’s algorithms influence who your friends are and how you stay in touch with them. In the name of making our lives easier and more connected, we have all embraced algorithmic decision making even if we don’t think about it in those terms.
As the developers of tech platforms know much better than we do, humans are social creatures who will follow the herd if given half a chance. We use Tinder because it’s addictive and simple, but also because everybody else is. As a friend sadly noted (after yet another Tinder date letdown), “you have to be on the dating apps, because all the other single people are”. These network effects give a platform like Tinder a gravitational pull that is difficult to escape. If all your friends are on it, you want to be on it too — this is the same dynamic that sees so many of us on Facebook, Instagram and WhatsApp.
Swipe right for civil society
One big problem with this modern trend of running our lives through apps is that a huge amount of power is given to the select few engineers designing the algorithms that power those apps. Instagram’s AI decides whose posts are displayed on your feed at any given moment. Tinder’s AI decides your options to swipe on any given evening. Is this really so different from Westworld’s Rehoboam, an AI that decides whether Caleb gets a job, or whether he gets a counselling session?
Of course, partly because it makes for better TV, Westworld’s example is an exaggerated one. We have far more actual choice than Caleb does, and no platform controls every element of everyday life like Rehoboam does (though Google is trying its best). However, the creepy willingness of society in Westworld to fit into Rehoboam’s score-based system is but a logical extension of the real world, where we have proven happy to delegate our decisions to AI.
Remember as well that the era of artificial intelligence has only just begun. Every year, algorithms get more precise, more predictive, and more popular. All of us will probably continue to turn over more decisions in our lives to newfangled platforms and the AIs that run them, whether for convenience, enjoyment or because we want to keep up with the Joneses. Westworld’s grim warning is that this tendency may have consequences. As one character notes about Rehoboam:
“The people who built our world shared one assumption: human beings don’t have free will.”
If the year is 2058, and humans have had 50 years of delegating more and more of their lives to algorithms, who’s to say that Rehoboam’s assumption is wrong? At what point do we relinquish our free will, and collapse into Arendt’s “sheer automatic functioning”?
It is quite conceivable that the modern age — which began with such an unprecedented and promising outburst of human activity — may end in the deadliest, most sterile passivity history has ever known…
Putting aside the question of free will, the power dynamics at play in an algorithm-driven technocracy are simple. The more decisions delegated to AI, the more power is transferred to the engineers responsible for that AI. If a small group of people control most of the algorithms, and those algorithms dictate and govern much of our lives, the line separating that arrangement from a totalitarian system becomes blurred.
We must be alive to the risk of digital autocrats, and the power they wield through technologies that have quickly become indispensable to us. These dangers, stemming from the control of algorithmic influence, are close to invisible in today’s world. Technology platforms exist in an environment that is essentially unregulated, with governments scrambling to understand even the contours of this new digital world. They must scramble faster.
In the 21st century, totalitarians no longer have to ride in on tanks to establish control. As Westworld demonstrates, it will be much more effective to give the population a decision-making tool they just can’t resist.
Another great piece, this one highlighting subtler forms of manipulation than the overt political messaging of Cambridge Analytica et al. I have a few thoughts. First, the algorithms we clearly identify as eroding free will share a lot of similarities with neural mechanisms formed by evolution, developmental context or habit; shouldn’t we then see our own cognitive architecture as an antithesis to free will too? Second, a totalitarian regime that is universal and inescapable is one of the existential threats to humanity catalogued by those who study these things. It’s a genuine risk that we probably ought to begin acting now to understand and mitigate. Thirdly, William Gibson’s latest novel ‘Agency’ deals with a lot of this too: the willingness to follow AI-generated recommendations, those in power pulling the strings, and so on. But it adds an additional layer of reverse causation, where those in the future manipulate actions in the past too…