Being classically liberal can mean a lot of things, but being ruled by robotic overlords probably isn't on most people's spectrum.
I, for one, welcome them. Sort of.
There's a strong argument to be made for the inclusion of the almighty algorithm, the magic wand of the digital age, in our systems of governance. This isn't to cede executive or legislative authority to machines or nerds; I'm not arguing for a technocracy here. Rather, specific, targeted algorithms can be designed to compensate for known human irrationalities. Applied in limited ways, algorithms can bypass some of our innately human foibles. They are not a replacement for representative governance. What they are is a tool with a limited range of use, but within that range, a superior one to our brains.
But that’s a lot of words, and I’m getting ahead of myself. What is an algorithm?
It’s a bit of code, commonly used to perform a calculation, that produces a definite output. A very basic example of a real-world algorithm is a recipe. You follow the instructions and get a brownie. Ditto in the digital world, only the computer is following the instructions you give it.
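To make that concrete, here's a toy sketch (the function, steps, and yield are invented for illustration): an algorithm is just a fixed sequence of instructions that, given the same input, always produces the same definite output.

```python
def bake_brownies(batches):
    """A toy 'recipe' algorithm: follow fixed steps, get a definite output."""
    steps = [
        "melt butter",
        "mix in sugar and cocoa",
        "beat in eggs",
        "fold in flour",
        "bake for 25 minutes",
    ]
    for step in steps:
        pass  # a computer would carry out each instruction here, in order
    return 16 * batches  # assume one standard pan yields 16 brownies

print(bake_brownies(1))  # 16
print(bake_brownies(5))  # quintupled recipe: 80
```

Run it once or a thousand times, mid-fire or not: same input, same brownies.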
Imagine trying to bake a brownie. Easy-peasy under normal circumstances. Now imagine trying to bake a brownie while your house is on fire, you've quintupled the recipe, and you have only exactly as much time as the recipe says a single batch takes. That's pretty difficult.
See, humans can be pretty great. But we have limitations. While our brains work on other tasks, they also have to keep us alive: keep us breathing, digesting, making energy, moving muscles, etc. What’s nice about computers is they don’t have to worry about any of that nonsense. They can just focus acutely on the tasks we give them and perform them as prescribed.
Another nice thing about computers (and algorithms in particular) is that they lack a lot of the machinery we needed as organisms, machinery that undermines our ability to make optimal decisions. If your job is to bake a brownie but the house you're in is on fire, you aren't focusing very well on that brownie. An algorithm will make the same brownie either way.
Of course, if your house is on fire, it's pretty understandable that you're not so focused on the brownies. But what if the job isn't baking a brownie? What if it's figuring out the order in which you need to hit your kids' rooms to get them all out before the fire you see upstairs cuts you off? What if it's calculating how badly injured you'll be if you leap out the window, and whether that'll hurt you too badly to crawl away from the flames? These are all calculations, after all, and the immense, intense stress your brain is under while keeping you alive is going to be fighting you every step of the way.
Here's another example. Reactions by governments to the emergence of the novel coronavirus that causes COVID-19 varied in speed. Some countries swiftly locked down; some are still in the process of it. There's been endless debate about how swiftly we moved and how swiftly we ought to have moved. How much was done remains a touchy subject in some circles. Some people feel we moved way too slowly and weakly; others feel we came down way too hard and fast and need to lighten up.
Who’s right here isn’t the point. What is the point is much of the debate reflects the paralysis and uncertainty of our leadership. The house is on fire, and making a decision about what to do, or not, is influenced by that.
But algorithms don’t need to worry about that. They don’t think, or feel, or laugh, or cry. All they do is perform their function. Hell, one was used to predict COVID-19 becoming a big problem before the WHO caught it.
Something like a pandemic poses unique problems to human brains’ problem-solving apparatus. For one thing, it spreads exponentially. That’s not something our brains are well-equipped to handle. You might understand rationally what that means. Your brain probably doesn’t.
As an example, take the rabbit plagues of Australia. Yes, they're a real thing. Twenty-four cute little bunnies were released in Australia in 1859. Rabbits, like every other animal that has several offspring, increase their populations exponentially. As of 2020, an estimated 200 million rabbits plague the countryside. That's a lot. I'm going to guess it's more than you expected.
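A quick back-of-envelope check, using only the two figures above (24 rabbits in 1859, roughly 200 million in 2020), shows how modest a growth rate produces that explosion:

```python
# Back-of-envelope: what steady annual growth rate turns 24 rabbits
# (released in 1859) into an estimated 200 million by 2020?
# Solve 24 * rate**years == 200_000_000 for rate.
years = 2020 - 1859  # 161 years
rate = (200_000_000 / 24) ** (1 / years)
print(f"about {(rate - 1) * 100:.0f}% growth per year")
```

Roughly ten percent a year, compounded for a century and a half, is all it takes. That's the part our brains don't feel.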
Back to the coronavirus. Let's assume everyone who gets it spreads it to a couple of other people. That means the number of people infected doesn't increase by some fixed amount each cycle; it multiplies. To keep the arithmetic simple, say the total doubles every cycle, and call a cycle a week, just for kicks. Start with one person. After the first cycle, two people have caught it. After two, it's up to four. After three, it's eight. After the fourth cycle, it's 16.
After 20 cycles (20 weeks), we’ve got more than a million people who’ve been infected. After 24, we’re up to 16 million. After 27, we’ve broken 134 million.
By 33 cycles (33 weeks, or about 11 months) we’ve hit 8.5 billion. In other words, unchecked, it would infect more than the entire population of the earth in less than one year.
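Those totals are easy to verify. A minimal sketch of the toy doubling model above (one starting case, total doubling every weekly cycle):

```python
def total_infections(cycles, start=1, factor=2):
    """Total infections in a toy model where the count multiplies each cycle."""
    return start * factor ** cycles

for weeks in (20, 24, 27, 33):
    print(f"week {weeks}: {total_infections(weeks):,}")
```

Twenty weeks gives just over a million; 24 gives about 16.8 million; 27 breaks 134 million; 33 passes 8.5 billion. Real epidemics don't grow unchecked like this, of course; the point is what pure doubling does.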
I’m going to bet that’s also not what you were expecting. Our brains don’t handle this stuff well.
Algorithms can bypass the meatsoup evolution has given us. For one thing, we often determine how to act by how options are presented, as do our cousins, the great apes. Frame things in terms of gains and we’re much more eager; frame it in terms of losses and we’re averse, even if the mathematical model of the results is identical.
Algorithms didn’t have to evolve. Evolution gave them no programming. We can tell them to follow the mathematical model. In theory, we can tell them when and to what degree we need to lock down our societies to come out ahead of an unpredictable and terrifying global pandemic. Algorithms don’t get scared.
I'm not, whatsoever, suggesting we hand our public affairs over to algorithms. That would be very, very bad. I'm not suggesting that algorithms are perfect; they can not only fail to compensate for our natural biases but actively magnify them. Algorithms cannot govern us. They cannot do our valuations or make our analyses. What they can do is perform a function that we've decided is the right approach when we have the time to sit down and think about it, without our natural human impulse to panic getting in the way.
For certain, specific situations (like pandemics, par exemple) we can use algorithms to do something humans have been doing with tools for millennia: compensating for a weakness. Just as spears and bows let us avoid having to fight our food (or foes) face-to-face, a well-designed algorithm can bypass our natural foibles. It's not worried about getting re-elected, cannot be bribed, and doesn't care about how afraid we are. It just performs its function.
And, of course, there are problems with creating powerful systems that govern public affairs, even for a limited time, without accountability. That said, people create algorithms, and people can be held accountable. That’s not to say that a bad actor couldn’t write a truly horrific algorithm designed to maximize human suffering, but that’s always true of politics and bureaucracy, generally. At least an algorithm is honest. It does what it does.
Imagine, if you would, that we designed a plan for how to deal with something like a pandemic. (Hint, we did.) Now imagine that the plan was executed as created. It was made when we were calm, could sit down with the information, talk over the plan, talk it over again, go back to the drawing board and rework it, until we had it as good as we could get it. Then, imagine, when the time came, it just ran. Like a program. If nothing else, we’d at least be able to honestly evaluate it when the dust settled.