Justice and Artificial Intelligence
By Scott Hambrick, Reader-In-Chief
Cut a candy bar in two pieces so two people can share. What’s the just way to do this? Don’t say cutting it in equal pieces is justice. Cutting it exactly in half doesn’t take into account the size, hunger, or affinity for chocolate of the two who will share. Maybe one of the people is deathly allergic to chocolate. Don’t give him any. Cutting it in half damned sure doesn’t take into account who worked for the money that was given for the candy bar. See, we don’t know what justice is.
If you don’t REALLY know what justice is, how do you make decisions about candy bar sharing? When you don’t know what justice is, you are necessarily unjust in your decision making. I hope I don’t have to write more to show that small things like candy bar sharing, and bigger things like the trolley problem (is it better to purposely kill one person to save two or more from a natural disaster, or to let nature take its course?), all lead to the same conclusion: every value judgment we make concerning human life is governed by our notions of justice, whether those notions are right or wrong.
We are thousands of years into this, and we don’t even have a good heuristic for dealing with humans. Heuristics are models we use for thinking about problems. Heuristics give us a quick and dirty way to make good decisions. Heuristics aren’t 100% accurate, but give us most of the information we need to get to good answers. Educated guesses and rules of thumb are types of heuristics, and I believe any heuristic involving the interpersonal needs to be based in justice.
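To make the point concrete, here is a toy sketch of what a hard-coded candy-bar heuristic might look like. Everything in it is invented for illustration; each rule encodes a contestable assumption about justice, and anything not encoded is simply ignored:

```python
# Toy sketch: a naive sharing heuristic for the candy bar problem.
# Each person is a dict of whatever facts we bothered to collect.

def split_candy_bar(a, b):
    """Return (a_share, b_share) as fractions of the bar."""
    # Rule 1: an allergy overrides everything else.
    if a.get("allergic"):
        return (0.0, 1.0)
    if b.get("allergic"):
        return (1.0, 0.0)
    # Rule 2: otherwise, split in proportion to stated hunger.
    total = a.get("hunger", 0) + b.get("hunger", 0)
    if total == 0:
        return (0.5, 0.5)  # fall back to equal halves
    return (a["hunger"] / total, b["hunger"] / total)

# Nothing here weighs who paid for the bar, who loves chocolate more,
# or whether self-reported hunger is honest. The heuristic is only as
# just as the assumptions we thought to encode.
print(split_candy_bar({"hunger": 3}, {"hunger": 1}))  # (0.75, 0.25)
```

The blind spots are the point: the rules look reasonable line by line, yet the result is only “just” relative to whatever partial notion of justice the programmer started with.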
We can’t use our intelligence to solve even the simple trolley problem in a definitive way. Lives are at stake in this problem. We need to be 100% right 100% of the time. Why the hell are we going to turn loose an artificial intelligence on such problems?
Artificial intelligence will emerge from programming we humans have done. We’ll list a bunch of our best guesses about how best to make decisions, create heuristics from those, code it all up, load the kernel on some hardware, run thinkforyourself.exe, and see what happens. Recursive AI will use the heuristics it was programmed with as its jump-off point to improve its “intelligence.” I don’t like this one bit.
Allen Barrow, a friend of mine, once told me, “Thought isn’t like travel. Where you start dictates where you end up.” I think this is true. He also told me, “You can’t fix your broken tools with your broken tools.” This sums it up. If we can’t prove to ourselves or to an AI what the axiomatic truth about justice is, we can’t be sure of the justice of the decisions we, or the AI, will make. Starting with the wrong idea about justice isn’t just doomed, it is doom itself.
I think our children are AIs. Once thinkforyourself.exe runs in them, they have intellectual agency. It happens early, maybe at age two. Until we can reliably prove to our kids what justice is AND give them a heuristic they can use to always act justly, we have no business trying this with superfast computers. If a human is going to spiral out of control, it normally takes 10 to 20 years after booting. The ultra-powerful computing AI of the future will be able to spiral out of control in milliseconds.
Let’s hold off on that until we can figure out how to do this with ourselves and our children.