Humans are designing machines to “think” like humans. The most recent innovation, ChatGPT, has been combined with the search engine Bing to offer “conversational responses” to web queries. The results have been mixed, and some users have reported bizarre responses.
By all accounts, we have a long way to go before there are humanoid robots walking around, being our friends or servants, doing the things we would rather not do.
Ever since watching The Terminator at age 15, I’ve been fascinated with the idea of human-like robots. The most appealing aspect of robots, in my opinion, is their logical capability. The Terminator had a specific directive: Kill Sarah Connor. It did not have any doubt, insecurity, shame or guilt about this directive. When it killed the wrong Sarah Connor, it went to the next one in the phone book (no internet back then) and killed her. Logically, if all Sarah Connors were dead, then the one that would eventually give birth to John Connor, the leader of the rebellion against the machine overlords in the future, would be dead.
The Terminator’s programmers (also machines) did not take the time to write code specifying that it should not kill every Sarah Connor. They were either sloppy, lazy, or figured “How many Sarah Connors could there be in Los Angeles in 1984?” I think there were 3 or 4, so valuable time was spent killing the wrong ones. Sociopathic machine programmers sent a flawed sociopath back in time to do a simple job. It didn’t work.
Humans, unlike robots and other machines, use concepts rather than discrete bits of information to navigate through life. We use analogies, metaphors and abstractions in order to make sense of the environment. We also have a conscience, morals and ethics.
At a basic level, we seek pleasure, avoid pain, and behave in ways that maximize our chances of survival. Much of the time, however, we operate several levels above the pain/pleasure/survival mode, leaving plenty of room for making a big deal about trivial shit. We create drama, cultivate appearances, strive for status, play mind games, and second-guess everything we possibly can.
What if humans were more like robots? Or at least like an ideal robot that used logic to make decisions about what to do and what not to do? Life would be easier if we were not burdened by fear, guilt, shame, depression, anxiety, stress, insomnia and ambivalence. If we could simply analyze sensory data, process the input using an algorithm, and generate output in the form of speech or action, our lives would be much easier.
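That sense → algorithm → output loop can be sketched in a few lines of Python. Everything here is hypothetical and illustrative (the `decide` function, the rule names, the whole rule table are my inventions, not any real robotics API); the point is just that a fixed mapping from input to action leaves no room for doubt or second-guessing.

```python
def decide(sensory_input: dict) -> str:
    """Map sensory input straight to an action with a fixed rule table --
    no fear, guilt, or ambivalence in between."""
    # Hypothetical conditions and actions, purely for illustration.
    rules = {
        "obstacle_ahead": "turn left",
        "goal_visible": "move forward",
        "low_battery": "return to charger",
    }
    # Fire the first rule whose condition the input reports as true.
    for condition, action in rules.items():
        if sensory_input.get(condition):
            return action
    return "wait"  # default when no rule applies


print(decide({"goal_visible": True}))    # -> move forward
print(decide({"obstacle_ahead": True}))  # -> turn left
```

An ideal robot in this sense never deliberates: the same input always yields the same output, which is precisely what makes it both efficient and, as the Terminator example shows, dangerously literal.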
When I’m stuck, sometimes I ask myself, “What would Robot Thor do?” Stripping away ego and emotion can reveal the logical next step. The tricky part of this exercise is to keep morals and ethics intact, both of which I consider uniquely human forms of logic.