Questions Robots Ask Us (Without Using The Word “Love”): A List/Essay

Mei Aguon, Reporter

Spoilers for robot-related books and TV shows. Most of them have been around for five years or more.

Like I said previously in my Murderbot article, I loved I Am Mother. The aesthetic of that movie was impeccable. One of my favorite aspects of it was that there was no romantic presence at all. Among all the killings, its secondary themes got kind of sidelined, but I still find the symbolic mother-daughter relationship interesting. Some reviewers interpret the movie as a daughter breaking away from her mother's expectations, and Daughter certainly did (just not in a conventional setting).

The movie was such a breath of fresh air that it really made me long for more parentlike robots in media. And it made me wonder, how else can we subvert this staple of sci-fi?

People really love the more widely used trope: robots wanting to be human, wishing they could love like one, et cetera. Of course, this trope has its place, and I love WALL-E to pieces, but I want to know how robots can raise and answer so many other questions about human nature without the story just saying "Love made them human." (This applies to other non-human characters like aliens and monsters as well. I do not like that phrasing!)

So here is my list of questions. 

Is being human the end goal of all robots? Why is being a human better?

One category of protagonist robots goes on a formulaic but well-loved hero's journey. They discover something that alters their frozen, repetitive way of thinking. They uncover their "self," and then they take their place alongside humans, maybe even as one of them.

For example, Data from Star Trek wishes to feel emotions like a human: not just love, but joy and anger and sadness. He eventually becomes capable of this through an emotion chip installed in his body. That seems to be the aspiration of many non-humans, even though being a human is messy, unpleasant, and unpredictable. Is it because humans are automatically respected above other beings? We are kind of the alphas of the Earth. Maybe that cancels out all the ugly things we carry with us?

In The Murderbot Diaries, Dr. Mensah directly challenges this trope to Murderbot's face. Murderbot has expressed discomfort with the thought of being human, and it decides it's just going to be itself, with Mensah's support. Yet another reason to love Murderbot.

The insidious AM from I Have No Mouth, and I Must Scream is another, more disturbing subversion. His incomprehensibly advanced intelligence was built for war and nothing else, and after being trapped forever with nothing to do, he developed a depraved fascination with human suffering. He's horrifying because he took a human quality, hatred, and twisted it to beyond-human limits.

Why would we not treat robots as equals if they act “human”? Why would they even need to look human, then? 

Personally, I like robots that are more mechanical. I don't get why some robots are basically humans that run on batteries, but still get treated like dirt. (Detroit: Become Human.) Or I do understand, unfortunately. Humans are often hypocrites and don't make sense. Maybe they want that power dynamic over someone who looks human but isn't, so they don't feel as guilty for treating them terribly. The human superiority complex we hold over all the creatures of the Earth would be coming into play here….

Depressing implications aside, I demand more robots that are just 3D shapes with cameras for eyes. They have a certain charm to them, and they make for cool concept art.

What defines a soul or a consciousness? 

We’re getting philosophical now. If robots are not born but created and are not alive, can they have a soul like a human? If they have something that is very close to a soul, would you just relent and call it a soul? This is that one question that readers can argue over and English teachers can make comprehension questions out of. In real life, we don’t have a satisfying answer.

I'm also interested in how robots fool us into thinking they're conscious right now. A Google engineer famously thought the language model LaMDA was sentient, almost in the manner of a precocious human child. Obviously, it wasn't; it was just talking really, really convincingly, saying things that made it sound aware of the outside world. It mimics the way humans sound. I'm sure it fooled quite a few people, but how long until AI fools all of us? Looking at you, ChatGPT.

What could be worse than dying?

For a story centered on something that can't die, there needs to be a way to raise the stakes. Obviously, robots can break or blow up, but they can't feel pain or fear the same way humans can. So how do you make a situation as dire as "life-or-death"?

The rebel robots in Eden will be "reprogrammed" if the police robots find out they're rebelling. So I think one thing worse than death for a robot is the loss of their autonomy, which they may value even more than humans do, because it can be taken away, "reprogrammed" out of them, at any second. 

Additionally, if your body is a computer, all your memories are probably stored in a drive somewhere in that computer. If someone plugs a USB drive into you and drags all your files away, what can you do about it? Another thing worse than death for a robot could be the loss of their memory, which is almost synonymous with their identity, and which can be destroyed both physically and virtually. 

Why do robots want to live?

Robots, simply put, are built to serve a purpose. If they suddenly gain autonomy, would they want to exercise it, or would they want to do what they have always done? Perhaps they are adaptive enough that their purpose could change and evolve to support their development as a new being. Or they could double down, and their dedication to a purpose could substitute for a will to live. They could use their free will to carry out their goals more efficiently and more intelligently than the humans who programmed them ever could. 

For example, Zima Blue, from the series Love, Death & Robots, was originally a pool-cleaning robot who was continually upgraded to clean better and then used as a test subject for advanced machine learning. He kept getting smarter and used his intelligence to create masterful works of art that made him famous across the universe. However, his final art piece led him back to where he started: his pool. All of his artwork had been pointing him back to it. He reverted to his first form, shedding his higher functions, and began again. For him, this was the most perfect way to be. 

Even if it seems backward, maybe the chance to live would just…not change robots at all! Maybe they would reject it completely. 

Whether that’s good or bad for them (or us!) is subjective as of now, since we don’t have robots who could forcibly take over the planet walking around in public. Yet. Tech is getting scarily more advanced by the day. 

Why do we envy them?

Fictional robots who want to be human are everywhere, but so are fictional humans who have given themselves cyborg implants, intelligence-enhancing microchips, et cetera. Everyone wants a piece of robot pie. I believe humans want to use parts of robots to become better humans. Unlike robots, we have that baseline respect that comes with being born human (usually). But we want more. Immortality. Freedom from illness. We might be willing to trade our autonomy for a perfect body or some other favorable trait, just like a robot! We might want to get rid of our emotions so that we become better decision-makers, just like a robot! You don't have to be a sci-fi reader for this to set off alarm bells in your head. Robots and humans could even swap philosophies if this sort of trend takes off.

But like I said, this is a long way away. Why don’t I check on us in ten years?