The Empathy Machine

Imagine a highway. Not just any old highway: a highway with millions of junctions that weave in and out of each other. Gravity doesn’t apply here. Cars change lanes and ebb and flow in different directions, reaching their destinations only to turn around and go somewhere else. It looks like a ball of spaghetti. An utter mess.

This is a metaphor for something, but it doesn’t really fit what I’m describing. Let’s try again.

Imagine you’re inside a computer. A processor, to be more specific. You don’t know what that looks like? Completely fine; I’ll help. It’s billions of switches, layered in an intricate sandwich of more switches. Each switch connects with a number of switches surrounding it, creating an unfathomable number of etched pathways from switch to switch.

Every switch can be flicked on or off with electricity, at speeds imperceptible to the human eye. It’s absolute chaos. Except it’s not. It’s organized. Directed by physics. This is where the metaphor, again, falls apart.

There’s no real metaphor for the magic that is the empathy machine. This machine is real, but it’s biological. It works on electrical impulses, but not only that. There are switches, but not really. These “not switches” have many ways of communicating with each other, one electrical, another chemical. Together they build an immense network of processing power, capable of understanding entire languages, planning, research, and most importantly, empathy.

If you’ve already figured out the empathy machine, great. If you haven’t, it’s the human brain.

The brain is an extremely complex machine; any comparison to it oversimplifies its complexity and capability. Yet I hear such comparisons often, and now more than ever.

Multitudes of tools bolstered by LLMs, some running in a loop with MCPs as agents, have sprung into the foreground of tech. Amazing capabilities, use cases, and investment. Some I make ample use of today. Many are compared to brains. Wrongfully so.

Even with their capabilities, the comparison is an extreme oversimplification. I’ve been thinking about this for a while now, and only recently saw Apple’s paper on the illusion of reasoning: https://machinelearning.apple.com/research/illusion-of-thinking

The tech giant’s paper didn’t get me thinking about this topic, though. It was a simple game of mini golf. I had gone out with family, compiling a 5-person hand-scribbled scorecard. While the family was strolling into a nearby gift shop, I was tasked with totaling up the game and determining the winner. A task I immediately turned over to OpenAI’s multimodal input.

Scorecard image in, scores out, right? At first, yeah, but then I realized all the scores were completely wrong. In fact, not even close. I probed the model further. It regenerated the scores. Again, they were wrong. I asked it to try a different method. It did, hoping to satisfy me with the right scores. It didn’t.

I made a spectacle of it, touting that the model had reached a limitation of its ability. It asked me to upload a clearer image, admitting to no limit, only a lack of clarity in the input. I was skeptical, but I gave it my best go, uploading a super clear image of the mini golf scorecard. Again it responded. Numbers completely askew. Totals wrong.

I gave up. I spent the next 10 minutes totaling up the card. My son had won, by some miracle or fudging of the score. He celebrated, and life went on. For days after, I still wondered: what had gone wrong? Perhaps the input, yeah. The model? Well, I had tried o3, 3.5, and 4. I let the thought go, but it came back often for a few days. Then, it hit me.
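The totaling itself is trivial for deterministic code, which is part of what made the failure so striking. A minimal sketch of what those 10 minutes amounted to (the names and per-hole scores here are made up for illustration; the real card was hand-scribbled at the course):

```python
# Hypothetical 9-hole scorecard: names and strokes are invented,
# not the actual family's scores.
scores = {
    "Dad": [3, 4, 2, 5, 3, 4, 2, 3, 4],
    "Son": [2, 3, 2, 4, 3, 3, 2, 2, 3],
    "Mom": [4, 3, 3, 4, 2, 5, 3, 4, 3],
}

# Total each player's strokes across all holes.
totals = {player: sum(holes) for player, holes in scores.items()}

# In mini golf, the lowest total wins.
winner = min(totals, key=totals.get)
print(totals)
print(f"Winner: {winner}")
```

Three lines of arithmetic, no ambiguity, no retries. The hard part for the model wasn’t the math; it was reading the card.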

The model, ever eager and trained to please, hadn’t considered “giving up”. It was unable to understand its own limitations, unable to try a different method on its own. It refused to empathize with me. It only cared to provide a response, even an inaccurate one. It failed to understand its own ability. This, as I’ve learned, is a key human trait.

The feeling of defeat, the scramble to try a different method, imposter syndrome, and conceding to one’s lack of knowledge are all driven by both reasoning and hormonal processes in our brains. This is where the comparison between AI and human intelligence reaches a biological impasse.

This is all to say that I don’t think we’ve reached the full limits of AI yet. Still, I suspect we are far from a truly intelligent model. I could be wrong, but then again, the brain is a truly amazing biological feat.