Nature was unable to evolve telepathy, or even silent, digital radio communication like the kind we use to coordinate swarms of drones. But it has managed to create impressive methods of coordination through the unremarkable media of sound and light, as well as direct touch and even smell. When we build devices that appear to manifest consciousness in these media, even something as simple as a vinyl record that re-creates a human voice when you apply a needle, we feel an unease. The device has entered an uncanny valley, and we suddenly feel that the answer to a question is very important: Is this a person? Or a rock? This could be something baked into us by evolution, a fundamental part of the process we use to recognize other people and communicate with them. Its usefulness precedes us, too: any mobile animal that doesn't lay unfertilized eggs needs to at least learn how to recognize its own kind, or it's gonna have a hell of a time reproducing.
Suppose you created a robot. Suppose it was in the shape of a human with a speaker grill where the mouth would go, and it contained an audio player that would play speech through the speaker, so that every couple of hours, this robot doll would appear to be contemplating its existence aloud to anyone nearby.
You would look at that robot and know that it was doing a poor job of imitating a human. You'd know there wasn't any actual contemplation happening -- no mental process. No being was present whose existence could be contemplated, whatever words came droning from the speaker.
Now, consider a doll with a computer inside, and a program running on the computer that uses the latest generative language techniques, trained so that when you converse with it, it appears to speak to you the way a Midwestern teenager would, throwing in "like" and "um" according to a statistical pattern, and so on. Also install a microphone and some voice recognition software, so the conversation goes both ways. Perhaps there's some suspicious delay because the computer isn't very fast, and maybe the conversation degenerates into nonsense if you give it nonsense for input (unlike most humans, who would stop and say "what the hell are you babbling about?"), but it would still be an alarmingly good simulation. It would absolutely fool a child.
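For the curious, the "statistical pattern" part is almost trivially easy to fake. Here is a minimal toy sketch, with invented probabilities, of sprinkling filler words into otherwise fluent text; this is not the doll's actual program, just an illustration of how cheap the teenager-flavored surface layer is compared to the language generation underneath.

```python
import random

# Hypothetical filler words and per-word insertion probabilities,
# chosen purely for illustration.
FILLERS = [("like,", 0.15), ("um,", 0.10)]

def add_fillers(sentence: str, seed: int = 0) -> str:
    """Insert filler words in front of random words, per FILLERS."""
    rng = random.Random(seed)  # seeded so a given sentence is repeatable
    out = []
    for word in sentence.split():
        for filler, p in FILLERS:
            if rng.random() < p:
                out.append(filler)
        out.append(word)
    return " ".join(out)

print(add_fillers("I was just thinking about whether I actually exist"))
```

The original words always survive in order; the fillers are pure decoration, which is rather the point.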
You still know exactly how it works, but if you placed it in front of some adult who didn’t, they would either assume they were having a two-way radio conversation with a slightly insolent human somewhere else, or they would be forced to assume that they were talking to a sentient robot. Mostly sentient. Easily confused, but obviously trying to think.
The only reason you are not fooled is that you personally witnessed the robot being constructed.
Let a child watch while you construct and program the robot, and they would probably still assume you had just created life. They might even refuse to believe your claim that it's artificial. What does that word even mean to a young person?
This leads to a more fundamental question: Why does it matter, whether the thing in front of you is life, or a simulation of it?
Young people must have some instinctive impulse to recognize the self in other things for society to function at all. If we left it up to external training alone, the chance of empathy arising and then taking permanent hold would be hilariously small. On top of this instinct to empathize, to see bodies and faces and recognize a range of emotions, we soon need to learn that some animate things are more human than others, and that humans are the most human of all. We learn that people have feelings, and all the way on the other side of the spectrum, things like rocks do not.
Relatively speaking, humans are really, really good at cooperation, and empathy is foundational to that. It compels humans to act altruistically, but without accidentally prioritizing the survival of, say, rocks. Young people practice empathy by keeping dolls and imaginary friends, and humans of all ages practice their empathy by keeping pets. But now we encounter modern devices that act so convincingly human we cannot tell the difference. Evolution never had to deal with this.
When we become surrounded by devices that handily stride across the uncanny valley - that realm of suspicion that kept us from giving humans the same consideration as rocks, even really interestingly shaped rocks, for millions of years - what are the consequences?
And what are the consequences, when these robots are impossible to distinguish from people, but they are still programmed and directed and acting on behalf of other people -- people we don't know?
This is the future we are in, right now.
Thoughts
Date: 2026-01-28 09:34 am (UTC)
It matters, because mistreating a being is wrong in ways that mistreating inert objects is not wrong, at least according to many ethical systems. Some others argue that one should not mistreat inert objects either.
Regrettably, history shows that humans have a tendency to misclassify other humans (e.g. women, black people) as inert objects for moral purposes. The same will happen should sentient robots arise. See "The Second Renaissance, Part I" of The Animatrix.
>>It compels humans to act altruistically, but without accidentally prioritizing the survival of, say, rocks. <<
From a certain Western perspective. From some other perspectives of mostly tribal peoples around the world, the planet -- including its rocks -- is alive and vitally important to respect. Look at the mess Westerners have made of the world as a result.
>>And what are the consequences, when these robots are impossible to distinguish from people, but they are still programmed and directed and acting on the behalf of other people -- people we don't know? <<
Typically that people use these new tools to abuse other people, which is a very old bad habit.
You might like the American Society for the Prevention of Cruelty to Robots.
Re: Thoughts
Date: 2026-01-28 08:08 pm (UTC)
To me it's a question not about things that are easily categorizable as non-living (begging the question that the distinction matters) but about things that aren't. Like, if a person encounters a robot that simulates a life in every respect, to the point where it's not possible to tell the difference without, say, watching the thing being built or tearing it apart, what's the point of the distinction then?
I suspect the only way to answer the question is with more questions, e.g., "What's the motive of the people who built and programmed the robot?" "What's it capable of, that would take you by surprise, if you assumed it was human?"
Mistreating inert objects is not - or less - wrong, in most ethical systems, yes. But you bring up the idea of animism, which isn't just a tribal thing of course, and depending on whether you categorize, say, the Japanese (Shinto) as Western or Eastern or tribal in their outlook, may not be outside the realm of the Western world. Unfortunately I don't think the philosophy of the planet being as alive as humans (and as important to respect) survives execution, even in small-scale civilizations. Scale up their activities to the population size the planet is bearing today, and you get massive ecological damage, unless you lean on technology to make things more efficient. Which is why the very best thing any of us can do for the sake of the planet, once global population is at about - let me pull a number out of my ass - ten million people, is to stop breeding. (If only breeding wasn't so compelling!)
Oh, which reminds me! You might really dig a recent scifi series called "Pluribus", if you haven't already seen it. Drags a bit in the middle but it presents a really nifty high-concept situation that's very on-topic.
no subject
Date: 2026-01-28 01:08 pm (UTC)

no subject
Date: 2026-01-28 07:52 pm (UTC)
Or, if you're a Chinese citizen about five years from now, ... to the communist party.
no subject
Date: 2026-01-29 02:29 am (UTC)

no subject
Date: 2026-01-29 04:42 am (UTC)