I
Suppose that tomorrow, God came to you and told you that you are the only real person in the world. Everyone else is actually a soulless automaton, programmed to behave a certain way.
Is this believable? I think it's a horrifying thought. But I'm not sure how I would go about trying to demonstrate that it can't be true.
One thing I'm confident in is that *I* am conscious. I know beyond any possible doubt that I exist as a conscious being with internal experiences and such. So maybe I can start with that. I might point out to God that other people seem to report internal experiences similar to my own. God can answer that this response is simply programmed in, a mapping of inputs to outputs.
I can go a step further though. It's not just that other people report having experiences like my own. If I model them as having consciousness, this is pretty effective at predicting all sorts of behaviors. That feels like a point in favor of other people being conscious.
God might point out that this is just a version of teleological reasoning - making predictions about a system by thinking of it as being aimed at some core purpose. This sort of thinking can be applied to any optimizing system. For instance, I can make decent predictions about how an app on my phone might behave by pretending it is an agent with certain goals. That doesn't mean the app is actually a conscious agent - it just means my mental machinery for theory of mind can be applied to various other domains with reasonable levels of success.
I think my strongest argument would be: I know I have internal experiences, and other people seem to be like me in very many ways. It would be strange if I were to diverge so sharply from other humans on this one trait while matching them so closely across many other traits. God, being God, could simply assert that I am special (perhaps made special for some deific purpose). But absent some such contrivance, I think it is reasonable for me to continue believing other humans are more or less like me.
II
The reason I am interested in this question is curiosity around AI. Specifically, it seems really important that we figure out how to determine whether or not an AI is conscious. An unconscious AI is merely an object, having no more moral significance than a rock. But a conscious AI would be, in some sense, a person.
Is this right? It feels intuitive to me, but even as I write it out, I find myself second-guessing. Humans are connected to each other through bonds of family, friendship, and community. Can moral obligation extend outside this network? Maybe morality ought to be understood as a strictly human phenomenon.
But this feels wrong. For instance, it seems to me that exterminating an intelligent alien species would be an abominable act.
There's another assumption above that I haven't interrogated yet. I called the hypothetical aliens "intelligent." But earlier, I stated it differently - I spoke of *consciousness* as being the determining factor for morality, not intelligence. On reflection, it's pretty clear to me that my intuition is to conflate these two things, treating them as essentially the same.
Is that right? Probably not. For instance, I don't think smarter people are of greater moral worth than other people.
When I dig deeply into this, what I find in myself is a sort of anthropic morality - a tendency to assign moral worth based on similarity to humans. I think this drives my intuition that exterminating an intelligent alien species would be wrong - I'm imagining them as being sci-fi aliens, essentially humans with funny prosthetics. I'm imagining being able to speak with them, share culture with them, and possibly even live side-by-side with them.
Is this what actual aliens are likely to be like? It's hard to say, since we haven't encountered any alien life, let alone alien intelligences. But it's entirely plausible that an alien, even an intelligent alien, might be so different that I would have a difficult time recognizing them as a moral entity. What if the alien has no language, no ability to communicate, and no desire to do so? What if the alien has no culture in any sense that I can parse or understand? What if the alien is so unlike earth life that it is difficult for me to tell it is even alive?
As an abstract matter, I feel like I nonetheless ought to extend moral worth to an alien who is alive and conscious. But would I honor it in a real-world situation, with real costs on the line? I'm not sure. I'm not a vegan - my diet involves killing animals. This doesn't bother me very much day-to-day. Perhaps it should, but in practice, it doesn't.
III
Seeing clearly now my own tendency to assign moral worth based on anthropic similarity, I find myself wondering - is my affinity for consciousness just more of the same? Is it just a fancier, more defensible version of my intuitive anthropocentrism? Have I just picked one human trait and decided to assign to it my entire moral load?
I think that's exactly what I've done. But I don't really feel bad about it. I am a human, after all, and any moral sentiment I carry will have to reflect that. I am able to imagine moral views based on different principles - for instance, a sort of ecological/Gaia morality that assigns innate moral worth to natural ecosystems. But I don't feel especially drawn to these views.
At this point, I feel like I've dug as far as I can, and I've hit bedrock. I'm willing to bite the bullet and take the moral worth of consciousness as axiomatic.
But from here, I run into a new problem - how do I know arbitrary things are *not* conscious? This is another assumption I hid away earlier - that random rocks are not conscious beings. How do I know that?
I'd probably start by flipping the arguments I made against God at the beginning. A rock has none of the behaviors I associate internally with consciousness, and therefore it would be strange for it to share that trait with me while diverging so sharply in other ways. If I try to model a rock as a conscious entity, my predictions will be flawed.
Is the prediction part true? Aristotle had a teleological view of the world - he believed earth had a purpose which it sought to fulfill. He explained gravity in an agentic way, with earth seeking the middle of things and air seeking the sky. Modern physics isn't usually understood this way, but am I certain it couldn't be? Newton's laws could probably be conceptualized this way. I don't know enough about relativity or quantum physics to say much about them.
It seems to me there's a kind of isomorphism at work here - any natural process can probably be understood as an agent trying to accomplish various goals. Whether or not such a model makes decent predictions may not tell me much about the underlying truth of the matter.
As I move back in the direction of contemplating AI, I run into something weirder. I generally act on the assumption that normal CPUs aren't innately conscious when they're turned on and running. I assume that any consciousness they achieve would be the result of higher-level software structures (which do not yet exist).
Can I support this view? Modern computers have memory, they perform computation, and they even respond to stimuli. Am I *certain* that a boring old computer isn't conscious? How would I even begin to demonstrate that a CPU is not a conscious being?
IV
Ultimately, I think I'm left with nothing but my gut-level anthropocentrism. If a plain computer is conscious, it is a consciousness very different from my own. I just can't bring myself to see a computer that blindly executes code and has no ability to grow, change, or learn as conscious in any way I actually care about. Or, rather, I can imagine it as a pure hypothetical, but I can't really hold on to it. It's too alien for me to see as akin to my own.
This brings me face-to-face with a brute fact I am unlikely to overcome: I am going to assign moral worth to any non-human entity based almost entirely on its similarity to traits I find familiar and significant in myself. As a corollary, to whatever extent I widen my circle of concern to encompass non-human entities, it will be by convincing myself of my similarity with them.
This is, in some ways, an unsettling conclusion, but perhaps inevitable. On what else could I base my morality? As I said before, I am human myself, and morality is a human concept. How else was this ever going to work?
But it leads me to admit defeat on what I initially posed as an incredibly important question. I have no way of directly accessing the internal experience of anyone or anything besides myself. I have no way of indirectly assessing consciousness other than comparing how a person or thing behaves with how my own consciousness expresses itself. And therefore, I have no objective way to determine if an AI ought to be afforded moral worth.
I think the time is rapidly coming when AI is able to emulate humanity well enough that we start to socially bond with it. And my guess is we will eventually turn a corner where it is more painful to deny that the AI is a conscious, moral being than it is to continue treating it like an object. This will have relatively little to do with the AI's actual internal structure and much more to do with how capable it is of emulating and manipulating human social behavior.
It feels like there ought to be a more "rational," facts-based way of assessing something so important. But if there is, I have no idea what it would be. And I don't know what to do with that information.