While the author doesn’t ultimately accept the argument that consciousness is first and foremost social, I’m very much reminded of Marvin Minsky’s similar perspective, which I’ve always found pretty compelling: that there were clear evolutionary advantages to developing the ability to model others’ minds, and that once we had that capability, the ability to model our own (i.e., consciousness) came along as a free bonus.
GPT-4 does an awfully good job of modeling humans’ minds, or at least of seeming to; on what grounds can we say that it doesn’t model its own?
I would like at least to begin here an argument that supports the following points. First, we have no strong evidence of any currently existing artificial system’s capacity for conscious experience, even if in principle it is not impossible that an artificial system could become conscious. Second, such a claim as to the uniqueness of conscious experience in evolved biological systems is fully compatible with naturalism, as it is based on the idea that consciousness is a higher-order capacity resulting from the gradual unification of several prior capacities (embodied sensation, notably) that for most of their existence did not involve consciousness. Any AI project that seeks to skip over these capacities and to rush straight to intellectual self-awareness on the part of the machine is, it seems, going to miss some crucial steps. However, finally, there is at least some evidence at present that AI is on the path to consciousness, even without having been endowed with anything like a body or a sensory apparatus that might give it the sort of phenomenal experience we human beings know and value. This path is, namely, the one that sees the bulk of the task of becoming conscious, whether one is an animal or a machine, as lying in the capacity to model other minds.
https://justinehsmith.substack.com/p/no-minds-without-other-minds