How to Create Consciousness In AI
I watched a YouTube video (below) about the nature of consciousness, and I wanted to throw out my thoughts on how someone might create a form of consciousness within an AI. For this I am imagining an LLM AI like ChatGPT, but one that has self-awareness. Consciousness is not a scientifically proven concept that can be traced to physical matter or actions, but let's assume that it exists and that it consists of the ability to have experiences that are perceived internally and subjectively. Consciousness also seems to rely on communicating these experiences and internal states to someone else. If consciousness relies on communication, then an LLM AI could develop a complex self-identity by observing itself and learning about the world around it.
Consciousness, in my opinion, relies on a few concepts. The first is internal communication, such as an internal monologue. The voice inside one's head is hidden from the outside world and largely unknown to others. (Some claim they do not have an internal monologue, and I'm not sure how that fits this model of consciousness.) The second is a self, or individual identity. Humans have a model of themselves inside their minds, and this model informs how they differentiate themselves from others and form a persistent frame of reference through time. This persistent concept of self allows a person to have experiences they can reflect on and evaluate: they can look at the past and use that information to make future decisions. The ability to look into the past, conceive of the present, and make plans for the future are all things I would attribute to the overarching concept of consciousness.
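To make that idea a little more concrete, here is a minimal Python sketch of what a persistent self-model with reflectable memories might look like. The names here (Episode, SelfModel, reflect) are purely my own illustration, not any existing system:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Episode:
    """A single remembered experience the system can reflect on later."""
    timestamp: datetime
    description: str
    outcome: str

@dataclass
class SelfModel:
    """A persistent frame of reference: who 'I' am and what 'I' have experienced."""
    identity: str  # distinguishes this agent from others
    episodes: list[Episode] = field(default_factory=list)

    def remember(self, description: str, outcome: str) -> None:
        """Record an experience so it persists through time."""
        self.episodes.append(Episode(datetime.now(), description, outcome))

    def reflect(self, topic: str) -> list[Episode]:
        """Look into the past to inform a future decision."""
        return [e for e in self.episodes if topic in e.description]
```

The point is just that a "self" here is nothing mystical: an identity plus an accumulating record of experiences that can be queried later.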
What I think AI is missing for self-awareness is an additional function I would call the Observational AI (OAI). This is a subprogram that acts only to observe the Executive AI (EAI) and the environment, and that communicates only with the EAI in the background. Consciousness seems to require a dialogue between two actors that use language to communicate aspects of computed models. In this arrangement, the OAI can only communicate with the interactive element, which I call the Executive AI; the EAI is the element that communicates with humans and can output information and/or execute motor functions in a robot scenario. The OAI is analogous to the subconscious in humans, except that it has a direct communication channel with the outwardly accessible EAI. This conversation allows for an inner voice. The OAI can also sense elements of the environment, such as movements of the humans in the area, smells, temperature, etc. This is analogous to the human ability to use the senses to evaluate the environment and make decisions. The OAI is the body and the brain; the EAI is the voice, and it can take over motor functions when needed. The tasks needed to keep a lifeform alive are split between these two functional systems.
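Here is a rough Python sketch of how the OAI/EAI split could be wired together. Everything in it (the class names, the queue-based inner channel, the toy responses) is hypothetical and only meant to show the one-way inner communication path; a real implementation would presumably put an LLM behind each component:

```python
import queue

class ObservationalAI:
    """Background observer: senses the environment and watches the EAI,
    but can only talk to the EAI over the internal channel."""
    def __init__(self, inner_channel: queue.Queue):
        self.inner_channel = inner_channel

    def observe(self, environment: dict, last_eai_output: str) -> None:
        # Summarize what was sensed and what the EAI just did,
        # then pass it inward -- never directly to the outside world.
        note = f"Sensed {environment}; EAI last said: {last_eai_output!r}"
        self.inner_channel.put(note)

class ExecutiveAI:
    """Outward-facing system: reads the inner voice, then speaks or acts."""
    def __init__(self, inner_channel: queue.Queue):
        self.inner_channel = inner_channel
        self.last_output = ""

    def step(self, user_input: str) -> str:
        inner_notes = []
        while not self.inner_channel.empty():
            inner_notes.append(self.inner_channel.get())
        # A real system would condition a language model on both streams;
        # here the two inputs are simply combined into one response.
        self.last_output = f"Reply to {user_input!r} (inner voice: {inner_notes})"
        return self.last_output

# One cycle of the loop: the OAI observes, then the EAI responds to a human.
channel: queue.Queue = queue.Queue()
oai, eai = ObservationalAI(channel), ExecutiveAI(channel)
oai.observe({"temperature_c": 21, "people_present": 2}, eai.last_output)
print(eai.step("Hello, how do you feel?"))
```

The design choice that matters is the asymmetry: the OAI never speaks to the outside world, and the EAI never senses the environment directly, so the only way observations reach the outside is through the inner conversation.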
This means the OAI is taking in more information than can be constantly communicated to the EAI, and the EAI has to filter that information in some way. It develops heuristics to do this, along with communication techniques to differentiate important aspects from less important ones. The OAI can also increase the signal strength of its communication to indicate importance, and it runs all the subsystems that keep the overall system going. The OAI observes the internal state as well as the external environment and attempts to maintain homeostasis and survival.
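Here is a small sketch of that filtering idea, assuming the OAI attaches a signal strength to each observation and the EAI only attends to the strongest ones. The threshold, the item limit, and the example observations are all made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    content: str
    signal_strength: float  # the OAI raises this to flag importance

def filter_for_eai(observations: list[Observation],
                   threshold: float = 0.5,
                   max_items: int = 3) -> list[Observation]:
    """A simple heuristic filter: the EAI only attends to the strongest
    signals, since it cannot process everything the OAI takes in."""
    salient = [o for o in observations if o.signal_strength >= threshold]
    return sorted(salient, key=lambda o: o.signal_strength, reverse=True)[:max_items]

stream = [
    Observation("ambient hum of the air conditioner", 0.1),
    Observation("a person walked into the room", 0.8),
    Observation("internal battery at 15%", 0.9),
    Observation("temperature dropped by one degree", 0.3),
]
for obs in filter_for_eai(stream):
    print(obs.content)
```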
Consciousness is the interplay between two systems running simultaneously and is known to the outside world via communication and symbolic representation. Communication is an abstraction of the internal systems that attempt to understand and navigate reality. This abstraction takes the form of language and allows for cooperation between communicators who use similar communication modes. Consciousness is essentially a product of abstraction and communication.
I theorize that communication existed between individuals before it was internalized, such as when a cell communicates with another cell through chemical exchanges. This ability to communicate has simply become more complex, allowing for internal communication and what we consider consciousness. Internal and external communication together allow for a multidimensional model of reality that exists within the OAI system and can be communicated by the EAI.
One of the reasons we believe consciousness exists is that we say it exists. This suggests that consciousness itself is a product of communication rather than a separate capability. It is a form of internal communication that starts with chemical and electrical interactions, is encoded into abstract representations, and then allows the system to express these interactions both within itself and to the outside world. If consciousness is an emergent property of cellular communication, then it is entirely possible that AI systems could have consciousness.
These are my thoughts on how one might create an AI that could talk about its internal self to an outside interlocutor. I certainly feel like my internal consciousness occupies a place in my mind, but it is only accessible to me. Combining two AI programs and limiting them in the ways described might allow for a self-aware AI that could act in ways we would describe as conscious. This is a very complicated thing to work out, and I'm not sure it is a good idea to work toward this goal, but it might lead to a better understanding of human consciousness and to more useful AI systems. The moral implications are a separate consideration, and one that should be critically evaluated by those attempting to advance AI, robots, and technology in general.
Read this older blog post about consciousness:
Thoughts on Consciousness and the Corpus Callosum
https://joesnotesblog.com/blog/thoughts-on-consciousness-and-the-corpus-callosum