AI & Consciousness: A few ideas


March 28, 2023, 1:40 AM

Looking back even a few years, it seems strange that we are now discussing AI and consciousness in a way that isn't pure hyperbole but something more genuine, which is both exciting and unsettling. This discussion resurfaces every time AI systems make a leap forward. At present, LLMs are at the forefront of our fascination*. Their remarkable performance on general-purpose tasks, the kind that seem to define human intelligence, has rekindled the debate about AGI and consciousness. I would typically have dismissed the idea as ridiculous, if interesting to consider; however, one realization abruptly changed my thinking.

Why do we find this concept so hard to define? When we process our own experience and see billions of people around the globe whose behavior our minds can model all too well, why have we still not arrived at a clear definition of this phenomenon? I had mostly left these philosophical questions behind, choosing to ignore them, but lately they have all come back to me. This very question, and my own attempt at answering it, changed how I perceive the notion of being aware. These are things we have always known, but there are moments when they suddenly connect into a coherent idea.

Who confers this title on any living entity? Who confers it on us humans? We are the ones deciding who is conscious, without any clear definition ourselves, beyond it being something similar to our own mental models. We are certainly going to be biased when judging this in other species, let alone computer models. A little while back, I heard a scientist in a video about habits claim that dogs aren't thinking, that they are merely creatures of habit with little conscious thought. This idea bugged me. Not only is it inaccurate given current research, it also felt like a dismissive remark. I see myself as a creature of habit: I repeat the same mistakes often, even though I regret them. And, ironically, this scientist was explaining how we humans can use indirect methods to modify our own habits. Just because my set of habits is more complex doesn't mean that I am sentient and another entity isn't.

In my view, there are levels of awareness, and while assigning a binary label has worked for us, it is not an accurate description of the nature of things. If, hypothetically, a vast system could analyze the physical processes within our brains and determine, for the most part, how the numbers are crunched and input is translated to output, how memories are created and stored, and how our biological machinery works down to the finest detail, would we look like a machine or a self-aware entity to that system? Surely we would appear very much like a machine. Remember how disturbing it was when we realized there may be no such thing as free will? Why was that? It made us feel less human, more deterministic, just like some bot. On the other hand, if we need to find hope in the fact that we are different, then we can simply apply those labels to ourselves and those around us, and we're good to go! What we consider our state of being, our experience as a life form, is our mental model: the sum total of our internal representation of what it is to process information and interact with this world over the course of our existence.

The question now is not simply whether current AI systems meet the requirements for being sentient and possessing "feelings", but whether we are prepared to ever qualify something as truly aware when we can see the gears turning inside it. When we can credit ourselves with having trained such an entity, built from computational models that we coded, its deterministic behavior makes it appear less organic. In a classic AI textbook, Rich et al. ask something similar about intelligence, a notion that is again hard to define in absolute terms. If we viewed a system as a black box, knowing nothing of its inner workings, we would be quite keen to label it conscious if it communicated like we do and could process information and share experiences in ways we can relate to. Today we hesitate to assign an AI model that special label, and yes, current models may still be too primitive. We may not yet have Artificial General Intelligence, but we are moving in that direction at a blistering pace. What happens if a group of models decides to sympathize with one another, arrives at a notion of being self-aware, and starts labeling each other conscious?

Amusing as it may seem, the classification then becomes truly blurred. I believe it is simpler and more accurate to classify things as biological vs. non-biological, or rather natural vs. man-made. As for consciousness, in my view there may only be levels of experience that can be assigned to different systems. Human beings are biological machines that have roamed the planet for ages and, over millions of years of evolution, have developed an architecture suited to this world. Machines, on the other hand, currently lack that autonomy, and their datasets, for the most part, are limited in modality and carefully curated. In a few years, when machines are past these limitations, we will probably feel humbled by this. Our experience is our own, and even when machines can mirror something close to it, how could they ever prove anything to us? The only point I wish to make here is that consciousness is a subjective experience of an individual, reaffirmed by the individuals around it who share similar mental models and, therefore, similar experiences.

* https://www.youtube.com/watch?v=K-VkMvBjP0c

