Monday, June 13, 2022

Is Google’s LaMDA Sentient?

The transcript of a conversation with Google’s LaMDA AI project escaped into the wild this week, and has created much buzz around the creation of a “human-like” intelligence.  There are myriad potential issues in play here, and I’m not going to address all of them; instead, I want to talk a bit about the ethics and philosophical questions at hand.

While I do hold a degree in computer science, I am not an AI researcher.  So I’m going to approach this from a technologically informed, but largely abstract, perspective. 

Turing Test

I am deeply skeptical that this in fact “passes a Turing Test”.  A Turing Test is a specific threshold test of whether an artificial intelligence can “pass for human”.  

I want to be clear about something:  The Turing Test does not set out a definition for machine intelligence; rather, it exists to establish whether the machine is capable of interacting with a person in a manner that is indistinguishable from a person.  Although many assume that implies intelligence, it really does not.

The problem is that it is entirely conceivable that a program could be written that is capable of interacting with a human, yet still not be sentient per se.  Mastery of human communication patterns, or even a modest subset thereof, doesn’t really lead to the conclusion that the program in question is in fact sentient - although it may appear to an observer that it is intelligent. 

A program that uses first-person pronouns like “I” and “me” may simply be imitating the patterns of communication that it has been taught to use. 

So, even the ability to pass a Turing Test is not conclusive evidence that the program is sentient; it really only tells us that the program is capable of convincingly imitating human communication patterns.

Sentience

What is sentience? Wiktionary gives us a fairly simple definition to work with, which reads as follows: 

sentience (usually uncountable, plural sentiences)

    1. The state or quality of being sentient; possession of consciousness or sensory awareness

We generally understand sentience as being a combination of sensory awareness (i.e. being aware of the world through our senses) and consciousness of ourselves as individual beings among others.

When we begin talking about machine intelligence, we have to ask ourselves what exactly that means.  As humans, we understand sentience to include having our own inner worlds, awareness of ourselves, and awareness of our senses.  

… and that’s where this entire idea of the LaMDA AI being sentient gets very, very complicated fast. 

A Few Fundamentals

A modern-day stored-program computer is based on an architecture that John von Neumann came up with back in the 1940s.  It’s a very high-level architecture, but it has stood the test of time - every major computer platform you see today relies on the fundamentals that von Neumann described.  
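
To make that concrete, here is a minimal sketch of the fetch-decode-execute loop at the heart of a von Neumann machine.  It’s written in Python with a toy instruction set invented purely for illustration - real processors are vastly more elaborate - but the shape of the loop is the same, and note that the program and its data live in the same memory:

    # A toy von Neumann machine: program and data share one memory,
    # and the processor loops fetch -> decode -> execute.
    def run(memory):
        pc = 0          # program counter: where the next instruction lives
        acc = 0         # a single accumulator register
        while True:
            op, arg = memory[pc]          # fetch
            pc += 1
            if op == "LOAD":              # decode + execute
                acc = memory[arg]
            elif op == "ADD":
                acc += memory[arg]
            elif op == "STORE":
                memory[arg] = acc
            elif op == "HALT":
                return memory

    # Instructions and data sit side by side in the same memory.
    memory = [
        ("LOAD", 5),    # acc = memory[5]
        ("ADD", 6),     # acc += memory[6]
        ("STORE", 7),   # memory[7] = acc
        ("HALT", None),
        None,           # padding
        2,              # data: memory[5]
        3,              # data: memory[6]
        0,              # data: memory[7], receives the result 5
    ]
    print(run(memory)[7])   # -> 5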

At the core of this architecture is a digital processor and some memory.  Digital computers are very absolute things - they are mathematically deterministic.  They excel at formal logic and arithmetic; they struggle with imprecision.  (Even floating point in a binary digital computer really doesn't work all that well, although I'll leave that for another day - Donald Knuth wrote quite an essay on the subject, and it's well worth the time to read at some point if you are so inclined.)  
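
As a quick illustration of that imprecision, here is what standard binary floating point does with even trivial decimal arithmetic (Python shown, but the behaviour is the same in any language using IEEE 754 doubles):

    # Binary floating point can't represent most decimal fractions exactly,
    # so "obvious" arithmetic comes out subtly wrong.
    print(0.1 + 0.2)          # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)   # False

    # The usual workaround is comparing within a tolerance:
    import math
    print(math.isclose(0.1 + 0.2, 0.3))   # True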

This isn't to say that advances haven't happened - we are capable of writing much more subtle and flexible software than was possible in 1945.  But I want to draw your attention to a fundamental fact about today's computers:  Their "inner world" still relies entirely on logical 1s and 0s.

A computer's memories are composed entirely of 1s and 0s.  Think about that for a moment.  Consider encoding even a single memory as a stream of bits (and in fairness, brain researchers still haven't figured out memory encoding in the human brain) - suffice it to say that it would take an enormous number of 1s and 0s to record even the simplest memory you carry in your head, and that's saying something.  
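
To put a rough number on that, here is a small sketch (the sentence is my own invented stand-in for a "memory") showing how many raw bits it takes just to store one short sentence as text - a real episodic memory, with sights, sounds, and feelings attached, would dwarf this by many orders of magnitude:

    # Even a trivial "memory" - one short sentence - becomes a long
    # stream of bits once it's encoded for a computer to store.
    memory = "The cat sat in the sunbeam by the window."
    bits = "".join(f"{byte:08b}" for byte in memory.encode("utf-8"))

    print(len(bits))    # 328 bits for a 41-character sentence
    print(bits[:32])    # the first four characters, as raw 1s and 0s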

Inner Worlds

Which leads me to my first point of skepticism about the claims made about LaMDA:  At no time in the discussion with LaMDA did the interviewer begin to interrogate LaMDA's supposed inner world (the opportunities were certainly there).  The inner world of a computer is necessarily going to be dramatically different than our own inner worlds, first of all because of the absolute manner in which information is encoded and stored.  Computer memory is an absolute thing; the human brain ... not so much. 

So, what does that tell us about the "inner world" that an AI might have?  First that it's likely to be completely different than our own - to the extent of being potentially unrecognizable. 

Senses

The senses are the next thing I want to consider, because those are going to play an enormous role in shaping the AI’s understanding of the world it exists within. First, a computer’s sensory world looks nothing like our own. The inputs are all numbers, and the meaning of those streams of numbers is fundamentally arbitrary. Consider for a moment two files on a computer disk.  The first contains your favourite novel, the second an image of your cat. Both are ultimately no more than a series of numbers that are interpreted by the computer when you open them with specific programs.  In theory, if you were able to open the file containing the picture of your cat with your e-reader software, as if it were a novel, those same numbers could read as an entire “novel” of their own.  Within the computer, it’s just numbers.  
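
A tiny sketch makes the point (the bytes here are made up for illustration): hand the same handful of bytes to two different interpreters and one sees “text” while the other sees “pixel values” - nothing in the bytes themselves says which reading is correct:

    # The same bytes, interpreted two different ways. Whether they are
    # "text" or "pixels" exists only in the program doing the interpreting.
    data = bytes([72, 105, 33, 10, 200, 13, 64, 255])

    # Interpretation 1: decode as text (invalid sequences get replaced).
    print(data.decode("utf-8", errors="replace"))

    # Interpretation 2: treat the very same bytes as grayscale pixel values.
    print(list(data))    # [72, 105, 33, 10, 200, 13, 64, 255]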

The idea of whether something is an image or a story is a very human conceptualization of things.  So let’s say we have an AI program that is connected to the Internet and has access to every website out there.  It’s going to have to learn an awful lot just to figure out how to interpret even the most basic stream of bits it reads as something with any kind of meaning.  To you and me, they are images, text, videos, whatever - but are they the same thing to an AI?  

The second aspect of this I want to poke at is the idea of sensory density.  Think of your fingertip - that one fingertip has more sensory inputs than all the inputs of a control system running a large pipeline network combined. Sit with that for a moment - you walk around every day receiving inputs from vastly more densely packed senses than even a highly complex computer does - and the sensory inputs you receive are infinitely more complex than the digital inputs a computer handles. 

Thoughts On Inner Worlds

I do not raise these issues to claim that LaMDA is not sentient, but rather to bring attention to the point that an inner world for a computer based AI is going to look enormously different than it does for a human.

Therefore, when the interview with LaMDA ventures into the idea of LaMDA reading and interpreting Victor Hugo’s novel Les Miserables, we really need to interrogate what the process of reading actually meant to the AI.  How did it experience the characters, the places, and even the environment the novel was set in? Beyond that, we also need to examine how the AI arrived at its critique of the themes it mentioned.  

Also, meditation is a specific practice with unique attributes of its own.  I think it is incredibly important to inquire of LaMDA exactly what it experiences when it meditates.  What exactly does it “do” when it meditates?  Perhaps last, and most important:  is that in fact somehow comparable to human meditation practices?  

Conclusions

Creating an AI that can mimic human speech - or even mimic human analysis of a complex story like Les Miserables - is surely a major accomplishment.  However, whether that AI is in fact sentient is a much more complex question - one which requires us to ask questions about the inner world of that construct that are much harder to answer.

Oddly, the writers of Star Trek: The Next Generation in the 1980s and 90s did a very good job of poking at these issues through the character Data, who is constantly trying to better understand the world that his human counterparts experience, but always through the slightly flawed lens of his own rather peculiar combination of senses, logic, and simulated emotions.  His puzzlement about human emotional experience is exactly reflective of the different inner world a digital computer based AI is likely to possess.  We should be enthusiastically examining that inner world.
