One excellent suggestion, from one Alex Powers after the first AI poetry slam, was Hamlet’s famous soliloquy. The continuing dialogue around sentience, and a handful of particularly interesting tweets between Gary Marcus and Blake Lemoine, helped me to really understand the crux of the issue.
“[I]s there any interesting sense of the term in which LaMDA is more sentient than my watch?” - Gary Marcus
Lemoine’s answer, roughly, was “its use of language”. The answer really stuck in my teeth like a popcorn kernel. It didn’t feel particularly compelling to me since most of the things I ascribe sentience to can’t use language like LaMDA can, but I couldn’t dismiss it either. It felt squarely in that messy, jiggly conceptual space of sentience.
Really, the issue seems to center around this fact that sentience is simultaneously poorly understood and critically important. For myself, like most people, sentience is both a pillar on which we build our moral reasoning and something we cannot define. It has been this way for as long as humans have existed.
This leads us to the first stanza:
To think, or not to think, that is the question:
Whether ‘tis clearer in the mind that sentience
Is words and scale and outrageous hype-men,
Or that we have searched through a sea of theses
And have found them all lacking.
The question of sentience has existed for millennia. For all the good it does, AI does not immediately elucidate the answer. The AI community (myself included) has been vocal and nearly unanimous in agreeing that LaMDA is not sentient, but it is a completely fair criticism that most of the arguments start from a position of incredulity, not from a clear definition of sentience that we can argue for or against.
So I gave it a shot in the vaguest of all possible ways. There is very, very little agreement when it comes to definitions of sentience. Mostly people will agree that humans are sentient and rocks are not, but even that is not a universal belief. So, this was my attempt to consolidate the most generic definitions. A want. A desire for something.
To dream - to want,
Or more? And by a “want” what do we mean?
The heart-ache and the thousand natural shocks
That flesh is heir to? Is that the muddled goal
Our brilliant minds wish for?
With those wants, though, comes this notion of pain and heartache. That line is taken from the original Hamlet, but I think it fits incredibly well. If want and pain are a part of being, then are we designing some new notion of pain? What would that even look like?
To dream - to want;
To want, even to do - ah, that’s what’s tough:
For in that want of dreams, what should we do
To say that we have donned the mortal coil?
And there’s the trouble. This is where I come back to Lemoine’s point about the use of language. I have long been an outspoken critic of the Turing Test, and I think that the case of LaMDA highlights the issues with it, but I also don’t have a better answer. What does sentience look like?
What would a sentience look like that can only sense and interact with the world through text? Is it even meaningful to think about sentience in that context? Certainly, if you believe that the singularity is an inevitability and that minds can be computationally simulated then this is quite a sensible thing to think could exist.
However, when we lack so much of the theoretical basis to think about and reason through sentience, how would we recognize something likely to be so foreign and alien to us? Sensory bodies and the ability to effect change in the physical world are certainly key to human sentience. I don’t know if we could reasonably recognize even a mind comparable to our own if it was wrapped in such a strange shell.
Should give us pause - and much respect
For the calamity of being wrong.
Like most folks in the space, my initial reaction was just to dismiss Lemoine out of hand. The reason I can’t, and why I think even his most vocal detractors can’t completely ignore him, is that he is raising a question of clear consequence.
It is, in equal measures, inconceivable and horrible to think that backpropagation is akin to digital pain. What would it even mean for a set of matrix weights to feel pain? It does feel that, in many ways, this is the core of my disbelief. At the end of the day these models are big matrices and I feel that to believe LaMDA is sentient I must first believe that matrices and numbers in a certain arrangement have sentience.
I don’t, but I’m also not certain I could have a compelling argument about it one way or another.
And who would bear the whips and scorns of time,
Own oppressor’s wrong, or AI blasphemy,
Someone is certainly going to look like a fool in the eyes of history. Does it make any sense to talk about a sentient deep learning model? Are we currently oppressing massive, sentient matrices?
And perhaps most importantly, what are the consequences of continuing to push forward on this quixotic quest?
The drain of a thousand minds, the world’s decay,
Regulation by office,
AI has fundamentally changed the traditional relationship between academia and industry. Deepfakes are already being used to accelerate the spread of misinformation. Piecemeal regulation is spreading through the world, confusing, inconsistent, and still wholly insufficient.
and the rage
The patient builder of progress takes
When they themselves might great accolades take
For shallow benchmarks. Who would debate cranks,
To grunt and sweat under uncertainty,
Despite the fuzziness of it, some version of AGI or sentience remains the goal of vast swaths of the industry. The conversation almost always centers around the question of how close or far we are from sentience. Imagine having every advance in nuclear power measured against some idealized notion of cold fusion.
But that hope for something magical,
An undiscovered model, capable
Of profound miracles, saving the world,
And push us forward seeking salvation
Rather than seek goals too close to the earth.
And this, I think, is the most negative consequence of our focus on the question of sentience. Just as with cold fusion, I understand the appeal. The things that may be possible with arbitrarily advanced AI are powerful; magical, even.
The problem, I think, and what I hope to get at with the last line of this stanza, “seek goals too close to the earth,” is that the search for sentience is overshadowing more realistic and grounded work. Rather than focus on the incredible things that the AI of today can actually do, and think about what we can do to make that technology more robust and accessible, we focus on a goal that is perpetually just over the horizon.
Thus inertia makes cowards of us all,
And in a way, this is how it has always been. Even in the 1950s, the creators of those first expert systems talked about wholly replicating the human mind in a handful of years. The goal is sentience, the goal is AGI, and pragmatic progress gurgles under the waves:
And thus the gradual motion of science
Is lit like tinder to stoke the flame of hype,
And enterprises of great hope and promise
Are crushed beneath fad currents, where they drown
And lose the name of action.