This situation has, understandably, generated a lot of critiques and skepticism (and memes), but it is nonetheless worth talking about because the framing of this story is so clearly unusual.
While I can’t peer into Google’s code and say for certain that Lemoine is just seeing things, I will say that this does highlight a problem for Google. Just not the one Lemoine thinks.
I’d posit that there are two separate considerations in this story, both of which can be true at once: First, AI can become so convincing in its use of language that it fools people like Lemoine. Second, Google’s continuing struggle to understand its AI ethicists is a problem that threatens the company in the long run.
In some ways, it’s good to think of Lemoine in the way that you might think of someone like Fox Mulder from The X-Files. Given his job and his predisposition, it’s understandable why someone like him would be likely to believe something farcical like AI actually having emotions and feelings. After all, as someone who works in a “Responsible AI” discipline, it reinforces things he’s already decided are worth believing.
As Clive Thompson thoughtfully notes, Lemoine believed this because the AI showed vulnerability, something real humans show. But that means the AI is getting better at producing something that resembles vulnerability, not that the AI is sentient.
“It's a story about the danger of wanting to believe in magic so badly that you'll manufacture a simulation of it and say it's real,” Joshua Topolsky tweeted last night by way of explanation.
More broadly, however, Google has handled its AI ethics teams poorly, with employees like Timnit Gebru and Margaret Mitchell being shown the door after raising broader questions about the work and the department. That Lemoine was suspended, rather than fired, suggests that Google wants to avoid making a mistake here, even if it has to deal with the public nature of something its leadership clearly disagrees with.
Google has to be careful about how it handles this situation, because if it plays this poorly, it could be setting up someone like Lemoine to speak out against the company for decades. The issue isn’t whether he’s right; odds are he’s wrong. It’s that Google could be seen as stifling legitimate internal debate about the AI tools it builds.
After all, it has already been accused of doing just that.