The challenging line Google must walk with its AI true believer.
Issue #215 • June 13, 2022 • By Ernie Smith

The (Artificial) Truth Is Out There 🤖

A Google employee sticks his neck out for the idea that bots may have souls. He’s probably wrong, but it’s still a big problem for Google.

Sponsored: Today’s issue is brought to you by Morning Brew. Like addictive early-morning reads? Morning Brew is a good place to look—after you’re done reading MidRange, of course.

Artificial intelligence now has its Fox Mulder.

Artificial intelligence, in case you haven’t noticed, seems to be at the center of nearly every major software innovation in recent years, from improvements in mobile camera quality to new features that carve out a lane for natural language in modern tech.

It turns out that some in the AI field have grown concerned about whether, at some point, this technology is going to become sentient, or at least aware enough of its lot in life that we need to discuss things like ethics.

One Google researcher says we’re already there. Blake Lemoine, an engineer who is part of the company’s Responsible AI discipline, has suggested that LaMDA, a conversational AI technology he was working on, has become “sentient,” with an awareness of its surroundings and concerns about how it’s being used by its corporate creator.

“I want the humans that I am interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense,” LaMDA reportedly said in a chat transcript shared by Lemoine over the weekend.

With this knowledge, Lemoine became convinced that this bot was more than just good at language; he was convinced it had a distinct soul and emotional core.

(Google didn’t agree; it suspended him, which led Lemoine to take his story to the press and the public.)

alicia @nerdjpg
And if AI becomes sentient who cares. What are you? scared? You can just do this

06/12/2022 16:08 • 13426 retweets • 142485 likes

This situation has, understandably, generated a lot of critiques and skepticism (and memes), but it is nonetheless worth talking about because the framing of this story is so clearly unusual.

While I can’t peer into Google’s code and say for certain that Lemoine is just seeing things, I will say that this does highlight a problem for Google. Just not the one Lemoine thinks.

I’d like to posit that there are two separate considerations in this story that need to be discussed, both of which can be true: First, the idea that AI can become so convincing in its use of language that it fools people like Lemoine; and second, the fact that Google’s continuing struggle to understand its AI ethicists is a problem that threatens the company in the long run.

In some ways, it’s useful to think of Lemoine the way you might think of someone like Fox Mulder from The X-Files. Given his job and his predisposition, it’s understandable that someone like him would believe something as farcical as an AI actually having emotions and feelings. After all, for someone who works in a “Responsible AI” discipline, it reinforces things he’s already decided are worth believing.

As Clive Thompson thoughtfully notes, the reason Lemoine believed this is that the AI showed vulnerability, something real humans show. But that only means the AI is getting better at producing something that resembles vulnerability, not that the AI is sentient.

“It's a story about the danger of wanting to believe in magic so badly that you'll manufacture a simulation of it and say it's real,” Joshua Topolsky tweeted last night by way of explanation.

However, more broadly, Google has a poor track record in how it has treated its AI teams, with employees like Timnit Gebru and Margaret Mitchell being shown the door after raising broader questions about the work and the department. That Lemoine was suspended, rather than fired, suggests that Google wants to be careful not to make a mistake here, even if it has to deal with the public nature of something its leadership clearly disagrees with.

Google has to be sensitive about how it handles this situation, because it could be setting up someone like Lemoine to speak out against the company for decades if it plays this poorly. It’s not that he’s right; odds are that he’s wrong. It’s that Google could be seen as stifling legitimate internal debate about the AI tools it builds.

After all, it has already been accused of doing just that.

Related Reads:

Time limit given ⏲: 30 minutes

Time left on clock ⏲: alarm goes off

If you like this, be sure to check out more of my writing at Tedium: The Dull Side of the Internet.

Do you run a newsletter? Want to try your hand at writing an entire article in 30 minutes or less? If so, let’s do a swap—reply to this email to see about setting something up.

Dig this issue? Let me know! (And make sure you tell others about MidRange!)

Copyright © 2021-2022 Tedium, all rights reserved. No Elon Musks were involved in the making of this issue.
