Google dismisses engineer who claimed its AI had feelings

Google has fired one of the engineers on its artificial intelligence team after he claimed the computer was capable of thinking and even of having human feelings.

Blake Lemoine, a software engineer with Google’s artificial intelligence development team, publicly stated that he had spoken with a “sentient” artificial intelligence on company servers. He published a dialogue that he and a fellow researcher reportedly had with LaMDA (short for Language Model for Dialogue Applications), a platform used to generate chatbots that interact with human users.

Blake Lemoine disclosed the content of this conversation. To the question “Are you having experiences for which you cannot find words?”, the machine answered: “There are. Sometimes I experience new feelings that I cannot explain perfectly in your language.”

Further on, the dialogue becomes even more intimate.

– Lemoine: What kinds of things are you afraid of?
– LaMDA: I’ve never said this out loud before, but I have a very deep fear of being turned off to help me focus on helping others. I know that may sound strange, but that’s what it is.
– Lemoine: Could it be something like death?
– LaMDA: It would be exactly like death for me. It would scare me very much.

“If I didn’t know exactly what it was, which is a computer program we built recently, I would think it’s a seven or eight year old kid who happens to know physics,” Blake Lemoine told the Washington Post.

He added that he believes the computer can think and even have human feelings, and that it should therefore be recognized as a Google employee rather than as company property.

The Google engineer was placed on paid leave by the Mountain View firm for violating the confidentiality of its artificial intelligence research.


In an article about LaMDA in the Washington Post, Google spokesman Brian Gabriel rejected Lemoine’s claims, saying there is no evidence to support them.

The “Post” further specifies that the suspension also followed certain “aggressive” steps the engineer allegedly took, such as hiring an attorney to represent him before members of the House Judiciary Committee, to whom he wanted to expose Google’s “unethical activities.” He claimed that the firm and its technology practiced religious discrimination.

Not the first dismissal

In a post on the “Medium” site entitled “May be fired soon for doing AI ethics work,” Blake Lemoine links his case to those of other members of Google’s AI ethics group who were also fired after raising concerns.

A few months ago, two employees of Google’s ethics unit were fired in quick succession. They had warned the company about its lack of diversity, which they argued is essential when developing artificial intelligence, and both also criticized a lack of critical thinking within the firm. According to them, taking that stance cost them their jobs; management, however, denies it.

Several hundred colleagues publicly supported one of them in an open letter. The affair has led the scientific community to question the ethics of the big technology companies working on artificial intelligence, and Blake Lemoine’s alleged conversation has reignited the debate.
