LaMDA, Google’s AI, is not conscious, but the risk lies elsewhere

“Has Google developed an AI with a conscience?” asked the Washington Post in July 2022, after an engineer published on Medium the transcript of his discussion with LaMDA, the latest chatbot from the Mountain View firm. An obviously eccentric question, but one that had the merit of prompting us to examine the revolutionary nature of the AI technology behind this conversational agent. Before he was fired for claiming that LaMDA was “sentient,” the engineer in question, Blake Lemoine, was part of Google’s Responsible AI organization — the company’s AI ethics department.

As an ethicist, he sought to improve Google’s AI products, in order to eliminate their sexist or racist biases and make them more “fair”. It was during his discussions with a new-generation AI dedicated to automatic text generation, called LaMDA (“Language Models for Dialog Applications”), that the researcher reportedly became convinced that it was self-aware. Blake Lemoine’s conversations with the chatbot would, according to him, have become increasingly strange. “I want everyone to understand that I am, in fact, a person,” the chatbot allegedly told him. Or again: “I’ve never said this out loud before, but there’s a very deep fear of being turned off.” In short, discussions worthy of “2001: A Space Odyssey”…

The ethicist shared his concerns with his superiors at Google, who ended up removing him from the project. Feeling disavowed by his hierarchy, the engineer then published, without their authorization, the transcript of his conversations with LaMDA, which eventually caught the attention of the American media. In Wired, he went so far as to declare that he was “deeply convinced that LaMDA is a person”. He was finally dismissed by Google at the beginning of August 2022, the firm stating that there was nothing to confirm that this AI was endowed with “sentience”.

What exactly is LaMDA?

If this story may seem anecdotal, it could well mark the history of AI in more ways than one. But before understanding why, we need to take stock of what LaMDA is. It is, as the name suggests, a collection of “language models” for “dialog applications”. In other words, an AI specialized in automatic text generation, intended for Google’s next conversational agents. This next-generation AI is, incidentally, the main competitor of OpenAI’s GPT-3, the technology behind DALL-E 2, which we recently discussed because it could well revolutionize automatic image generation.

Why LaMDA Could Revolutionize Automatic Text Generation

And this is where LaMDA comes in. This conversational AI was designed by Google to hold a conversation on any subject, in an almost “natural” way. What distinguishes it from Google Assistant / Google Home, or from Siri, is that it is able to understand what is asked of it, to infer the meaning of human language, and to generate the most natural responses possible.

What makes LaMDA revolutionary is its ability to generate conversation freely, and to discourse on an almost infinite number of subjects. This AI does not just refer to online articles, as Google Assistant does, but provides elaborate answers to complex and open questions. Machines have so far had great difficulty generating natural human language, a means of communication we use innately in daily life, but one that is very complex because of the nuances of language, the context, or the tone employed. LaMDA, however, benefits from colossal computing power, coupled with an algorithm that allowed it to train on an immense corpus of text, in order to identify patterns of speech and language, and to predict those most likely to make sense to us — in other words, to appear natural.

Thus, Google’s AI is able to deliver speech that sounds like it comes from a human being, because it “understands” the context of the dialogue and is able to follow the flow of conversation to respond precisely to what its interlocutor asks. Keep in mind, though, that LaMDA is just a mathematical “function” that searches for a probable outcome, predicting nothing more than the next words in a sequence. Thanks to its refined models, it is notably able to create poems (for example, a poem in the style of Baudelaire, because it has read everything about Baudelaire) and literary texts of its own.
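To make the idea of “predicting nothing more than the next words” concrete, here is a deliberately toy illustration: a bigram model that counts which word follows which in a small corpus, then picks the most frequent continuation. Real models like LaMDA do this with billions of learned parameters rather than raw counts (the corpus and helper names below are invented for the example), but the underlying objective is the same.

```python
from collections import Counter, defaultdict

# A tiny hypothetical "training corpus".
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # "on": the only word that ever follows "sat"
print(predict_next("the"))  # "cat": the most frequent continuation
```

The “intelligence” here is purely statistical: the model has no idea what a cat is, only that “cat” often follows “the”. Scaled up enormously, that is the principle behind the fluent text LaMDA produces.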

In May 2022, Sundar Pichai, CEO of Alphabet, explained how his AI “explores the art of conversation” in the introduction to the Google I/O conference: “Exchanges with chatbots, which have become familiar, show how quickly these software robots can be confused by certain questions, when these stray from the well-trodden paths they know how to take. Understanding language is one of the most difficult puzzles to solve. Our goal with LaMDA is to sustain a fluid exchange on various themes, without limits. LaMDA is open to all domains, which means it was designed to converse on any subject.”

Like several recent language models, such as OpenAI’s GPT-3 and BERT (another AI from Google Research), LaMDA is built on “Transformer”, a neural network architecture invented and released as open source by Alphabet in 2017. This architecture produces a deep learning model “that can be trained to read many words by looking at how they relate to each other, and predicting the next words. But unlike most other language models, LaMDA has been trained on dialogue and the nuances that distinguish an open conversation from other forms of exchange”, explains the Mountain View firm. Specifically, LaMDA has 137 billion parameters.
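The mechanism that lets a Transformer “read many words by looking at how they relate to each other” is scaled dot-product attention. Below is a minimal NumPy sketch of that single operation, not of LaMDA itself: each token’s output vector becomes a weighted mix of every token’s value vector, with weights derived from query–key similarity. The tiny dimensions are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each row of the output mixes all rows of V, weighted by how
    similar the corresponding query is to each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

# 3 tokens, each a 4-dimensional embedding; self-attention uses the
# same matrix as queries, keys, and values.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(x, x, x)
print(out.shape)                        # (3, 4): one vector per token
print(np.allclose(w.sum(axis=1), 1.0))  # each token's weights sum to 1
```

In a full Transformer this block is repeated across many heads and layers, with learned projection matrices producing Q, K, and V — which is where LaMDA’s 137 billion parameters live.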

According to Sundar Pichai, by training Transformer-based models on dialogue, “they can learn to address virtually any topic”, and the model can then be “tuned to improve the specificity” of its responses. More striking still: he also asserts that with a model like LaMDA, no answer “is predefined”, thanks to the immensity of the “learned concepts”, which allow the AI to develop an open dialogue that “seems natural”. However, Google remains measured, specifying that this research is “still in its infancy”, that some answers are “absurd”, and that discussions can “be cut short”. In other words, LaMDA is still far from being intelligent, and even further from being conscious…

LaMDA is not aware, but will users be aware enough?

Sorry, then, to shatter some people’s fantasies about the imminent birth of a “strong” AI, but LaMDA only “gives the impression” of being conscious. And that is where the real problem lies in this whole story. The question is not so much whether LaMDA is sentient, but rather the risk that humans will believe it is. By interacting with such a powerful AI, how many ordinary users of Google services (since LaMDA is intended to be integrated into all of the firm’s products) will be fooled, as engineer Blake Lemoine was?

Our mania for humanizing our tools, fed by our science-fiction imagination: this is the real problem. LaMDA imitates human language so well that it has passed the Turing test, to the point that some could easily forget that when asked its opinion on the meaning of life, death, or its own state of consciousness, it is merely producing a statistical reconstruction of everything it was given to analyze during its training.

A much smaller version of LaMDA is supposed to be released soon, so that the general public can get an idea of its potential — but is it really a good idea? Are ordinary users, in particular those in a situation of psychological fragility or emotional dependence, “ready” to converse with such an AI? Aren’t they likely to “fall in love” with their chatbot, like Joaquin Phoenix in “Her”? As the psychoanalyst Serge Tisseron predicts in “The Day My Robot Will Love Me”, the risk is “that one day, we end up preferring our robots (which will always meet our expectations) to our fellow human beings”. To the point of isolating ourselves (even more).

The development of conversational AIs has made giant strides in recent years, and they seem close to commercialization. It remains to be seen whether we will really be ready for their arrival.
