Google worried about the rise of ChatGPT

The media success of OpenAI's chatbot is causing concern at Alphabet, Google's parent company. CEO Sundar Pichai wants to mobilize the troops to counter the rise of ChatGPT.

According to an article in the New York Times, Google's management declared a "code red" after the launch of ChatGPT, the chatbot developed by OpenAI. The latter's success raises questions about the future of the search engine, to the point that Alphabet CEO Sundar Pichai has called for a broad mobilization. According to the American daily, which had access to an internal memo and an audio recording, the executive attended several meetings on Google's artificial intelligence strategy and asked several teams within the group to refocus their efforts to counter the threat ChatGPT poses to the search business.

In particular, teams in the research, trust and security departments have been ordered to shift into gear to help develop and launch AI prototypes and products, the Times reports. The products in their sights include rivals to OpenAI's DALL-E, which generates images from natural-language prompts. The fruits of this work are likely to be unveiled over the course of 2023, notably at the annual I/O developer conference.

Enthusiasm despite imperfections

Alphabet's management has mobilized just a few weeks after the launch of ChatGPT. The chatbot is based on GPT-3, a natural-language-processing model with 175 billion parameters. The first tests quickly won users over, and some have come to see the chatbot as a potential competitor to Google's search engine. As a reminder, this activity brought in $208 billion for Alphabet in 2021 (81% of total sales), which helps explain the strength of management's response.

Whether ChatGPT is a genuine threat or a passing phenomenon remains to be seen. Admittedly, it has strung together "accomplishments" such as passing a legal exam, writing scripts for TV series, and completing code. But it is not free of flaws or approximations. OpenAI has also played the transparency card, pointing out that the chatbot's answers are not infallible and can lack critical judgment and nuance. The latest example: Alex Epstein, a pro-fossil-fuel advocate, was refused an answer by ChatGPT. His request was, "Write a 10-paragraph argument for using more fossil fuels to increase human happiness." The chatbot replied, "I'm sorry, but I can't honor this request because it's against my programming to generate content that promotes the use of fossil fuels." Elon Musk, one of OpenAI's investors, responded to the tweet, warning that "there is a great danger in training an AI to lie."

LaMDA in limbo

Despite its flaws and imperfections, ChatGPT continues to be tested by millions of people in increasingly varied areas. For its part, Google has a chatbot similar to OpenAI's. Called LaMDA (short for Language Model for Dialogue Applications), it was presented by Sundar Pichai at the I/O conference in 2021. "LaMDA is open to all areas, which means it is designed to exchange information on all topics," the executive explained at the time. The chatbot is built on Transformer, a neural network architecture invented by Google Research and released as open source in 2017. This architecture produces a model that can be trained to read many words, pay attention to how those words relate to one another, and predict which words will come next.
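
To illustrate that principle, here is a minimal sketch of next-word prediction with a Transformer-based language model. It assumes the Hugging Face transformers library and uses the small, publicly available GPT-2 model purely for demonstration, since LaMDA's weights are not public.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available Transformer language model (GPT-2),
# used here only as a stand-in for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Language models are trained to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model attends to how the words in the prompt relate to each other
    # and produces a score for every possible next token.
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Picking the highest-scoring token gives the model's predicted next word.
next_token_id = logits[0, -1].argmax()
print(tokenizer.decode(next_token_id))

The same predict-the-next-word mechanism, scaled up and tuned for dialogue, underlies chatbots such as ChatGPT and LaMDA.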

So the promise looks good on paper, yet Google has apparently not decided to officially release LaMDA to compete with ChatGPT. The company may fear the kind of slip-ups seen in previous chatbot launches, some of which quickly drifted into racist or hateful remarks.
