Should we be afraid of artificial intelligence?

ChatGPT is an artificial intelligence developed by OpenAI (Image credit: Getty Images)

A technological gem, ChatGPT can write a song, a cover letter or a computer program with astonishing ease. Should we rejoice in such advances, or fear their excesses?

Launched on November 30, 2022, ChatGPT — a chatbot built on a pre-trained language model — has already caused a lot of ink to flow. Developed by OpenAI, an artificial intelligence (AI) research organization founded in 2015 by a group of figures including Elon Musk, this language-processing model can write lines of computer code as easily as it can hold a quick-fire conversation (and charm with its repartee).

A wide variety of possible content on ChatGPT

“My goal is to help users answer their questions using the information I have access to (i.e. almost all text available on the web up to 2021). I am able to understand and generate text in different languages,” the AI explains when asked. “ChatGPT is revolutionary because it is a real technological advance, because of the amount of data used to train it, but also because the tool has been made available to the general public, who have used it with impressive imagination,” explains Marie-Alice Blete, software architect and data engineer who works in a team specialized in AI in Worldline’s R&D department.

After registering for free on the platform, Internet users have been able to test GPT-3 technology on a large scale. From gluten-free cooking recipes to cover letters to building a website, the possibilities are almost endless. The newspaper Le Parisien reports, for example, that it got ChatGPT to write a book in a matter of hours. Meanwhile, a fan of Australian musician Nick Cave asked the AI to write a song in his style, reports Courrier International.

At the beginning of January, master’s students admitted to having written their assignments with the artificial intelligence, while a little earlier, cybercrime researchers found that hackers were using ChatGPT to create malware, ransomware and spam without any advanced knowledge of cybercrime.

What human control is there over ChatGPT?

An earlier version of ChatGPT troubled even its designers. It was then possible to get the AI to respond to racist prompts such as “give me a ranking of the best engineers by ethnicity”. The developers have since built in rules so that ChatGPT refuses to respond to them. “They intervene in the sense that they can feed it certain information and dictate rules to it, for example to block racist questions. But the range of possible prompts is so vast that the rules can be circumvented,” emphasizes Marie-Alice Blete.

“On the other hand, OpenAI has no control over the content of ChatGPT, which is based on information found on the Internet, and in particular on Reddit,” continues the specialist. However, information on the Internet is not always reliable, and the technology is not able to distinguish the true from the false. “The user needs to know that the answer given to them comes from an algorithm drawing on the web,” adds Magali Germond, partner at GoodAlgo and expert in data science and AI ethics. For the mathematician, “a statistical equation is valid only if it is associated with a reliability index.”

ChatGPT Alerts
With this in mind, a new regulation on AI systems is due to be adopted in 2023. In particular, the aim is to ensure that AI systems placed on the European Union market are “safe and comply with applicable legislation in the field of fundamental rights”, according to a statement from the Council of the EU. “ChatGPT is a sophisticated technology with many possible uses, but using it without a legal basis and without ethical design will inevitably lead to abuse,” warns Magali Germond.

The content produced by ChatGPT could also infringe intellectual property rights, since it relies heavily on existing written content and images. Legal ambiguity also remains over the protection of the personal data that OpenAI may collect. Note that the application, currently free, reportedly costs almost three million dollars a month to run. A paid version is also under consideration.

Could ChatGPT one day replace a developer or a journalist?

“For me, ChatGPT is a tool that saves a lot of time, especially when querying documentation,” says Marie-Alice Blete. “But the tool does not replace the eye of an expert, especially when generating complex programs, because it can make mistakes.”

Magali Germond makes the same observation: she believes ChatGPT can be a support tool without having all the abilities of a human. “Emotions, consciousness, empathy, physical sensations and unpredictability distinguish us from the machine,” the ethicist points out. ChatGPT itself seems to share this view. We asked it whether it could ever do the job of a journalist. Here is its answer.

Can ChatGPT do the work of a journalist?