Since its release in November 2022, ChatGPT has been widely recognized as a breakthrough technological achievement. The chatbot can produce text on a huge variety of topics, from recasting an ancient Chinese proverb in Gen Z slang to explaining quantum computing to children through allegories and stories. In a single week, it garnered more than a million users.
However, ChatGPT’s success cannot be attributed solely to Silicon Valley ingenuity. A TIME investigation revealed that OpenAI used outsourced Kenyan workers, paid less than $2 an hour, to reduce ChatGPT’s toxicity.
The work of these outsourced employees has been critical to OpenAI. GPT-3, the model behind ChatGPT, could already form sentences effectively, but it also tended to produce violent, sexist, and racist language. The problem is that the model was largely trained on text scraped from the internet, which reflects both the best and the worst of human expression.
While access to such a vast trove of human writing is why GPT-3 displays such impressive capabilities, it is also why the tool carries equally deep biases.
The dark secret behind the founding of ChatGPT
Getting rid of these biases and harmful content was not easy. Even with a team of hundreds of people, it would have taken decades to comb through all the data and judge whether each piece was appropriate. The only way OpenAI could lay the groundwork for a less biased and less offensive ChatGPT was to build a new AI-based safety mechanism.
However, to train this AI-based safety mechanism, OpenAI needed a human workforce, and it found one in Kenya. It turns out that to build a detector for harmful content, you need a large library of harmful content to train it on.
That is how the system learns to distinguish what is acceptable from what is toxic. Hoping to build a non-toxic chatbot, OpenAI began in November 2021 to send thousands of text snippets to an outsourcing company in Kenya. A significant portion of the text appeared to come from the darkest corners of the internet, and contained graphic descriptions of depraved acts.
These texts were then analyzed and labeled by the Kenyan workforce, who were bound by confidentiality agreements and stayed silent out of serious concern for their jobs. The data labelers hired on OpenAI’s behalf were paid between $1.32 and $2 per hour, depending on experience and performance.
OpenAI’s position was clear from the start: its mission is to ensure that all of humanity benefits from artificial general intelligence, and it strives to develop safe and useful AI systems that limit bias and harmful content. The impact on Kenyan workers, however, was only recently uncovered by TIME. Speaking about the objectionable and depraved content he had to assess, one worker said: “It was torture. You read a number of statements like that all week.”
The impact on workers was so great that the outsourcing company finally canceled, in February 2022, all the work it had been hired to do for OpenAI. The contract was supposed to run for another eight months.
This story sheds light on the dirty side of the technology that excites us today. There are invisible laborers performing myriad, unthinkable tasks to ensure the AI works the way we expect it to. This is neither the first nor the last of these stories. Consider also the major French retailers that claimed to use AI to analyze surveillance video for shoplifting, when in fact it was poorly paid Malagasy workers doing all the work.
As for the French media, feigning outrage at this modern slavery, we should remind them that much of their own content is produced by the same kinds of exploited workers, except those are paid even less than the Kenyan laborers…