AI Tech Giants Unite: Promise to Protect Children From the Dangers of AI

| Updated on April 26, 2024

Recently, many of the top artificial intelligence companies, including OpenAI, Microsoft, Google, Meta, and others, have come together to pledge to prevent their AI tools from being used to exploit children and generate any kind of child sexual abuse material (CSAM).

The initiative was launched by two child-safety groups, Thorn and All Tech is Human.

According to Thorn, the pledge from the major AI companies sets a groundbreaking precedent for the industry and represents a significant step in defending children from sexual abuse enabled by generative AI.

The main goal of the initiative is to prevent the generation of any explicit material involving children and, where such material already exists, to remove it from social media platforms and search engines.

This Tuesday, Thorn, in cooperation with All Tech is Human, released a new paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse,” which outlines strategies and recommendations for companies that build AI tools, search engines, and social media platforms to prevent generative AI from being used to harm children.

One of Thorn's recommendations is that companies carefully curate the data sets used to train their AI models, excluding any instances of CSAM as well as any adult content.

Many companies have already agreed to keep images, videos, and audio involving children separate from data sets containing adult content, to prevent their models from combining the two.
