Nudify apps have received significant attention in Spain and the USA, where several children have been exposed to the spread of AI-generated nude images in single incidents. According to Aftonbladet, Jonas Karlsson, an investigator at Ecpat, described the apps as tools into which you can insert any image and turn it into a nude or pornographic image. He also noted that these apps rarely have safeguards to prevent their use on images of children.
The apps claim to be for adults but use phrases like 'see your crush naked', targeting a young audience, according to Aftonbladet. Ecpat has published a new report investigating what children in Sweden know about the apps and whether they have been exposed. The report found that only a very small proportion of children know that they have been exposed.
Victims experience shame, a sense of being attacked, and worry about the future.
A large majority of the children surveyed by Ecpat think the apps should be banned. More children than before report being willing to turn to adults if they are exposed online, according to Jonas Karlsson. Ecpat's helpline is receiving more contacts about the issue, now a couple per week, and the number is rising rapidly, Karlsson said.
Adults have a responsibility to inform children about what is legal and not, and children who spread AI-generated nude images can be prosecuted for child pornography offenses, he added. Meta has banned the apps from advertising on its platforms. On Telegram, advertising for these apps is done openly.
Work to ban the apps is ongoing within the EU, according to Jonas Karlsson. When AI-generated abuse material reaches a sufficient level of realism, it is equated with other abuse material under the legislation and is illegal to produce, possess, or share. AI-generated child sexual abuse material has become increasingly photorealistic over the past two years, according to Ecpat Hotline analysts.
AI tools can be used to manipulate existing images or create entirely new ones to produce child sexual abuse material. Ecpat Hotline analysts have observed a rapid development in the photorealism of AI-generated abuse material since late 2022. Some AI-generated images are now difficult to distinguish from photographed abuse material.
AI can generate photorealistic images of entirely virtual children that do not exist in reality. AI-generated abuse material is reported to Ecpat Hotline through public tips and detected by analysts in Project Arachnid. Analysts monitor discussions among producers of AI-generated abuse material on the Darknet.
Some producers of AI-generated abuse material claim it poses no risk or harm to real children. However, AI-generated abuse material normalizes sexual abuse and abuse material for perpetrators. Grok, the built-in AI assistant on X, has been used to produce and post non-consensual images of people in sexually suggestive poses.
Both Ecpat and children believe the problem will only grow.
Grok users continued to generate sexually suggestive pictures of minors, with images of children as young as 10. Researchers in France have found images representing children younger than five years old. Tanya Ward, Chief Executive of the Children's Rights Alliance, said that without robust regulation, children are left exposed to harm and distress online.
She added that Grok is an example of how lack of regulation leaves children and vulnerable people open to exploitation. Ward noted that the tool allows users to create and edit imagery following sexually suggestive instructions. She stated that lack of robust regulation and absence of genuine effort by the platform to enforce safety measures allowed the issue to escalate.
Ward warned that Child Sexual Abuse Material can be created and disseminated easily and facilitated by tools like Grok. She called for a ban on AI-powered nudification apps and functions that enable production of deepfake child sexual abuse material. Ward criticized the industry, saying the online industry exploits gaps in legislation and regulation and profits off children and young people.
She added that the industry is willing to turn a blind eye to harms on their sites as long as they can keep eyes engaging with their services.
