Global crackdown on AI-generated child abuse images leads to dozens of arrests

At least 25 arrests have been made in a worldwide operation against child abuse images generated by artificial intelligence (AI), the European Union's law enforcement agency Europol has said.

The suspects were part of a criminal group whose members distributed fully AI-generated images of minors, the agency said.

The operation is one of the first involving AI-generated child sexual abuse material (CSAM), Europol said. The lack of national legislation against these crimes made it "exceptionally challenging for investigators", it added.

The arrests were made simultaneously on Wednesday 26 February during Operation Cumberland, led by Danish law enforcement, the statement said.

Authorities from at least 18 other countries took part, and the operation is still ongoing, with more arrests expected in the coming weeks, Europol said.

In addition to the arrests, 272 further suspects have been identified, 33 house searches carried out, and 173 electronic devices seized, the agency said.

It also said that the main suspect was a Danish national who was arrested in November 2024.

The statement said that he "ran an online platform where he distributed the AI-generated material he produced".

After making a "symbolic online payment", users from around the world were able to obtain a password giving them "access to the platform and watch children being abused".

The agency said online child sexual exploitation is one of the top priorities for EU law enforcement agencies, which are dealing with an "ever-growing volume of illegal content".

Europol added that even in cases where the content is fully artificial and no real victim is depicted, as in Operation Cumberland, "AI-generated CSAM still contributes to the objectification and sexualisation of children".

Europol Executive Director Catherine De Bolle said: "These artificially generated images are so easily created that they can be produced by individuals with criminal intent, even without substantial technical knowledge."

She warned that law enforcement agencies would need to develop "new investigative methods and tools" to meet these emerging challenges.

The Internet Watch Foundation (IWF) warns that more AI-generated child sexual abuse images are being produced and are becoming more prevalent on the open web.

In research last year, the charity found 3,512 AI-generated child sexual abuse and exploitation images on a single dark web site. Compared with the same month in the previous year, the number of images in the most severe category (Category A) rose by 10%.

Experts say AI-generated child sexual abuse material can often look incredibly realistic, making it difficult to distinguish real images from fake ones.