A new report by the Tech Transparency Project has raised concerns about mobile applications that allow users to create nonconsensual sexualized images of individuals, despite clear policies from Apple and Google prohibiting such content. The findings have sparked renewed debate about how effectively major tech platforms enforce their own rules.
According to the report, these apps remain accessible through official app stores, highlighting what critics describe as gaps between policy commitments and real-world enforcement. Both Apple and Google have long maintained guidelines aimed at preventing harmful or abusive content, yet the report suggests that some developers may still find ways to bypass detection systems.
Supporters of the tech companies argue that moderating millions of apps is a complex and ongoing challenge. They point out that both firms regularly remove apps that violate their standards and invest heavily in review processes and automated tools designed to identify harmful content before it reaches users.
At the same time, digital safety advocates say the stakes are too high for such gaps to persist. They warn that tools enabling nonconsensual image creation can cause serious emotional harm and privacy violations, especially for women and vulnerable individuals who may be targeted without their consent.
The report also emphasizes the human cost of the issue, noting that victims can face lasting consequences, including reputational damage and psychological distress. For many, the availability of such apps on trusted platforms raises concerns about safety in everyday digital spaces.
As scrutiny grows, the findings are likely to increase pressure on Apple and Google to strengthen enforcement and close loopholes. The broader conversation now centers on how tech companies can better align their policies with real-world outcomes while ensuring user protection remains a top priority.