An alarming investigation by the security research lab CovertLabs has lifted the lid on a digital Pandora’s box inside Apple’s App Store.
Through a project named Firehound, researchers are actively scanning and indexing a vast number of applications that, due to serious development negligence, publicly expose users’ sensitive data.
As of the time of writing, as many as 198 vulnerable iOS apps have been identified, almost all of which leak critical information such as names, email addresses, phone numbers and, even more alarmingly, entire chat histories.
Firehound is the registry that classifies the apps to avoid at all costs

The tip of the iceberg in this security crisis is a single application that dominates the project’s negative rankings: Chat & Ask AI.
The findings related to this software were described by researcher Harrison as a “worst-case scenario”. The application exposed over 406 million records belonging to more than 18 million unsuspecting users.
The gravity of the situation was urgently underscored on X (formerly Twitter), where an emphatic appeal was issued to stop using the app immediately. According to the technical analysis, a critical vulnerability allows anyone who knows where to look to access around 380 million messages.
This is not just metadata, but the textual content of the conversations that users have had with AI.
Considering the often intimate nature of interactions with AI chatbots, these archives contain confessions about mental health issues, relationship difficulties, financial matters and medical doubts. This is information most people would not reveal even to a close friend, now potentially linked publicly to users’ emails and phone numbers.
The AI Slop invasion and the systemic flaws
The phenomenon is not isolated, but appears to be symptomatic of what security researchers, including the account @vxunderground, have dubbed the “slopocalypse”.
This term refers to the wave of low-quality software, often rapidly generated via AI-assisted coding tools (the so-called “vibe coding”) or developed autonomously without the necessary security controls, that is flooding digital stores.
Although most of the 196 apps confirmed as leaking data are tied to the world of artificial intelligence, the problem cuts across categories.
The investigation has found similar vulnerabilities in apps dedicated to education, entertainment, graphics, health and fitness, as well as social networking and lifestyle. The common denominator of these security flaws is almost always a misconfigured database or a cloud storage service left unprotected, exposing data schemas and the records they contain to anyone performing a network scan.
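To make the failure mode concrete: a frequent example of the misconfiguration described above is a Firebase Realtime Database whose security rules allow unauthenticated reads, so that a single HTTP GET on the `/.json` root returns the entire dataset. The sketch below (hedged: the function names and endpoint format are illustrative, not the researchers’ actual tooling, and any probing must stay within legal authorization) shows how such an exposure can be classified from the HTTP response alone.

```python
# Minimal sketch: classifying whether a Firebase Realtime Database
# endpoint is publicly readable. Endpoint and function names are
# hypothetical; only probe databases you are authorized to test.
import json
import urllib.error
import urllib.request


def classify_response(status: int, body: str) -> str:
    """Map an HTTP status/body pair to an exposure verdict."""
    if status == 200:
        try:
            payload = json.loads(body)
        except json.JSONDecodeError:
            return "unexpected"
        # Firebase returns the JSON literal `null` for an empty database.
        return "exposed" if payload is not None else "empty"
    if status in (401, 403):
        # Security rules denied the unauthenticated read: correctly locked.
        return "locked"
    return "unexpected"


def probe(db_url: str) -> str:
    """GET the database root as JSON, e.g. https://example-app.firebaseio.com."""
    try:
        with urllib.request.urlopen(db_url.rstrip("/") + "/.json", timeout=10) as resp:
            return classify_response(resp.status, resp.read().decode())
    except urllib.error.HTTPError as exc:
        return classify_response(exc.code, "")
```

The fix on the developer side is equally simple: tightening the database rules (e.g. requiring `auth != null` for reads) turns an "exposed" verdict into "locked".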
A public yet protected registry
To manage this volume of sensitive data responsibly, the Firehound project has taken a cautious approach. Although the goal is to catalog and report insecure apps, the public registry has been intentionally limited.
The results of the scans contain highly sensitive information that cannot be disclosed indiscriminately without first being reviewed and redacted.
For this reason, access to the complete datasets and technical details of the vulnerabilities is limited and requires a manually approved registration.
Access priority is granted to journalists, law enforcement and cybersecurity professionals, in an effort to mitigate the risk that malicious actors may exploit this information before developers have time to fix it.
The Need for a New Awareness
This discovery serves as a stern warning for the entire mobile ecosystem.
On one hand, it underscores the need for users to exercise extreme caution with the platforms they use, especially when sharing intimate thoughts with AI chatbots, given that the barrier to entry for creating an app has dropped drastically, often at the expense of security.
On the other hand, it highlights the responsibility of developers, who must be held accountable for negligence that puts the privacy of millions at risk.
As researchers reiterated, the use of autonomous development tools does not absolve anyone from the duty to properly protect users’ data.



