Key Takeaways
- Two major breaches exposed sensitive data — Tea app leaked thousands of selfies, government IDs, and over 1.1 million private messages, including names, contact details, and deeply personal conversations.
- Poor security practices enabled attacks — An unsecured Firebase database and an exploitable API left user data wide open, showing a lack of encryption and proper access controls.
- AI-driven “vibe coding” played a role — Heavy reliance on AI-generated code without audits likely introduced vulnerabilities, highlighting the security risks of unreviewed AI-assisted development.
- Severe privacy and safety fallout — Leaked data is being exploited online, putting users at risk of doxxing, harassment, and legal consequences in sensitive cases like abortion discussions.
What if the “anonymous” dating safety app you used to warn other women about toxic men accidentally leaked your selfie, ID, messages, and contact details?
That’s exactly what happened to users of Tea (officially Tea Dating Advice): a women-only “dating safety” app designed to help women share information about men in their area.
To join, women must upload a selfie and a government-issued ID to verify their identity. Once they’re in, they’re encouraged to share experiences, raise red flags, and connect with others for mutual support and protection.
In a bitter irony, given that mission, Tea has now suffered two major data breaches: one exposed photos and government IDs, and the other leaked over a million messages sent in what turned out to be misplaced confidence.
Some of those messages discuss abortions and cheating partners, and mention personal details like car models and social media handles. Both breaches are now being exploited online, with images turned into public rankings and private data used to dox or mock users.
So how did an app whose mission was to make women safer end up doing the exact opposite in one of the worst privacy disasters of the year?
Let’s take a closer look.
What Got Leaked, and Why It’s So Serious
The first Tea breach drew a lot of negative attention. An exposed Firebase storage bucket left tens of thousands of images, including selfies and government IDs, accessible to anyone. 4chan users quickly scraped the images and posted mirror downloads.
They even set up a Facemash-style site where people ranked the leaked selfies by attractiveness, complete with leaderboards.
Tea’s initial response was disappointing. The company minimized the breach, claiming it only involved “legacy” data from over two years ago. Sadly for them, that defense quickly fell apart.
A second, much larger breach has now exposed over 1.1 million private messages, many of them sent as recently as last week. These weren’t just casual DMs. They included:
- Women discussing abortions
- Users realizing they were dating the same men
- Real phone numbers, names, and social media handles
- Accusations of cheating, abuse, and more, often naming people directly
To make matters worse, a researcher discovered that the app’s API could be used to send a push notification to every single user.
Tea marketed itself as a place to stay anonymous. These leaks showed it was anything but: with full identities linked to deeply personal conversations, users could now face blackmail, harassment, or worse.
A Case Study in Negligence: How It Happened
For an app that promised safety, Tea’s backend was shockingly insecure, not once but twice.
The initial breach involved a completely unsecured Firebase storage instance. That alone exposed over 72,000 images: roughly 13,000 selfies and government-issued IDs, plus another 59,000 images from posts, messages, and comments.
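Tea hasn’t published technical details, so the exact misconfiguration isn’t known, but reporting describes storage that was readable without any authentication, which usually means security rules left wide open (effectively “allow read” for everyone). The TypeScript sketch below, using entirely hypothetical project and bucket names, illustrates why that class of mistake is so catastrophic: a stranger armed with nothing but the Firebase web SDK can enumerate and download files without ever signing in.

```typescript
// Minimal sketch (hypothetical project and bucket) of what "accessible to anyone" means
// when Firebase Storage rules are left open. Note that nothing below ever signs in;
// with rules equivalent to `allow read: if true;`, every request still succeeds.
import { initializeApp } from "firebase/app";
import { getStorage, ref, listAll, getDownloadURL } from "firebase/storage";

// Firebase web config values ship inside every client app, so they are not secrets.
const app = initializeApp({
  apiKey: "public-web-api-key",                 // hypothetical placeholder
  storageBucket: "example-project.appspot.com", // hypothetical placeholder
});
const storage = getStorage(app);

async function dumpOpenBucket(): Promise<void> {
  // List everything under a hypothetical uploads folder, then fetch shareable URLs.
  const { items } = await listAll(ref(storage, "uploads"));
  for (const item of items) {
    console.log(item.fullPath, await getDownloadURL(item));
  }
}

dumpOpenBucket().catch(console.error);
```

The fix lives in the storage security rules rather than in client code: require an authenticated user, scope each path to its owner, and keep anything as sensitive as a government ID out of client-readable storage entirely, behind a backend that checks who is asking.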
In a statement, Tea claimed the breach only affected data stored on its “legacy data system.”
That claim didn’t last long, though. Just days later, security researcher Kasra Rahjerdi uncovered a second, more serious vulnerability: Tea’s API let any logged-in user pull data from a recent, unsecured database using their own API key, including private messages sent as recently as last week.
Rahjerdi’s research turned up something even more alarming: the same attack vector could be used to send push notifications to every user.
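Tea’s actual backend isn’t public either, so the Express-style TypeScript sketch below is a hypothetical illustration of the control Rahjerdi’s findings suggest was missing: the API appears to have treated “holds a valid API key” as sufficient, when each request also needed a per-resource authorization check. Every name here, from verifyApiKey to the in-memory store, is invented for illustration.

```typescript
// Hypothetical sketch of the missing control. Authentication answers "is this a valid
// user?"; authorization answers "may this user read this particular conversation?".
// None of this is Tea's real code; helpers and data shapes are illustrative only.
import express, { Request, Response, NextFunction } from "express";

interface Conversation {
  id: string;
  participantIds: string[];
  messages: string[];
}

// Stand-in for the real database.
const store = new Map<string, Conversation>();

interface AuthedRequest extends Request {
  userId?: string;
}

// Placeholder: resolve an API key or session token to a user id (null if invalid).
function verifyApiKey(token: string | undefined): string | null {
  return token ? token.replace(/^Bearer\s+/i, "") : null;
}

// Authentication only: confirms the caller is *some* valid user.
function requireAuth(req: AuthedRequest, res: Response, next: NextFunction): void {
  const userId = verifyApiKey(req.header("Authorization"));
  if (!userId) {
    res.status(401).json({ error: "unauthenticated" });
    return;
  }
  req.userId = userId;
  next();
}

const app = express();

app.get("/conversations/:id/messages", requireAuth, (req: AuthedRequest, res: Response) => {
  const conversation = store.get(req.params.id);
  // Authorization: being logged in is not enough; the caller must be a participant
  // in *this* conversation. Skip this check and every DM is readable by any user.
  if (!conversation || !conversation.participantIds.includes(req.userId as string)) {
    res.status(403).json({ error: "forbidden" });
    return;
  }
  res.json(conversation.messages);
});

app.listen(3000);
```

Checks like this have to live on the server (or in database rules), never only in the client app, because anything enforced only in the client can be bypassed by calling the API directly.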
Tea claims it has since fixed the vulnerability and contacted law enforcement. But it’s too little too late: the damage has been done. The data has already been scraped, archived, and widely shared online.
The app was marketed as discreet and anonymous, but the reality was closer to leaving the door wide open and hoping no one walked in.
Vibe Coding, AI Tools, and Faking Competence
Tea Dating Advice didn’t just have bad luck. It also suffered from poor development practices and likely relied too much on AI-generated code.
According to the original hacker who revealed the first breach on 4chan, Tea was a prime example of “vibe coding”: a rising trend where developers rely heavily on AI tools to build products without proper security checks, version control, or code reviews.
Guillermo Rauch, founder and CEO of AI cloud app company Vercel, offered a sardonic take on this trend: “On Tea Dating, AI and Vibe Coding security TL;DR: the antidote for mistakes AIs make is… more AI.”
Unfortunately for Tea, and even more so for the women who used it, that approach appears to have backfired. A Georgetown University study found that 48% of AI-generated code had security flaws.
Tech consultant Santiago Valdarrama gets it right: “Vibe coding is awesome, but the code these models generate is full of security holes and can be easily hacked.”
This kind of AI-assisted (or, more honestly, AI-led) development can help teams ship features quickly. But without oversight, it ships vulnerabilities just as fast.
The Ongoing Repercussions of Tea’s Breach
Tea promised its users a private space to share sensitive stories, from relationship red flags to personal trauma. Sadly, it ended up turning those confessions into liabilities.
After the initial breach, photos of women who used the app were scraped and reposted on 4chan. Soon after, they were fed into a Facemash-style site that ranked their appearance. Many of the pictures were voted on tens of thousands of times, instantly stripping these women of both anonymity and dignity.
The second breach is worse by a wide margin. It exposes genuine conversations between real women about deeply personal subjects like infidelity, stalking, and abortion. Many shared real names, social media handles, and contact information, trusting that these messages would remain private.
Now that data is out there, easily accessible to anyone who goes looking for it.
At first glance, the consequences seem to be public exposure and humiliation. However, the risks run deeper: in some US states, discussions about abortion could expose users to legal jeopardy.
And, of course, being linked to controversial discussions of this nature could invite harassment, doxxing, or worse.
Conclusion: The Age of Digital Self-Defense
The Tea app wasn’t breached because hackers outsmarted a system. It was breached because the system wasn’t really there to begin with. All it had was broken security, lax practices, and blind user trust.
This isn’t just a one-time warning. It’s a snapshot of what can happen when our most personal data is spread across dozens of apps, many of them built quickly and cheaply and, despite their claims, with little regard for your privacy.
Developers should be held responsible. But in the meantime, you can also take charge. Here’s what you can do:
- Be selective about where you share your ID, photos, or real name.
- Use burner emails and phone numbers when possible.
- Look for apps that offer transparency, encryption, and minimal data collection.
- Follow organizations like EFF, Privacy International, and 404 Media to stay informed.
If an app like Tea, designed to make women feel safer, can backfire this badly, it’s reasonable to ask: how many of our other “safe” apps are just one bad developer away from becoming public record?
Monica is a tech journalist and content writer with over a decade of professional experience and more than 3,000 published articles. Her work spans PC hardware, gaming, cybersecurity, consumer tech, fintech, SaaS, and digital entrepreneurship, blending deep technical insight with an accessible, reader-first approach.
Her writing has appeared in Digital Trends, TechRadar, PC Gamer, Laptop Mag, SlashGear, Tom’s Hardware, The Escapist, WePC, and other major tech publications. Outside of tech, she’s also covered digital marketing and fintech for brands like Whop and Pay.com.
Whether she’s explaining the intricacies of GPU architecture, warning readers about phishing scams, or testing a liquid-cooled gaming PC, Monica focuses on making complex topics engaging, clear, and useful. She’s written everything from deep-dive explainers and product reviews to privacy guides and e-commerce strategy breakdowns.
Monica holds a BA in English Language and Linguistics and a Master’s in Global Media Industries from King’s College London. Her background in language and storytelling helps her craft content that’s not just informative, but genuinely helpful—and a little bit fun, too.
When she’s not elbow-deep in her PC case or neck-deep in a Google Doc file, she’s probably gaming until the early hours or spending time with her spoiled-rotten dog.