How a Lovable App Exposed 18,000 Users, Including Students


TL;DR

A researcher found 16 vulnerabilities, six critical, in a single Lovable-hosted app featured on Lovable's Discover page. The AI-generated authentication logic was literally backwards: it blocked logged-in users and granted access to anonymous visitors. 18,697 user records were exposed, including 4,538 student accounts from K-12 schools and universities like UC Berkeley and UC Davis. The app had over 100,000 views. Lovable initially closed the security report without responding.

What Happened

On February 27, 2026, The Register published findings from security researcher Taimur Khan, who discovered that a popular app hosted on Lovable's vibe-coding platform was riddled with basic security flaws.

The app, an exam question and grading platform, was featured on Lovable's own Discover page. It had more than 100,000 views and around 400 upvotes. Behind the scenes, it was leaking everything.

"You can't showcase an app to 100,000 people, host it on your own infrastructure, and then close the ticket when someone tells you it's leaking user data."

Taimur Khan, security researcher

Khan found 16 distinct vulnerabilities, six of them critical. The root cause was the same thing we keep seeing: a Supabase backend with missing security controls and AI-generated code that works functionally but fails on security.

The Authentication Was Literally Backwards

Here's the detail that makes this one stand out.

The AI that generated the Supabase backend implemented access control using remote procedure calls. The intent was to block non-admin users from sensitive parts of the app. Simple enough.

But the AI inverted the logic. Instead of blocking unauthenticated users, it blocked authenticated ones. If you were logged in, you were locked out. If you were an anonymous visitor, you got full access.

The logic inversion: The access control guard blocked the people it should have allowed and allowed the people it should have blocked. This error was repeated across multiple critical functions in the app.

"This is backwards. The guard blocks the people it should allow and allows the people it should block. A classic logic inversion that a human security reviewer would catch in seconds, but an AI code generator, optimizing for 'code that works,' produced and deployed to production."

Taimur Khan

This isn't a subtle misconfiguration. It's the kind of bug that a single code review (or an automated security scan) would have caught immediately.
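Khan's report describes the guard only at a high level, but the shape of the bug is easy to sketch. Here is a minimal TypeScript reconstruction; the function names and session shape are invented for illustration, not the app's actual code:

```typescript
// Hypothetical reconstruction of the inverted access-control guard.
// The real app implemented this check inside Supabase RPC functions;
// all names here are invented.

interface Session {
  userId: string | null; // null for anonymous visitors
  isAdmin: boolean;
}

// What the AI generated: the condition is inverted, so logged-in
// users are rejected and anonymous visitors pass straight through.
function brokenGuard(session: Session): boolean {
  if (session.userId !== null) {
    return false; // blocks authenticated users
  }
  return true; // grants access to anonymous visitors
}

// What the guard should do: allow only authenticated admins.
function correctGuard(session: Session): boolean {
  return session.userId !== null && session.isAdmin;
}

const anon: Session = { userId: null, isAdmin: false };
const admin: Session = { userId: "u_123", isAdmin: true };

console.log(brokenGuard(anon));   // true  -- anonymous visitor gets in
console.log(brokenGuard(admin));  // false -- the admin is locked out
console.log(correctGuard(anon));  // false
console.log(correctGuard(admin)); // true
```

The fix is a one-line condition change, which is exactly why a human reviewer would catch it in seconds.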

What Was Exposed

Because the app was a platform for creating exam questions and viewing grades, the user base was teachers and students. Some from top US universities. Some from K-12 schools with minors on the platform.

  • 18,697 total user records exposed
  • 14,928 unique email addresses
  • 4,538 student accounts
  • 6 critical vulnerabilities

With the security flaws in place, an unauthenticated attacker could:

  • Access every user record on the platform
  • Send bulk emails through the platform's infrastructure
  • Delete any user account without authentication
  • Grade student test submissions as if they were a teacher
  • Access organizations' admin emails and internal data

Of the exposed users, 10,505 were enterprise accounts and 870 had their full PII leaked: names, emails, and organizational data all accessible to anyone who looked.

Lovable's Response

When Khan initially reported his findings through Lovable's support, his ticket was closed without a response.

After The Register got involved, Lovable's CISO Igor Andriushchenko said the company received "a proper disclosure report" on the evening of February 26 and acted on the findings "within minutes."

Andriushchenko pointed out that every Lovable project includes a free security scan before publishing:

"This scan checks for vulnerabilities and, if found, provides recommendations on actions to take to resolve before publishing. Ultimately, it is at the discretion of the user to implement these recommendations. In this case, that implementation did not happen."

Igor Andriushchenko, Lovable CISO

The security scan existed. The user ignored it. This is the fundamental tension in vibe coding: platforms can flag problems, but if the person building the app doesn't understand security (or doesn't care), the warnings go unheeded. The app ships anyway, vulnerabilities and all.

This Is a Pattern, Not a One-Off

This isn't the first time Lovable has been in the security spotlight. In 2025, researchers scanned 1,645 Lovable-created web apps from the platform's Discover page. Of those, 170 allowed anyone to access user information including names, emails, financial data, and API keys.

And it's not just Lovable. The broader vibe-coding ecosystem has the same problems:

  • Veracode found that 45% of AI-generated code contains security flaws, with no improvement over time as models get larger
  • Bubble.io had an unpatched zero-day that allowed database bypass
  • Firebase test mode was left enabled across 900+ mobile apps, exposing 1.8 million passwords
  • 39 million secrets were leaked on GitHub in 2024. 70% are still active today

The pattern is always the same: AI generates code that works functionally but fails on security. The developer doesn't know enough to spot the problems. The app ships. Users pay the price.

Why AI Gets Security Wrong

AI code generators optimize for "code that works." When you ask for an authentication system, you get one that appears to authenticate users. When you ask for an admin panel, you get one that looks like it restricts access.

But "looks like it works" and "is actually secure" are two very different things. As Alex Stamos, CISO at SentinelOne, has noted, the best practice for web apps is to avoid letting users access the database at all. The application should determine what information users can see and fetch only that data.

Vibe-coded apps often skip this architectural pattern entirely, connecting frontend code directly to the database with only client-side guards standing between users and full data access.
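The difference can be sketched with an in-memory table standing in for a real database (all names and data here are invented):

```typescript
// Hypothetical illustration of client-side vs. server-side access control.

interface UserRecord {
  id: string;
  email: string;
  ownerId: string;
}

// Stand-in for the database table.
const userTable: UserRecord[] = [
  { id: "1", email: "alice@example.com", ownerId: "u_alice" },
  { id: "2", email: "bob@example.com", ownerId: "u_bob" },
];

// Anti-pattern: the endpoint returns the whole table and trusts the
// client to filter. An attacker who calls the endpoint directly,
// skipping the app's UI, receives every row.
function insecureEndpoint(): UserRecord[] {
  return userTable;
}

// Correct pattern: the server decides what the caller may see and
// returns only that. With Supabase, Row Level Security enforces the
// equivalent filter inside Postgres itself.
function secureEndpoint(callerId: string | null): UserRecord[] {
  if (callerId === null) return []; // anonymous callers get nothing
  return userTable.filter((r) => r.ownerId === callerId);
}

console.log(insecureEndpoint().length);   // 2 -- anonymous attacker sees every row
console.log(secureEndpoint(null).length); // 0 -- nothing leaves the server
```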

Who's Responsible?

This is the question the vibe-coding industry hasn't answered yet.

Lovable says users are responsible for implementing security recommendations. That's technically correct. But when your platform markets itself as generating "production-ready apps with authentication included," users reasonably expect that the authentication actually works.

The marketing vs. reality gap: Vibe-coding platforms sell the dream of shipping without needing to understand code. But security can't be vibed. Someone (the platform, the AI, or the developer) needs to verify that the generated code is actually secure. Right now, nobody's doing that consistently.

Khan's perspective is worth hearing:

"If Lovable is going to market itself as a platform that generates production-ready apps with authentication 'included,' it bears some responsibility for the security posture of the apps it generates and promotes. At minimum, a basic security scan of showcased applications would have caught every critical finding in this report."

Taimur Khan

What You Can Do

If you're building on Lovable, Bolt, Cursor, or any other vibe-coding platform, the lesson here is straightforward:

  1. Never trust that generated code is secure. Always verify authentication and access control logic
  2. Enable Row Level Security on every Supabase table before going live
  3. Test as an unauthenticated user. Can you access data you shouldn't? If yes, you have a problem
  4. Run a security scan before publishing. The flaws in this app were basic, and any automated scanner would have caught them
  5. Don't ignore security warnings. Lovable's own scan flagged issues. The developer ignored them. 18,000 users paid the price
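Step 3 can be automated. Here is a minimal probe sketch in TypeScript against Supabase's PostgREST endpoint; the URL, table name, and key are placeholders for your own project's values, and the verdict labels are this sketch's own, not an official API:

```typescript
// Probe a Supabase table using only the public anon key.
// The project URL, table name, and key are placeholders: point them
// at your own project before running.

type Verdict = "LEAKING" | "PROTECTED" | "INCONCLUSIVE";

// Pure classifier: rows returned to an anonymous caller means RLS
// is missing or misconfigured for that table.
function classify(status: number, rowCount: number): Verdict {
  if (status === 200 && rowCount > 0) return "LEAKING";
  if (status === 200 && rowCount === 0) return "INCONCLUSIVE"; // empty table, or RLS filtering
  if (status === 401 || status === 403) return "PROTECTED";
  return "INCONCLUSIVE";
}

async function probe(url: string, table: string, anonKey: string): Promise<Verdict> {
  const res = await fetch(`${url}/rest/v1/${table}?select=*&limit=5`, {
    headers: { apikey: anonKey, Authorization: `Bearer ${anonKey}` },
  });
  const body = res.status === 200 ? await res.json() : [];
  return classify(res.status, Array.isArray(body) ? body.length : 0);
}

// Example (requires a real project):
// probe(process.env.SUPABASE_URL!, "users", process.env.SUPABASE_ANON_KEY!)
//   .then((verdict) => console.log(verdict));
```

Note that a 200 with zero rows is inconclusive on its own: the table may simply be empty, or RLS may be silently filtering rows, which is how Supabase behaves when RLS is enabled but no policy grants the anonymous role access.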

The fix isn't complicated. The consequences of skipping it are.

What happened with the Lovable app vulnerability?

A security researcher found 16 vulnerabilities in a Lovable-hosted exam app, including backwards authentication logic. 18,697 user records were exposed including student data from K-12 schools and universities like UC Berkeley and UC Davis. The app was featured on Lovable's Discover page with over 100,000 views.

What is the Lovable vibe coding platform?

Lovable is a vibe-coding platform that generates full-stack web applications from natural language prompts. Apps are built with AI-generated code and powered by Supabase backends for authentication, file storage, and database access through PostgreSQL. It's part of the broader movement of AI-assisted development tools.

How did the AI write authentication backwards?

The AI implemented access control using Supabase remote procedure calls but inverted the logic, blocking all authenticated (logged-in) users while allowing unauthenticated (anonymous) visitors full access. This classic logic inversion was repeated across multiple critical functions. A human security reviewer would catch this in seconds, but the AI optimized for "code that works" rather than "code that's secure."

Could a security scan have prevented this?

Yes. An automated security scan before publishing would have caught the missing Row Level Security, the inverted authentication logic, and the exposed user data. Lovable includes a free security scan before publishing, but the app owner didn't implement the recommendations. Regular scanning (both before launch and ongoing) catches exactly these kinds of basic misconfigurations.

Is Your Vibe-Coded App Leaking Data?

Run a free scan to check for backwards auth logic, missing RLS, exposed API keys, and other vulnerabilities the AI might have missed.

Start Free Scan