Richie Bartlett

The central problem of every vulnerability report: researchers want to fix it and companies don’t.

PayPal’s two-factor problems are the rule, not the exception.

One of the most trusted companies on the internet has been owned by a teen.

We learned this morning that PayPal’s two-factor authentication setup has a fairly major hole in it, detailed here. The researcher who found it, a 17-year-old who has done impressive work spotting vulnerabilities in Australian government sites, first reported the bug to PayPal back in June, but after two months of radio silence, he went public.

It’s not the first time PayPal has run into two-factor problems and it’s not the first indication that two-factor might have its own issues — but for a company that handles money, it’s not a good look. Users shouldn’t panic, provided no one has their password, but there’s a reason we tell people to sign up for two-factor authentication, so it’s embarrassing to find PayPal using a poorly considered cookie that lets you bypass the whole thing. The company says it’s “working to get the issue addressed as quickly as possible,” but it’s not reassuring that, given two months of lead time, the company hasn’t done anything to fix the issue.

Still, it’s not exactly surprising. Security researchers like these sorts of bugs because they make a living finding them, but the view from the corner office is very different. Companies rarely care about security, even if various people within them do. Good security is expensive. It often means structuring your service so that users face an extra step or two, and that’s a sacrifice most companies simply don’t want to make.

PayPal’s bug is a great example. PayPal wanted to make it easy for eBay users to link their accounts, so the company set up a special cookie that identified anyone coming in from eBay. As it turned out, that cookie also let the researcher, Rogers, bypass PayPal’s two-factor protections. Fixing it should be simple: just disable the cookie and make eBay users log in the old-fashioned way. But if PayPal did that, fewer users would link the accounts and it would cost the company money — more money than they’re likely to lose as a result of this bug. Given the choice between security and usability, companies will take usability every time.
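To make the shape of the flaw concrete, here is a minimal, entirely hypothetical sketch of the anti-pattern described above — a partner cookie trusted as a substitute for the second factor. The function and cookie names are invented for illustration; this is not PayPal's actual code.

```python
# Hypothetical sketch of the flawed login check (names are invented).
# The bug pattern: a cookie meant to smooth a partner account-linking
# flow is treated as equivalent to completing two-factor auth.

def login(password_ok: bool, otp_ok: bool, cookies: dict) -> bool:
    """Return True if the session should be granted."""
    if not password_ok:
        return False
    # The flaw: this cookie is attacker-settable, yet its mere
    # presence short-circuits the two-factor check entirely.
    if cookies.get("partner_linked") == "true":
        return True
    return otp_ok

# An attacker who already has the password just presents the cookie:
print(login(True, False, {"partner_linked": "true"}))  # True: 2FA bypassed
print(login(True, False, {}))                          # False: 2FA enforced
```

The fix the article implies is exactly what the sketch suggests: the second factor must gate the decision unconditionally, with no client-supplied token able to stand in for it.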

This is the central problem of every vulnerability report: researchers want to fix it and companies don’t. I’m usually more sympathetic to the security side, but the companies have a point too. It’s hard to make software with no vulnerabilities, just like it’s hard to make a door that can’t be broken into. As security ramps up, diminishing returns set in fast. You could put a three-inch steel door on your house, but it would be ugly and heavy and you don’t want to. Instead, you trust that no one will want to kick in your door. Aside from once-in-a-generation bugs like Heartbleed, most security failures don’t have much fallout, particularly for the companies that spawn them. Six months later, it’s hard to argue that goto fail had much effect on Apple’s bottom line.

Instead, the bad effects show up at an ecosystem level. We’re left with a relatively unprotected web where nothing is perfect and (as Quinn Norton put it) everything is broken. Heavy-hitters like the NSA and China’s Unit 61398 can buy up vulnerabilities and break into most systems, while anyone without a corporate security budget is left to fend for themselves. PayPal isn’t the worst case — no one will die or go to jail over this bug — but it’s one more example of why the world of security can seem so bleak. The problem isn’t that we can’t protect ourselves, but that we don’t want to.