from the yeah-maybe-stop-doing-that dept
When you sign up for security services like two-factor authentication (2FA), the phone number you provide is supposed to be used exclusively for security. You're handing over that number as part of an essential exchange intended to protect you and your data, and it's not supposed to be used for marketing. But since we've yet to craft a formal privacy law, there's nothing really stopping companies from doing exactly that, something Facebook exploited last year when it was caught using consumer phone numbers provided explicitly for 2FA for marketing purposes.
It's not only a violation of your users' trust; it also incentivizes them to avoid two-factor authentication for fear of being spammed, making everybody less secure. As part of Facebook's recent settlement with the FTC, the company was forbidden from ever again using 2FA phone numbers for marketing.
Having just watched Facebook go through this, Twitter has apparently decided to join the fun. In a blog post this week, the company acknowledged that participants in its Tailored Audiences and Partner Audiences advertising systems may have had phone numbers provided for 2FA used for marketing as well:
“We cannot say with certainty how many people were impacted by this, but in an effort to be transparent, we wanted to make everyone aware. No personal data was ever shared externally with our partners or any other third parties. As of September 17, we have addressed the issue that allowed this to occur and are no longer using phone numbers or email addresses collected for safety or security purposes for advertising.”
Security conscious folks had already grumbled about the way Twitter sets up 2FA, and those same folks weren’t, well, impressed:
In all seriousness: whose idea was it to use a valuable advertising identifier as an input to a security system. This is like using raw meat to secure your tent against bears.
— Matthew Green (@matthew_d_green) October 8, 2019
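The grumbling above stems in part from Twitter requiring a phone number for 2FA at the time, even though app-based codes need no phone number at all. As a minimal sketch (not Twitter's implementation), here is RFC 6238 TOTP, the scheme behind authenticator apps, using only the Python standard library; the secret below is the RFC's published test key, not anything tied to a real account:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over a moving time-based counter."""
    key = base64.b32decode(secret_b32.upper())
    # The counter is the number of 30-second steps since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test key "12345678901234567890", base32-encoded:
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

Run against the RFC's test vectors, `totp(SECRET, t=59)` yields "287082" and `totp(SECRET, t=1111111109)` yields "081804" (the six-digit truncations of the published eight-digit values). The point is simply that the shared secret here is a random key scanned from a QR code; no advertising-grade identifier like a phone number ever enters the protocol.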
While it's nice that Twitter came out and admitted the error, it's hard to imagine this happening if there were real federal penalties for being cavalier about user privacy and security.
Last year, the company admitted to storing the passwords of 330 million customers in plain text, and a bug in the company's code also exposed subscriber phone number data, something Twitter knew about for two years before doing anything about it. Earlier this year, Twitter acknowledged that another bug exposed users' location data to an unknown partner. And of course Jack's own account was hacked thanks to an SMS hijacking problem agencies like the FCC haven't been doing much (read: anything) about.
While there's understandable fear about the unintended consequences of poorly crafted privacy legislation, having at least some basic god-damned rules in place (including things like penalties for storing user data in plaintext, or using security-related systems like 2FA as marketing opportunities) would likely go a long way in deterring these kinds of "inadvertent oversights." Outside of the problematic COPPA (which applies predominantly to kids), there are no real federal guidelines disincentivizing the cavalier treatment of user data, though apparently we're going to stumble through another 10 years of daily privacy scandals before "conventional wisdom" realizes that's a problem.