Since its launch in 2011, Google+ has been the butt of nearly every joke in the tech industry. A favorite is the idea that the only people who use Google+ are Google employees, forced onto the platform to show some sort of usage. Of course, there are more users than employees, but not by a huge margin. Over the years, Google has done a lot to push people to interact with Google+. The most famous move was forcing YouTube integration, requiring users to have a Google+ account to comment on videos. None of these tricks worked.
Google has debated internally for several years whether to continue the fight or walk away. The company has walked away from other massive mistakes before, such as Google Buzz, a product whose purpose not even Google seemed to understand. This week, the decision was finally made: walk away. Google+ has become more trouble than the company cares to bear, and Google will begin the process of shuttering the consumer side of the product. If you are one of the very few people who used the platform and have content there that would otherwise be lost, Google has created an export process.
The stated reason for the shutdown, however, has nothing to do with the incredibly low usage. Instead, the company claims that the shutdown was caused by a security bug that was patched earlier this year. The bug is similar in nature to Facebook's recent issue, in that the company claims it has no evidence that any data was actually exposed, only that it could have been. In Google's case, however, more than API keys were at risk: the exposed data included name, email, gender, job, and age; enough to do some real-world damage.
The bug was introduced into the software in 2015 and discovered early this year, being patched in March. Unfortunately for users, Google decided that it didn't need to disclose the issue publicly. This is contrary to the way the company treats others: if Google discovers a bug like this in someone else's software, it gives that company 90 days to fix it before disclosing the bug itself. Apparently that policy doesn't apply to Google's own software.
The issue was disclosed by The Wall Street Journal, followed by Google's announcement about the future of the product. As one would expect, users are not happy about the lack of disclosure and have filed a proposed class action suit against the company. The suit claims negligence, invasion of privacy, and more. Attorney Joshua Watson wrote,
Worse, after discovery of this vulnerability in the Google+ platform, Defendants kept silent for at least seven months, making a calculated decision not to inform users that their Personal Information was compromised, further compromising the privacy of consumers' information and exposing them to risk of identity theft or worse.
Even the government is unhappy about the scenario. Three US Senators sent a letter to Google, in part saying,
Please describe in detail when and how Google became aware of this vulnerability and what actions Google took to remedy it.
Why did Google choose not to disclose the vulnerability, including to the Committee or to the public, until many months after it was discovered?
Are there similar incidents which have not been publicly disclosed?
This is similar in nature to what happened after the Cambridge Analytica breach at Facebook. It is likely that, in addition to the class action suit, someone from Google will be speaking in front of Congress soon.