LinkedIn sued for using private messages to train AI
What's the story
Microsoft-owned LinkedIn is facing a class-action lawsuit filed by its Premium subscribers.
The plaintiffs allege that the professional networking platform has been sharing their private messages with third parties without consent.
The data was reportedly disclosed to train generative artificial intelligence (AI) models.
The legal action represents millions of LinkedIn Premium users who claim their personal data was improperly used.
Policy alterations
Privacy policy changes under scrutiny
The lawsuit notes that LinkedIn added a new privacy setting in August, giving users the option to control the sharing of their personal data.
But on September 18, the company quietly updated its privacy policy to state that user data could be used for AI model training.
A hyperlink in a "frequently asked questions" section further clarified that opting out "does not affect training that has already taken place."
Alleged violations
Accusations of privacy violation and damage control
The plaintiffs contend that LinkedIn's actions amount to an intentional breach of user privacy.
They claim the company was fully aware it had broken its promise to use personal data only to improve the platform.
The lawsuit claims these policy changes were a strategic move by LinkedIn to minimize public backlash and potential legal consequences following the unauthorized data disclosure.
Legal repercussions
Lawsuit seeks damages for contract breach and legal violations
The legal action was filed in a federal court in San Jose, California. It represents LinkedIn Premium subscribers who sent or received InMail messages and whose private information was shared with third parties for AI training before September 18.
The lawsuit seeks unspecified damages for contract breach and violations of California's unfair competition law. It also seeks $1,000 per person for breaches of the federal Stored Communications Act.