LinkedIn's AI Data Harvest: Protecting Your Professional Profile

Opting Out of LinkedIn's AI Training

LinkedIn has begun training AI models using user data, a practice that has raised concerns due to its implementation without explicit user consent. The platform recently updated its privacy policies to disclose this data use, which includes information from user profiles, posts, and interactions on the site.

Here are the key points about LinkedIn's use of user data for AI training:

Data Collection and Usage

LinkedIn is now using personal data from user profiles, posts, and interactions to train generative AI models. These models are designed to provide features like writing suggestions and post recommendations.

Opt-in by Default

Users in most regions, including the US and India but excluding the European Union, European Economic Area, and Switzerland*, are opted in by default to allowing LinkedIn to use their data for AI training, with the option to opt out in their account settings. This means that unless users take action, their data may be used for AI training.

*Users in the EU, EEA, and Switzerland are automatically opted out and are not affected by LinkedIn's AI training practices, owing to the stringent privacy regulations in these regions. LinkedIn has clarified that it does not use member data from these areas for AI model training, likely to comply with the EU AI Act and GDPR requirements.

Opt-out Option

LinkedIn has introduced an opt-out toggle in the account settings:

  1. Go to Settings

  2. Select Data Privacy

  3. Look for "Data for Generative AI Improvement"

  4. Toggle the switch to opt out

[Screenshot: LinkedIn's "Data for Generative AI Improvement" setting]

However, opting out only prevents future data usage and does not affect any training that has already occurred.

The AI models being trained may include features for content generation and writing suggestions, with LinkedIn stating it employs privacy-enhancing techniques to mitigate personal data exposure during this process.

Data minimization

LinkedIn states it employs "privacy-enhancing techniques, including redacting and removing information, to limit the personal information contained in datasets used for generative AI training." This suggests they attempt to minimize the amount of identifiable personal data used.

Unclear affiliate data sharing

While LinkedIn mentions it may share data with "affiliates" for AI training, it does not specify which companies are considered affiliates. This lack of transparency makes it difficult to fully assess the privacy implications.

Potential for misuse

Some experts argue that an opt-out model is inadequate for protecting user privacy, as most people cannot be expected to monitor every company's data practices. There are also concerns that LinkedIn's data could be misused by bad actors to train harmful AI models.

In summary, while LinkedIn has taken some steps to enable user control over data usage, the opt-in-by-default enrollment, lack of clarity on affiliate data sharing, and potential for misuse raise valid privacy concerns about the company's AI training practices. Stronger privacy protections and transparency may be needed.
