LinkedIn Is Using Your Data to Train AI Models by Default—Here’s How to Stop It

Another major tech company has joined the list of platforms that use personal data to train artificial intelligence models—often without explicitly informing users. Like Meta and X (formerly Twitter) with its Grok AI, LinkedIn is now automatically opting users into AI training efforts for both its own models and those of unnamed “affiliates.”

LinkedIn, which is owned by Microsoft, has drawn particular scrutiny for its new data policy because Microsoft also holds a substantial investment in OpenAI, the company behind ChatGPT. That relationship raised the possibility that LinkedIn data could be used to train OpenAI's models. After the initial publication of these details, however, LinkedIn clarified that user data will not be used to train OpenAI’s foundational models. Instead, the data will be shared with Microsoft for integration into Microsoft's own OpenAI-powered software.

LinkedIn’s Data Usage for AI Training

LinkedIn explained: “The AI models that LinkedIn uses to power generative AI features may be trained by LinkedIn or by third-party providers. For instance, some of our models are sourced from Microsoft’s Azure OpenAI service.”

A LinkedIn spokesperson, Greg Snapper, further elaborated: “We leverage OpenAI models through Microsoft’s Azure AI service, just like other API customers. Importantly, when using these models, we are not sending any user data back to OpenAI for the purpose of training their models.”

Despite these reassurances, the platform’s decision to opt users into these AI training initiatives by default has prompted backlash, especially from privacy advocates. Critics argue that relying on an opt-out system is inadequate when it comes to protecting user rights.

Concerns Over Opt-Out Practices

Mariano delli Santi, a legal and policy officer at the U.K.-based privacy advocacy group Open Rights Group, voiced strong concerns: “The opt-out model proves once again to be insufficient in safeguarding our privacy rights. People cannot reasonably be expected to constantly monitor every platform that might decide to use their data for AI training purposes. Opt-in consent isn’t just a legal requirement, it’s a matter of common sense.”

Delli Santi emphasized the importance of regulatory oversight, calling on the U.K.’s Information Commissioner’s Office (ICO) to take decisive action. “Regulators need to step in and ensure that companies like LinkedIn comply with legal standards, rather than acting as though they are above the law.”

Data Protection and Geographic Considerations

To mitigate some concerns, LinkedIn highlighted that privacy-enhancing technologies are being used to reduce the presence of personal data in the datasets used for AI model training. The company also confirmed that it is not training AI content-generation models using data from users in the European Union, European Economic Area (EEA), or Switzerland—regions governed by stricter data protection regulations like the General Data Protection Regulation (GDPR).

The EEA, which includes the 27 EU member states along with Iceland, Liechtenstein, and Norway, has stringent privacy laws, compelling companies to obtain explicit user consent before processing personal data for non-essential purposes, including AI training.

How to Opt Out

If you live in a region where LinkedIn is using your data for AI training and want to opt out, the process is straightforward: open LinkedIn’s settings, go to the data privacy section, and disable the option labeled “Use my data for training content creation AI models.” Note that opting out stops future use of your data but does not affect training that has already taken place.

Growing Concerns Around AI and Privacy

LinkedIn’s decision to automatically include users in AI model training programs has sparked broader debates about data privacy and user consent in the age of AI. As AI technologies continue to evolve, tech companies are increasingly leveraging vast amounts of user data to enhance the capabilities of their models. However, this practice has led to significant pushback from privacy advocates who argue that users should have more control and transparency over how their data is being used.

In a landscape where opt-out mechanisms are common, many believe that opt-in consent should be the standard, particularly when it involves sensitive data processing like AI training. With governments and regulators around the world grappling with how to regulate AI and protect consumer data, LinkedIn’s approach serves as another flashpoint in the ongoing conversation around privacy in the digital age.
