Bloomberg Law
Sept. 1, 2023, 5:27 PM UTC

X Plans for AI on Collision Course With Latest US Privacy Laws

Skye Witley
Reporter

X Corp.—formerly known as Twitter—joined other tech companies in revising its privacy policy to allow it to leverage users’ data for artificial intelligence development, opening the door to potential legal headaches at the intersection of hot-button issues facing Big Tech.

X may combine public information with data collected from users to train machine learning and AI models, according to an update effective Sept. 29. The company also introduced new categories of information it plans to collect about some users, including biometrics and employment history.

The expanded data collection is part of owner Elon Musk’s aspiration to rebrand the platform as an “everything app.” But a dozen recently enacted state-level privacy laws and attention at the federal level could ensnare certain actions outlined in X’s new policy, lawyers told Bloomberg Law. The updated sections addressing biometrics and AI are so light on detail—a single sentence each—that they run the risk of underinforming consumers who have rights to know more, attorneys said.

“With the rise of ChatGPT and similar tools in the past year, all kinds of businesses are thinking about how to integrate these new types of AI technology into their various products and services,” said Travis Brennan, chair of the data privacy and security practice at Stradling Yocca Carlson & Rauth. “But I think in doing so, they’re setting up a new potential collision with emerging privacy regulations in the United States.”

A few states—such as Texas and Washington—regulate the collection, storage, and deletion of biometric data. In Illinois, consumers can sue companies directly for violating their rights under a biometric privacy law. Residents of multiple states, including Virginia, recently gained the right to opt out of having their data used for automated decision-making.

X’s current privacy policy makes no mention of whether users can opt out of having AI trained on their data and doesn’t describe policies specific to the retention or deletion of biometric data. Musk said in a Thursday post that X would only use “public data, not DMs [direct messages] or anything private” for AI. He did not define public or private data, and X didn’t respond to multiple requests for comment.

Earlier this summer, Meta Platforms Inc. introduced a tool, available in states that require it, that lets users see and control what information about them is collected from third parties to train its generative AI models. Alphabet Inc.-owned Google notified users in July it would train its own AI models on “publicly available information,” and Zoom changed its terms of service last month to clarify that it doesn’t use customer communications to train AI products, after facing backlash for an earlier version of the policy that suggested otherwise.

“As someone who writes these privacy policies, the challenge with writing them is that the direction is to be very straightforward, clear, and easy to read, but to provide really comprehensive information,” said Jessica B. Lee, chair of Loeb & Loeb LLP’s privacy, security, and data innovations practice. “So the flip side of it being very clear and easy to read is that it’s not clear necessarily what information will be used to train the models.”

The privacy policy updates could be “sort of a practice disclosure point” as X observes other companies putting up similar notices, said Melissa Krasnow, a partner at VLP Law Group LLP who advises clients on data and security matters.

The question, Krasnow said, is whether X will apply the same “highest watermark” privacy standard to everyone or parse state-level requirements by an individual’s location.

“I think you’re going to continue to see scrutiny of other companies’ statements to this effect as they update their privacy policy and/or terms,” Krasnow said.

All previous Twitter privacy policies archived before Musk acquired the company last year are currently unavailable online—each version link leads to a page reading “nothing to see here.”

Collecting Biometrics

Hundreds of companies that collect biometric data have run afoul of a powerful consumer and employee legal protection in Illinois under the state’s Biometric Information Privacy Act. Attorneys said if X plans to start collecting biometric data and wants to avoid legal exposure, the company will need to provide consumers with more than a single sentence.

X plans to collect biometric data “for safety, security, and identification purposes,” according to the policy—but only from premium users, a company spokesperson told Bloomberg. Premium X users will have the option of uploading their picture in conjunction with a government ID for additional verification, according to the spokesperson.

Several lawyers who reviewed the policy described the language addressing biometric collection as vague.

How X collects the new categories of data, and how it informs consumers about those practices, is important because the company is still subject to a 2011 consent decree with the Federal Trade Commission, said Suzanne Bernstein, a law fellow at the Electronic Privacy Information Center specializing in consumer privacy.

The agency fined X $150 million in May 2022 for violating the decree by improperly leveraging users’ emails for targeted advertising.

“Is it clear on the scope of use for the individual? Do we get more information beyond the kind of one-liners in the privacy policies?” Bernstein asked. “That’s at least my hope, because for the consumer to consent, you really need to understand what you’re consenting to and what risks are attached to that.”

Under Illinois’ BIPA, companies must obtain written consent before collecting an individual’s biometric data, such as facial scans or fingerprints. They are required to disclose how they will use the data, how long it will be kept, and when it will be deleted. State residents can sue companies for violating the law—and they have, creating millions of dollars in liability for the alleged offenders.

X is currently defending against one BIPA-related class action, which accuses the company of using software to filter through content containing nudity without informing users or obtaining their consent. Changes to the company’s privacy policy now are unlikely to influence the direction of that litigation because it alleges practices dating back several years, Brennan, the Stradling attorney, said.

Biometric data is considered especially sensitive because an individual’s attributes can’t be easily changed, compounding the impact of any disclosure of that information, Lee of Loeb & Loeb said.

“Is this going to attract plaintiffs’ class action attorneys?” Krasnow asked. “In other words, if they don’t do things according to the way they say in the privacy policy—like if they collected and it’s without consent—does this open them up to more risk of being sued, potentially of enforcement actions?”

To contact the reporter on this story: Skye Witley at switley@bloombergindustry.com

To contact the editors responsible for this story: Adam M. Taylor at ataylor@bloombergindustry.com; Tonia Moore at tmoore@bloombergindustry.com
