The tech industry’s quest to read our minds — and profit from them

Surveillance of biometric data, such as fingerprints or facial scans, used to feel like the most Orwellian of tech frontiers. But the collection of brain data, also known as neural data, may have it beat. The issue is on my brain this week because Neuralink — yet another of Elon Musk’s tech companies marching forward with feverish, at times frightening abandon — just announced that it had successfully placed a digital implant in the brain of a real live person.

Devices like this one are being built to help people with Lou Gehrig’s disease and quadriplegia communicate better. If Neuralink’s implant works, and that’s a big if — physicians have voiced concern that it could pose other health risks — it could meaningfully improve people’s lives. But implants are a relatively small subset of a growing industry building neurotech products, most of which are for commercial use. In sectors like mining, trucking and construction, employers have begun using headset-like brain-scan devices to help protect workers’ safety, but also to monitor (read: surveil) their productivity. These tools capture plenty of data from our brains. Should we worry about how companies might use that data?

Margot Hanley, an artificial intelligence ethics scholar at Cornell University’s Cornell Tech graduate school, says that we should. In an email exchange this week, she described to me the “unique richness of neural data” and explained how these kinds of devices could glean “sensitive or personal information, from thoughts, feelings and beliefs to health or medical information.” Although Musk may be dominating the headlines around neurotech right now, implants like the one Neuralink just piloted are already regulated by the U.S. Food and Drug Administration, and the data they collect is protected under HIPAA, the federal law that restricts the release of medical information. Hanley is much more concerned about non-invasive commercial devices, which are entirely unregulated in most countries, including the U.S.

What can we expect these companies to do with brain data? Like most big datasets these days, it will probably be used to train more AI. But it could also be used to market products. It could even be leveraged to persuade people to vote a certain way.

“It’s not entirely far-fetched to think about neural data being collected, analyzed and integrated into political campaign testing, in the same way we have seen web activity and social media data used these past years,” wrote Hanley.

If we want to safeguard “our deepest, most private selves” from commercial exploitation, Hanley believes we urgently need both regulatory oversight and ethics guidelines.

The U.S. is still a Wild West on this front, but Chile and Brazil have already passed laws that set safeguards around the use of brain data. This was prescient. Chile’s law was put to the test last year by former Chilean senator Guido Girardi, who bought and used a brain-scan device built by the U.S. firm Emotiv and then took the company to court, arguing that it had collected his brain data without obtaining his informed consent. The case went all the way to Chile’s Supreme Court, which unanimously ruled against Emotiv and ordered the company to destroy Girardi’s data.

While most companies aren’t literally reading our minds yet, people’s biometric data worldwide is already getting slurped up by everyone from immigration authorities and period-tracking apps to Sam Altman’s Worldcoin “orb,” which authenticates your identity by scanning your iris. Adding brain data to this mix — especially when you think about all that it could reveal about us — makes the whole data grab sound even more like a sci-fi novel that I do not want to read.

GLOBAL NEWS

As Pakistanis head for the polls, will networks stay up and running? Access to mobile internet and broadband services has been cut off several times in the run-up to elections in Pakistan, set for February 8. The shutdowns, which primarily affected major social media sites, coincided with online events hosted by Pakistan Tehreek-e-Insaf, the party of embattled former Prime Minister Imran Khan, who was convicted this week on charges of corruption and leaking state secrets and may spend the next quarter century behind bars. The government is now under a court order to keep the internet on through polling day. We’ll see if it holds.

Meta is still platforming Holocaust denialism. The company’s semi-autonomous Oversight Board has ruled that Instagram should permanently remove a post that used Squidward, a character from the hit kids’ TV series “SpongeBob SquarePants,” to cast doubt on the number of Jewish people killed in the Holocaust and to suggest that there were no functioning crematoria at the Auschwitz concentration camp. In-house researchers found that Meta had reviewed the post six times after users flagged it for violating company rules, and that each time the company decided to leave it up, despite Meta’s explicit ban on Holocaust denial.

Social giants are back in the hot seat over kids’ safety. The CEOs of Meta, Snap, TikTok and X all testified at a hearing yesterday in which U.S. members of Congress lobbed a litany of accusations about the harms that platforms do to minors, from bullying and the promotion of eating disorders to the distribution of child abuse images. Attempts by legislators to tackle these issues — none of which can be entirely solved by the platforms — haven’t gotten far.
But in a peculiar twist yesterday, both X and Microsoft announced their support for the U.S. Kids Online Safety Act, a bill that would require parental controls on social media sites but would also empower state attorneys general to ban online material on the grounds that it harms kids. Dozens of legal experts have opposed the bill, arguing that it could even allow censorship of information about LGBTQ issues. This is serious, especially at a moment when books on gender, sex and sexuality are disappearing from the shelves of public libraries across the U.S., thanks to laws driven by the so-called “family values” lobby. Lest there be any doubt on this, the uber-conservative Heritage Foundation has explicitly endorsed the bill in order to “keep trans content away from kids.”

WHAT WE’RE READING

  • Ms. Swift goes to Washington? Deepfakes are not new, but they are easier than ever to create. Amid the outcry over the spread of AI-generated, non-consensual pornography featuring Taylor Swift, MIT Technology Review’s Melissa Heikkila has written a compelling plea urging the pop megastar to take the issue up with lawmakers. Let’s see if Taylor listens.
  • Adrienne LaFrance says techno-authoritarianism is on the rise. As the author of a newsletter called Authoritarian Tech, I obviously agree. Her new piece for The Atlantic argues that tech CEOs “promise community but sow division; claim to champion truth but spread lies; wrap themselves in concepts such as empowerment and liberty but surveil us relentlessly. The values that win out tend to be ones that rob us of agency and keep us addicted to our feeds.”