The AI apocalypse might begin with a cost-cutting healthcare algorithm

On Monday, patients in California filed a class action lawsuit against Cigna Healthcare, one of the largest health insurance providers in the U.S., for wrongfully denying their claims — and for using an algorithm to do it. The algorithm, called PXDX, was automatically denying patients’ claims at an astonishing rate: the technology spent an estimated 1.2 seconds “reviewing” each claim. During a two-month period in 2022, Cigna used this system to deny 300,000 pre-approved claims. Roughly 80% of the denials that customers appealed were later overturned.

This is bad for people, but to tech experts it could sound wonky, banal or even “small bore.” Yet it is precisely the kind of existential threat we should worry about when we consider the consequences of bringing artificial intelligence into our lives.

You might remember this spring, when the biggest and wealthiest names in the tech world gave us some pretty grave warnings about the future of AI. After a flurry of opinion pieces and full-length speeches, they boiled it all down to a simple “should” statement:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

This sentence and its most prominent signatories (Sam Altman, Bill Gates and Geoffrey Hinton among them) swiftly captured the headlines and our social media feeds. But have no fear, the statement’s authors said. We will work with governments to ensure that AI regulations can prevent all this from happening. We will protect you from the worst possible consequences of the technology that we are building and profiting from. Oh really?

OpenAI CEO Sam Altman then jetted off on a global charm tour, on which he seems to have won the trust of heads of state and regulators from Japan to the UAE to Europe. A week after he visited the EU, the bloc’s highly anticipated AI Act was watered down to suit his company’s interests. Mission accomplished.

Before the tech bros began this particular round of spreading doom and gloom about blockbuster-worthy, humanity-destroying AI, journalists at ProPublica had published an investigation into a far clearer and more present threat: Cigna’s PXDX algorithm, the very subject of the aforementioned lawsuit.

In its official response to ProPublica’s findings, Cigna had noted that the algorithm’s reviews of patients’ claims “occur after the service has been provided to the patient and do not result in any denials of care.” 

But hang on a second. This is the U.S., where medical bills can bankrupt people and leave them terrified of seeking out care, even when they desperately need it. I hear about this all the time from my husband, a physician who routinely treats incredibly sick patients whose conditions have gone untreated for years, even decades, often because they are uninsured or underinsured.

This is not the robot apocalypse or nuclear annihilation that the Big Tech bros are pontificating about. This is a slow-moving-but-very-real public health disaster that algorithms are already inflicting on humanity. 

Flashy tools like ChatGPT and Lensa AI may get the lion’s share of headlines, but there is big money to be made from much less interesting stuff that serves the banal needs of companies of all kinds. If you read about what tech investors are focused on right now, you will quickly discover that the use of AI in areas like customer service is expected to become a huge moneymaker in the years to come. Again, forget the forecasts of human extinction at the hands of world-conquering robots. Tech tools that help “streamline” processes for big companies and state agencies are the banal sort of evil that we’re actually up against.

Part of the illusion driving these prophecies of human extinction is the idea that technology will start acting on its own. But right now, and for the foreseeable future, technology is the product of a multitude of choices made by real people. Tech does not act alone.

I don’t know where we’d be without this kind of journalism or the AI researchers who have been studying these issues for years now. I’ve plugged them before, and now I’ll do it again — if you’re looking for experts on this stuff, start with this list.

And now I’ll plug a new story of ours. Today, we’re publishing a deep dive that shows how a technical tool, even one built by people with good intentions, can contribute to bad outcomes. Caitlin Thompson has spent months getting to know current and former staff at New Mexico’s child welfare agency and speaking with them about a tool that the agency has been using since 2020. Its purpose? To help caseworkers streamline decisions about whether a child should be removed from their home when allegations of abuse or neglect arise. This is a far cry from the ProPublica story, in which Cigna appears to have quite deliberately chosen to deny people’s claims in order to cut costs. This is a story about a state agency trying to improve outcomes for kids while grappling with chronic staffing shortages, and it shows how the adoption of one tool — well-intentioned though it was — has tipped the scales in some cases, with grave consequences for the kids involved. Give it a read and let us know what you think.

GLOBAL NEWS

Google and Meta are facing new legal challenges over violent speech on their platforms. The families of nine Black people who were killed in a supermarket in Buffalo, New York, in 2022 have filed suit against the two companies, arguing that their technologies helped shape the ideas and actions of Payton Gendron, the self-described white supremacist who murdered their loved ones. The U.S. Supreme Court recently punted on two cases with very similar characteristics, leaving intact the broad shield that Section 230 of the Communications Decency Act gives companies against liability for speech posted by their users. So the new filings may not have legs. But they do reflect an increasingly widespread feeling that these platforms are changing the way people think and act and that, sometimes, this can be deadly.

The Saudi regime is using Snapchat to promote its political agenda — and to intimidate its critics. This should come as no surprise: An estimated 90% of Saudis in their teens and 20s use the app, making it a central platform for Saudi Crown Prince Mohammed “MBS” bin Salman to burnish his image and talk up his economic initiatives. But people who have criticized the regime on Snapchat are paying a high price. Earlier this month, the Guardian reported allegations that the influencer Mansour al-Raqiba was sentenced to 27 years in prison after he criticized MBS’ “Vision 2030” economic plan. Snapchat didn’t offer much in the way of a response, but Gulf-based media have reported on the company’s “special collaboration” with the Saudi culture ministry. It’s also worth noting that Saudi Prince Al Waleed bin Talal, the biggest shareholder in Twitter (er, X) after Elon Musk, is also a major investor in Snap.

WHAT WE’RE READING

  • Writing for WIRED, Hossein Derakhshan, the blogger who was famously imprisoned in Iran from 2008 until 2014, reflects on his time in solitary confinement and what it taught him about the effects of technology on humanity.
  • Justin Hendrix of Tech Policy Press has written a new essay on the “cage match” between Elon Musk and Mark Zuckerberg, the “age of Silicon Valley bullshit” and the overall grim future of Big Tech in the U.S. Read both pieces, and then take a walk outside.