How surveillance tech controls the fate of migrants


Hundreds of people remain missing and are feared dead following last Wednesday’s shipwreck in the Mediterranean Sea. Rescue organizations began receiving distress calls from people aboard the fishing vessel, which was carrying an estimated 750 migrants, mainly from Egypt, Libya and Pakistan. Callers said they had run out of supplies and that people were dying of thirst. When officials from Greece’s Hellenic Coast Guard approached by boat, the vessel made a turn so abrupt that it capsized and rapidly sank, sending hundreds of people on the outer decks into the sea.

What caused the ship to sink, exactly? What would the Hellenic Coast Guard have done had they successfully reached the vessel? And how did these hundreds of people wind up on this ship, only to plunge to their deaths?

Answers are beginning to emerge, thanks to reporting by various outlets, including Mada Masr, which has begun investigating the Egyptian smuggling operation that appears to have brought many of the passengers to the ship in the first place.

The story underlines the life-threatening lengths people now go to in order to reach European shores. What it doesn’t tell us, at least not yet, is how technology is playing an increasingly powerful role in determining the fates of people seeking to cross borders. Last month, at Coda, we published an investigation into the International Centre for Migration Policy Development, a Vienna-based non-governmental organization that has received hundreds of millions of euros in contracts from the EU to supply tools and tactics, including surveillance tech, to non-EU governments in exchange for their cooperation in preventing people from attempting to migrate to Europe.

Technical surveillance has become an inescapable condition of crossing a border in pursuit of a better life. This issue dominated tech news from Europe this past week as the European Parliament finally reached an agreement on critical components of the AI Act, which has been hotly debated for months. The regulation, designed to require tech manufacturers to assess and disclose the risks their products might pose to people, covers everything from seemingly innocuous chatbots to AI-powered surveillance technology. But the current text has critical carve-outs that allow immigration enforcement agencies to use facial recognition and other “discriminatory profiling” technologies on migrants and people seeking refugee or asylum status in the bloc.

And it’s not just the EU. Last week, we published a deep dive by my colleague Erica Hellerstein on the increasingly tech-centric approach to immigration enforcement by U.S. authorities. Technical surveillance, whether through an ankle monitor or a facial recognition-enabled mobile app, is a constant for the estimated quarter of a million migrants enrolled in the “Alternatives to Detention” program while they await immigration proceedings. While the Biden administration has promoted the program as a more humane, economical and effective way to enforce immigration policy, researchers and advocates have found that the long-term psychological effects for people in the program are severe. One enrollee put it to Erica this way: “If you have an invisible fence around you, are you really free?”

IN GLOBAL NEWS

  • It turns out Sam Altman’s global PR tour was not just for show. Behind the pseudo-diplomatic meet-and-greets, OpenAI’s CEO appears to have been greasing the policymaking wheels in order to cultivate a friendly regulatory environment for his $29 billion company. A new investigation from TIME’s Billy Perrigo revealed this week that in the EU, Altman urged parliamentarians to water down the AI Act so that OpenAI’s tools wouldn’t be placed in the “high risk” category of the draft regulation, which would subject the company to more regulatory scrutiny. His efforts seem to have borne fruit: the language that OpenAI objected to has indeed been removed from the text as it currently stands. Of course, it wouldn’t be a proper EU regulation without yet another round of deliberations, which could take six months or more, so it’s hard to say exactly what effects all this will have. But it certainly lays bare Altman’s true agenda.
  • A prominent group of U.S. research organizations is being sued over its efforts to reduce Covid- and election-related disinformation on social media during the 2020 U.S. general election. The GOP-led “legal campaign,” as the New York Times put it, targets members of the Election Integrity Partnership, which included the Stanford Internet Observatory, the German Marshall Fund and the Atlantic Council’s Digital Forensic Research Lab, among others. A lawsuit filed in Louisiana by the editor of the far-right news site The Gateway Pundit, which is known for spreading falsehoods, accuses these institutions of “working closely” with government officials to “urge, pressure, and collude with social-media platforms to monitor and censor disfavored speakers and content.”
  • These organizations were studying disinformation on social media and giving major platforms insights and recommendations based on what they were finding. Although they published much of the resulting work, some aspects of the partnership were kept under wraps, generating suspicion about what may or may not have happened behind closed doors. The resounding message from those behind this lawsuit and other legal challenges against the researchers is that the partnership led to widespread censorship of right-wing views on mainstream social media sites. But hang on a second. If any censoring took place, it was done by the companies (Facebook, Twitter and the like) that actually had the power to remove speech or accounts, not by the researchers. So why are lawmakers and right-wing media targeting these research groups? The Knight First Amendment Institute’s Jameel Jaffer put it simply when he described the campaign as a wildly partisan “attempt to chill research.”

WHAT WE’RE READING

  • The Verge’s Josh Dzieza has a great new deep dive on the “vast tasker underclass” of people who work to make tools like OpenAI’s ChatGPT seem almost as smart as a real person.
  • Scarly Zhou has a new story for Rest of World that looks at how sexism and misogyny have become a recurrent problem for female users of Glow, a new Chinese AI platform featuring customized, highly lifelike chatbots.