AI systems are enabling mass surveillance in the US, and there is no national law that “meaningfully limits” the use of this data

The Eyes in the Sky: AI-Powered Surveillance Rises in the US

The streets of downtown Los Angeles are a sea of steel and glass, with towering skyscrapers that seem to touch the clouds and a never-ending stream of people rushing to and fro. But amidst the bustle, a different kind of presence watches and waits. Cameras equipped with artificial intelligence (AI) systems, once the domain of science fiction, have become a ubiquitous feature of modern American life. These AI-powered cameras can detect and track individuals based on their face, clothing, and even their gait, raising profound questions about the balance between security and civil liberties.

At the heart of this issue is the use of AI in camera surveillance systems, which has grown exponentially over the past decade. According to research by the Center for Security and Emerging Technology (CSET), a technology policy organization, no national law “meaningfully limits” the use of this data. This regulatory vacuum has created a Wild West of sorts, in which local law enforcement agencies and private companies can collect and store vast amounts of personal data without meaningful oversight. CSET’s research finds that at least 63 US cities and towns have deployed AI-powered surveillance systems, with many more planning to follow suit.

The stakes are high: the use of AI in surveillance has significant implications for individual privacy and civil liberties. AI systems can analyze vast amounts of data in real time, allowing authorities to identify and track individuals with unprecedented precision. But the same capability fosters a culture of suspicion, in which people are judged on their appearance and behavior rather than on any actual evidence of wrongdoing. The consequences can be severe, particularly for marginalized communities already disproportionately affected by law enforcement overreach.

Historically, the US has been a pioneer in the use of surveillance technologies, from the FBI’s COINTELPRO program in the 1960s to the NSA’s mass surveillance revelations in the 2010s. But the current wave of AI-powered surveillance raises new and pressing questions about the role of technology in policing and the limits of government power. As one prominent advocate for digital rights noted, “the use of AI in surveillance is a classic case of the ‘slippery slope,’ where a well-intentioned technology ends up being used for purposes that were never intended.” This concern is echoed by many civil liberties groups, which argue that the lack of transparency and accountability in AI-powered surveillance systems is a recipe for disaster.

The issue is further complicated by the fact that many of these systems are sold to law enforcement agencies by private companies with a vested interest in promoting their products. The result is a conflict of interest: vendors are more focused on turning a profit than on ensuring the technology is used responsibly. As one expert put it, “the fact that these companies are selling these systems without any meaningful oversight or regulation is a clear indication that they are more interested in making a buck than in protecting people’s rights.”

A Culture of Suspicion

The consequences of AI-powered surveillance are already being felt on the ground. In cities like Chicago and Baltimore, residents have reported being stopped and questioned by police based on their appearance alone, without any evidence of wrongdoing. Elsewhere, residents have been detained and harassed by law enforcement simply for attending protests or engaging in other forms of free expression.

This culture of suspicion falls hardest on marginalized communities, who are already more likely to be stopped and searched by police. According to data from the American Civil Liberties Union (ACLU), African Americans, who make up about 13% of the US population, are nearly four times more likely than white Americans to be stopped by police. AI-powered surveillance only exacerbates this disparity.

Reactions and Implications

The use of AI in surveillance has sparked heated debate in the US, with many calling for greater transparency and accountability. The ACLU has launched a campaign to ban AI-powered surveillance systems, arguing that they threaten civil liberties and individual privacy. Other groups, such as the Electronic Frontier Foundation (EFF), have called for stricter regulation of the industry, warning that the current lack of oversight invites abuse.

The implications are far-reaching, with many experts warning that AI-powered surveillance could produce a surveillance state unparalleled in modern history. As one prominent academic noted, “the use of AI in surveillance is a slippery slope that could lead to a world where people are judged based on their appearance and behavior, rather than any actual evidence of wrongdoing.” The technology thus raises profound questions about the role of government and the limits of its power.

Looking Ahead

As the use of AI in surveillance continues to grow, the stakes could not be higher. The US has an opportunity to set a new standard for surveillance technologies, one that balances security with individual rights and freedoms. Doing so will require greater transparency and accountability, along with a willingness to challenge the status quo and push for meaningful reform. As one advocate for digital rights put it, “the use of AI in surveillance is a wake-up call for all of us, reminding us that the line between security and freedom is always thin and easily crossed.”

Written by

Veridus Editorial

Editorial Team

Veridus is an independent publication covering Africa's ideas, politics, and future.