Artificial Intelligence (AI) is as controversial as it is impressive. It can make many parts of work and everyday life easier, but it also raises some ethical questions. The use of AI by the U.S. government, in particular, leaves some people feeling uneasy.

There are many government AI projects in use or in development today, and some have done a lot of good. At the same time, they raise plenty of artificial intelligence privacy concerns. Here’s a closer look at these projects and what they mean for public privacy.

Examples of Government AI and Automation Projects

The most basic examples of artificial intelligence used by the U.S. government involve automating routine office work. In 2019, Seattle began using Robotic Process Automation (RPA) to handle data entry and application processing. Since then, the city has worked through more than 6,000 backlogged applications and saved hundreds of work hours.

Other government AI projects are more eye-catching. The New York Fire Department is testing robot dogs from Boston Dynamics to assess structural damage and detect toxic fumes before firefighters enter a building. Before the firefighting project, the New York Police Department had planned to deploy the same robots.

Police departments and other government agencies across the country are considering using similar technologies. However, as these government AI projects take off, their potential privacy shortcomings have become clearer.

Is Artificial Intelligence a Threat to Privacy and Security?

Whether you’ll see police robots in the future is still uncertain, but things seem to be heading that way. These projects have plenty of benefits, but artificial intelligence privacy concerns become more serious when dealing with the government. Here are some of the biggest issues with these technologies.

Hidden Surveillance

Artificial intelligence relies on collecting and analyzing data. As a result, more government AI projects mean these agencies will gather and store more information about citizens. Some people feel that all this data collection breaches their privacy and violates their rights.

Technologies like the firefighting dog project are particularly concerning because they can obscure government surveillance. Agencies may say a robot is there to check for safety hazards, but there's no way for people to tell what data it's actually collecting. It could have cameras and sensors that scan their faces or track their cellphones without them knowing.

Some people worry that robots’ "cool factor" will hide their surveillance potential. Police robots in the future could spy on citizens without raising much suspicion since people just see new tech instead of a breach of their privacy.

Unclear Responsibilities

These AI and automation projects also raise the question of accountability. If a robot makes a mistake that results in harm, who’s responsible for it? When a government employee oversteps their boundaries and infringes on someone’s rights, courts can hold them accountable, but what about a robot?

You can see this issue with self-driving cars. In some autopilot crash cases, plaintiffs have filed product liability claims against the manufacturer, while others have blamed the driver. In one instance, the National Transportation Safety Board placed responsibility on both the manufacturer and the driver, but it ultimately must be decided on a case-by-case basis. Police robots muddy the waters in the same way. If they breach your privacy, it's unclear whether to blame the manufacturer, the police department, or the robots' human supervisors.

This confusion could slow down and complicate legal proceedings. It could take a long time for victims of privacy breaches or rights violations to get the justice they deserve. New laws and legal precedent could eventually clarify the issue, but for now, it remains unresolved.

Data Breach Risks

Artificial intelligence used by the U.S. government could also amplify the AI privacy concerns already seen in the private sector. Collecting some data may be perfectly legal, but the more an organization gathers, the more is at risk. The company or agency may never use the information for anything illegal, but simply holding it can leave people vulnerable to cybercrime.

There were more than 28,000 cyberattacks against the U.S. government in 2019 alone. As agencies hold more of citizens' private information, these attacks could affect far more than just the government itself. A successful data breach could endanger many people without them ever realizing it; breaches often go unnoticed, so it's worth checking whether your data is already up for sale.

For example, if a police robot in the future uses facial recognition to look for wanted criminals, it may store many citizens’ biometric data. Hackers who get into the system could steal that information and use it to break into people’s bank accounts. Government AI projects must have strong cybersecurity measures if they don’t want to endanger people’s data.

Government AI Has Benefits but Raises Concerns

It’s still not clear how the American government will use artificial intelligence in the future. New protections and laws could resolve these issues and bring all the benefits of AI without its risks. For now, though, these concerns raise some alarms.

Privacy concerns abound wherever AI is at work, and those questions become more serious as it plays a bigger role in government. Government AI projects could do a lot of good, but they also carry substantial potential for harm.