Artificial intelligence is disrupting and revolutionizing almost every industry. As the technology advances, it has the potential to drastically improve many aspects of life.

But, it isn’t without risk.

And, with scores of experts warning of the potential dangers of AI, we should probably pay attention. On the other hand, many claim that these are alarmist views and that there’s no immediate danger from AI.

So, are concerns about artificial intelligence alarmist or not? This article will cover the five main risks of artificial intelligence and explain the state of the technology in each area.

How Can AI Be Dangerous?

AI is growing more sophisticated by the day, and it carries risks ranging from the mild (job disruption, for example) to the catastrophic (existential threats). The level of risk posed by AI is so heavily debated because there’s a general lack of understanding of, and consensus on, AI technology.

It is generally thought that AI can be dangerous in two ways:

  1. The AI is programmed to do something malicious
  2. The AI is programmed to be beneficial but does something destructive while achieving its goal

These risks are amplified by the sophistication of AI software. The classic hypothetical is the deliberately absurd “paper clip maximizer.” In this thought experiment, a superintelligent AI has been programmed to maximize the number of paper clips in the world. If it’s sufficiently intelligent, it could devastate the entire world in pursuit of that goal.

But, we don’t need to consider superintelligent AI to see that there are dangers already associated with our use of AI. So, what are some of the immediate risks we face from AI?

1. Job Automation and Disruption

Automation is a danger of AI that is already affecting society.

From mass-production factories to self-serve checkouts to self-driving cars, automation has been underway for decades, and the process is accelerating. A 2019 Brookings Institution study found that 36 million American jobs could be at high risk of automation in the coming years.

The issue is that, for many tasks, AI systems outperform humans: they are cheaper, more efficient, and more accurate. For example, AI is already better than human experts at spotting art forgeries, and it’s becoming more accurate at diagnosing tumors from radiography imagery.

A further problem is that many of the workers displaced by automation are ineligible for the newly created jobs in the AI sector because they lack the required credentials or expertise.

As AI systems continue to improve, they will become far more adept than humans at tasks such as pattern recognition, generating insights, and making accurate predictions. The resulting job disruption could deepen social inequality and even trigger an economic disaster.

2. Security and Privacy

In 2020, the UK government commissioned a report on Artificial Intelligence and UK National Security, which highlighted the necessity of AI in the UK’s cybersecurity defenses for detecting and mitigating threats that demand a faster response than human decision-making allows.

The hope is that, as AI-driven security threats rise, AI-driven prevention measures keep pace. Unless we can develop measures to protect ourselves against these threats, we risk running a never-ending race against bad actors.

This also raises the question of how we keep AI systems themselves secure. If we use AI algorithms to defend against security threats, we need to ensure that the AI itself is protected against bad actors.

When it comes to privacy, large companies and governments are already being called out for eroding it. With so much personal data now available online (an estimated 2.5 million terabytes of new data every day), AI algorithms can easily build user profiles that enable extremely accurate targeting of advertisements.
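To see how little it takes in principle, here’s a deliberately crude sketch in Python. The browsing histories, user IDs, and the rule of targeting whichever interest appears most often are all invented for illustration; real ad systems draw on far richer signals and far more sophisticated models.

```python
from collections import Counter

# Hypothetical browsing histories; real profiles draw on many more signals.
page_visits = {
    "user_42": ["running shoes", "marathon training", "protein powder",
                "running shoes", "fitness tracker"],
    "user_17": ["mortgage rates", "home insurance", "mortgage rates"],
}

def build_profile(visits: list[str]) -> Counter:
    """Tally how often each interest appears in a user's history."""
    return Counter(visits)

for user, visits in page_visits.items():
    profile = build_profile(visits)
    top_interest, hits = profile.most_common(1)[0]
    print(f"{user}: target ads for '{top_interest}' ({hits} signals)")
```

Even this toy version turns raw behavior into a targetable profile; scale the inputs up to search, location, and purchase history, and the accuracy of real systems is easy to imagine.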

Facial recognition technology is also already incredibly sophisticated: cameras can profile individuals in real time. Some police forces around the world reportedly use smart glasses fitted with facial recognition software that can flag wanted or suspected criminals.
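As a concrete illustration of the mechanism, here’s a minimal sketch using the open-source Python face_recognition library. The file names are hypothetical, the sketch assumes the reference photo contains exactly one clear face, and the 0.6 tolerance is simply the library’s default; a real surveillance pipeline would be far more involved.

```python
import face_recognition

# Encode the face in a reference photo as a 128-dimension vector.
# (File name is hypothetical; assumes exactly one clear face in the photo.)
known_image = face_recognition.load_image_file("watchlist_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face found in a frame captured from a camera feed.
frame = face_recognition.load_image_file("camera_frame.jpg")
frame_encodings = face_recognition.face_encodings(frame)

# Compare each detected face against the reference. The tolerance is the
# maximum distance between encodings that still counts as a match
# (0.6 is the library's default; lower is stricter).
for encoding in frame_encodings:
    match = face_recognition.compare_faces(
        [known_encoding], encoding, tolerance=0.6
    )[0]
    if match:
        print("Possible match with watchlisted individual")
```

Run per video frame against a database of thousands of stored encodings, this same comparison is essentially what real-time profiling amounts to.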

The risk is that this technology could be extended to authoritarian regimes, or simply to individuals or groups with malicious intent.

3. AI Malware

AI is becoming increasingly good at hacking security systems and cracking encryption. One way this is happening is through malware that “evolves” via machine learning: it learns what works through trial and error, becoming more dangerous over time.
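The trial-and-error loop itself is easier to see in a harmless toy. The sketch below is a variant of the classic hill-climbing “weasel” demo, not malware: random mutations are kept whenever they score better against a goal, so the program converges on its target without ever understanding why a change works. The target string and mutation rate are arbitrary illustrative choices.

```python
import random
import string

TARGET = "ACCESS GRANTED"  # a stand-in goal; purely illustrative
CHARSET = string.ascii_uppercase + " "

def fitness(candidate: str) -> int:
    """Count the positions where the candidate already matches the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.1) -> str:
    """Randomly change each character with the given probability."""
    return "".join(
        random.choice(CHARSET) if random.random() < rate else c
        for c in candidate
    )

# Start from random noise and keep any mutation that scores better:
# pure trial and error, with no understanding of *why* a change helps.
best = "".join(random.choice(CHARSET) for _ in TARGET)
generations = 0
while fitness(best) < len(TARGET):
    candidate = mutate(best)
    if fitness(candidate) > fitness(best):
        best = candidate
    generations += 1

print(f"Reached '{best}' after {generations} generations")
```

Swap the toy scoring function for “did the attack get past the defense?” and the same keep-what-works loop becomes genuinely dangerous.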

Newer smart technology (such as self-driving cars) has been assessed as a high-risk target for this kind of attack, with the potential for bad actors to cause crashes or gridlock. As we grow more reliant on internet-connected smart technology, more of our daily lives will be exposed to the risk of disruption.

Again, the only real solution to this danger is for anti-malware AI to outperform malicious AI, protecting individuals and businesses.

4. Autonomous Weapons

Autonomous weapons (weapons controlled by AI systems rather than by human input) already exist and have for quite some time. Hundreds of tech experts have urged the UN to develop a way to protect humanity from the risks posed by autonomous weapons.

Militaries worldwide already have access to various AI-controlled or semi-autonomous weapon systems, such as military drones. With facial recognition software, a drone can track an individual.

What happens when we start allowing AI algorithms to make life-and-death decisions without any human input?

It’s also possible to customize consumer technology (such as drones) to fly autonomously and perform various tasks. In the wrong hands, this kind of capability could threaten individuals’ day-to-day security.

5. Deepfakes, Fake News, and Political Security

Facial reconstruction software (better known as deepfake technology) produces footage that is becoming increasingly difficult to distinguish from reality.

The dangers of deepfakes already affect celebrities and world leaders, and it’s only a matter of time before they trickle down to ordinary people. For instance, scammers are already blackmailing people with deepfake videos created from something as simple and accessible as a Facebook profile picture.

And that’s not the only risk. AI can recreate and edit photos, compose text, clone voices, and automatically produce highly targeted advertising. We have already seen how some of these dangers impact society.

Mitigating the Risks of Artificial Intelligence

As artificial intelligence grows in sophistication and capability, many positive advances are being made. But unfortunately, powerful new technology always carries the risk of misuse. These risks affect almost every facet of our daily lives, from privacy to political security to job automation.

The first step in mitigating the risks of artificial intelligence is deciding where we want AI to be used and where it should be discouraged. From there, increasing research into and debate over AI systems and their uses will help prevent the technology from being misused.