Artificial intelligence (AI) has not yet reached human-level capability. But with technology closing the gap each year, many ethical problems have arisen.

One important question is: How similar will artificial intelligence be to human beings? Will AIs think for themselves, or have desires and emotions? Should they have legal rights like humans? Should they be forced to work, or held accountable if something goes wrong?

We’ll take a deep dive into these questions and more in this article.

AI Ethics: Why They're Important To Think About

AI ethics and machine ethics are two related fields that are gaining increasing traction. They cover several important aspects of technology, including how we design, use, and treat machines. Most of these issues center on keeping human beings safe.

However, AI ethics is beginning to move past these foundational issues and into more controversial territory. Imagine that in the next few decades, a super-intelligent AI is developed that is potentially conscious, expresses desires and feelings, or can experience suffering. Since we aren't even sure what human consciousness is or how it arises, this isn't as far-fetched a proposition as it first sounds.

How would we go about defining and treating such an AI? And what are some of the ethical issues that we’re facing right now with our current level of AI?

Let’s take a look at a few of the ethical dilemmas that we’re facing.

Should AI Receive Citizenship?

In 2017, the Saudi Arabian government granted full citizenship to Sophia, one of the most lifelike AI-driven robots in the world. Sophia can take part in a conversation and can mimic 62 human facial expressions. Sophia is the first non-human to hold a passport, and the first to own a credit card.

The decision to make Sophia a citizen has been controversial. Some regard it as a step forward; they think it's important for people and regulatory bodies to start paying more attention to issues in this area. Others consider it an affront to human dignity, arguing that the AI is nowhere near human yet, and that society as a whole is not ready for robot citizens.

The debate becomes heated because of the rights and duties afforded to citizens, which include the ability to vote, pay taxes, marry, and have children. If Sophia is allowed to vote, who is actually voting? Given the current state of AI, wouldn't it really be her creator casting the vote? Another pointed criticism is that Sophia was awarded more rights than Saudi Arabia's women and migrant workers.

AI and IP: Should They Own the Rights to What They Create?

Discussions surrounding intellectual property (IP) and privacy concerns have reached an all-time high, and now there's another concern. AI is increasingly being used to develop content, produce ideas, and perform other actions subject to IP laws. For instance, The Washington Post launched Heliograf in 2016, an AI reporter that produced almost a thousand articles in its first year. Several industries, such as the pharmaceutical industry, also use AI to scour huge amounts of data and develop new products.

Currently, AI is considered a tool: all IP and legal rights are afforded to its owner. But the EU previously considered creating a third category, an “electronic personality”, which would be a legal entity in the eyes of IP law.

Some argue that if IP rights went to the machine rather than its owner, there would be little incentive to build “creative” AI in the first place, and innovation would stall as a result.

AI and the Future of Work

The role of AI in work is a bit of a conundrum. In recent years, we've seen the controversial use of AI in hiring and firing decisions, where the algorithms turned out to be unintentionally biased against certain demographics. AI is also gradually replacing higher and higher levels of human work: first manual labor, and now higher-order mental labor.
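How does that kind of bias creep in? Usually not because anyone programs it, but because the model learns it from historical data. Below is a minimal, hypothetical sketch using synthetic data and scikit-learn; the feature names and numbers are invented purely for illustration.

```python
# Illustrative sketch only: how a hiring model can absorb bias from
# historical data. All data here is synthetic and hypothetical.
import random

from sklearn.linear_model import LogisticRegression

random.seed(0)

def make_candidate():
    skill = random.random()        # true qualification, 0..1
    group = random.choice([0, 1])  # protected attribute (e.g. gender)
    # Historical hiring favored group 0 regardless of skill:
    hired = int(skill > 0.5 or (group == 0 and random.random() < 0.4))
    # The model never sees `group` directly, only a correlated proxy
    # (say, a zip code or hobby more common in one group):
    proxy = group + random.gauss(0, 0.3)
    return [skill, proxy], hired

data = [make_candidate() for _ in range(5000)]
X = [features for features, _ in data]
y = [label for _, label in data]

model = LogisticRegression().fit(X, y)
print("learned weights [skill, proxy]:", model.coef_[0])
# The proxy weight comes out negative: the model penalizes candidates
# who merely *resemble* group 1, reproducing the historical bias.
```

The point is that even when the protected attribute is excluded from the inputs, a correlated proxy can smuggle the same bias back in.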

What should be done about this? And what happens if some form of conscious AI is developed? Should it be forced to work? Compensated for its labor? Granted workplace rights? And so on.

In one episode of Black Mirror (a show notorious for messing with our heads), a woman named Greta creates a digital clone of her consciousness. The clone is told that its purpose is to carry out duties for the real Greta. But, since it has Greta's consciousness, the clone considers itself to be Greta. So when the clone refuses to be a slave, its creators torture it into submission. Eventually, the clone gives in and goes to work for Greta.

Should we preemptively grant certain rights to AI, in case they come to believe themselves human or prove capable of suffering?

To take this one step further, let's consider whether AI should be freely turned off or decommissioned. Currently, when something goes wrong, we can simply pull the plug and turn the AI off. But if the AI had legal rights and this were no longer an option, what recourse would we have?

The most famous example of super-intelligent AI gone wrong is the paperclip maximizer: an AI designed to create the maximum possible number of paperclips. If such an AI were powerful enough, it's conceivable that it would decide to convert human beings, and then everything else, into paperclips.
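The worry isn't malice but a poorly specified objective. Here's a toy sketch of the idea, with hypothetical resource names chosen purely for illustration: an agent told only to maximize paperclips has no reason to leave anything else intact.

```python
# Toy illustration of the alignment problem behind the paperclip
# maximizer. The resources and values are invented for this example.
resources = {"iron": 100, "forests": 50, "cities": 20, "humans": 10}

def make_paperclips(world):
    clips = 0
    # The objective says nothing about which resources are off-limits,
    # so the agent greedily consumes them all.
    for name in list(world):
        clips += world.pop(name)
    return clips

print("paperclips made:", make_paperclips(resources))  # 180
print("world left over:", resources)                   # {}
```

The objective is technically achieved; everything we actually care about was simply never part of it.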

Should AI Be Held Accountable?

AI is already responsible for many decisions that affect human lives. In fact, it is used in many areas that directly affect human rights, which is worrying considering how biased many AI algorithms have proven to be.

For instance, large companies use AI to decide who should be hired for a job. Some countries use it to determine who should receive welfare. More worryingly, police and court systems use it to help determine sentences for defendants. And that's not all.

What happens when AI makes mistakes? Who is held accountable: those using it, or its creators? Alternatively, should the AI itself be punished (and if so, how would that work)?

AI and Humanity

AI will never be human. But it may one day be conscious, feel suffering, or have wants and desires. If such an AI were developed, would it be unethical to force it to work, to freely decommission it, or to do things to it that cause it suffering?

While AI is still in its infancy, it's already being used for things that directly affect human lives, sometimes drastically. Decisions need to be made about how to regulate AI in the way that's best for human lives, and for AI lives, should those ever come about.