Discussions around AI sentience are nothing new, but news about Google's AI LaMDA has stoked the flames. After an engineer claimed the bot was conscious and even had a soul, some familiar questions have arisen again.

Can AI like LaMDA actually be sentient or self-aware, and if so, how can you tell? Does it matter?

What Is LaMDA?


LaMDA, short for Language Model for Dialogue Applications, first appeared in 2021 at Google's developer conference. The advanced AI system is meant to help build other, smaller chatbots. When Google first introduced it, it announced plans to integrate LaMDA across its products, helping services like Google Assistant and Search feel more human, or at least natural.

When Google engineer Blake Lemoine spoke with LaMDA to test whether it used hate speech, he came away with a different impression. Lemoine claimed that LaMDA was sentient and that, if he didn't know it was a bot, he'd have thought it was an eight-year-old child.

After his conversations with LaMDA, Lemoine tried to prove it was conscious and defend what he believed were its legal rights. In response, Google placed Lemoine on paid administrative leave for breaking confidentiality agreements.

Is LaMDA Actually Sentient?

So, is LaMDA actually sentient? Most experts who've weighed in on the issue are skeptical. LaMDA is a highly advanced AI chat platform that analyzes trillions of words from the internet, so it's skilled at sounding like a real person.

This isn't the first time one of Google's AIs has fooled people into thinking it's human. In 2018, Google demonstrated its Duplex AI by calling a restaurant to reserve a table. At no point did the employee on the other end seem to doubt they were talking to a person.

Sentience is tricky to define, though most people doubt AI has reached that point yet. However, the important question may not be whether LaMDA is actually sentient but what difference it makes if it can fool people into thinking it is.


The LaMDA situation raises a lot of legal and ethical questions. First, some people may question whether Google was right to place Lemoine on leave for speaking up about it.

According to Section 740 of New York's Labor Law, whistleblower protections defend employees from such consequences if they believe their employer's practices break the law or pose a significant risk to public safety. LaMDA's supposed sentience doesn't quite meet that legal requirement, but should it?

Giving AI rights is a tricky subject. While AI can create things and seem human, you can run into complicated situations if these machines have legal protections. Legal rights operate around rewards and punishments that don't affect AI, which complicates justice.

If a self-driving car hits a pedestrian, is the AI guilty if the law treats it as a human? And if so, a guilty verdict doesn't strictly give the victim justice, since you can't punish AI the same way you would a human.

Another question that arises with LaMDA and similar AI chatbots is their security. If these bots seem convincingly real, people may trust them more and be willing to give them more sensitive information. That opens the door to a slew of privacy and security concerns if this technology falls into the wrong hands.

AI Introduces Complicated Ethical Questions

AIs like LaMDA keep getting more sophisticated and lifelike. As this trend continues, companies and lawmakers should reevaluate how they treat AI and how these decisions could affect justice and security.

As it stands, Google's LaMDA AI may not be sentient, but it's good enough to trick people into thinking it is, which should raise some alarms.