Have you ever cursed at Siri, Alexa, or Google for failing to understand your words? You're not alone. We may be guilty of lobbing an insult or two ourselves.

But we should all be careful.

While there isn't an actual person at the other end of your insult, those swear words aren't disappearing into the void either. Our mindless barbs travel over the internet to distant servers. How we treat digital personal assistants can teach them the worst of humanity, and it may even lead some companies to regard us as too unhinged to hire.

As it turns out, what we may think of as harmless banter isn't so harmless after all.

They're Learning From Us

Siri hit the scene in 2011, followed by Google Now a year later. By 2014, Microsoft had Cortana. In that same year, Amazon stuck an AI called Alexa in a plastic tube that people could leave on their countertops. Google and Apple have since done the same. Tech giants are rapidly seeking ways to turn artificial intelligence into consumer-ready products.

These digital personal assistants may seem mature. They may even have moments that delight us. But they're still immature. They're highly literate toddlers, and they're actively learning from the information we provide them. So are the companies that make them.

The team behind Siri is working on Viv (now owned by Samsung), a digital assistant that integrates with third-party services such as Weather Underground and Uber to give users more detailed responses. Onstage demonstrations show Viv responding to the kinds of questions we ask other people, not the carefully phrased commands we tailor for machines to understand. This is the result of learning from the way people actually use language.

Digital personal assistants aren't what most of us imagine when we picture artificial intelligence. Unlike ELIZA, a computer program from the 1960s that simulated natural conversation entirely on one machine (you can still try talking to it online), these AIs do very little "thinking" on the device itself. That work is offloaded over the internet.

There are several steps to the process.

The first step is speech recognition. The device either uploads a direct recording of your voice or transcribes your words into text, then sends the result to remote servers (Apple's, Google's, Amazon's, whoever's). That's where the magic happens. Or rather, software searches a database for a suitable response, then pushes that answer back to your personal assistant.

In short: someone asks you a question, you ask Siri, and Siri asks Apple's servers. The servers give Siri an answer, she relays it back to you, and you're either happy or left figuring out how to handle your disappointment.
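If it helps to picture that round trip, here's a rough sketch in Python. The endpoint, JSON fields, and function name are hypothetical stand-ins, not any vendor's real API; the point is simply that your device relays your words, and the answer comes back from someone else's server.

```python
# A minimal sketch of the request/response loop described above.
# The URL and JSON fields are placeholders, NOT a real assistant API.
import json
import urllib.request

def ask_assistant(transcribed_text: str) -> str:
    """Send locally transcribed speech to a (hypothetical) assistant backend
    and return whatever answer the remote service sends back."""
    payload = json.dumps({"query": transcribed_text}).encode("utf-8")
    request = urllib.request.Request(
        "https://assistant.example.com/v1/answer",  # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        answer = json.load(response)
    # The device itself only relays text; the "thinking" happened server-side.
    return answer.get("reply", "Sorry, I didn't catch that.")

if __name__ == "__main__":
    print(ask_assistant("What's the weather like tomorrow?"))
```

Notice that everything you say ends up in that request payload. Whatever gets logged on the other end is out of your hands.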

These databases don't just contain answers. Some store voice recordings that help computers navigate the many nuances of our different dialects. This information isn't only used to help bots understand us. Facebook has used thousands of natural-language negotiations between pairs of people to teach its Messenger chatbots how to negotiate.

Are We Setting a Good Example?

These are hardly the only AIs learning from the way we speak.

Last year, Microsoft released a chatbot onto Twitter, Kik, and GroupMe with the goal of simulating an American teenage girl. Within a few hours, "Tay" was agreeing with Hitler and espousing all manner of offensive rhetoric. Microsoft pulled Tay offline before the day was up.

While Tay was a failure, that hasn't slowed the proliferation of chatbots. You can find them on social networks like Facebook and Snapchat, along with messaging clients such as HipChat and Slack. Some exist for conversation. Others connect you to services. Some of these bots are safer precisely because they don't try to imitate natural conversation; right now, it's not necessarily good for machines to mimic the way we talk.

We don't exactly set the best example for bots to learn from. Missouri State University professor Sheryl Brahnam conducted a study concluding that 10 to 50 percent of the interactions examined showed humans being abusive or otherwise mean to computer assistants (of which digital personal assistants are only one type). That's a disturbingly large number. Some parents feel guilty for uttering a single bad word in front of their children, which is a far cry from half of our interactions being negative.

Those Facebook Messenger chatbots I mentioned earlier? Not only did they learn how to negotiate by studying natural language, they also learned how to lie.

Your Future Employer Could Be Watching

We could reach a point in the future where how we communicate with bots costs us our jobs. According to Harvard Business Review, this shift could happen when we stop viewing a mistreated bot the way we view a broken mobile phone and start viewing it more like a kicked kitten. Being disrespectful to a company bot could potentially get you fired.

Image Credit: Chatbot Concept via Shutterstock

This doesn't mean employees or employers will start to view bots as adorable living creatures. However, bots could become lifelike enough that mistreating them seems unprofessional and counterproductive. If you're a manager, abusing AI could get you called before HR for poor leadership.

Does this sound too hypothetical? Consider what we already know is happening. Everything we type or say to these assistants is sent over the internet, and we don't really know what happens to it afterward. Much of that information is logged. Even if that information isn't always tied to your account, it's still stored.

Google is technically transparent about this, but that doesn't make it obvious. This kind of data collection stretches the definition of getting someone's consent before recording them.

That collected data may seem harmless now, but there's nothing to stop the tech giants from eventually assembling detailed profiles on each of us that future employers, banks, and other entities could check before engaging with us, much like a credit report.

Law Enforcement Is Watching Too

In a case involving an Arkansas man accused of killing his friend (a former police officer), the prosecutor sought to use recordings from an Amazon Echo as evidence. Amazon denied the request, but that's only partly comforting. What's unnerving is that Amazon had the data stored in the first place. The suspect has since granted access to that data.

I mentioned that companies could check our data records before interacting with us. In the case of law enforcement, that's already happening: investigators have their eyes on this information now. Does the NSA really need to collect data itself if it can rely on the private sector to gather that information for it? Should we trust anyone with that much data on each of us?

The data doesn't merely include what we've searched for or which commands we issued, but how we did it. We aren't just giving away insight into our interests and actions -- we're giving others a look into how we behave. What goes on between you and Alexa doesn't stay between you and Alexa.

What We Say Matters

While Tay had to go into timeout, Microsoft found overwhelming success with a different bot. Its name is Xiaoice. In China and Japan, that bot has interacted with over 40 million people. That Xiaoice hasn't self-destructed is partly due to a difference in culture: in China, certain types of speech aren't allowed online.

Now the bots have started to do the policing themselves. Social networks are starting to use bots to curtail hate speech. We may not be talking directly to these bots, but they're still studying our speech to learn what qualifies as abuse.
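To make "studying our speech" concrete, here's a toy sketch of how such a classifier is typically trained. The four labeled messages are invented for illustration; real moderation systems learn from millions of human-labeled examples, but the mechanic is the same -- our words become the training data.

```python
# A toy illustration of learning "what qualifies as abuse" from labeled text.
# Assumes scikit-learn is installed; the tiny dataset is invented for demo only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled examples: 1 = abusive, 0 = fine.
messages = [
    "you are completely useless",
    "shut up, stupid machine",
    "thanks, that was helpful",
    "could you set a timer for ten minutes",
]
labels = [1, 1, 0, 0]

# Turn each message into word counts, then fit a simple classifier on them.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)
classifier = LogisticRegression()
classifier.fit(features, labels)

# Score a message the bot has never seen before.
new_message = ["you are useless and stupid"]
score = classifier.predict_proba(vectorizer.transform(new_message))[0][1]
print(f"Probability this is abusive: {score:.2f}")
```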

No matter how you approach the issue, what we say and how we say it matters. The bots, and the companies that make them, are listening. Someday, our employers may join them. And if computers were to replace us in the future, wouldn't we want them to be nice?

How do you react when digital personal assistants fail? Do you treat them as though they have feelings? Do you care if someone is keeping tabs on what you say? What do you see as the responsible way forward? A human will keep an eye out for your comments below.

Image Credit: deagreez1/Depositphotos