Microsoft, Artificial Intelligence, and The Robot Apocalypse
For those familiar with the classic 1984 film “The Terminator,” some recent news out of oft-criticized Microsoft may be cause for consternation. No, Skynet isn’t ready to destroy all of humanity just yet. But it seems that autonomous security robots are garnering serious attention from big companies, including Microsoft. Is this the beginning of the end?
Or, depending on your perspective, it could just be the latest innovation that will help protect and improve humanity.
Recently, Microsoft showcased a line of new autonomous robots, called K5, as security guards for one of its campuses. And though the robots look nothing like Arnold Schwarzenegger, the machine, developed by Knightscope, is both impressive and intimidating. Standing 5 feet tall, weighing 300 pounds, and equipped with HD cameras, sensors, alarms, artificial intelligence, and WiFi, it’s one incredible piece of technology.
Not Without Virtue
Before considering whether this could be the beginning of a robot apocalypse, it’s only fair to point out the benefits of this device. The K5 can operate for 24 hours on a single charge and requires only about 20 minutes to recharge.
It can read license plates and detect things that are out of place, like an injured employee or a potential burglar climbing a building wall. Its built-in WiFi allows it to contact security headquarters and report any issue it finds, allowing a human officer to come in and finish the job.
In the long run, transitioning to this form of technology would be more efficient and less costly than employing human security guards. And as the technology improves, the K5 will likely become more vigilant, able to spot discrepancies too small for a human to notice. Lower costs would also mean more bots patrolling a given area, reducing the holes in security coverage.
An Impartial Jury
Another significant benefit of using security robots would be the potential to eliminate human bias from decision making. Recent events have called the actions of law enforcement into question and reignited debates about profiling. Unless explicitly programmed with them, bots wouldn’t have these biases; they would perform a thoroughly objective assessment of a given situation rather than jumping to conclusions.
For human law enforcement officers, potentially life-threatening situations increase the odds of a violent resolution. Because robots have no inherent concern for their own safety, they would be far less prone to rash decisions or an overly aggressive instinct for self-protection.
Are We Summoning the Devil?
However, merging A.I. with security does open the door to a number of concerns. Elon Musk, CEO of Tesla, recently warned against the dangers of A.I., comparing it to summoning the devil.
This came just a few months after Stephen Hawking stated that A.I. could lead to the downfall of humankind. While we may laugh at the far-out plots of science fiction movies, it seems a level of fear concerning A.I. is not completely without merit. Machine learning and cognitive computing are bringing computers to the point where they can learn and make decisions on their own. Is it unrealistic to think they could act outside of human control?
“We should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it’s probably that…I’m increasingly inclined to think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish.” -Elon Musk
A Smarter Solution
Information giant Google recently made its machine learning software available via the cloud, meaning A.I. processes can be distributed across a network of millions of servers and accessed by third-party developers. With more devices relying on cloud computing and ever-greater interconnectivity, we will eventually see a very diverse population of smart objects. And as the Internet of Things grows more powerful, those objects could learn to communicate and cooperate with one another seamlessly, all without human oversight.
In the case of the K5, it currently uses only sirens and alarms and carries no weapons. Some robots do, though, such as the robotic sentry guards South Korea employs to patrol its side of the Demilitarized Zone. Before we get serious about adopting weaponized robots, we had better be certain the A.I. software is well tested, predictable, and free of bugs. After all, what happens if there is an incident? Who will be responsible?
Pros and cons aside, the technology surrounding humanoid robots is impressive. And we have already begun seeing some amazing innovations, achieving what was once considered merely science fiction.