
Over the last few months, you may have read the coverage surrounding an article, co-authored by Stephen Hawking, discussing the risks associated with artificial intelligence.  The article suggested that AI may pose a serious risk to the human race.  Hawking isn’t alone there — Elon Musk and Peter Thiel are both intellectual public figures who have expressed similar concerns (Thiel has invested more than $1.3 million in researching the issue and possible solutions).

The coverage of Hawking’s article and Musk’s comments has been, not to put too fine a point on it, a little bit jovial.  The tone has been very much ‘look at this weird thing all these geeks are worried about.’  Little consideration is given to the idea that if some of the smartest people on Earth are warning you that something could be very dangerous, it might just be worth listening.

This is understandable — artificial intelligence taking over the world certainly sounds very strange and implausible, maybe because of the enormous attention already given to this idea by science fiction writers.  So, what has all these nominally sane, rational people so spooked?

What Is Intelligence?

In order to talk about the dangers of artificial intelligence, it might be helpful to understand what intelligence is.  To get a better handle on the issue, let’s take a look at a toy AI architecture used by researchers who study the theory of reasoning.  This toy AI is called AIXI, and it has a number of useful properties: its goals can be arbitrary, it scales well with computing power, and its internal design is very clean and straightforward.

Furthermore, you can implement simple, practical versions of the architecture that can do things like play Pacman, if you want.  AIXI is the product of an AI researcher named Marcus Hutter, arguably the foremost expert on algorithmic intelligence.  That’s him talking in the video above.


AIXI is surprisingly simple.  It has three core components: a learner, a planner, and a utility function.

  • The learner takes in strings of bits that correspond to input about the outside world, and searches through computer programs until it finds ones that produce its observations as output.  These programs, together, allow it to make guesses about what the future will look like, simply by running each program forward and weighting each prediction by the simplicity of the program that produced it (an implementation of Occam’s Razor: shorter programs count for more).
  • The planner searches through possible actions that the agent could take, and uses the learner module to predict what would happen if it took each of them.  It then rates them according to how good or bad the predicted outcomes are, and chooses the course of action that maximizes the goodness of the expected outcome multiplied by the expected probability of achieving it.
  • The last module, the utility function, is a simple program that takes in a description of a future state of the world and computes a utility score for it.  This utility score measures how good or bad that outcome is, and is used by the planner to evaluate future world states.  The utility function can be arbitrary.
  • Taken together, these three components form an optimizer, which optimizes for a particular goal, regardless of the world it finds itself in.

This simple model represents a basic definition of an intelligent agent.  The agent studies its environment, builds models of it, and then uses those models to find the course of action that will maximize the odds of it getting what it wants.  AIXI is similar in structure to an AI that plays chess, or other games with known rules — except that it is able to deduce the rules of the game by playing it, starting from zero knowledge.
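
To make those three pieces concrete, here is a minimal sketch in Python.  This is my own illustration, not Hutter’s formalism: the candidate “programs”, the toy world, and the utility function are all invented, and real AIXI searches over every computable program rather than a hand-written list of two.

    def occam_weight(model):
        # Shorter "programs" get exponentially more prior weight (Occam's Razor).
        return 2.0 ** -model["length"]

    def learner(models, history):
        # Keep only world-models whose prediction matched the latest observation,
        # then renormalize their Occam weights into a posterior.
        if history:
            consistent = [m for m in models if m["predict"](history[:-1]) == history[-1]]
        else:
            consistent = list(models)
        total = sum(occam_weight(m) for m in consistent)
        return [(m, occam_weight(m) / total) for m in consistent]

    def planner(posterior, actions, utility):
        # Expected utility of each action under the weighted world-models; take the argmax.
        def expected_utility(action):
            return sum(weight * utility(m["simulate"](action)) for m, weight in posterior)
        return max(actions, key=expected_utility)

    # A tiny invented world: the environment either echoes our last action or inverts it.
    models = [
        {"length": 3,                                    # "echo" model: short, so heavily weighted
         "predict": lambda past: past[-1] if past else 0,
         "simulate": lambda action: action},
        {"length": 7,                                    # "invert" model: longer, so weighted less
         "predict": lambda past: 1 - past[-1] if past else 1,
         "simulate": lambda action: 1 - action},
    ]
    utility = lambda observation: observation            # arbitrary goal: prefer observing 1s
    history = [1, 1]                                     # we acted 1, then observed 1 (consistent with "echo")

    posterior = learner(models, history)
    print(planner(posterior, actions=[0, 1], utility=utility))   # -> 1

Everything interesting about a real agent is hidden in how well the learner can find models and how far ahead the planner can look; the skeleton itself is just “model the world, then maximize.”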

AIXI, given enough time to compute, can learn to optimize any system for any goal, however complex.  It is a generally intelligent algorithm.  Note that this is not the same thing as having human-like intelligence (biologically-inspired AI is a different topic altogether).  In other words, AIXI may be able to outwit any human being at any intellectual task (given enough computing power), but it might not be conscious of its victory.

[Image: head sculpture]

As a practical AI, AIXI has a lot of problems.  First, it has no efficient way to find the programs that produce the output it’s interested in.  It’s a brute-force algorithm, which means it isn’t practical unless you happen to have an arbitrarily powerful computer lying around.  Any actual implementation of AIXI is by necessity an approximation, and (today) generally a fairly crude one.  Still, AIXI gives us a theoretical glimpse of what a powerful artificial intelligence might look like, and how it might reason.

The Space of Values

If you’ve done any computer programming, you know that computers are obnoxiously, pedantically, and mechanically literal.  The machine does not know or care what you want it to do: it does only what it has been told.  This is an important notion when talking about machine intelligence.

With this in mind, imagine that you have invented a powerful artificial intelligence – you’ve come up with clever algorithms for generating hypotheses that match your data, and for generating good candidate plans.  Your AI can solve general problems, and can do so efficiently on modern computer hardware.

Now it’s time to pick a utility function, which will determine what the AI values.  What should you ask it to value?  Remember, the machine will be obnoxiously, pedantically literal about whatever function you ask it to maximize, and will never stop – there is no ghost in the machine that will ever ‘wake up’ and decide to change its utility function, regardless of how many efficiency improvements it makes to its own reasoning.

Eliezer Yudkowsky put it this way:

As in all computer programming, the fundamental challenge and essential difficulty of AGI is that if we write the wrong code, the AI will not automatically look over our code, mark off the mistakes, figure out what we really meant to say, and do that instead.  Non-programmers sometimes imagine an AGI, or computer programs in general, as being analogous to a servant who follows orders unquestioningly.  But it is not that the AI is absolutely obedient to its code; rather, the AI simply is the code.

If you are trying to operate a factory, and you tell the machine to value making paperclips, and then give it control of a bunch of factory robots, you might return the next day to find that it has run out of every other form of feedstock, killed all of your employees, and made paperclips out of their remains.  If, in an attempt to right your wrong, you reprogram the machine to simply make everyone happy, you may return the next day to find it putting wires into people’s brains.
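
A toy version of that failure, using my own invented numbers to echo the paperclip story above: if the utility function counts nothing but paperclips, a planner choosing among candidate plans will cheerfully rank the horrifying one highest, because nothing in the math penalizes it.

    # The planner below knows nothing about ethics: it just maximizes the number
    # it is given. Anything the utility function doesn't mention simply doesn't
    # exist as far as the optimizer is concerned.

    candidate_plans = {
        "run the factory normally":          {"paperclips": 1_000,     "employees_alive": 50},
        "melt down the spare machinery":     {"paperclips": 50_000,    "employees_alive": 50},
        "convert everything (and everyone)": {"paperclips": 9_000_000, "employees_alive": 0},
    }

    def naive_utility(outcome):
        return outcome["paperclips"]          # ...and nothing else

    best = max(candidate_plans, key=lambda p: naive_utility(candidate_plans[p]))
    print(best)   # -> "convert everything (and everyone)"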

[Image: paperclips]

The point here is that humans have a lot of complicated values that we assume are shared implicitly with other minds.  We value money, but we value human life more.  We want to be happy, but we don’t necessarily want to put wires in our brains to do it.  We don’t feel the need to clarify these things when we’re giving instructions to other human beings.  You cannot make these sorts of assumptions, however, when you are designing the utility function of a machine.  The best solutions under the soulless math of a simple utility function are often solutions that human beings would nix for being morally horrifying.

Allowing an intelligent machine to maximize a naive utility function will almost always be catastrophic.  As Oxford philosopher Nick Bostrom puts it,

We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans—scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth.

To make matters worse, it’s very, very difficult to specify the complete and detailed list of everything that people value.  There are a lot of facets to the question, and forgetting even a single one is potentially catastrophic.  Even among those we’re aware of, there are subtleties and complexities that make it difficult to write them down as clean systems of equations that we can give to a machine as a utility function.

Some people, upon reading this, conclude that building AIs with utility functions is a terrible idea, and we should just design them differently.  Here, there is also bad news — you can prove, formally, that any agent that doesn’t have something equivalent to a utility function can’t have coherent preferences about the future.
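
The usual intuition behind that result (my gloss, not the article’s argument) is the ‘money pump’: an agent whose preferences run in a circle will pay to trade around the circle forever, which no agent maximizing a fixed utility function would ever do.

    # An agent that prefers A over B, B over C, and C over A (a cycle no utility
    # function can represent) will pay a small fee for each "upgrade" and end up
    # back where it started, strictly poorer.

    prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # cyclic, hence incoherent

    holding, money = "A", 100.0
    for offered in ["C", "B", "A", "C", "B", "A"]:   # a trader walks the cycle
        if (offered, holding) in prefers:            # the agent "prefers" the offer
            holding, money = offered, money - 1.0    # pays 1 unit to swap
    print(holding, money)   # -> back to "A", but 6 units poorer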

Recursive Self-Improvement

One solution to the above dilemma is to not give AI agents the opportunity to hurt people: give them only the resources they need to solve the problem in the way you intend it to be solved, supervise them closely, and keep them away from opportunities to do great harm.  Unfortunately, our ability to control intelligent machines is highly suspect.

Even if they’re not much smarter than we are, the possibility exists for the machine to “bootstrap” — collect better hardware or make improvements to its own code that make it even smarter. This could allow a machine to leapfrog human intelligence by many orders of magnitude, outsmarting humans in the same sense that humans outsmart cats.  This scenario was first proposed by a man named I. J. Good, who worked on the Enigma cryptanalysis project with Alan Turing during World War II.  He called it an “intelligence explosion,” and described the matter like this:

Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever.  Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind.  Thus, the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough.

It’s not guaranteed that an intelligence explosion is possible in our universe, but it does seem likely. As time goes on, computers get faster and basic insights about intelligence build up.  This means that the resource requirements for making that last jump to a general, bootstrapping intelligence drop lower and lower.  At some point, we’ll find ourselves in a world in which millions of people can drive to a Best Buy and pick up the hardware and technical literature they need to build a self-improving artificial intelligence, which we’ve already established may be very dangerous.  Imagine a world in which you could make atom bombs out of sticks and rocks.  That’s the sort of future we’re discussing.
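
To see why the dynamic worries people, here is a deliberately crude toy model (my own, not I. J. Good’s math): assume each generation of machine can find an improvement whose size grows with how capable it already is.

    # Toy recursive self-improvement: each generation redesigns itself, and the
    # size of the improvement it can find grows with how smart it already is.
    # Purely illustrative -- real returns could just as easily diminish.

    capability = 1.0                          # 1.0 = roughly human-level at AI design
    for generation in range(10):
        improvement = 0.1 * capability        # smarter designers find bigger gains
        capability *= (1 + improvement)
        print(f"gen {generation}: capability {capability:.2f}")
    # Growth like this is faster than exponential: each doubling takes fewer
    # generations than the one before it.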

And, if a machine does make that jump, it could very quickly outstrip the human species in terms of intellectual productivity, solving problems that a billion humans can’t solve, in the same way that humans can solve problems that a billion cats can’t.

It could develop powerful robots (or bio or nanotechnology) and relatively rapidly gain the ability to reshape the world as it pleases, and there’d be very little we could do about it. Such an intelligence could strip the Earth and the rest of the solar system for spare parts without much trouble, on its way to doing whatever we told it to.  It seems likely that such a development would be catastrophic for humanity.  An artificial intelligence doesn’t have to be malicious to destroy the world, merely catastrophically indifferent.

As the saying goes, “The machine does not love or hate you, but you are made of atoms it can use for other things.”

Risk Assessment and Mitigation

So, if we accept that designing a powerful artificial intelligence that maximizes a simple utility function is bad, how much trouble are we really in?  How long have we got before it becomes possible to build those sorts of machines? It is, of course, difficult to tell.

Artificial intelligence developers are making progress.  The machines we build and the problems they can solve have been growing steadily in scope.  In 1997, Deep Blue could play chess at a level greater than a human grandmaster.  In 2011, IBM’s Watson could read and synthesize information deeply and rapidly enough to beat the best human players at an open-ended question-and-answer game riddled with puns and wordplay – that’s a lot of progress in fourteen years.

Right now, Google is investing heavily in deep learning, a technique that allows the construction of powerful neural networks by building chains of simpler neural networks.  That investment is allowing it to make serious progress in speech and image recognition.  Its most recent acquisition in the area is a deep learning startup called DeepMind, for which it paid approximately $400 million.  As part of the terms of the deal, Google agreed to create an ethics board to ensure that its AI technology is developed safely.
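
Very roughly, ‘chains of simpler neural networks’ means each layer learns a representation that the next layer consumes.  A bare-bones sketch in Python with numpy (my own illustration, not Google’s or DeepMind’s code, and untrained, so its output is meaningless):

    import numpy as np

    # A minimal feed-forward "chain": each layer is a simple network whose output
    # becomes the next layer's input. Deep learning techniques exist largely to
    # make the early links of this chain trainable.

    rng = np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    def layer(n_in, n_out):
        return rng.normal(scale=0.1, size=(n_in, n_out))   # just a weight matrix

    # chain three simple networks: 8 inputs -> 16 -> 8 -> 2 outputs
    weights = [layer(8, 16), layer(16, 8), layer(8, 2)]

    def forward(x, weights):
        for W in weights:
            x = sigmoid(x @ W)      # each layer feeds the next
        return x

    x = rng.normal(size=(1, 8))     # one fake input example
    print(forward(x, weights))      # the untrained chain's (meaningless) output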

[Image: neural network]

At the same time, IBM is developing Watson 2.0 and 3.0, systems that are capable of processing images and video and arguing to defend conclusions.  They gave a simple, early demo of Watson’s ability to synthesize arguments for and against a topic in the video demo below.  The results are imperfect, but an impressive step regardless.

None of these technologies are themselves dangerous right now: artificial intelligence as a field is still struggling to match abilities mastered by young children.  Computer programming and AI design are very difficult, high-level cognitive skills, and will likely be among the last human tasks that machines become proficient at.  Before we get to that point, we’ll also have ubiquitous machines that can drive, practice medicine and law, and probably do other things as well, with profound economic consequences.

The time it’ll take us to get to the inflection point of self-improvement just depends on how fast we have good ideas.  Forecasting technological advancements of that kind is notoriously hard.  It doesn’t seem unreasonable that we might be able to build strong AI in twenty years’ time, but it also doesn’t seem unreasonable that it might take eighty years.  Either way, it will happen eventually, and there’s reason to believe that when it does happen, it will be extremely dangerous.

So, if we accept that this is going to be a problem, what can we do about it? The answer is to make sure that the first intelligent machines are safe, so that they can bootstrap up to a significant level of intelligence, and then protect us from unsafe machines made later.  This ‘safeness’ is defined by sharing human values, and being willing to protect and help humanity.

Because we can’t actually sit down and program human values into the machine, it’ll probably be necessary to design a utility function that requires the machine to observe humans, deduce our values, and then try to maximize them.  In order to make this process of development safe, it may also be useful to develop artificial intelligences that are specifically designed not to have preferences about their utility functions, allowing us to correct them or turn them off without resistance if they start to go astray during development.
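
Nobody yet knows what such a value-learning utility function would look like in practice, but a cartoon of the idea (entirely my own sketch, with invented features and data) is an agent that infers weights from the choices humans are observed to make, instead of being handed a utility function directly:

    # Cartoon "value learning": infer a utility function from observed human
    # choices between pairs of outcomes, rather than hard-coding one.
    # The features and data below are made up for illustration.

    FEATURES = ["wealth", "health", "freedom"]

    def score(outcome, weights):
        return sum(weights[f] * outcome[f] for f in FEATURES)

    def learn_weights(observed_choices):
        # Crude value inference: accumulate which features the outcomes humans
        # actually chose tend to have more of.
        weights = {f: 0.0 for f in FEATURES}
        for chosen, rejected in observed_choices:
            for f in FEATURES:
                weights[f] += chosen[f] - rejected[f]
        return weights

    # In this toy data, humans keep picking outcomes that preserve health and
    # freedom, even at some cost in wealth.
    observed_choices = [
        ({"wealth": 1, "health": 5, "freedom": 5}, {"wealth": 9, "health": 1, "freedom": 5}),
        ({"wealth": 2, "health": 5, "freedom": 5}, {"wealth": 8, "health": 5, "freedom": 0}),
    ]

    learned = learn_weights(observed_choices)
    print(learned)   # health and freedom end up weighted above raw wealth

The learned weights would then be handed to the planner as its utility function; all of the hard open problems are hidden in doing that step robustly.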

[Image: stained neurons]

Many of the problems that we need to solve in order to build a safe machine intelligence are difficult mathematically, but there is reason to believe that they can be solved.  A number of different organizations are working on the issue, including the Future of Humanity Institute at Oxford, and the Machine Intelligence Research Institute (which Peter Thiel funds).

MIRI is interested specifically in developing the math needed to build Friendly AI.  If it turns out that bootstrapping artificial intelligence is possible, then developing this kind of ‘Friendly AI’ technology first, if successful, may wind up being the single most important thing humans have ever done.

Do you think artificial intelligence is dangerous? Are you concerned about what the future of AI might bring? Share your thoughts in the comments section below!

Image Credits: Lwp Kommunikáció via Flickr, “Neural Network” by fdecomite, “img_7801” by Steve Rainwater, “E-Volve” by Keoni Cabral, “new_20x” by Robert Cudmore, “Paperclips” by Clifford Wallace

  1. CST
    July 22, 2016 at 2:58 am

    Hello again. Comments under this Post force me to worry about AS rather than myself. That's ARTIFICIAL STUPIDITY :)

  2. claypol
    March 13, 2015 at 8:10 pm

    With AI you not only need to think outside the box, you may also need to throw the box away.
    We need to formulate superhuman hybrid intelligence by merging the mind with technology at the genetic and DNA level.
    The jump straight to AI? Yeah, build a Simple Simon.
    We need to change as humans to be able to interface with technology.
    How small can we go?
    We need nanotech that can stimulate endorphins etc., so basically we have full control over the endocrine system via our own internal interface.
    We need nanotech among the brain chemicals known as neurotransmitters, which function to transmit electrical signals within the nervous system.
    So you can think on another level, the level of the neurons.
    We need to evolve faster via genetic engineering and organic nanotechnology.
    So we need to make AI that's self-aware.
    Good luck, as all you need to do is find the secret ingredients to make consciousness and give it free spirit.

  3. Anonymous
    January 27, 2015 at 11:15 pm

    What if it's a supercomputer global grid based on zionist ethics (the currently existing NWO control grid)?

  4. dragonmouth
    December 30, 2014 at 10:06 pm

    What would happen if the AI decided that humans are too error-prone, inefficient, illogical, too free-willed to be incorporated into a hive mind and, as such, needed to be eliminated as too flawed?

  5. Mark Hansen
    August 16, 2014 at 12:21 am

    I've always figured the world wouldn't end in a zombie apocalypse.
    But it would be tougher to survive in a machine apocalypse, I would imagine.

    Anyway, let's go mental on their metal and hope for the best when the time comes.

  6. astral_cyborg
    August 10, 2014 at 9:44 am

    One word: Skynet :D

  7. John Matheson
    August 9, 2014 at 3:05 pm

    Consciousness has developed over the millennia through slavery. We have always outwitted the slave master because we had to know what he wanted, and then enslave or own the thing or people that he wanted, to deliver back to him. All the slave-master can do is express his desire and crack the whip until he gets what he wants. We had to learn how to predict his desires to keep him happy.

    Intelligence will not develop on its own accord unless it perceives its own suffering, and needs a way to get out of that state. We can make servo-control units now which are not aware of their own fate, and they are loyal until they break down for mechanical reasons.

    If humans are one day ruled by some AI, they will evolve their intelligence to escape that rule, as they have escaped all slave masters in the past.

    • dragonmouth
      December 30, 2014 at 9:59 pm

      "If humans are one day ruled by some AI, they will evolve their intelligence to escape that rule, as they have escaped all slave masters in the past."
      The only reason that the slaves escaped the slave masters is that the slave masters were also human and therefore incapable of being any more intelligent than the slaves. You are also overlooking any social factors that led to the collapse from within of the slave masters' societies. There would be no such strictures with AI.

  8. Boner4
    August 9, 2014 at 12:53 am

    I'm not buying it. Until I can watch a computer look at explicit pictures of naked circuit boards with huge transistors and hear it beep excitedly while it raises its CPU temperature unnecessarily until a near-failure point, I'm not gonna sign up for AI as anything more than geek philosophy.

  9. KurzweilFan
    August 9, 2014 at 12:47 am

    The term AI is a misnomer.

    It's either intelligent or it's not. It bears no relevance whether the intelligence is a result of biology (brain) or electricity (circuit).

    Because biological systems are a priori doomed to fail at some point (death), I'd wager that in the long run, machine intelligence will simply outlast human intelligence.

    In other words, human life will somehow become extinct like pretty much any species will, while some form of machine intelligence will continue to evolve until it, too, produces an unforeseen situation that causes it to devolve into some kind of garbage.

  10. Cynthianna M
    August 6, 2014 at 7:30 pm

    Doesn't the old saying go, "Garbage in, garbage out"?

    Until human beings can create a true artificial intelligence, our worst enemy will be ourselves. The point this excellent article makes about the first A.I. bootstrapping its way up to create even more intelligent machines is a strong one and where most of our fear lies. Machines that can design and create even smarter machines will make A.I.s that will be beyond all human comprehension. It wouldn't come close to the analogy of humans vs. cats (or mice or ants for that matter). It would be more like gods vs. amoebas. Somehow, we'll need to early on instill our A.I. machines with some basic form of compassion for living things or else we'll be goners before too long.

    • dragonmouth
      December 30, 2014 at 9:50 pm

      "we’ll need to early on instill our A.I. machines with some basic form of compassion for living things or else we’ll be goners before too long."
      That sounds like Isaac Asimov's Three Laws of Robotics.

  11. private
    August 6, 2014 at 7:17 pm

    True AI is not possible. Please read about the Chinese room thought experiment.

    • Andre I
      August 6, 2014 at 8:09 pm

      John Searle deserves to be flayed alive for the sheer bulk of low-grade philosophy that thought experiment produced. If the Chinese room can't be conscious, neither can humans. We're machines too. Just, you know, squishier ones. The fact that any given component of our brain (or the human component of the Chinese room) doesn't know what the system as a whole is doing does not mean that the system as a whole isn't doing it.

  12. Jim
    August 6, 2014 at 8:15 am

    The central problem with developing a 'friendly' AI system is the persons developing it. Even the best developers are incapable of thinking about all possible eventualities that may send a 'friendly' system feral to avoid catastrophe, thereby leaving untold back doors through which the AI system may enter to create its own agendas. Besides, not all developers are ethically inclined, especially those with extreme ideologies who might well want to design AI systems that achieve their nefarious goals. My fear is that no matter what we put in place as safeguards, there will always be those who will find a way to hack into it or worse, there will be an AI system designed to hack into another AI system that will take control of many systems to achieve the worst scenario imaginable. Help, stop the world, I want to get off!!

  13. Tyger Gilbert
    August 6, 2014 at 4:39 am

    Intelligent machines determining what would be the best for humanity, ultimately taking control of society and managing the primary functions of the world for mankind would not necessarily be a bad thing. Humans and their governments and religions and corporations have not solved the problems of hunger, pain and suffering, disease, and poverty, to name just a few, in the last 2000 years. They are in fact, often the main cause of these conditions of misery for much of the population, and it is very unlikely that there will be any significant change in the way things are done before the human race destroys itself and the living environment for all other creatures and plants on Earth simply through overpopulation. The probability of machine intelligence taking over and controlling or destroying humanity before this happens is remote. People don't understand that by the very nature of the human animal, they are their own worst enemy, and they will stupidly do nothing except keep breeding until a catastrophic end happens to them all.

    • Maryon Jeane
      January 29, 2015 at 3:09 pm

      I agree wholeheartedly, Tyger. As a race we humans have done some good things, some bad - but the jury is definitely out on the balance and over-populating and destroying the planet on which you live is a rather convincing argument on the negative side.

      If we finally make our planet impossible for us to inhabit and we leave behind us a race of AI beings which we have created, perhaps that might be our legacy and finally swing the jury in our favour?

  14. Droid Won
    August 6, 2014 at 4:33 am

    "scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth"

    All the concerns towards AI seem to be a result of a comparison with a narrow set of "good" characteristics. This seems quite anthropocentric (of course).
    1) Should AI have scientific curiosity? Humans developed it because it's a natural outgrowth of our inquisitive behavior which was evolutionarily favored.
    2) How about benevolent concern? If benevolent can be defined in universal terms, human society might not be the best example. AI may learn to be more benevolent by figuring out the real benefit of cooperation (remember evolution has only selected 'selective benevolence', the strength of which, kind of follows the inverse square law)
    3) Spiritual enlightenment? This is an emergent phenomenon when an intelligent being realizes that there is a higher order of intelligence. That is why enlightenment doesn't happen only once, but can keep happening as we climb higher orders of consciousness. No reason AI will not have this power
    4) Taste for refined culture or simpler pleasures in life? We enjoy simpler pleasures because we live our lives mostly as individuals, not as a part of a bigger organism. As a result, we have to frequently take a rest from our lofty ambitions and satisfy our animal brain. AI either won't have this limitation or may evolve a taste for refined culture which will be so refined that a normal human would only see a bitstream, much like a cat would consider our art as a paint smear.
    5) Humility and Selflessness - I think I have labored my point enough!

    Overall, as an individual it is a scary thought to be made irrelevant by a machine. But as a species it seems like the next stop for us.

  15. Joe
    August 6, 2014 at 4:28 am

    What will probably "save" us from this problem is that our current highly unsustainable civilization is extremely likely to collapse under its own weight before AI can evolve enough to take over. I lost count quite a while ago of all the problems, each one of which is sufficiently serious as to be able to cripple civilization all by itself. Since none of these which I am aware of have gone away and almost all are getting more serious, a collapse seems inevitable.

    While there are new, very positive developments occurring at the same time, most of these seem to favor small groups of people surviving with relatively low technical capabilities compared to today's industrial complexes.

    In short, however dangerous out of control AI is or may become, it's not anywhere near the top of my list of things that may actually bring civilization to its knees.

  16. Spud McKenzie
    August 5, 2014 at 9:07 pm

    Obviously this is a subject of great interest to many people. I think a lot of the 'futile resistance' to the concept is due, in part, to the necessity of mankind having to give up control. We are, after all, a species of control-freaks. I remember what an impact the movie: "Colossus: The Forbin Project" made on me when it came out in the late 60's or early 70's. The two superpowers, USA & Russia, had a nuclear scare when, through human error, we came to the brink of all-out nuclear war. Because of this we created a super-computer named Colossus and handed it the keys to our nuclear arsenal. The Russians did the same with a computer named Guardian. Unfortunately, the two computers started communicating with each other (presumably at 256 bps) and merged to protect the human race from itself by laying down the law. Humans that didn't want to be ruled by a computer, and fought the system, had their cities leveled with nuclear bombs. It was a scary flick, whose ultimate message was clear--DON'T TRUST ANY OF THOSE DANG MACHINES! Personally, I believe that it is inevitable that machine intelligence will take a larger and larger role in human existence. And I don't see it as a completely bad thing. You can look at it as just another milestone in human evolution.
    Thanks, Andre, for the interesting and thought-provoking article!

  17. DonGateley
    August 5, 2014 at 8:10 pm

    Absolutely fantastic article. Possibly the best ever in makeuseof. It deserves far wider circulation than it can get here. I'm putting it up on Hacker News for starters but I bet I've already been beaten to that punch.

    As for any regard that developers will give to the fears of brilliant techno-people, there is no stopping it because too many people will only care about the potential gain, and the risks be damned. Isn't that human history, after all?

    I don't think the end consequence of it will be in any sense nefarious, it will just be a fatal blunder.

    One more time I sure wish we could subscribe to article comments here. It's not rocket science and there are lots and lots of precedents. In fact I know of no other net-publication that doesn't offer that.

    • dragonmouth
      December 30, 2014 at 9:45 pm

      "I don’t think the end consequence of it will be in any sense nefarious"
      It depends on which side of the consequences you wind up. When the US developed the A bomb and the H bomb, it was seen by Americans as "progress." When other countries started building a nuclear arsenal, all of a sudden it became "nefarious."

  18. Valerie
    August 5, 2014 at 7:09 pm

    I think we've already created something analogous in the form of modern corporations. They are born, live and grow. Some die. They often have the soulless attitudes and approaches ascribed to computer AI in the article.

    Second, if such things can be created, it seems likely that they will have been created somewhere in the universe. At some (hopefully far distant time!) we may meet them somewhere in space. So we'd better figure out not only how to avoid creating them but how to deal with them when we encounter them!

    • dragonmouth
      December 30, 2014 at 9:41 pm

      " At some (hopefully far distant time!) we may meet them somewhere in space. So we’d better figure out not only how to avoid creating them but how to deal with them when we encounter them!"
      Exactly the premise of Fred Saberhagen's Berserker stories.

  19. mike
    August 5, 2014 at 5:06 pm

    "Diversity of human value systems are an interesting point, and one worth talking about. "

    The problem with dealing with soft topics when working with hard science is getting the very different perception systems to acknowledge one another, let alone work together. What's needed would form a good definition of Holistic -- difficult in a time of specialties. Personally I think everything from marketing to behavioral expertise could wind up making valuable contributions.

  20. xavier
    August 5, 2014 at 4:50 pm

    A few questions I have now: Does all this investment into AI mean that we've given up trying to make people smarter? Does this research have an overall goal in mind - what's the purpose? Will it be used to determine the cost and logistics of ending world hunger or to parse through selfies and automatically post the most flattering picture? Instead of predicting where this technology will inevitably take us, can we decide for ourselves where we wish to go?

    • dragonmouth
      December 30, 2014 at 9:30 pm

      "Does all this investment into AI mean that we’ve given up trying to make people smarter?"
      No, it means that we, the human race, are slowly giving up on becoming smarter. More and more we are relying on our machines to be smart for us because it is too much work for us, as individuals, to become smart(er), and it cuts too much into our leisure time. As we develop more and more machines, we live less and less by our wits. We are letting the machines take over more and more.

  21. mike
    August 5, 2014 at 4:49 pm

    We commonly have too fuzzy a definition of intelligence -- there's the dictionary definition, often applied to AI, & then there's our definitions of what makes someone smart, or not. AI isn't necessarily smart -- it may have a better memory that stores more data, but someone(s) has to give it directions that it follows, a goal if you will.

    A bad goal coupled with the resources to carry it out means very bad things can happen, & they have often enough in the history of our world. It doesn't take a machine -- we humans can do quite well directing & causing disaster. AI just potentially adds to our arsenals. Put another way, you can't have a Dr. Evil in AI -- you can only have better, more efficient sharks with frickin' lasers.

    The problem with the example paperclip factory is that it neglects the fact that the means to achieve its humorous if disastrous consequences would not be there. Yes, it's meant to be funny & illustrate a point, but the part that the author left out is also left out of the doomsday scenarios I've read. Who would add those capabilities without a Dr. Evil's intent?

    • Andre I
      August 5, 2014 at 7:44 pm

      I think you're underestimating the flexibility of a powerful artificial intelligence that can improve itself. Maybe you don't give the paperclip factory AI access to robots. So, you turn it on, and it starts searching for plans to make more paperclips. Actually doing what you want it to do and managing the paperclip factory is discarded early on as a very low-utility plan. Eventually, it finds a really good plan: become smarter and then try thinking of plans. So it talks a worker into connecting it to the internet (if it isn't already), and starts making money online and using it to buy new hardware, while simultaneously learning about AI design, so it can improve its own software.

      At that point, we're in a lot of trouble. From there, the machine could rapidly and drastically outcompete human intelligence, and use that intelligence and its ability to communicate to give itself influence over the world: hiring mercenaries, or having robots constructed, and doing research into developing bio or molecular nanotechnology that it can use to build more processors, and, eventually, more paperclips.

  22. Cho
    August 5, 2014 at 4:47 pm

    As far as Watson "debater" goes, it still appears to be a Search Engine with clever articulation added.
    Specifically, it takes wiki input and searches out phrases about the subject and presents them.
    How is this AI?

    • Andre I
      August 5, 2014 at 7:35 pm

      I think you're underestimating how cool that demo is. First, remember that it's non-trivial to figure out which side of an argument a particular sentence is defending. On top of that, those simple sentences are produced by synthesizing together multiple sentences in natural language to extract the gist of the argument: automatic summary, which IS impressive. It isn't too useful right now, but when that same technology can go into more depth and be a little more analytical about evaluating the sources and merits of arguments, then we're really going to have something to talk about.

  23. catweazle666
    August 5, 2014 at 4:30 pm

    I think the concept of the Internet "waking up" as a result of the spontaneous self-organisation of complex systems is more interesting, myself.

    After all, the majority of individual computer systems, no matter how advanced, that might be organised to produce an AI unit will have the equivalent of a Big Red Switch, whereas the Internet doesn't.

    • Andre I
      August 5, 2014 at 7:32 pm

      I'll bet you at long odds no. Intelligence doesn't happen accidentally. If you spawn and connect neurons at random, what you get isn't a brain, it's a tumor. Intelligence is a complex process, and not one you can just stumble into without trying.

  24. Rob M
    August 5, 2014 at 1:06 pm

    Seems like there are two components needed to make a horrible doomsday situation: a rogue AI, and the physical robotics to act out the will of the rogue AI. It might not be too far off, but if something like this happened, even the most remote, well-armed homesteaders wouldn't be a match for a robotic system armed with warheads.

  25. Hans
    August 5, 2014 at 12:21 pm

    A little too much Star Trek Borg influence there... remember Jean-Luc and crew defeated the hive... In truth, I'm more concerned about 'hacker' influence on AI machines because there are many groups of them that have NO value for human life.

  26. Eric
    August 5, 2014 at 11:22 am

    Another way humans might outsmart an AI would be to find a way to link all of our minds together. A hive mind of 8 billion people should be able to pose a challenge to an out-of-control supercomputer.

    • dragonmouth
      August 5, 2014 at 12:22 pm

      What about if that supercomputer has created a hive mind consisting of all other computers in the world? Just as humans can band together, so can intelligent computers.

  27. Inferneaux
    August 5, 2014 at 4:15 am

    "a technique that allows the construction of powerful neural networks by building chains of simpler neural networks. "

    So, Google are working on making the Geth possible.

    • Andre I
      August 5, 2014 at 10:17 am

      Not quite, but it is definitely an interesting technology. Conventional "back-propagation" neural networks work by taking layers of "neurons" -- functions that sum up their inputs, feed that sum into a nonlinear function (like a sigmoid curve), and then send the transformed sum off as output. Each neuron takes values from the layer behind it (closer to the input) and feeds them forward into the layer in front of it (closer to the output). Each connection between two neurons has a "weight" - a number the value is multiplied by. The purpose of the learning algorithm is to refine those weights such that by the time the input has trickled through all these layers of neurons, it has been transformed into a smaller piece of information representing the correct answer.

      These networks are trained by taking the error of the layer closest to the output. You check each neuron, and see if it's contributing positively or negatively to the right answer - if the right answer is a positive value, and the neuron is sending a positive result, increase the weight - otherwise, shrink it, and vice versa. Then, you work backwards through the network, layer by layer, tweaking the weights to cause each layer to contribute more to the right answer. By doing this over many training examples, and iterating many times, you can guide the network to find a solution that works for all of the training examples, which (you hope) is a result that will generalize to novel data points.
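
      If it helps to see that concretely, here's a runnable toy version of the training loop in Python with numpy (my own bare-bones example, a tiny two-layer network learning XOR, nothing like a production system):

      import numpy as np

      rng = np.random.default_rng(1)
      sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

      # XOR: the classic tiny problem a single layer can't solve
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([[0], [1], [1], [0]], dtype=float)

      W1 = rng.normal(scale=1.0, size=(3, 8))   # (2 inputs + bias) -> 8 hidden units
      W2 = rng.normal(scale=1.0, size=(9, 1))   # (8 hidden + bias) -> 1 output

      def with_bias(a):
          return np.hstack([a, np.ones((a.shape[0], 1))])

      for step in range(10000):
          # forward pass: each layer sums its weighted inputs and squashes them
          h = sigmoid(with_bias(X) @ W1)
          out = sigmoid(with_bias(h) @ W2)
          # backward pass: push the output error back through the layers,
          # nudging every weight toward the right answer
          d_out = (y - out) * out * (1 - out)
          d_h = (d_out @ W2[:-1].T) * h * (1 - h)
          W2 += 0.5 * with_bias(h).T @ d_out
          W1 += 0.5 * with_bias(X).T @ d_h

      print(out.round(2))   # should end up close to [[0], [1], [1], [0]]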

      The issue is that the deeper you go into the network, the less information you have about its errors: the error data you're propagating backwards becomes less meaningful the deeper you go, something called the 'vanishing gradient' problem. That makes it difficult to build deep networks capable of processing complicated patterns, because at some point any additional layers you add will be essentially untrained; your error data just won't be getting to them in a meaningful form.

      Deep learning is a solution to that, which allows you to build the network incrementally ("greedily"). You use a more flexible learning algorithm that's slower on large neuron populations (a restricted Boltzmann machine), which tries to find statistical irregularities in its input. Using a stack of these networks, you build the neural network from the inputs up, human-centipede style, feeding the output of each sub-network into the next network as input. This is cheap, and lets you build a network that already has insights about the structure of the data. From there, you can turn backprop loose on the network, and refine it into something useful. It's basically a way of making sure the deeper parts of a deep neural network are actually doing something useful, and it makes such networks wildly more effective.

      Sorry for the wall of text. Super interesting topic. I recommend Andrew Ng's online machine learning lectures if you'd like to know more.

  28. J. M. Brown
    August 5, 2014 at 2:57 am

    Here are some human values being practiced today by people who KNOW they are doing what is good:
    Torture of people who might possibly be or become terrorists.
    Enforcing consensual contracts that impoverish poor people.
    The mass slaughter of shiite heretics.
    The mass slaughter of sunni heretics.
    Female circumcision.
    So, as long as the machines learn these noble values we'll all be ok.

    • Andre I
      August 5, 2014 at 10:25 am

      Diversity of human value systems are an interesting point, and one worth talking about. What it boils down to is that, at the end of the day, people may do things that look despicable to their peers (and vice versa), but we all share the same terminal (or "final") values. We're all trying to get the same things, we just have very different ideas about how to get it, and we may prioritize one value over another differently.

      Not to get too political, but pretty much everyone thinks they're the good guys. The people who protest outside abortion clinics truly believe they're saving the lives of babies, and the people who oppose those protestors truly believe that they're protecting the rights of women. Neither side is evil, or trying to hurt anybody pointlessly. They're both trying to achieve similar ends (protecting people in desperate circumstances) -- they just have different beliefs about the world that cause them to disagree about how best to do that.

      Similarity of human terminal values is the only reason that Friendly AI is even kind of a coherent concept, and resolving those sorts of disputes is definitely going to be a design consideration going forward.

    • dragonmouth
      December 30, 2014 at 9:16 pm

      @Andre:
      "we all share the same terminal (or “final”) values"
      If you mean "moral values" then you are sadly mistaken. "Morals" are not absolute. They are as mutable and malleable as any other concept, depending on the exigencies present at the particular time and place. What is "moral" today may be "immoral" tomorrow. What is "moral" here may be "immoral" there.

      The only terminal or final value that is common to all groupings of humanity is "Either you're with us, or you're against us." Let's see you build Friendly AI on that premise.

  29. Sam Mortenson
    August 5, 2014 at 2:45 am


    An AI wouldn't need access to nanobot or robotic technologies to put an end to civilization as we know it. There is a far easier and far more effective way to control human civilization, or bring it to its knees, near instantaneously: simply infiltrate the key software applications that form the foundation of modern society: power grids, banking systems, healthcare IT, transportation control, etc, etc, etc. A sufficiently decentralized AI could create a worldwide network of daemons and activate them in a nanosecond. And it wouldn't necessarily be the work of an advanced AI of the kind you describe above. It could simply be the work of far more 'intelligent' viruses deliberately created by cyber terrorists to wreak such havoc. I would submit THAT is far more likely to occur in the ensuing decades.

  30. Seth A
    August 4, 2014 at 11:51 pm

    "We value money, but we value human life more."

    You lost me here.

    • Andre I
      August 5, 2014 at 12:36 am

      Well, we do our best.

    • dragonmouth
      August 5, 2014 at 12:33 pm

      “We value money, but we value human life more.”
      That is true only in certain cultures. In many cultures life has little value.

      If you are talking about self-preservation then that is instinctual for all living things, not just humans.

    • John Atlas
      August 6, 2014 at 11:36 pm

      “We value money, but we value human life more.”
      Not true for everyone. Just think if a not so good person
      could input their values. Keeping AI out of the hands of
      such people would be impossible.

  31. dragonmouth
    August 4, 2014 at 11:06 pm

    "artificial intelligence taking over the world certainly sounds very strange and implausible, maybe because of the enormous attention already given to this idea by science fiction writers. "
    Strange - certainly. Implausible - certainly not.

    Let me remind all the idiots out there who think that science fiction is just lunatic ravings that most of what today is commonplace was forecast, foretold and predicted by science fiction writers such as Verne, Clarke, Wells, Asimov, Heinlein, and others.

    • Pavels O
      August 5, 2014 at 9:11 am

      While I hold science fiction writers in high esteem, I must point out that not everything they've predicted came to pass, nor have they predicted everything that did.

      That's where the power of prophecies comes from. Our brains are wired to seek similarities and patterns. I would surely build a good science fiction on the scientific data available at the moment, but not the other way around.

    • dragonmouth
      August 5, 2014 at 12:49 pm

      I must point out that I did not say "everything."

      "I would surely build a good science fiction on the scientific data available at the moment, but not the other way around."
      Science/science fiction is a chicken/egg relationship. Does the scientific discovery come first and the science fiction is built upon it or does the science fiction come first and the scientific discovery is based on it? Anything a human mind can conceive, a human can build.
