
Moore’s Law, the observation that the number of transistors that can be packed onto a chip tends to double roughly every two years (often popularized as the amount of computing power per dollar doubling every eighteen months), has been a part of computer science lore since 1965, when Gordon Moore first spotted the trend and wrote an article about it.  At the time, the “Law” bit was a joke.  49 years later, nobody’s laughing.

Right now, computer chips are made using an immensely refined, but very old fabrication method.  Sheets of very pure crystalline silicon are coated in various substances, patterned with high-precision ultraviolet light (photolithography), etched with acid, bombarded with high-energy impurities, and electroplated.

More than twenty layers of this process occur, building nanoscale components with a precision that is, frankly, mind-boggling.  Unfortunately, these trends can’t continue forever.

We are rapidly approaching the point at which the transistors we are engraving will be so small that quantum effects, such as electrons tunneling straight through transistor gates, will prevent the basic operation of the machine.  It’s generally agreed that the latest computer technology advances will run into the fundamental limits of silicon around 2020, when computers are about sixteen times faster than they are today.  So, for the general trend of Moore’s Law to continue, we’ll need to part ways with silicon the way we did with vacuum tubes, and start building chips using new technologies that have more room for growth.

4. Neuromorphic Chips

As the electronics market moves toward smarter technologies that adapt to users and automate more intellectual grunt work, many of the problems that computers need to solve are centered around machine learning and optimization.  One powerful technology used to solve such problems is the ‘neural network.’


Neural networks reflect the structure of the brain: they have nodes that represent neurons, and weighted connections between those nodes that represent synapses.  Information flows through the network, manipulated by the weights, in order to solve problems.  Simple rules dictate how the weights between neurons change, and these changes can be exploited to produce learning and intelligent behavior. This sort of learning is computationally expensive when simulated by a conventional computer.
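
The weight-update idea above can be shown with the simplest possible case: a single artificial neuron learning the AND function.  This is a minimal sketch (the learning rate, epoch count, and threshold rule are illustrative choices, not any particular chip’s design):

```python
import random

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single artificial neuron: weighted inputs plus a bias,
    with a simple rule that nudges weights toward correct outputs."""
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # The neuron "fires" if the weighted sum crosses the threshold.
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            # Learning rule: shift each weight by error * input.
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learn the AND function from four labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, `predict` reproduces AND for all four inputs.  A conventional CPU executes every one of those multiply-and-compare steps in sequence; the point of neuromorphic hardware is to do them all at once, in silicon that behaves like the neurons themselves.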

Neuromorphic chips attempt to address this by using dedicated hardware specifically designed to simulate the behavior and training of neurons.  In this way, an enormous speedup can be achieved, while using neurons that behave more like the real neurons in the brain.

IBM and DARPA have been leading the charge on neuromorphic chip research via a project called SyNAPSE, which we’ve mentioned before.  SyNAPSE has the eventual goal of building a system equivalent to a complete human brain, implemented in hardware no larger than a real human brain.  In the nearer term, IBM plans to include neuromorphic chips in its Watson systems, to speed up the sub-problems in its algorithms that depend on neural networks.

IBM’s current system implements a programming language for neuromorphic hardware that allows programmers to use pre-trained fragments of a neural network (called ‘corelets’) and link them together to build robust problem-solving machines.  You probably won’t have neuromorphic chips in your computer for a long time, but you’ll almost certainly be using web services that use servers with neuromorphic chips in just a few years.
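
The linking idea can be sketched in ordinary code.  The sketch below is hypothetical, not IBM’s actual corelet language: each “corelet” is a frozen, pre-trained fragment, and `connect` wires fragments in sequence so that two trivial pieces combine into an XOR detector neither computes alone:

```python
class Corelet:
    """A frozen network fragment: fixed weights, mapping inputs to outputs."""
    def __init__(self, weights, biases):
        self.weights, self.biases = weights, biases

    def fire(self, inputs):
        # Each output neuron fires if its weighted sum clears the threshold.
        return [1 if sum(w * x for w, x in zip(ws, inputs)) + b > 0.5 else 0
                for ws, b in zip(self.weights, self.biases)]

def connect(*stages):
    """Link corelets in sequence: each stage's outputs feed the next."""
    def machine(inputs):
        for stage in stages:
            inputs = stage.fire(inputs)
        return inputs
    return machine

# Two tiny pre-trained fragments: the first emits OR and NAND of its
# inputs, the second ANDs those together -- yielding XOR.
stage_a = Corelet(weights=[[1, 1], [-1, -1]], biases=[0.0, 1.6])
stage_b = Corelet(weights=[[1, 1]], biases=[-1.0])
xor = connect(stage_a, stage_b)
```

The design point is that nobody retrains anything at composition time: the fragments are reusable black boxes, and the programmer’s job is just the wiring.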

3. Micron Hybrid Memory Cube

One of the principal bottlenecks in current computer design is the time it takes to fetch the data from memory that the processor needs to work on.  The time needed to talk to the ultra-fast registers inside a processor is considerably shorter than the time needed to fetch data from RAM, which is in turn vastly faster than fetching data from the ponderous, plodding hard drive.

The result is that, frequently, the processor is left simply waiting for long stretches of time for data to arrive so it can do the next round of computations.  Processor cache memory is about ten times faster than RAM, and RAM is about one hundred thousand times faster than the hard drive.  Put another way, if talking to the processor cache is like walking to the neighbor’s house to get some information, then talking to the RAM is like walking a couple of miles to the store for the same information — getting it from the hard drive is like walking to the moon.
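
The walking analogy follows directly from the ratios above; here’s the back-of-the-envelope arithmetic, with the 100-meter “neighbor’s house” baseline as an assumption:

```python
# Scale the article's ratios (RAM ~10x slower than cache, disk
# ~100,000x slower than RAM) onto walking distances.
CACHE_NS = 1.0                        # illustrative cache access time
RAM_NS = CACHE_NS * 10                # ~10x slower than cache
DISK_NS = RAM_NS * 100_000            # ~100,000x slower than RAM

neighbor_m = 100.0                    # walking to the neighbor = cache hit
ram_walk_km = neighbor_m * (RAM_NS / CACHE_NS) / 1000
disk_walk_km = neighbor_m * (DISK_NS / CACHE_NS) / 1000

print(f"RAM:  {ram_walk_km:.0f} km walk")      # ~1 km: the store
print(f"Disk: {disk_walk_km:,.0f} km walk")    # 100,000 km: well on the way to the moon
```

At these ratios the hard drive lands at 100,000 km, roughly a quarter of the way to the moon, which is why a cache miss that falls all the way to disk is so catastrophic for performance.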

Micron Technology may break the industry away from the regular progression of conventional DDR memory, replacing it with its own technology, which stacks RAM modules into cubes and uses higher-bandwidth links to make those cubes faster to talk to.  The cubes are built directly onto the motherboard next to the processor (rather than inserted into slots like conventional RAM).  The Hybrid Memory Cube architecture offers five times more bandwidth to the processor than the DDR4 RAM coming out this year, and uses 70% less power.  The technology is expected to hit the supercomputer market early next year, and the consumer market a few years later.

2. Memristor Storage

A different approach to solving the memory problem is designing memory that combines the advantages of several existing kinds.  Generally, the tradeoffs boil down to cost, access speed, and volatility (volatility being the property of needing a constant supply of power to keep data stored).  Hard drives are very slow, but cheap and non-volatile.

RAM is volatile, but fast and relatively cheap.  Cache and registers are volatile and very expensive, but also very fast.  The best-of-all-worlds technology would be non-volatile, fast to access, and cheap to manufacture.  In theory, memristors offer exactly that.

Memristors are similar to resistors (devices that reduce the flow of current through a circuit), with the catch that they have memory.  Run current through them one way, and their resistance increases.  Run current through the other way, and their resistance decreases.  The result is that you can build inexpensive, high-speed RAM-style memory cells that are nonvolatile, and can be manufactured cheaply.
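
A toy simulation makes the storage trick concrete.  The constants below are illustrative, not taken from any real device; the point is only the defining behavior, that resistance drifts with the direction of current and persists with no power applied:

```python
class Memristor:
    """Toy model: resistance falls under forward current, rises under
    reverse current, and holds its value between operations."""
    def __init__(self, r_min=100.0, r_max=10_000.0):
        self.r_min, self.r_max = r_min, r_max
        self.resistance = r_max          # start in the high-resistance state

    def apply_current(self, amps, seconds=1.0):
        # Positive current lowers resistance; negative current raises it.
        delta = -amps * seconds * 5000.0
        self.resistance = min(self.r_max,
                              max(self.r_min, self.resistance + delta))

    def read_bit(self):
        # Low resistance encodes 1, high resistance encodes 0.
        midpoint = (self.r_min + self.r_max) / 2
        return 1 if self.resistance < midpoint else 0

cell = Memristor()
cell.apply_current(+1.0)   # "write 1": drive current forward
bit = cell.read_bit()      # reads back 1, with no power needed to hold it
cell.apply_current(-1.0)   # "write 0": drive current in reverse
```

Because reading is just measuring resistance, the stored value survives a power cycle, which is exactly the non-volatility that RAM lacks.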

This raises the possibility of RAM blocks as large as hard drives that store the entire OS and file system of the computer (like a huge, non-volatile RAM disk), all of which can be accessed at the speed of RAM.  No more hard drive. No more walking to the moon.

HP has designed a computer using memristor technology and specialized core design, which uses photonics (light-based communication) to speed up networking between computational elements.  This device (called “The Machine”) is capable of doing complex processing on hundreds of terabytes of data in a fraction of a second.  The memristor memory is 64 to 128 times denser than conventional RAM, which means that the physical footprint of the device is very small, and the entire shebang uses far less power than the server rooms it would replace.  HP hopes to bring computers based on The Machine to market in the next two to three years.

1. Graphene Processors

Graphene is a one-atom-thick lattice of strongly bonded carbon atoms (the same material, rolled up, forms carbon nanotubes).  It has a number of remarkable properties, including immense physical strength and extraordinarily high electrical conductivity.  There are dozens of potential applications for graphene, from space elevators to body armor to better batteries, but the one relevant to this article is its potential role in computer architectures.

Another way of making computers faster, rather than shrinking the transistors, is simply to make them switch faster.  Unfortunately, because silicon isn’t a very good conductor, a significant amount of the power sent through the processor winds up converted to heat.  Try to clock silicon processors much above nine gigahertz and the heat interferes with the operation of the processor; even approaching that speed requires extraordinary cooling efforts (in some cases involving liquid nitrogen).  Most consumer chips run much more slowly.  (To learn more about how conventional computer processors work, read our article What Is A CPU and What Does It Do?)

Graphene, in contrast, is an excellent conductor.  A graphene transistor can, in theory, run at up to 500 GHz without any heat problems to speak of, and you can etch it the same way you etch silicon.  IBM has already engraved simple analog graphene chips using traditional chip lithography techniques.  Until recently, the problem has been twofold: first, it’s very difficult to manufacture graphene in large quantities; second, we haven’t had a good way to build graphene transistors that entirely block the flow of current in their ‘off’ state.

The first problem was solved when electronics giant Samsung announced that its research arm had discovered a way to mass produce whole graphene crystals with high purity.  The second problem is more complicated.  The issue is that, while graphene’s extreme conductivity makes it attractive from a heat perspective, it’s a nuisance when you want to make transistors: devices that are meant to stop conducting billions of times a second.  Graphene, unlike silicon, lacks a ‘band gap’, an energy range with no electron states, which is what allows a semiconductor to be switched fully off.  Luckily, it looks like there are a few options on that front.

Samsung has developed a transistor that uses the properties of a silicon-graphene interface in order to produce the desired properties, and built a number of basic logic circuits with it.  While not a pure graphene computer, this scheme would preserve many of the beneficial effects of graphene.  Another option may be the use of ‘negative resistance’ to build a different kind of transistor that could be used to construct logic gates that operate at higher power, but with fewer elements.

Of the technologies discussed in this article, graphene is the farthest away from commercial reality.  It could take up to a decade for the technology to be mature enough to really replace silicon entirely.  However, in the long term, it’s very likely that graphene (or a variant of the material) will be the backbone of the computing platform of the future.

The Next Ten Years

Our civilization and much of our economy have come to depend on Moore’s Law in profound ways, and enormous institutions are investing tremendous amounts of money in trying to forestall its end.  A number of minor refinements (like 3D chip architectures and error-tolerant computing) will help to sustain Moore’s Law past its theoretical six-year horizon, but that sort of thing can’t last forever.

At some point in the coming decade, we’ll need to make the jump to a new technology, and the smart money’s on graphene.  That changeover is going to seriously shake up the status quo of the computer industry, and make and lose a lot of fortunes.  Even graphene is not, of course, a permanent solution.  In a few decades we may well find ourselves back here again, debating which new technology will take over now that we’ve reached the limits of graphene.

What direction do you think the latest computer technology is going to take? Which of these technologies do you think has the best chance of taking electronics and computers to the next level?

Image Credits: Female hand in ESD gloves Via Shutterstock

  1. Larry P
    August 15, 2014 at 9:33 pm

    Predictions about the capabilities and limitations of computer electronics has been pretty interesting in my 30+ years in this industry. It started with something about the limitations of the 8088 chips...but then the x286 architecture came along. Chip technology would never be able to exceed the 1 GHz mark because of interference between paths on the boards...and then there's hard drive density...20 MB to 40 Mb... The one constant that you can depend on in this industry is that whatever is the best today will be obsolete before you can blink. I used to build x386 machines for resale to friends/family...but always with the demand that the day they decided what they wanted they had to stop looking at the new stuff...because the prices drop and the performance goes up every day. This is the only industry that I know of where each year you get more for less. How many of us paid upwards of $3,000 for an IBM PC/XT...maybe even without a hard drive? Consider that in 1983 dollars too. Today a classy standard desktop computer (if there really is such a box anymore :) might sell for anywhere from $500 to $850...and be a much better performer based on standards of the day than that $3,500 PC in 1983. Every single roadblock that I've heard over the years has been busted...from substrate limitations to hard drive density to replacing copper with fiber. Our cars, planes, houses, and a lot of the other things we enjoy have changed...but at a much slower pace. That's why I love this industry...like the weather in Tennessee...if you don't like it today stick around...it will be different tomorrow!

  2. Stephen G
    August 12, 2014 at 3:23 pm

    In fact, today we are not in a condition to laugh at Moore’s Law. I am amazed to see how Moore addressed and assessed the concept of raw computational power nearly 50 years ago. The theory is still relevant today. I am sure whatever was maintained at that point of time is going to be applicable all through the human civilization!
    While the level of IT progression that we have been able to achieve through last couple of years is commendable, we should not tend to challenge these legendary theories from geniuses. We should, instead, extract the best out of them and utilize that information to authenticate our current advancements. :) @myscorz

  3. Steve
    August 12, 2014 at 6:49 am

    i am not sure what air travel has to do with moore's law but, if it did apply, starting at 360mph in 1950, we would now be travelling at almost 5 million times the speed of light ;)
    But i don't see how that would fuel the economy, as moore's law certainly does now, when applied to processor density

  4. Marcel Delorme
    August 12, 2014 at 2:38 am

    I don't recall Moore's Law mentioning money. Density of circuitry is what I remember. As an example: the calculators astronauts carried on the last Apollo mission had more computing power than the spacecraft's computer.

  5. sfw1960
    August 12, 2014 at 12:24 am

    LOL @ - SPOT ON!!!

    Computational power will continue to grow!

  6. Frank F
    August 11, 2014 at 11:06 pm

    Now if we could only get past the ISP throttling ;)

  7. FrostyMaine
    August 11, 2014 at 8:52 pm

    Our future isn't in making transportation larger and faster; like computing technology, our future is removing the vehicle, suspending the cargo in a light-weight casing in air and pulling it through a vacuum.

  8. Nash J
    August 11, 2014 at 12:24 am

    Crash free vehicles would be nice

    • Andre I
      August 11, 2014 at 12:40 am

      Google's working on it. Machine intelligence is hard, but if you throw enough money and smart people at it, you make progress.

    • dragonmouth
      August 11, 2014 at 11:33 am

      Andre I:
      All machines achieve is a more efficient way for us to kill ourselves.

    • dragonmouth
      August 11, 2014 at 11:39 am

      @ReadandShare:
      I'm sure that if sufficently fuel-efficient engines, or a more economical propulsion system, can be built, we will have supersonic travel again.

  9. Scott H
    August 9, 2014 at 9:39 am

    it is amazing how memory has change over the years and how fast they're getting bigger and faster and better all people are waiting on now is quantum computer to get cheaper and easier to build and maintain and cheaper cooling system for one lol

  10. Howard B
    August 9, 2014 at 12:45 am

    "Moore’s Law, the truism that the amount of raw computational power available for a dollar tends to double roughly every eighteen months..."
    This is stated wrong; Moore's Law never stated dollar amounts.

    From: http://en.wikipedia.org/wiki/Moore%27s_law

    "Moore's law is the observation that, over the history of computing hardware, the number of transistors in a dense integrated circuit doubles approximately every two years."

    I, too, first heard it stated as "18 months," but this is the first time I've ever heard a dollar amount attached to it.

  11. ReadandShare
    August 8, 2014 at 8:10 pm

    I think I am happy enough with IT progression. Now, if they can just focus on making jetliners fly faster and more efficiently -- that would really make a lot of us happy.

    It's a real shame that we're still flying today at pretty much the same speed as we did back in the 1950's!!

    • Howard B
      August 9, 2014 at 1:05 am

      A Douglas DC-7 (a common airliner of the 1950s) had an average cruising speed of around 360 MPH; a modern Boeing 777's cruising speed is around 560MPH. Not too bad....+200MPH in 50 years, with a much larger passenger load, to boot (64-95 passengers for the DC-7, compared to 314-440 for the 777)

    • dragonmouth
      August 9, 2014 at 11:02 am

      We did fly faster. Both the Concorde and the TU-144 were supersonic commercial airliners.

    • Peter Hood
      August 9, 2014 at 2:03 pm

      @dragonmouth 1) The TU-144 was shortlived, 2) the TU-144 was a blatant copy of Concorde. It might interest you to know that Vladimir Vladimorovich Putin was a KGB Lt. Col in charge of industrial espionage, based in East Germany.

    • dragonmouth
      August 9, 2014 at 10:49 pm

      "The TU-144 was shortlived,"
      Does not change the fact that it was a commercial airliner and it was supersonic.

      "the TU-144 was a blatant copy of Concorde"
      Form follows function. The shape of both planes was the best for SST flight. TU-144 was built and flew before the Concorde.

      Of what relevance is the mention of Putin other than to make a blatant political statement?

    • ReadandShare
      August 10, 2014 at 12:40 am

      @ Dragonmouth

      "Did" -- but not presently. And thus, we STILL fly essentially as slow as our grandparents did way back in 1958! A shame.
