Laptops, mobiles, and tablets get cheaper, sleeker, and more powerful every year, while battery life keeps getting longer. Have you ever wondered why that is, and whether devices can keep improving forever?

The answer to the first question is explained by three laws discovered by researchers, known as Moore's Law, Dennard scaling, and Koomey's Law. Read on to understand the impact of these laws on computing and where they might lead us in the future.

What is Moore's Law?

Graph showing 120 years of Moore's Law
Image Credit: Steve Jurvetson/Flickr

If you're a regular MakeUseOf reader, you're probably aware of the near-mythical Moore's Law.

Gordon Moore, who went on to co-found Intel and serve as its CEO, first proposed it in 1965.

He predicted that the number of transistors on a chip would double approximately every two years, and that chips would become 20 to 30 percent cheaper to make each year. Intel's first processor, released in 1971, packed 2,250 transistors into an area of 12 mm². Today's CPUs hold hundreds of millions of transistors per square millimeter.
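To get a feel for what a strict two-year doubling implies, here's a back-of-the-envelope Go sketch that projects transistor counts forward from that 1971 starting point (an idealized extrapolation, not a record of actual chips):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Starting point: Intel's first processor (1971), roughly 2,250 transistors.
	const startYear = 1971
	const startTransistors = 2250.0

	// Moore's Law as a naive model: the transistor count doubles every two years.
	for _, year := range []int{1971, 1981, 1991, 2001, 2011, 2021} {
		doublings := float64(year-startYear) / 2.0
		count := startTransistors * math.Pow(2, doublings)
		fmt.Printf("%d: ~%.0f transistors\n", year, count)
	}
}
```

By 2021 the naive model lands in the tens of billions of transistors per chip, which is roughly where the largest real-world processors sit.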

While it started as a prediction, the industry also adopted Moore's Law as a roadmap. For five decades, the predictability of the law allowed companies to formulate long-term strategies, knowing that, even if their designs were impossible at the planning stage, Moore's Law would deliver the goods at the appropriate moment.

This had a knock-on effect in many areas, from the ever-improving graphics of games to the ballooning number of megapixels in digital cameras.

However, the law has a shelf life, and the rate of progress is slowing down. Although chipmakers continue to find new ways around the limits of silicon, Moore himself believes the law will stop working by the end of this decade. And it won't be the first law of technology to disappear.

What Ever Happened to Dennard Scaling?

Robert Dennard
Image Credit: Fred Holland/Wikimedia

In 1974, IBM researcher Robert Dennard observed that, as transistors shrink, their power use remains proportional to their area.

Dennard scaling, as it became known, meant the transistor area reduced by 50 percent every 18 months, leading to a clock speed boost of 40 percent, but with the same level of power consumption.

In other words, the number of calculations per watt would grow at an exponential but reliable rate, and transistors would get faster, cheaper, and use less power.
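A minimal sketch of that arithmetic, using the textbook ~0.7x scaling factor per process generation (an idealized model rather than measured silicon):

```go
package main

import "fmt"

func main() {
	// Idealized Dennard scaling per process generation:
	// linear dimensions, voltage, and capacitance all shrink by ~0.7x,
	// while clock frequency rises by ~1/0.7 ≈ 1.4x.
	const s = 0.7

	area := s * s             // transistor area: ~0.5x (the "50 percent" reduction)
	freq := 1 / s             // clock speed: ~1.4x (the "40 percent" boost)
	power := s * s * s * freq // dynamic power per transistor: C*V^2*f ≈ 0.5x

	// Power density = per-transistor power / per-transistor area ≈ constant,
	// which is why total chip power stayed flat while performance climbed.
	fmt.Printf("area: %.2fx  frequency: %.2fx  power per transistor: %.2fx  power density: %.2fx\n",
		area, freq, power, power/area)
}
```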

In the age of Dennard scaling, improving performance was a predictable process: chipmakers simply added more transistors to CPUs and ramped up clock frequencies.

This was also easy for the consumer to understand: a processor running at 3.0 GHz was faster than one running at 2.0 GHz, and processors kept getting faster. Indeed, the International Technology Roadmap for Semiconductors (ITRS) once predicted clock rates would reach 12GHz by 2013!

Yet today, the best processors on the market have a base frequency of just 4.1GHz. What happened?

The End of Dennard Scaling

Clock speeds got stuck in the mud around 2004, when reductions in power usage stopped keeping pace with the rate at which transistors were shrinking.

Transistors became so small that electrical current began to leak, generating excess heat that led to errors and equipment damage. That's one of the reasons why your computer chip has a heat sink. Dennard scaling had reached limits dictated by the laws of physics.

More Cores, More Problems

With customers and entire industries accustomed to continual speed improvements, chip manufacturers needed a solution. So, they started adding cores to processors as a way to keep increasing performance.

However, adding cores is not as effective as simply raising the clock speed of a single core. Most software cannot take full advantage of multiple processors, and memory caching and power consumption create additional bottlenecks.
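One standard way to quantify that limitation is Amdahl's Law (a textbook model, not something specific to any one chip): if only part of a program can run in parallel, extra cores quickly stop helping.

```go
package main

import "fmt"

// amdahlSpeedup returns the theoretical speedup on n cores when only
// a fraction p of the work can run in parallel (Amdahl's Law).
func amdahlSpeedup(p float64, n int) float64 {
	return 1 / ((1 - p) + p/float64(n))
}

func main() {
	// Even with 75% of the work parallelizable, the speedup flattens out fast.
	for _, cores := range []int{1, 2, 4, 8, 16, 64} {
		fmt.Printf("%2d cores: %.2fx speedup\n", cores, amdahlSpeedup(0.75, cores))
	}
}
```

With three-quarters of a program parallelizable, even 64 cores deliver less than a 4x speedup.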

The move to multicore chips also heralded the arrival of dark silicon.

The Dark Age of Silicon

Processor-on-mother-board

It soon became apparent that if too many cores run simultaneously, current leakage returns, resurrecting the overheating problem that killed Dennard scaling on single-core chips.

The result is multicore processors that cannot use all of their cores at once. The more cores you add, the more of a chip's transistors have to be powered off or slowed down, in a process known as "dark silicon."

So, although Moore's Law continues to let more transistors fit on a chip, dark silicon is eating away at CPU real estate. Therefore, adding more cores becomes pointless, as you're unable to use all of them at the same time.
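As a toy illustration of the trade-off, with entirely hypothetical power figures: under a fixed thermal budget, the fraction of cores that can run at once shrinks as core counts grow.

```go
package main

import "fmt"

func main() {
	// Hypothetical chip with a fixed thermal budget of 100 W.
	// Without Dennard scaling, per-core power no longer falls as fast as
	// core counts rise, so more and more of the chip must stay dark.
	const powerBudgetWatts = 100.0

	for _, cfg := range []struct {
		cores        int
		wattsPerCore float64
	}{
		{4, 25.0}, {8, 20.0}, {16, 15.0}, {32, 12.0},
	} {
		usable := int(powerBudgetWatts / cfg.wattsPerCore)
		if usable > cfg.cores {
			usable = cfg.cores
		}
		dark := cfg.cores - usable
		fmt.Printf("%2d cores at %.0f W each: %2d active, %2d dark\n",
			cfg.cores, cfg.wattsPerCore, usable, dark)
	}
}
```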

Sustaining Moore's Law using multiple cores seems to be a dead end.

How Moore's Law Could Continue

One remedy is to improve software multiprocessing. Java, C++, and other languages designed in the single-core era will give way to languages like Go, which are built for concurrency.
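As a small taste of what that looks like, this Go sketch spreads independent pieces of work across goroutines, which the runtime schedules over however many cores are available:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	results := make([]int, 8)
	var wg sync.WaitGroup

	// Launch one goroutine per work item; the Go scheduler spreads
	// them across all available CPU cores.
	for i := range results {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = i * i // stand-in for real per-core work
		}(i)
	}

	wg.Wait()
	fmt.Println(results)
}
```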

Another option is increasing the use of field-programmable gate arrays (FPGAs), a type of customizable processor that can be reconfigured for specific tasks after purchase. For example, one FPGA could be optimized to handle video, while another could be specially adapted to run artificial intelligence applications.

Building transistors out of different materials, such as graphene, is another area being investigated to squeeze more life out of Moore's prediction. And, way down the line, quantum computing may change the game altogether.

The Future Belongs to Koomey's Law

In 2011, Professor Jonathan Koomey showed that peak-output energy efficiency (the efficiency of a processor running at top speed) echoed the processing power trajectory described by Moore's Law.

Koomey's Law observed that, from the vacuum-tube beasts of the 1940s to the laptops of the 1990s, the number of computations per joule of energy had reliably doubled every 1.57 years. In other words, the battery power needed for a given task halved roughly every 19 months, and the energy required for a specific computation fell by a factor of 100 every decade.
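The arithmetic behind those figures is easy to check with a short sketch (a back-of-the-envelope calculation, not Koomey's own dataset); the results land in the same ballpark as the rounded numbers quoted here and later in the article.

```go
package main

import (
	"fmt"
	"math"
)

// decadeFactor converts a doubling period (in years) into the total
// efficiency gain accumulated over one decade.
func decadeFactor(doublingYears float64) float64 {
	return math.Pow(2, 10/doublingYears)
}

func main() {
	// Koomey's original rate: computations per joule doubling every 1.57 years,
	// i.e. the energy for a fixed task halving roughly every 19 months
	// (the article rounds the decade gain to a factor of 100).
	fmt.Printf("1.57-year doubling: ~%.0f months per halving, ~%.0fx per decade\n",
		1.57*12, decadeFactor(1.57))

	// The slower post-2000 rate discussed later in the article
	// (rounded there to a factor of 16 per decade).
	fmt.Printf("2.6-year doubling:  ~%.0f months per halving, ~%.0fx per decade\n",
		2.6*12, decadeFactor(2.6))
}
```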

While Moore's Law and Dennard scaling were hugely important in a world of desktops and laptops, the way we use processors has changed so much that the energy efficiency promised by Koomey's Law is probably more relevant to you.

Your computing life is likely split between many devices: laptops, mobiles, tablets, and miscellaneous gadgets. In this era of ubiquitous computing, battery life and performance-per-watt are becoming more important than squeezing more GHz out of our many-cored processors.

Likewise, with more of our processing outsourced to massive cloud computing data centers, the energy cost implications of Koomey's Law are of great interest to tech giants.

Dark-data-center

However, since 2000, the industry-wide doubling of energy efficiency described by Koomey's Law has slowed due to the end of Dennard scaling and the deceleration of Moore's Law. Koomey's Law now takes 2.6 years to deliver a doubling, so over the course of a decade, energy efficiency increases by a factor of just 16, rather than 100.

It may be premature to say Koomey's Law is already following Dennard scaling and Moore's Law into the sunset. In 2020, AMD reported that the energy efficiency of its Ryzen 7 4800H processor had risen by a factor of 31.7 compared to its 2014 CPUs, giving Koomey's Law a timely and substantial boost.


Redefining Efficiency to Extend Koomey's Law

Peak-output power efficiency is just one way of evaluating computing efficiency, and one that may now be out of date.

This metric made more sense in past decades, when computers were scarce, costly resources that tended to be pushed to their limits by users and applications.

Now, most processors run at peak performance for just a small portion of their lives, when running a video game, for example. Other tasks, like checking messages or browsing the web, require much less power. As such, average energy efficiency is becoming the focus.

Koomey calculates this "typical-use efficiency" by dividing the number of operations performed per year by the total energy used over that year, and argues it should replace the "peak-use efficiency" standard used in his original formulation.
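In sketch form, the two metrics differ only in what goes into the numerator and denominator (the numbers below are hypothetical, purely for illustration):

```go
package main

import "fmt"

func main() {
	// Hypothetical device observed over one year of use.
	const (
		opsPerYear       = 5e16  // total computations actually performed
		energyPerYearJ   = 2e8   // total energy consumed over the year, in joules
		peakOpsPerSecond = 1e12  // computations per second when running flat out
		peakPowerWatts   = 100.0 // power draw while running flat out
	)

	// Peak-use efficiency: work per joule when the processor is pushed to its limit.
	peakEfficiency := peakOpsPerSecond / peakPowerWatts

	// Typical-use efficiency (Koomey's revised metric): operations per year
	// divided by total energy used, idle time and light workloads included.
	typicalEfficiency := opsPerYear / energyPerYearJ

	fmt.Printf("peak-use:    %.2e computations per joule\n", peakEfficiency)
	fmt.Printf("typical-use: %.2e computations per joule\n", typicalEfficiency)
}
```

The gap between the two numbers captures all the energy a device burns while doing very little, which is exactly what the typical-use metric is designed to expose.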

Although the analysis has yet to be published, between 2008 and 2020, typical-use efficiency is expected to have doubled every 1.5 years or so, returning Koomey's Law to the optimal rate seen when Moore's Law was in its prime.

One implication of Koomey's Law is that devices will continue to shrink and become less power-intensive. Shrinking, but still high-speed, processors may soon be so low-powered that they can draw their energy directly from their surroundings: background heat, light, motion, and other ambient sources.

Such ubiquitous processing devices have the potential to usher in the true age of the Internet of Things (IoT) and make your smartphone look as antiquated as the vacuum-tubed behemoths of the 1940s.

Eniac early computer
Image Credit: terren in Virginia/Flickr

However, as scientists and engineers discover and implement more techniques to optimize typical-use efficiency, the energy consumed during typical use is likely to fall so far that only peak-output consumption will remain significant enough to measure.

Peak-output usage will become the yardstick for energy efficiency analysis once more. In this scenario, Koomey's Law will eventually encounter the same laws of physics that are slowing down Moore's Law.

Those laws of physics, which include the second law of thermodynamics, mean Koomey's Law will end around 2048.
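The hard floor usually cited in this context is Landauer's principle, a consequence of the second law of thermodynamics: erasing one bit of information at room temperature costs at least kT ln 2 joules. A quick sketch of that bound (standard physical constants; linking it to the 2048 date is one reading of Koomey's argument):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Landauer's principle: the minimum energy to erase one bit is k*T*ln(2),
	// where k is Boltzmann's constant and T the absolute temperature.
	const boltzmann = 1.380649e-23 // joules per kelvin
	const roomTempK = 300.0

	minJoulesPerBit := boltzmann * roomTempK * math.Ln2
	fmt.Printf("Landauer limit at %.0f K: %.2e joules per bit erased\n", roomTempK, minJoulesPerBit)

	// Equivalently, a ceiling on how many bit operations per joule conventional
	// (irreversible) computing can ever reach at room temperature.
	fmt.Printf("upper bound: %.2e bit erasures per joule\n", 1/minJoulesPerBit)
}
```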

Quantum Computing Will Change Everything

The good news is that by then, quantum computing should be well-developed, with transistors based on single atoms commonplace, and a new generation of researchers will have to discover a whole other set of laws to predict the future of computing.