There's no doubt that competition between chip makers is steadily increasing, not only for PC processors but for mobile and special-purpose processors as well. The big five worth mentioning are Intel, AMD, nVidia, Qualcomm, and Apple. Each of these companies has a different take on how to evolve its processors, which will make it interesting to see whose strategy lets it rise to the top. My opinion, however, is simple: choose more cores over better cores.

Before I get into the nitty gritty, please note the wording of the article's title. This article is about why I believe more cores will beat better cores, which implies that this will ring true (again, in my opinion) sometime in the future. For now, with applications still predominantly single-threaded and unaware of multiple cores, better cores are the winners.

Background

Over the years we've seen our processors grow from single-core lightweights all the way to eight-core monsters (or 16-core if you include servers). Obviously, having multiple cores is beneficial: it lets the system work on more data at the same time than a single core could. But this raises a new question: is there a point where it's more beneficial to stop adding cores and just make them better? Will having 12 cores instead of 8 make much of a difference? We may feel that 4, 6, or 8 cores reach the "good enough" plateau as far as core counts go, but we could do a lot better.

Why More Cores Will Be Better

Of course, having more cores and better cores is the best solution, but what if you have to choose? If I were the one choosing, I'd go with more cores. Why? The inspiration for my answer lies in how GPUs work.

GPUs are packed with cores. In fact, some of the latest cards have 2,048 cores to brag about. They have that ridiculous number of cores because it lets them work on lots of data at the same time: with more cores, more data can be crunched. Yes, GPU cores are only good at one type of work (which is why we still use CPUs rather than GPUs for general tasks), but the same concept can be applied to CPUs as well.

With more cores, more data can be crunched by the CPU, and you get a speedy system that zips through anything you throw at it, provided the software is programmed to be aware of all your cores. In short, many good cores will eventually beat a few great cores.
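To make that concrete, here's a minimal sketch, assuming nothing beyond a C++11 compiler and standard threads (the data size and all names are illustrative, not from any real benchmark), of how a multi-core-aware program might split one big number-crunching job across every core the machine reports:

```cpp
// A minimal sketch: split one large sum across all available cores.
// Assumes C++11 std::thread; data sizes and names are illustrative.
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(10000000, 1.0);  // ten million values to crunch

    // Ask the runtime how many hardware threads (cores) are available.
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4;  // fall back if the runtime can't tell us

    std::vector<double> partial(cores, 0.0);
    std::vector<std::thread> workers;

    // Hand each core its own contiguous slice of the data.
    const std::size_t chunk = data.size() / cores;
    for (unsigned i = 0; i < cores; ++i) {
        std::size_t begin = i * chunk;
        std::size_t end = (i + 1 == cores) ? data.size() : begin + chunk;
        workers.emplace_back([&, i, begin, end] {
            partial[i] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();  // wait for every slice to finish

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "Sum: " << total << " (computed on " << cores << " cores)\n";
}
```

On an eight-core chip, the eight partial sums run at the same time; on a single core, they would run back to back. That's the entire "more cores, more data crunched at once" argument in a few lines, and also why the software has to be written with multiple cores in mind.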

The Current Plans of Big Chip Makers

Intel currently seems to be holding at a 4-core limit (6 for its Extreme series of products) while making continued refinements to its cores. nVidia, however, keeps increasing its core counts. So does Qualcomm with its Snapdragon processors, although somewhat more slowly, since it also makes custom adjustments to the stock ARM designs. Even Apple is gaining cores with its iPhone/iPad processors, but at a very slow rate.

AMD is also trying to make its cores better, but its roadmaps have shown that it is still adding cores and wants to churn out a 10-core processor for consumers. AMD already has a 16-core behemoth for servers. And yes, they aren't exactly full cores, but that's how they're marketed and that's what I'll call them. There's been a lot of controversy about the whole module approach, which you can read about in "5 Reasons Why Intel Is Being Pushed Against The Wall By AMD" (pro) and "5 Reasons Why AMD Processors Are Doomed [Opinion]" (against).

Which strategy is the best? Right now, who knows. Maybe you have an opinion?

Conclusion

What will really happen in the end is something we can only find out through patience. However, as more software becomes able to take advantage of numerous cores, the advantage will eventually shift to the processors that, as an entire component, can output the most work. Until then, we'll just have to be happy with whatever currently works best.

What’s your opinion, more cores or better cores? When do you think we will finally know which choice is better? Any other thoughts? Let us know in the comments!

Image Credits: Olivander, Forrestal_PL, Aaronage

  1. Ashley
    August 12, 2012 at 7:27 am

    I completely disagree! And my inspiration comes from comparing Intel's i5-3570K to AMD's FX-8150. Compare the two and prove me wrong.

  2. Susendeep Dutta
    March 20, 2012 at 3:52 pm

    Importance must be given to better cores, just like quality over quantity. More cores mean more TDP and more investment in cooling solutions, as long as it gives you that level of performance.

  3. Kaggy
    March 20, 2012 at 1:45 pm

    Better cores are better:
    They save power
    They produce less heat
    Multi-core software is rare
    Better cores tend to be cheaper

    • Danny Stieben
      March 23, 2012 at 10:33 pm

      All of those points in favor of better cores are true except for the last one. I'm still sure that AMD's processors with more cores are cheaper than Intel's with fewer cores.

      • Himanshu Gohil
        March 24, 2012 at 12:35 pm

        I agree with you, Danny!
        I'd prefer the AMD FX-4100 (quad core) over the Intel Core i3-2100 (dual core), which has the same price.

  4. Daniel Louw
    March 20, 2012 at 8:16 am

    I agree with Danny.

    The author is right, but the reasoning is flawed.

    For anyone interested, go check out Amdahl's Law [http://en.wikipedia.org/wiki/Amdahl's_law]. The true advantage of more cores depends purely on the quality of the code. And a proper parallel program is really a bitch to get right.
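    To put rough numbers on this: if a fraction p of a program can run in parallel on n cores, Amdahl's Law caps the overall speedup at 1 / ((1 - p) + p/n). Here is a minimal sketch of what that predicts, assuming only a C++11 compiler (the values of p below are hypothetical):

    ```cpp
    // Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
    // of the program that can run in parallel and n is the number of cores.
    #include <cstdio>

    double amdahl_speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main() {
        // Even code that is 90% parallel gets nowhere near 8x on 8 cores.
        const int core_counts[] = {1, 2, 4, 8, 16};
        for (int n : core_counts) {
            std::printf("p = 0.90, n = %2d -> %.2fx\n", n, amdahl_speedup(0.90, n));
        }
        // Purely serial code gains nothing, no matter how many cores you add.
        std::printf("p = 0.00, n = 16 -> %.2fx\n", amdahl_speedup(0.0, 16));
    }
    ```

    Even with 90% of the code parallel, 8 cores deliver only about a 4.7x speedup and 16 cores about 6.4x, which is exactly the point above: the code, not the core count, sets the limit.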

    And you can't go about comparing CPUs with GPUs. They are entirely different beasts. GPU cores run programs called shaders: tiny programs written in a language close to the GPU's machine language. Each component/object in a 3D environment has some shader applied to it, be it to reflect stuff, to give the object a surface finish, or to create some lighting effect. The shader affects how the object ultimately looks on the screen.
    Because of this, GPUs are good at doing parallel work: they have a huge collection of custom cores, each only capable of performing tasks you can define in shader language. Each shader is separate from its neighbour and works independently. This makes GPU work truly parallel.

    Not so with a normal PC. Any OS is usually one big program that loads/unloads sub-programs into some execution environment - no one gets direct access to the hardware. Only when we see software vendors make software (and OSes) that is properly parallel from the word go will we see a true advantage.

    Don't for one moment think an eight-core 1.4 GHz processor will beat a single-core 4 GHz one in some benchmark today. The software available to us is not nearly as optimised as it should be.

    • Danny Stieben
      March 23, 2012 at 10:35 pm

      Thanks for the info, Daniel. Great explanation.

      It's true that today's software isn't as optimized as it should be, but this article is talking about the future. Maybe more programs will work better with more cores by then.

      • Conrep
        March 25, 2012 at 12:39 am

        It's not really the quality of code that's the issue (although it can be) - it's how suited the job is for multitasking. Graphics Processing Units have huge amounts of data that need to be processed in real time - but it's just one type of operation. There's little decision-making to be done based on previous data - it's just doing the finishing work on whatever the CPU has decided.
        Most programs don't work like this, however. The next state of a program is usually highly dependent on the many, many states that have come before it. If you're driving a single car, the choice of path you take depends on many variables - road conditions, traffic, the signs telling you how to get there. If you're in a busy city, you can't always count on being able to take the same routes to your destination. You may know in advance that you want to turn at one intersection, but until you actually get there, you don't know for sure that it's possible. On top of that, you're driving just a single car - you can't take two roads at once.
        If you can find a way to subdivide tasks - tasks that don't require other tasks to come before them - then you can multitask. You can have one thread watching for input while another thread processes input already received, and another works on displaying data that's already been processed. A thread may still need tasks to be completed by other threads before it can do anything, but once it has the data, it can work independently while the previous thread is freed up to deal with the next batch.
        More cores allow for more to be processed simultaneously; they don't actually speed things up. If there isn't any way to efficiently distribute the workload, then more cores do nothing for you; you're constrained by how much work a single core can do in a given time frame. If you CAN break up the workload, and have each core handle its own small, independent piece, then you can get much more done in the same amount of time. That's why multicore processors seem to be faster.
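        As a minimal sketch of the kind of pipeline described above, assuming standard C++11 threads (the stage names and batch counts are invented for illustration), one thread can produce batches of input while a second thread independently processes whatever is ready:

        ```cpp
        // Sketch of a two-stage pipeline: a producer thread hands batches of
        // work to a consumer thread through a shared queue. Names are invented.
        #include <condition_variable>
        #include <iostream>
        #include <mutex>
        #include <queue>
        #include <thread>

        std::queue<int> batches;
        std::mutex m;
        std::condition_variable ready;
        bool done = false;

        void producer() {  // e.g. the thread "watching for input"
            for (int batch = 0; batch < 5; ++batch) {
                {
                    std::lock_guard<std::mutex> lock(m);
                    batches.push(batch);
                }
                ready.notify_one();  // wake the consumer: data is waiting
            }
            {
                std::lock_guard<std::mutex> lock(m);
                done = true;
            }
            ready.notify_one();
        }

        void consumer() {  // e.g. the thread processing input already received
            while (true) {
                std::unique_lock<std::mutex> lock(m);
                ready.wait(lock, [] { return !batches.empty() || done; });
                if (batches.empty() && done) break;
                int batch = batches.front();
                batches.pop();
                lock.unlock();  // release the queue while doing the real work
                std::cout << "processed batch " << batch << "\n";
            }
        }

        int main() {
            std::thread p(producer), c(consumer);
            p.join();
            c.join();
        }
        ```

        Each stage works independently once it has data, which is precisely the property that lets extra cores actually help.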

        • Danny Stieben
          March 25, 2012 at 11:15 pm

          Even if the job a program is trying to perform isn't something that can be easily split into subtasks, having multiple cores should still be advantageous when looking at the whole system, as there are multiple programs running at the same time. If they (and their workloads) are more evenly distributed among the cores, then in general the programs will have more CPU cycles available, as there are more CPU cycles in total.

  5. Danny Manno
    March 20, 2012 at 1:00 am

    Take it from someone who has had to study the ins and outs of processors, both CPUs and GPUs, from pipelines to Flynn's taxonomy to RISC and CISC and various architectures and instruction sets: it's not that simple.
    Have you ever tried writing multi-core-optimised code?

    • HackToHell
      March 20, 2012 at 3:59 am

      Hard as hell :)

  6. Kyle
    March 19, 2012 at 7:19 pm

    I'm just curious where you got the Task Manager screen shot from, only because the background is the eagle logo of the University of North Texas (which most people don't use unless it's a university computer).

    • Danny Stieben
      March 23, 2012 at 10:37 pm

      Someone finally saw! :)

      Yes, that is the eagle wallpaper for UNT, but no, it's not a university computer. I'll be attending UNT for my freshman year this coming Fall, and I just found the wallpaper and have it set on my personal computer. So I guess I'm an exception. :)

  7. Sanyata
    March 19, 2012 at 6:54 pm

    I will prefer fewer but better cores over many cores, because better cores mean less power draw, are more optimized for specific applications, will be used to their full potential, and give consistent, smooth performance. I would prefer up to 4 cores, because many tests have shown that this is the sweet spot for many resource-hog applications, and 90-95% of applications out there are still not optimised for even 2 or 4 cores...

    • HackToHell
      March 20, 2012 at 3:58 am

      Many applications still do not use multithreading effectively enough, so powerful cores are better.

      • Danny Stieben
        March 23, 2012 at 10:38 pm

        Again, that is the current state. What about the future?

  8. Indronil
    March 19, 2012 at 6:45 pm

    As long as it is economically feasible and comes out on top... who cares...

  9. Richard Kivinen
    March 19, 2012 at 5:58 pm

    It's not the number of cores, it's how they are used. I agree more are better, but better cores are better.

  10. HackToHell
    March 19, 2012 at 5:12 pm

    Actually, more cores do not affect performance THAT much unless you are video encoding or gaming; that's why the i7 was able to beat AMD's FX.

    • PSK
      March 20, 2012 at 12:09 pm

      For now... wasn't that the point of the article? It's an opinion about speculation. That's what makes choosing a stand difficult at the moment... but if I had to choose one, I would say better cores (with a max limit of 8 cores, servers excluded) will come out on top in the long race as well, simply because we value convenience too much.

      • Danny Stieben
        March 23, 2012 at 10:40 pm

        *nods*

        To each his own until we know more, eh?

    • Kaggy
      March 20, 2012 at 1:47 pm

      Actually, Intel has some sort of technology just for video encoding.
      Can't remember what it was called.

      Processors are quite useless in games; the SSD (just for loading times) and graphics card play a larger role.

      • Danny Stieben
        March 23, 2012 at 10:40 pm

        Intel QuickSync, I believe. Strange name for something like that.

        CPUs can still act as bottlenecks in games if the GPU is extremely powerful and the game is less graphics-demanding.
