
Every computer has a processor, whether it's a small, efficient chip or a large performance powerhouse; without one, it couldn't function at all. Of course, the processor, also called the CPU or Central Processing Unit, is a vital part of a working system, but it isn't the only one.

Today's processors are almost all at least dual-core, meaning the processor package contains two or more separate cores that can each process information. But what are processor cores, and what exactly do they do?

What Are Cores?


A processor core is a processing unit which reads in instructions to perform specific actions. Instructions are chained together so that, when run in real time, they make up your computer experience. Literally everything you do on your computer has to be processed by your processor. Whenever you open a folder, that requires your processor. When you type into a word document, that also requires your processor. Things like drawing the desktop environment, the windows, and game graphics are the job of your graphics card — which contains hundreds of processors to quickly work on data simultaneously — but to some extent they still require your processor as well.

How They Work


The designs of processors are extremely complex and vary widely between companies and even models. Their architectures (currently "Ivy Bridge" for Intel and "Piledriver" for AMD) are constantly being improved to pack the most performance into the least space and power consumption. But despite all the architectural differences, processors go through four main steps whenever they process an instruction: fetch, decode, execute, and writeback.

Fetch

The fetch step is what you expect it to be. Here, the processor core retrieves instructions that are waiting for it, usually from some sort of memory. This could include RAM, but in modern processor cores, the instructions are usually already waiting inside the processor cache. The processor has a register called the program counter which essentially acts as a bookmark, letting the processor know where the last instruction ended and the next one begins.

Decode

Once it has fetched the immediate instruction, the core goes on to decode it. An instruction often involves multiple areas of the processor core, such as its arithmetic circuitry, and the core needs to figure out which. Each instruction contains something called an opcode, which tells the processor core what should be done with the information that follows it. Once the processor core has figured this all out, the different areas of the core can get to work.


Execute


In the execute step, the processor knows what it needs to do and actually goes ahead and does it. What exactly happens here varies greatly depending on which areas of the processor core are being used and what information is fed in. As an example, the processor can do arithmetic inside the ALU, or Arithmetic Logic Unit. This unit connects to different inputs and outputs to crunch numbers and produce the desired result. The circuitry inside the ALU does all the magic, and it's quite complex to explain, so I'll leave that for your own research if you're interested.
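To make the ALU idea concrete, here is a toy version sketched in Python. The opcode names are made up for illustration; a real ALU is a hardware circuit, not a function:

```python
def alu(opcode, a, b):
    """A toy ALU: maps an opcode to an operation on two inputs.
    Real ALUs are combinational circuits; this just illustrates the idea
    that the decoded opcode selects which operation runs on the inputs."""
    operations = {
        "ADD": lambda x, y: x + y,
        "SUB": lambda x, y: x - y,
        "AND": lambda x, y: x & y,
        "OR":  lambda x, y: x | y,
    }
    return operations[opcode](a, b)

print(alu("ADD", 2, 3))  # 5
```

The opcode picks the circuit path, and the operands flow through it to produce a result.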

Writeback

The final step, called writeback, simply places the result of what's been worked on back into memory. Where exactly the output goes depends on the needs of the running application, but it often stays in a processor register for quick access, as the following instructions frequently use it. From there, it's kept available until parts of that output need to be processed again, which can mean it gets written out to RAM.
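All four steps can be sketched as a minimal simulator in Python. The instruction format and register names here are invented purely for illustration and don't correspond to any real instruction set:

```python
# A minimal fetch-decode-execute-writeback loop.
# Each instruction is an (opcode, operand1, operand2, destination) tuple.
program = [
    ("LOAD", 5, None, "r0"),    # r0 = 5
    ("LOAD", 7, None, "r1"),    # r1 = 7
    ("ADD", "r0", "r1", "r2"),  # r2 = r0 + r1
]

registers = {}
pc = 0  # program counter: a bookmark for the next instruction

while pc < len(program):
    instruction = program[pc]             # 1. fetch
    opcode, op1, op2, dest = instruction  # 2. decode
    if opcode == "LOAD":                  # 3. execute
        result = op1
    elif opcode == "ADD":
        result = registers[op1] + registers[op2]
    registers[dest] = result              # 4. writeback
    pc += 1

print(registers["r2"])  # 12
```

The program counter advances after each instruction, which is exactly the "bookmark" role described in the fetch step above.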

It’s Just One Cycle

This entire process is called an instruction cycle. Instruction cycles happen ridiculously fast, especially now that we have powerful processors with high frequencies. Additionally, a multi-core CPU runs this cycle on every core at once, so in the best case data can be crunched roughly as many times faster as your CPU has cores, compared to a single core of similar performance. CPUs also have specialized instruction sets hardwired into the circuitry which can speed up common operations sent to them. A popular example is SSE.
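You can get a rough feel for that per-core parallelism with Python's multiprocessing module, which spreads work across cores. This is only a sketch: the real speedup depends heavily on the workload, the number of physical cores, and process overhead, and is rarely a perfect multiple:

```python
from multiprocessing import Pool

def sum_of_squares(bounds):
    """CPU-bound work: sum of squares over a half-open range."""
    start, end = bounds
    return sum(i * i for i in range(start, end))

if __name__ == "__main__":
    # Split one big job into four chunks; each can run on a separate core.
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(processes=4) as pool:
        parallel_total = sum(pool.map(sum_of_squares, chunks))
    # The parallel result matches the single-core computation.
    assert parallel_total == sum_of_squares((0, 1_000_000))
```

Each worker process runs its own instruction cycles on its own core; the operating system's scheduler decides which core each one lands on.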

Conclusion


Don't forget that this is a very simple description of what processors do; in reality they are far more complex and do a lot more than we realize. The current trend is for processor manufacturers to make their chips as efficient as possible, and that includes shrinking the transistors. Ivy Bridge's transistors are a mere 22nm, and there's still a bit to go before researchers hit a physical limit. Imagine all this processing occurring in such a small space. We'll see how processors improve once we get that far.

Where do you think processors will go next? When do you expect to see quantum processors, especially in personal markets? Let us know in the comments!

Image Credits: Olivander, Bernat Gallemí, Dominik Bartsch, Ioan Sameli, National Nuclear Security Administration

  1. Muhammad Imtiaz
    July 27, 2016 at 10:41 am

    It was a very simple and more effective information about what and how; processors works.
    Thank you very much.

  2. Zohar
    January 26, 2016 at 9:59 am

    This simple description of the basics also highlights the complexity of its technology, which continues to evolve each day. And while l marvel at the genius of man, my inability to process the fact that our planet's still going down the drain is ever growing.

  3. Muhammad imtiaz
    January 9, 2016 at 5:46 am

    I don't get this explanation so well . The thing that i get is :cores are the tracks in processors which are specific for specific function like there are cores for word documents and other are for other functions. Am i right or not please explain!

  4. tahseen
    December 4, 2015 at 4:27 am

    nice info ....thanks

  5. Anonymous
    May 10, 2015 at 9:29 am

    Very useful information for students.....
    Thanks a lot....

  6. Ankit
    March 22, 2015 at 4:59 pm

    Hi,

    This is one of the simplest explanation of what does cores mean. Yo explained it so well. Simple yet so meaningful.

  7. Dimal Chandrasiri
    October 2, 2012 at 12:34 pm

    thanks for taking time to explain this! it''s a really nice article!

  8. Faisal Ahmed
    September 13, 2012 at 1:39 pm

    Tiny device's titanic task explained so simply...

  9. Dimal Chandrasiri
    September 7, 2012 at 6:30 pm

    This article Actually clarified what I was having in my mind! nice explanation! simple bt elegant.

  10. Harshit Jain
    September 7, 2012 at 1:05 pm

    waiting for haswell processors

    • Muhammad Imtiaz
      July 27, 2016 at 10:44 am

      of course

  11. Praveen pandey
    September 5, 2012 at 10:39 am

    very clearly explained topic.

  12. Achraf52
    September 4, 2012 at 3:22 am

    If you got 2 Ghz it means your processor do 2.000.000.000 cycles * Number of cores, so amazing, now what are quantum processors ? please explain .

    • Danny Stieben
      September 19, 2012 at 3:12 am

      Quantum processors work by manipulating individual atoms, but this is such a completely different topic that I can't talk about it in a single comment. It's definitely article-worthy, however.

      • Muhammad imtiaz
        January 9, 2016 at 6:01 am

        I don’t get this explanation so well . The thing that i get is :cores are the tracks in processors which are specific for specific function like there are cores for word documents and other are for other functions. Am i right or not please explain!

        sir please reply me as soon as possible.

  13. japheth godsave
    September 3, 2012 at 4:32 pm

    please i will need an article on how to build a processor, what materials are needed.please i need that.

    • Danny Stieben
      September 19, 2012 at 3:11 am

      I'm sorry, but it's extremely hard to build a processor like the ones you use in regular computers. The parts are extremely tiny and must be manufactured in clean rooms.

  14. japheth godsave
    September 3, 2012 at 4:29 pm

    yeah nice article,I got what i wanted

  15. Naoman Saeed
    September 3, 2012 at 4:02 pm

    So it is about processor and not about core?

    • Danny Stieben
      September 19, 2012 at 3:10 am

      All cores go through the basic process I talked about in the article, so it is about cores. There's not any major distinctions between processors and cores, however, as processors are comprised of cores.

  16. MerVzter Balacuit
    September 3, 2012 at 12:08 pm

    thanks for this a big help

  17. druv vb
    September 3, 2012 at 10:58 am

    Nice article. All of nowadays processors have even more cores than their 2 year old relatives. And it seems that they are going to increase the number of cores using the same processor size. But upto what?
    Till then, the SoC and Embedded Chips and don't know what other nano-level architecture would have taken over.

  18. Ahmed Khalil
    September 3, 2012 at 10:16 am

    Our PC still not make use of the the advantages of multicore

  19. omar elshal
    September 2, 2012 at 10:31 pm

    lol..You made me remember the Computer Architecture course :)
    it was fun
    but i remember cycle was fetch, decode & execute only.

  20. Jeremy Collake
    September 2, 2012 at 10:25 pm

    I apologize for repetitive or redundant comments. Some would show up, but others didn't. It seems they were subject to moderation. I did not mean to sound like a broken record ;p.

  21. Igor Rizvi?
    September 2, 2012 at 9:28 pm

    Btw i just wish that there was soem option to put articles in your favorite (on your makeUseOf profile) that would be great :) or suscribe to the writer

    • Danny Stieben
      September 19, 2012 at 3:07 am

      Maybe that could be implemented at some point! If you use an RSS reader, you can subscribe to the RSS feed on my author page, which can be found by clicking on my name at the very bottom of any MUO page.

  22. Igor Rizvi?
    September 2, 2012 at 9:25 pm

    Finally a full explanation for the multicore cpu,thansk alot.One of the best articles iv found on the web so far :) sharing this

    • Danny Stieben
      September 19, 2012 at 3:06 am

      I'm glad you enjoyed it, Igor!

  23. Judith
    September 2, 2012 at 5:52 pm

    Just from the advances I have seen in my lifetime, I expect the future to be unlimited. One problem I see is that no matter how small you make something, people always want more of it. The "more" always takes up the same amount of space as the less we had before. Then people are still dissatisfied. Too much of the self serving (selfish) ways of some of the younger generation....NOT ALL of the younger generation, but some.

  24. Sebastian Hadinata
    September 2, 2012 at 3:18 pm

    short article but very informative! Thanks!

  25. Pedro Oliva
    September 2, 2012 at 2:30 pm

    excellent!! thanks :)

  26. Ashwin Ramesh
    September 2, 2012 at 10:08 am

    What does a processor-core do? Very well explained in lay-man's terms :) Thanks Danny, for sharing this!

    • Danny Stieben
      September 19, 2012 at 2:59 am

      Glad you liked it, Ashwin!

  27. rama moorthy
    September 2, 2012 at 9:16 am

    Got it .!

  28. Emmanuel
    September 2, 2012 at 7:15 am

    As most people don't know, more and more CPU's are becoming more integrated. Modern CPU's are now integrating other modules in to the dies known as SoC which stands for System on Chip. This means a GPU core, Video, WLAN, Transceiver, Audio modules etc can be integrated into one processor chip. For an example Apple's A5X processor chip, as it's technically a SoC package since it contains other modules and the ARM cores.

  29. Swaroop Super
    September 2, 2012 at 7:00 am

    nice

  30. Emmanuel
    September 2, 2012 at 6:56 am

    I learned about this in my Computer Organization course. Talks about Von Neumann machine, digital logic and gates, CPU architecture, assembly low-level programming, memory management, MIPS, CISC, microcode. Its a fun class:)

  31. Bruce Thomas
    September 2, 2012 at 6:48 am

    I remember back in the old days (not so long ago) when I had to get a math co-processor, which did something inside the black box that I couldn't really tell what, but I knew without it, I was behind the times. Now my smartphone has more power than my company's mainframe back then. Well, that's probably an overstatement, but someday, grandparents will see three-dimensional holograms of grandchildren on their smartphones rather than two-dimensional 5 mp pictures. I wonder how much a holodeck will cost for my attic by then. Better increase the 401k contribution.

  32. Vijaynand Mishra
    September 2, 2012 at 6:44 am

    good info

  33. Kaashif Haja
    September 2, 2012 at 1:04 am

    Few instructions (Floating Point) Instructions require more than ONE cycle!!

  34. Lionel@EngineeringBooks.net
    September 1, 2012 at 10:02 pm

    Thanks for writing this nice informative article.

  35. Paul Girardin
    September 1, 2012 at 9:18 pm

    Thanks for the info!

    Really to the point and enjoyable! :)

  36. Kp Rao
    September 1, 2012 at 9:04 pm

    i like

  37. Jeremy Collake
    September 1, 2012 at 8:53 pm

    The last thing I'll say is do NOT make the mistake that a multi-core / SMP system can improve the performance of a (primarily) single-threaded application. This basic restriction in the flow of logic (instruction X must be done before instruction Y) causes most applications to not benefit from multiple cores in a multiplicative way. In other words, a dual-core CPU isn't going to make everything run twice as fast. It will simply allow more threads to run simultaneously instead of concurrently. Software design is still catching up and attempting to make better use of multiple cores by utilizing more threads.

  38. Jeremy Collake
    September 1, 2012 at 8:17 pm

    Let us not forget that cores these days are changing their definition. At first they were all true physical CPUs. Then Intel added hyper-threaded CPUs, essential 'fake' cores, to improve multi-threaded performance. So now we have logical cores. AMD didn't do this for years, but now they have done a similar thing, except on the opposite end of the spectrum. While Intel's HyperThreaded logical cores only offer maybe 10% of a physical core's performance, AMD's Bulldozer+ (all newer gen) cores are full CPUs *except* they now share *some* computational units with their adjacent cores. For more information on all this, there is http://bitsum.com/pl_when_cpu_affinity_matters.php .

  39. Dhruv Sangvikar
    September 1, 2012 at 7:36 pm

    I am learning computer science. Currently I learnt the 8085 and 8086 processors. They also have the above said working steps. So the basic working process for a processor does not change, right?

    • Jeremy Collake
      September 1, 2012 at 8:37 pm

      At first, a core was simply an additional CPU. Just like an other SMP system. Now, logical cores (like HyperThreading from Intel, or Bulldozer Modules from AMD) share some computational units with their adjacent cores. Thus, they aren't real, fully physical CPUs as they once were. For AMD, this change was recent. The OS scheduler is aware of these designs though and tries to place threads on physical cores when possible, or in the case of AMD, on a core whose pair isn't already occupied. If a thread does get on a low-performance logical core, the scheduler will switch the thread to a better performing core in a later time slice.

      • GrrGrrr
        September 1, 2012 at 9:07 pm

        Hats-off to you Jeremy.

        Thanks for sharing your knowledge. Who could know better than you.

        • Jeremy Collake
          September 2, 2012 at 8:03 am

          Dunno if you are being sarcastic, as I'd like to say I am *sure* there are plenty of people who know more about this subject than me! Still, I am happy to ramble on whenever given a chance, it is my mechanism of avoiding 'real work', lol.

        • GrrGrrr
          September 2, 2012 at 9:58 am

          no, i'm not being sarcastic.

          very few ppl touch these points which u have explained in a lucid way.

      • susendeep dutta
        September 2, 2012 at 6:25 am

        So,is this the reason why AMD processors have less single thread performance than that of Intel and hence gets defeated in all benchmarks? Does Intel's compiler has something to do in this case for AMD Bulldozer's low performance yield?

        • Jeremy Collake
          September 2, 2012 at 7:08 am

          There are many factors, not only how the compiler optimizes the code, but also how the OS CPU scheduler executes it. I've found the Windows scheduler is more tuned for Intel processors, at least when compared to Bulldozer and later AMD architectures. This is why you see some linux CPU schedulers (many exist) out-perform the Windows CPU scheduler on Bulldozer and later AMD platforms. That is one of the best things about linux, being able to pick which CPU scheduler you want, and compile it into the kernel. Also of note is that AMD processors tend to lean more towards the CISC side of the spectrum, while Intel processors lean more towards the RISC side. I don't mean to say one is RISC and the other CISC, I just mean AMD tends to have processors that do more per clock cycle (CISC-like), where-as Intel has processors tend to do less per clock cycle, but have higher frequencies (more clock cycles, more RISC-like). So, lots of variables ;p. The increased granularity of Intel clock cycles also seems to have a positive effect on Windows responsiveness when faced with CPU bound threads equal to or greater than the number of logical processors, based on my tests.

        • Danny Stieben
          September 19, 2012 at 2:12 am

          Thank you so much for all that info! Very helpful. :)

        • Jeremy Collake
          September 2, 2012 at 7:14 am

          I typed a long response... dunno if it will show up after moderation or not. I'll try once more, but make it short. The variables are more than just the compiler's optimization. The other big variable is the OS Scheduler. That is why you will see some linux CPU schedulers out-perform the Windows CPU Scheduler on AMD Bulldozer and above platforms. This demonstrates the deficiency the Windows CPU Scheduler has for newer AMD generation processors (IMHO). Also, AMD processors lean more to the CISC side of the spectrum, where-as Intel processors lean more to the RISC side of the spectrum. Neither is pure RISC or CISC, they are somewhere in between, but I mean to say that AMD tends to do more per clock cycle, where-as Intel does less per clock cycle, but operates at a higher frequency. This increased granularity seems to have a positive effect on responsiveness when faced with a number of CPU bound threads equal to or greater than the number of logical processors.

        • susendeep dutta
          September 3, 2012 at 7:04 am

          You've very good knowledge about processors and you explain everything well.Thanks for this.

      • Dhruv Sangvikar
        September 2, 2012 at 7:35 am

        Thanks!

        • Jeremy Collake
          September 2, 2012 at 7:45 am

          BTW, you may find these benchmarking tools quite useful ...

          CPUEater:
          Normal priority CPU bound threads:
          http://bitsum.com/about_probalance.php#skeptics

          ThreadRacer:
          Normal priority CPU bound threads set to specific CPU affinities
          http://bitsum.com/threadracer.php

          I am working on others ... I have to do considerable research in this area to make sure my software works as it should. The one thing I've not revealed publicly is that Intel processors are, for whatever reason (speculated about above), less affected by CPU monopolization of normal priority threads in Windows than are AMD processors. I am a fan of AMD so do not want to hurt their reputation, and this is not a definitive conclusion, only the results of limited tests on a few processors by each brand.

    • Emmanuel
      September 2, 2012 at 6:58 am

      Ditto. CS student and love assembly code and low level stuff:)

  40. GrrGrrr
    September 1, 2012 at 7:05 pm

    thanks Danny, nice article.

    • Jeremy Collake
      September 1, 2012 at 8:44 pm

      Yes, it is a nice article, though I would argue he is describing a CPU more than a core (which is a CPU). Now the definition has gotten a bit more complicated since cores share computational units for both AMD and Intel processors. However, originally, cores were simply additional CPUs. This article describes the basic procedure of execution of instructions by a CPU/core.

  41. Reý Aetar
    September 1, 2012 at 6:58 pm

    amazing
