A 1000-core processor sounds cool, but what’s it for?
Researchers at the University of California, Davis have created the world's first true 1000-core processor. But what does that mean for us, and should we even care? Let's find out.
To understand what a 1000-core processor really means, we need to understand what a processor is.
Back to basics
Let's get back to basics. A processor manipulates data in binary form—ones and zeroes. Think of it as Morse code, a pattern of dots and dashes that represent something. A short tone is a dot, a long one is a dash. A particular pattern of dots and dashes represents information. In the case of a processor, a high voltage is a one and a low voltage is a zero. A stream of varying voltages translates to a stream of data.
Now come algorithms. An algorithm is a particular sequence of steps that must be followed to get a desired output. In a processor, these steps are usually hard-coded, i.e. the transistors are arranged in patterns that replicate the function of the algorithm. You can't just say “add two numbers” and expect that to happen; the task needs to be broken down into a logical pattern of steps, an algorithm, that the hardware can follow.
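To make that concrete, here's a short Python sketch of how “add two numbers” breaks down into the kind of logic steps a circuit performs. Real adder circuits differ in the details, but the XOR-for-sum, AND-for-carry pattern is the same idea:

def add(a, b):
    # Add two non-negative integers using only logic operations:
    # XOR gives the sum of each bit pair, AND finds the carries.
    while b != 0:
        carry = (a & b) << 1  # positions where a carry is generated
        a = a ^ b             # bitwise sum, ignoring carries
        b = carry             # carries become the next addend
    return a

print(add(5, 7))  # 12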
A processor is thus a set of transistors arranged in patterns and these patterns determine how data is processed.
Any processor essentially performs three tasks:
Fetch data from memory
Process that data (hence the name)
Write result to memory
The memory referred to here is just a storage area for information that needs to be fed to the processor. This can be RAM, a hard disk, pen drive, etc.
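Here's a toy illustration of that fetch-process-write cycle in Python. The instruction names and the dictionary standing in for memory are invented for clarity; a real processor decodes binary instructions, not tuples:

memory = {"a": 2, "b": 3, "result": 0}

def run(program):
    for op, src1, src2, dest in program:
        x = memory[src1]      # 1. fetch data from memory
        y = memory[src2]
        if op == "ADD":       # 2. process that data
            value = x + y
        elif op == "MUL":
            value = x * y
        memory[dest] = value  # 3. write the result to memory

run([("ADD", "a", "b", "result")])
print(memory["result"])  # 5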
More than anything else, these algorithms are what differentiate one CPU from another. Take your smartphone, for instance: while you can watch a video on both your smartphone and your PC, the smartphone does it while consuming far less power and in a more efficient fashion. In the same vein, a dual-core Apple A9 chip can outperform a 10-core mobile chip from MediaTek.
What were we talking about again?
What we're driving at is this: When it comes to processors, the number of cores doesn't matter.
A desktop CPU handles data very differently from a mobile chip, something akin to a car running on petrol versus one running on electricity. Both have their positives and negatives, and neither is better or worse than the other; you pick one based on your needs.
Apple can get by with a dual-core chip and Android can't because Apple's iOS is optimized for two cores while Android is designed to scale across more cores. The argument over which approach is better is irrelevant because Android phones and iPhones both perform admirably at their designed tasks. At the same time, Intel's quad-core chips (say, the Core i7-6700K) are more powerful than many of AMD's quad-core and 8-core chips, but then, AMD's chips (say, the FX-8350) are cheaper. Everything is an exercise in tradeoffs.
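One way to see why raw core counts mislead is Amdahl's law: if only a fraction p of a task can run in parallel, the best possible speedup on n cores is 1 / ((1 - p) + p / n). The 95% figure in the sketch below is an illustrative assumption, but the lesson holds for any workload with a serial portion:

def speedup(p, n):
    # Amdahl's law: upper bound on speedup when a fraction p
    # of the work can be parallelized across n cores.
    return 1 / ((1 - p) + p / n)

for n in (2, 4, 10, 1000):
    print(f"{n:>5} cores: {speedup(0.95, n):.1f}x")
#     2 cores: 1.9x
#     4 cores: 3.5x
#    10 cores: 6.9x
#  1000 cores: 19.6x  (nowhere near 1000x)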
With their 1000-core processor, the researchers at UC Davis have only shown that it's possible to create a chip with 1,000 processors on it. There is still no use-case for such a processor, however; it's a proof-of-concept. Five years earlier, scientists at the University of Glasgow had also claimed to have created a 1000-core processor. Only in their case, they encoded 1,000 processors on a single FPGA (Field-Programmable Gate Array) chip rather than designing a chip with 1,000 discrete processors.
An explanation of FPGAs is beyond the scope of this story; suffice it to say that these chips can be programmed in the field. I'd also like to point out that there hasn't been any real-world demand for that chip either.
Kudos to the researchers for doing what they did. The effort that went into creating a 1000-core processor might have laid the foundation for computers of the future. Today, however, there's no real use for it, and that's perfectly fine.