Why does Moore's Law necessitate parallel computing? [closed]

This was a question in one of my CS textbooks. I am at a loss. I don't see why it necessarily would lead to parallel computing. Anyone wanna point me in the right direction?

Meris asked 6/2, 2009 at 22:16 Comment(0)

Moore's law just says that the number of transistors on a reasonably priced integrated circuit tends to double every 2 years.

Observations about speed or transistor density or die size are all somewhat orthogonal to the original observation.
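
To put rough numbers on the doubling, here is a quick back-of-the-envelope projection (a sketch only; the 1971 Intel 4004 baseline of ~2,300 transistors and a strict two-year doubling period are illustrative assumptions, not part of the original observation):

    # Project transistor counts under a strict 2-year doubling period.
    # The 1971 Intel 4004 (~2,300 transistors) is used purely as an
    # illustrative starting point.
    def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for y in (1971, 1981, 1991, 2001, 2011):
        print(y, f"{transistors(y):,.0f}")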

Here's why I think Moore's law leads inevitably to parallel computing:

If you keep doubling the number of transistors, what are you going to do with them all?

  • More instructions!
  • Wider data types!
  • Floating Point Math!
  • More caches (L1, L2, L3)!
  • Micro Ops!
  • More pipeline stages!
  • Branch prediction!
  • Speculative execution!
  • Data Pre-Fetch!
  • Single Instruction Multiple Data!

Eventually, when you've implemented all the tricks you can think of to use all those extra transistors, you think to yourself: why don't we just do all those cool tricks TWICE on the same chip?

Bada bing. Bada boom. Multicore is inevitable.


Incidentally, I think the current trend of CPUs with multiple identical cores will eventually subside as well, and the real processors of the future will have a single master core, a collection of general-purpose cores, and a collection of special-purpose coprocessors (like a graphics card, but on-die with the CPU and caches).

The IBM Cell processor (in the PS3) is already somewhat like this. It has one master core and seven "synergistic processing units".

Calie answered 6/2, 2009 at 23:12 Comment(0)

One word - Heat.

Due to an inability to dissipate heat at current transistor densities, engineers are using their ever-growing transistor budgets to create more cores instead of creating more complex (and hotter) pipelines and faster processors.

Moore's law is not at all dead - Moore's law is about transistor density at a given cost. It just so happens that, for various reasons (like marketing), engineers decided to use their transistor budget to increase clock speed. Now they have decided (because of the heat issue) to start using the transistors for parallelism, plus 64-bit computing and reduced power consumption.
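
To put numbers on the heat argument, here is a minimal sketch using the usual first-order CMOS dynamic-power relation P ≈ a·C·V²·f; the capacitance, voltage, and frequency values are made up purely to illustrate why two slower cores can beat one faster, hotter one:

    # First-order CMOS dynamic power: P ≈ activity * C * V^2 * f.
    # All values below are illustrative, not measurements.
    def dynamic_power(c_farads, v_volts, f_hz, activity=1.0):
        return activity * c_farads * v_volts ** 2 * f_hz

    C, V, F = 1e-9, 1.0, 3e9                     # one core: 1 nF switched, 1.0 V, 3 GHz
    one_fast = dynamic_power(C, 1.3, 2 * F)      # doubling f usually needs a higher V too
    two_slow = 2 * dynamic_power(C, V, F)        # two cores at the original V and f

    print(f"one 6 GHz core : {one_fast:.1f} W")  # ~10.1 W
    print(f"two 3 GHz cores: {two_slow:.1f} W")  # ~6.0 W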

Euphonious answered 6/2, 2009 at 22:43 Comment(0)

Moore's law describes the trend that the performance of chips effectively doubles due to the addition of more transistors to an integrated circuit.

Since devices are not increasing in size (if anything, the reverse is true), the space for these additional transistors only becomes available because chip technology keeps shrinking and manufacturing keeps improving.

At some point, however, you reach the stage where transistors cannot be shrunk any further. It also becomes impractical to increase the size of chips beyond a certain point because of the amount of heat generated and the manufacturing costs involved.

These limits necessitate a means of increasing performance beyond simply producing more complex chips.

One such method is to employ cheaper and less complex chips in parallel architectures; another is to move away from the traditional integrated chip to something like quantum computing - which by its very definition is parallel processing.

It's worth noting that the title of this question relates more to the observed results of the law (performance increase) rather than the actual law itself which was largely an observation about transistor count.

Jemine answered 6/2, 2009 at 22:22 Comment(2)
The question was "why it would lead to parallel computing". You do not answer that. And I'm astonished that parallel processing should work without an increase in size - so the size of a single transistor can't be the answer why parallel processing is necessary.Curule
The original version of Moore's law is not about the speed of circuits, just their size. But, like hard-disk density, speed is quite often labelled as following it. And all those exponential growth phenomena have their own cycle.Mary

I think it is a reference to Herb Sutter's "The Free Lunch Is Over" article.

Basically, the original version of Moore's law, about transistor density, still holds. But one important derived law, about processing speed doubling every xx months, has hit a wall.

So we are facing a future where processor speeds will go up only slightly, but we will have more cores and cache to play with.

Mary answered 6/2, 2009 at 22:51 Comment(0)

That is an odd question. Moore's law doesn't necessitate anything; it is just an observation of the progression of computing power, and it doesn't dictate that it must increase at a certain rate.

Poussette answered 6/2, 2009 at 22:21 Comment(2)
I see your point, but I think it can be safely assumed that the question really is "If the trend that Moore's Law describes is to be followed for a really long time, if not indefinitely, why is parallel processing necessary."Dissymmetry
Under that interpretation, I agree with Andrew's answer.Poussette

Increasing the speed of processors would make the operating temperature so high it would burn a hole in your desk. The makers of the chips are running up against certain limitations they can't get around... like the speed of light, for instance. Parallel computing will allow them to speed up the computers without starting a fire.
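
To make the speed-of-light point concrete, here is a quick sketch of how far a signal could possibly travel during one clock period at various frequencies (real on-chip signals propagate well below c):

    # Upper bound on signal travel distance per clock period (speed of light).
    c = 299_792_458.0  # metres per second

    for f_ghz in (1, 3, 10, 100):
        period_s = 1.0 / (f_ghz * 1e9)
        print(f"{f_ghz:>3} GHz: period {period_s * 1e12:6.1f} ps, "
              f"light covers at most {c * period_s * 100:5.1f} cm")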

Amadavat answered 6/2, 2009 at 22:20 Comment(4)
How is the speed of light involved?Euphonious
You have to distribute the clock signal around the chip and keep it synchronized. It takes finite time to get from one side of the chip to the other. As clocks get faster, it makes a difference.Cormier
In all honesty, I'm not positive. I'm not a physicist, but Stephen Hawking is. blog.wired.com/business/2007/09/idf-gordon-mo-1.htmlAmadavat
@Mo At high clock speeds, the speed of light becomes a limiting factor. Roughly speaking, at the speed of light 1 nanosecond is a foot (30cm).Generatrix

Transistors and CPUs and whatnot are getting smaller and smaller and faster and faster. Alas, the heat and power costs of computing are going up. The heat and power issues are as much of a concern as the actual physical size minimums. A 100 GHz chip would draw too much power and get too hot, but 100 1 GHz chips would have much less of an issue with this.

Busby answered 6/2, 2009 at 22:21 Comment(0)

Interestingly, the idea proposed in the question that parallel computing is "necessitated" is thrown into question by Amdahl's Law, which basically says that having parallel processors will only get you so far unless 100% of your program is parallelizable (which is never the case in the real world).

For example, if you have a program which takes 20 minutes on one processor and is 50% parallelizable, and you buy a large number of processors to speed things up, your minimum time to run would still be over 10 minutes. This is ignoring the cost and other issues involved.
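
A quick check of that arithmetic with Amdahl's law, where the speedup on n processors of a program with parallelizable fraction p is 1 / ((1 - p) + p/n); the core counts below are just illustrative:

    # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n) for parallel fraction p.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    serial_minutes = 20.0
    p = 0.5  # 50% parallelizable, as in the example above

    for n in (2, 4, 16, 1024):
        print(f"{n:>4} processors: {serial_minutes / speedup(p, n):.2f} minutes")

Even with 1024 processors the run still takes just over 10 minutes, which matches the point above.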

Typecase answered 21/2, 2009 at 1:3 Comment(0)

The real answer is completely un-technical, not that the hardware explanations aren't fantastic. It's that Moore's Law has become less and less of an observation, and more of an expectation. This expectation of computers growing exponentially has become the driving force of the industry, which necessitates all the parallelism.

Cider answered 6/2, 2009 at 22:56 Comment(0)

Moore's law says that the number of transistors in an IC relative to cost increases exponentially year on year.

Historically, this was partly due to a decrease in transistor size, and smaller transistors also switched faster. Because you got faster transistors in step with Moore's law, clock speed increased. So there's a common confusion that says Moore's law means faster processors rather than just wider ones.

Heat dissipation caused the speed increase to top out at around 3 GHz for economically produced silicon.

So if you want more cheap computation, it's easier to add more, slower circuits. That is why current state-of-the-art commodity processors are multi-core: they are getting wider, but not faster.

Graphene film transistors require less power, and are performing at around 30 GHz, with theoretical limits at around 0.6 THz.

When graphene technology matures to commodity level in a few years, expect another sea change: no one will care about using parallel cores for performance, and we'll go back to narrow, fast cores. On the other hand, concurrent computing will still matter for the problems it is a natural fit for, so you'll still have to know how to handle more than one execution unit.

Kop answered 6/2, 2009 at 23:8 Comment(1)
Good explanation of speed vs. density. Do you think graphene film transistors are suitable for mass production?Cormier

Because orthogonal computing has failed. We should go quantum.

Chloechloette answered 6/2, 2009 at 22:19 Comment(2)
Well, failed is a strong word :-)Cormier
yeah, i know. it's not constructive :)Unrefined

Moore's law necessitates parallel computing because Moore's law is on the verge of dying, if not dead already. Taking that into consideration, if it is becoming harder and harder to cram transistors onto an IC (due to some of the reasons noted elsewhere), then the remaining options are to add more processors, i.e. parallel processing, or to go quantum.

Helmet answered 6/2, 2009 at 22:53 Comment(0)

Moore's law still holds. Transistor counts are still increasing. The problem is figuring out something useful to do with all those transistors. We can't just keep increasing the instruction level parallelism by making pipelines deeper and wider because the circuitry necessary to prove independence between instructions scales terribly in the number of instructions you need to prove independence of. We can't just keep cranking up clock speeds because of heat. We could just keep increasing cache size, but we've hit a point of diminishing returns here. The only use left for the transistors seems to be putting more cores on a chip, which means that the engineer's job of figuring out what to do with the transistors is just pushed up the abstraction ladder, and now programmers have to figure out what to do with all those cores.
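
A minimal sketch of what that hand-off to programmers looks like in practice: the hardware no longer speeds up a serial loop for free, so the work has to be split across cores explicitly (Python's multiprocessing is used here only as one illustrative way to do that):

    # The extra transistors now arrive as extra cores; spreading work across
    # them is the programmer's job, not the hardware's.
    from multiprocessing import Pool

    def work(n):
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 8
        with Pool() as pool:                 # one worker per available core by default
            results = pool.map(work, jobs)   # chunks run on separate cores
        print(sum(results))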

Hooknosed answered 6/2, 2009 at 23:28 Comment(0)

I don't think Moore's law necessitates parallel computing, but it does necessitate an eventual shift away from pure miniaturization. Multiple solutions exist. One of them is parallel computing; another is co-processing (which is related to, but not the same thing as, parallel computing: co-processing is when you offload work to a special-purpose processor such as a GPU, DSP, etc.).

Melanie answered 21/2, 2009 at 1:9 Comment(0)

I honestly don't really know, but my guess would be that transistors at some point could get no smaller, requiring processing power to be spread out in parallel.

Dissymmetry answered 6/2, 2009 at 22:21 Comment(2)
Transistors are getting smaller at the same rate.Euphonious
Yeah... I guess I'm not sure what you are getting at though.Dissymmetry

It's because we're all addicted to increasing speed in our processors. Years of conditioning have led us to expect more processing power, year after year. But the physical constraints caused by densely packed transistors have finally put a limit on clock speeds, so further increases have to come from a different direction.

It doesn't have to be this way. The success of the Intel Atom processor shows that processors could just get smaller and cheaper instead. The processor companies will try to keep us on the "bigger, faster" treadmill though, to keep their profits up. And we'll be willing participants, because we'll always find a way to use more power.

Keefe answered 6/2, 2009 at 23:20 Comment(0)
