### 26.11.13

## Amdahl's law for predicting the future of multicores considered harmful

The following will be the subject of a talk at Edinburgh on Thursday. I won't attend, because it's Thanksgiving, but it looks interesting.

Amdahl's law for predicting the future of multicores considered harmful

B. H. H. Juurlink and C. H. Meenderinck, ACM SIGARCH Computer Architecture News, Volume 40, Issue 2, May 2012, Pages 1-9.

Several recent works predict the future of multicore systems or identify scalability bottlenecks based on Amdahl's law. Amdahl's law implicitly assumes, however, that the problem size stays constant, but in most cases more cores are used to solve larger and more complex problems. There is a related law known as Gustafson's law which assumes that the runtime, not the problem size, is constant. In other words, it is assumed that the runtime on p cores is the same as the runtime on 1 core and that the parallel part of an application scales linearly with the number of cores. We apply Gustafson's law to symmetric, asymmetric, and dynamic multicores and show that this leads to fundamentally different results than when Amdahl's law is applied. We also generalize Amdahl's and Gustafson's laws and study how this quantitatively affects the dimensioning of future multicore systems.
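The two laws in the abstract are easy to compare directly. Here is a small sketch (the serial fraction f = 0.05 and the core counts are illustrative values, not taken from the paper):

```python
# Amdahl: fixed problem size, so the serial fraction caps speedup.
# Gustafson: fixed runtime, so the parallel part grows with the cores.

def amdahl_speedup(f, p):
    """Speedup on p cores when a fraction f of the work is serial
    and the problem size stays constant."""
    return 1.0 / (f + (1.0 - f) / p)

def gustafson_speedup(f, p):
    """Scaled speedup on p cores when the runtime stays constant
    and the parallel part scales linearly with p."""
    return f + (1.0 - f) * p

if __name__ == "__main__":
    for p in (4, 16, 64):
        print(f"p={p:3d}  Amdahl={amdahl_speedup(0.05, p):6.2f}  "
              f"Gustafson={gustafson_speedup(0.05, p):6.2f}")
```

Even with only 5% serial work, Amdahl's speedup saturates near 1/f = 20 while Gustafson's grows without bound, which is why the choice of law changes the conclusions one draws about future multicores.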

Labels: Computing, Concurrency

Comments:


Shouldn't the parallelism scale with the amount of work done, not with the number of cores? This has implications if the amount of time used is kept constant.

If the fraction of the original (1-core) problem which has to be executed serially is 1/c, and the serial portion scales with the square root of the total amount of work, then you can never do more than c^2 times as much work in the same time.

So, say c = 3. On one core, we can do 100 units of work in 1 unit of time, 33 of which have to be done serially. If we have 9 cores, then we can do 900 units of work in the same 1 unit of time: the serial portion grows to 33 * sqrt(9) = 99 units, so one core does those 99 serial units plus 1 unit of the parallel work, and the other 8 cores do 100 units of the parallel work each.

On the other hand, if we try to do, say, 1600 units of work, we now have 33 * sqrt(16) = 132 units of serial work that need to be done. This is going to take 1.32 units of time, no matter how many cores we throw at the problem.

Note that this is still better than what Amdahl's law would give you: with a constant serial fraction, 1600 units of work would include 16 * 33 = 528 units of serial work, and would thus take 5.28 units of time.

The scaling limits still exist; they're just not quite as limiting as you might expect.
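The arithmetic above can be checked with a few lines of Python. This is a sketch of the commenter's model only (a 100-unit baseline with 33 serial units, c = 3, serial work growing with the square root of total work); the names are made up for illustration:

```python
import math

BASE_WORK = 100.0    # units of work on one core in one unit of time
BASE_SERIAL = 33.0   # serial units in the baseline problem (c = 3)
RATE = 100.0         # units of work one core completes per unit of time

def serial_work(total_work):
    """Serial units when serial work scales with sqrt(total work)."""
    return BASE_SERIAL * math.sqrt(total_work / BASE_WORK)

def min_time(total_work):
    """Lower bound on runtime with unlimited cores: the serial part
    alone, executed on a single core."""
    return serial_work(total_work) / RATE

if __name__ == "__main__":
    print(serial_work(900))    # 99.0 serial units at 9x the work
    print(min_time(1600))      # 1.32 time units at 16x the work
```

Under this model the break-even point is c^2 = 9 times the original work: beyond that, the sqrt-scaled serial part alone exceeds one unit of time.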

