Amdahl's Law is a formula for determining the theoretical speedup from parallelizing a function. For example, imagine we want to parallelize a function that spends 90% of its time in one algorithm. If there is a parallel version of that algorithm, how much faster would the entire function run with 2, 4, or more cores?

We define the speedup S of the function as the ratio between the time it takes one processor to execute the function versus the time it takes n processors. The single-threaded execution time can be normalized to 1 unit. Thus, if p is the fraction of the function that is parallelized, the parallel time is 1-p + p/n, and the speedup is:

S = 1/(1-p+p/n)
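
As a quick sanity check, here is a minimal Python sketch of this formula (the name amdahl_speedup and its signature are illustrative, not from any particular library):

    def amdahl_speedup(p: float, n: int) -> float:
        """Theoretical speedup predicted by Amdahl's Law.

        p -- fraction of the function that is parallelized (0 <= p <= 1)
        n -- number of processors
        """
        return 1.0 / ((1.0 - p) + p / n)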

In the example above, p=0.9. So for 2 cores, we have

1/(0.1+0.9/2) = 1/(11/20) ~ 1.82

For 4 cores we have

1/(0.1+0.9/4) = 1/(13/40) ~ 3.08
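
Using the sketch above, both results can be reproduced in a Python session:

    >>> round(amdahl_speedup(0.9, 2), 2)
    1.82
    >>> round(amdahl_speedup(0.9, 4), 2)
    3.08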

We can see that the function is not scaling particularly well. Now, what happens when n goes to infinity?

lim (n -> infinity) 1/(1-p+p/n) = 1/(1-p)

For our example

1/(1-p) = 1/(1-0.9) = 10
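
We can also confirm the limit numerically with the same sketch; as n grows, the speedup approaches 10:

    >>> for n in (10, 100, 1000, 10**6):
    ...     print(n, round(amdahl_speedup(0.9, n), 3))
    10 5.263
    100 9.174
    1000 9.911
    1000000 10.0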

Thus, for our example, the maximum speedup is 10 times, even with an unlimited number of processors.

We can also see that the factor limiting the speedup is the fraction of serial (non-parallel) code. In fact, the most important consequence of Amdahl's Law is that even small sections of serial code will limit the potential speedup. Thus, Amdahl's Law tells us that if you want a function to scale linearly with the number of processors, you must parallelize essentially all of it.
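
To put numbers on this, here is the asymptotic bound 1/(1-p) evaluated for a few values of p; even a 5% serial fraction caps the speedup at 20 times:

    >>> for p in (0.5, 0.9, 0.95, 0.99):
    ...     print(p, round(1 / (1 - p), 1))
    0.5 2.0
    0.9 10.0
    0.95 20.0
    0.99 100.0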

