
Scalability

Scalability is the ability to take advantage of more processors in order to achieve larger speedup. Unfortunately, Amdahl's law (Freeman & Philipps, 1992), stated formally in (17.2) below, says that ``The serial part of the computation (s) sets a lower limit to execution time, even if parallelism (p represents the parallel fraction, p+s=1) is fully accomplished''. For example, if an application can parallelize 80% of its code ($p=0.8$, $s=0.2$), the remaining 20% being initialization code that cannot run in parallel, then its execution time can never drop below the time needed for that 20% to run, even if the other 80% runs on a million processors.


 \begin{displaymath}
S(np)=\frac{t(1)}{t(np)}=\frac{1}{p/np+s}
\end{displaymath} (17.2)

where $t(np)$ is the execution time on $np$ processors and $S(np)$ the resulting speedup.
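Letting $np$ grow without bound in (17.2) makes the ceiling explicit:

 \begin{displaymath}
\lim_{np \to \infty} S(np) = \frac{1}{s}, \qquad \mbox{e.g. } s=0.2 \;\Rightarrow\; S(np) < 5 \mbox{ for all } np .
\end{displaymath}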

This simple and intuitive statement deals a serious blow to parallel computing, because it shows that, for a fixed problem size, the speedup can never exceed $1/s$ no matter how many processors are used, so scalable computing is impossible. In practice, however, a machine with more processors is usually used to solve larger problems, and the serial component of a program often does not grow linearly with the problem size.
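As a concrete illustration, the following minimal Python sketch (assuming the 80%/20% split of the example above) tabulates (17.2) for growing processor counts:

\begin{verbatim}
# Amdahl's law (17.2): fixed-size speedup S(np) = 1 / (p/np + s).
# Assumes p = 0.8 and s = 0.2, matching the 80%/20% example above.
def amdahl_speedup(np_, p=0.8):
    s = 1.0 - p                  # serial fraction
    return 1.0 / (p / np_ + s)

for np_ in (1, 2, 4, 8, 64, 1024, 10**6):
    print(f"np = {np_:>7}: S(np) = {amdahl_speedup(np_):.3f}")
# The speedup approaches, but never reaches, 1/s = 5.
\end{verbatim}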

If the problem in our example above were twice, or better still four times, its original size (for example, halving the grid cell size quadruples the number of grid points over which to calculate), then the initialization time could drop to roughly 10%, or even less, of the total running time instead of 20%. This amounts to an increase in the task granularity (see below). A sketch of this scaled-problem effect follows.
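This is essentially Gustafson's observation about scaled speedup. A minimal sketch (assuming the serial initialization time stays fixed at 20 time units while the parallel work, 80 units at the original size, grows linearly with problem size; all numbers are illustrative):

\begin{verbatim}
# Scaled-problem view: the serial initialization time is assumed
# fixed (20 time units) while the parallel work (80 units at the
# original size) grows linearly with the problem size.
def serial_fraction(scale, t_serial=20.0, t_parallel=80.0):
    return t_serial / (t_serial + t_parallel * scale)

for scale in (1, 2, 4):
    s = serial_fraction(scale)
    print(f"problem size x{scale}: s = {s:.3f}, ceiling 1/s = {1/s:.1f}")
# s falls from 0.200 to 0.111 to 0.059, so the Amdahl ceiling
# 1/s rises from 5 to 9 to 17 as the problem grows.
\end{verbatim}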

