Data and Task Parallelism

At a high level, there are two ways of decomposing work across multiple threads. Data parallelism means performing the same operation on many data elements in parallel. A typical example is a loop whose iteration range is split so that each thread works on a part of it. The benefit of data parallelism is that it usually scales to high core counts, provided the loop contains enough work. The downside is that it applies only to certain types of workloads.
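As a minimal sketch of data parallelism in C++ (the vector contents and the squaring operation are illustrative, not taken from any particular library), each thread below applies the same operation to its own disjoint slice of a shared vector:

    #include <algorithm>
    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<int> data(1000);
        std::iota(data.begin(), data.end(), 1);  // 1, 2, ..., 1000

        // hardware_concurrency() may return 0, so fall back to 1.
        const unsigned nthreads =
            std::max(1u, std::thread::hardware_concurrency());
        const std::size_t chunk = data.size() / nthreads;

        std::vector<std::thread> pool;
        for (unsigned t = 0; t < nthreads; ++t) {
            const std::size_t begin = t * chunk;
            const std::size_t end =
                (t + 1 == nthreads) ? data.size() : begin + chunk;
            // Each thread performs the same operation on its own slice,
            // so no synchronization is needed inside the loop.
            pool.emplace_back([&data, begin, end] {
                for (std::size_t i = begin; i < end; ++i)
                    data[i] *= data[i];
            });
        }
        for (auto& th : pool) th.join();

        std::printf("data[999] = %d\n", data[999]);  // 1000 * 1000
        return 0;
    }

Because the slices are disjoint, the threads never touch the same element, and adding more cores simply shrinks each slice.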

Task parallelism means running different tasks in parallel, for example a compute thread, a UI thread, and a graphics thread. The benefit is that different parts of the system can work in parallel, for example the CPU and the GPU. The downside is limited scalability: the number of distinct tasks is usually small, and the tasks are unlikely to be equally expensive.
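A minimal sketch of task parallelism with std::async, where compute_physics and render_frame are hypothetical placeholders for real work: the two tasks overlap in time, but no number of extra cores can make them more than two-way parallel:

    #include <cstdio>
    #include <future>

    // Two unrelated tasks: they do different work, so they can
    // overlap in time but do not scale beyond the number of tasks.
    int compute_physics() { /* ... heavy simulation step ... */ return 42; }
    void render_frame()   { /* ... draw the current scene ... */ }

    int main() {
        // Launch the compute task on another thread while this
        // thread handles rendering: task parallelism in miniature.
        auto physics = std::async(std::launch::async, compute_physics);
        render_frame();

        std::printf("physics result: %d\n", physics.get());
        return 0;
    }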

A well-threaded application would use both data and task parallelism, possibly at the same time. In addition, it would be designed to be scalable, so that the higher core counts of future processors are exploited automatically.
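The two styles compose. The sketch below, assuming a C++17 standard library with parallel algorithm support (on GCC this typically means linking against TBB), runs a data-parallel transform as one task while the main thread stays free for other work:

    #include <algorithm>
    #include <cstdio>
    #include <execution>
    #include <future>
    #include <vector>

    int main() {
        std::vector<double> samples(1'000'000, 1.5);

        // Task parallelism: the transform runs as one task while the
        // main thread does other work (here just a placeholder print).
        auto worker = std::async(std::launch::async, [&samples] {
            // Data parallelism inside the task: the parallel algorithm
            // spreads the per-element work across the available cores,
            // so it scales automatically with future core counts.
            std::for_each(std::execution::par,
                          samples.begin(), samples.end(),
                          [](double& x) { x = x * x; });
        });

        std::puts("main thread free for UI / I/O work meanwhile");
        worker.get();
        std::printf("samples[0] = %g\n", samples[0]);  // 2.25
        return 0;
    }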