The execution policy specifies which strategy we allow for the automatic parallelization of our standard algorithm calls.
The following three policy types exist in the std::execution namespace:
| Policy | Meaning |
| --- | --- |
| sequenced_policy | The algorithm has to be executed sequentially, similar to the original algorithm without an execution policy. The globally available instance is named std::execution::seq. |
| parallel_policy | The algorithm may be executed by multiple threads that share the work. The globally available instance is named std::execution::par. |
| parallel_unsequenced_policy | The algorithm may be executed by multiple threads sharing the work. In addition, the code may be vectorized, so container access can be interleaved between threads and also within a single thread. The globally available instance is named std::execution::par_unseq. |
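To make the policies concrete, here is a minimal sketch of the same std::sort call with each of the three policies. It assumes a C++17-conforming standard library; with GCC's libstdc++, the parallel overloads may additionally require linking against Intel TBB (-ltbb).

```cpp
#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> v(1'000'000);
    std::iota(v.begin(), v.end(), 0);

    // Sequential: behaves like the classic overload without a policy.
    std::sort(std::execution::seq, v.begin(), v.end());

    // Parallel: multiple threads may share the work.
    std::sort(std::execution::par, v.begin(), v.end());

    // Parallel and unsequenced: multiple threads plus vectorization
    // within each thread.
    std::sort(std::execution::par_unseq, v.begin(), v.end());
}
```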
Each execution policy imposes specific constraints on us. The stricter the constraints we accept, the more aggressive the parallelization strategies the algorithm may use:
- All element access functions used by the parallelized algorithm must not cause deadlocks or data races
- When parallelism is combined with vectorization (par_unseq), the access functions must additionally not use any kind of blocking synchronization
As long as we comply with these rules, we should be free from bugs introduced by using the parallel versions of the STL algorithms.
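As an illustration of these constraints, the following sketch accumulates a sum with std::for_each; the variable names are hypothetical. An unsynchronized shared counter would be a data race under std::execution::par, an atomic counter avoids it, and a mutex inside the element access function is off-limits under std::execution::par_unseq because it is blocking synchronization.

```cpp
#include <algorithm>
#include <atomic>
#include <execution>
#include <vector>

int main() {
    std::vector<int> v(100'000, 1);

    // Broken with std::execution::par: unsynchronized writes to a plain
    // int from multiple threads would be a data race.
    // int sum = 0;
    // std::for_each(std::execution::par, v.begin(), v.end(),
    //               [&](int x) { sum += x; });

    // OK with std::execution::par: the atomic removes the data race.
    std::atomic<int> sum{0};
    std::for_each(std::execution::par, v.begin(), v.end(),
                  [&](int x) { sum += x; });

    // Not allowed with std::execution::par_unseq: locking a mutex is
    // blocking synchronization, which the vectorized policy forbids
    // inside the element access function.
    // std::mutex m;
    // std::for_each(std::execution::par_unseq, v.begin(), v.end(),
    //               [&](int x) { std::lock_guard lock{m}; sum += x; });
}
```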