10 Parallel Algorithms
In this block we cover:
- What is a parallel computer?
- How to design code that parallelises
- Parallelism and complexity
- Computation graphs
- How to conceptualise parallelism, including:
  - Vectorisation
  - Reduce and accumulate
  - Map, and Map-Reduce
- Practical experience with parallelism, including:
  - Benchmarking code
  - the `multiprocessing` parallelisation library
  - Map, starmap, accumulate and reduce in Python
  - Asynchronous and synchronous parallelisation
  - Running parallelised scripts from inside Jupyter (a cross-platform solution)
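The sequential building blocks listed above (map, reduce, accumulate, vectorisation) can be sketched as follows. This is a minimal illustration using the standard library and NumPy (assumed installed); the data and functions are illustrative, not taken from the course material.

```python
from functools import reduce
from itertools import accumulate
import numpy as np

data = [1, 2, 3, 4]

# map: apply the same function independently to every element
squares = list(map(lambda x: x * x, data))   # [1, 4, 9, 16]

# reduce: combine all elements into a single value
total = reduce(lambda a, b: a + b, squares)  # 30

# accumulate: like reduce, but keep every intermediate result
running = list(accumulate(squares))          # [1, 5, 14, 30]

# vectorisation: NumPy applies the operation to the whole array at once
vec_squares = np.array(data) ** 2            # array([ 1,  4,  9, 16])
```

Map applies independently to each element, which is why it parallelises so readily; reduce and accumulate impose an ordering that a parallel version must work around.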
Lectures
- Introduction to Parallelism
- 10.1.1 Introduction to Parallelism (Part 1, Parallel computers) (23:48)
- 10.1.2 Introduction to Parallelism (Part 2, Vectorisation, Mapping and Reducing) (38:03)
Workshop:
- 10.2 Coding Parallel Algorithms (45:08)
- The following scripts should be placed in the directory from which you run the workshop notebook. They are derived from the workshop content and are easily replicated; their use is discussed in the workshop.
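A minimal sketch of the `multiprocessing` patterns the workshop covers (map, starmap, and an asynchronous variant); the functions and pool size here are illustrative, not the workshop's exact scripts.

```python
from multiprocessing import Pool

def square(x):
    return x * x

def power(base, exp):
    return base ** exp

if __name__ == "__main__":
    # The __main__ guard matters: worker processes re-import this module.
    # This is also why running a separate script is the portable route
    # when launching parallel code from inside Jupyter.
    with Pool(2) as pool:
        print(pool.map(square, range(5)))             # synchronous map
        print(pool.starmap(power, [(2, 3), (3, 2)]))  # starmap unpacks argument tuples
        async_result = pool.map_async(square, range(5))  # asynchronous variant
        print(async_result.get())                     # block until results arrive
```

`map` and `map_async` differ only in when you wait: the synchronous call blocks immediately, while the asynchronous one returns a handle you query later.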
References
- Chapter 27 of Cormen et al. 2009, Introduction to Algorithms (3rd edition), covers some of these concepts.
- Numpy vectorisation
- MapReduce algorithm for matrix multiplication
- A Brief Overview of Parallel Algorithms
- Parallel computing concepts, e.g. Amdahl's Law for the overall speedup
- MISD/MIMD/SIMD/SISD
- Parallel time complexity
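Amdahl's Law from the references above can be sketched numerically. A minimal illustration: if a fraction p of a program parallelises perfectly across n workers, the overall speedup is 1 / ((1 - p) + p / n). The function name is my own, not from the course material.

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Overall speedup when a fraction p of the work runs on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# The serial fraction dominates: a 90%-parallel program on 4 processors
# gains about 3.08x, and even with unlimited workers it tops out at 10x.
print(amdahl_speedup(0.9, 4))
```

The limit as n grows is 1 / (1 - p), which is why reducing the serial fraction usually matters more than adding processors.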