10 Parallel Algorithms
In this block we cover:
- What is a parallel computer?
- How to design code that parallelises
- Parallelism and complexity
- Computation graphs
- How to conceptualise parallelism, including:
  - Vectorisation
  - Reduce and accumulate
  - Map and Map-Reduce
- Practical experience with parallelism, including:
  - Benchmarking code
  - The `multiprocessing` parallelisation library
  - Map, starmap, accumulate, and reduce in Python (see the sketch after this list)
  - Asynchronous and synchronous parallelisation
  - Running parallelised scripts from inside Jupyter (cross-platform solution)
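The minimal sketch below is not taken from the workshop notebooks; it is an illustrative example of several of the items above in one place: a `multiprocessing.Pool` driving `map` and `starmap`, `functools.reduce` and `itertools.accumulate` applied to the results, and a simple wall-clock benchmark against a serial baseline. The functions `square` and `weighted` and the problem size are assumed for illustration, and for work this small the cost of starting worker processes can outweigh any speedup.

```python
# A sketch of the patterns listed above: Pool.map / Pool.starmap,
# reduce, accumulate, and a simple wall-clock benchmark.
# The functions and problem size are illustrative, not workshop code.
import time
from functools import reduce
from itertools import accumulate
from multiprocessing import Pool


def square(x):
    """Toy per-item work for Pool.map."""
    return x * x


def weighted(x, w):
    """Toy two-argument work for Pool.starmap."""
    return x * w


if __name__ == "__main__":  # guard needed when worker processes are spawned
    data = list(range(100_000))

    # Serial baseline, timed with a simple wall-clock benchmark.
    start = time.perf_counter()
    serial = [square(x) for x in data]
    print(f"serial:   {time.perf_counter() - start:.3f}s")

    # Parallel map and starmap over a pool of worker processes.
    start = time.perf_counter()
    with Pool() as pool:
        squares = pool.map(square, data)
        weights = pool.starmap(weighted, zip(data, data))
    print(f"parallel: {time.perf_counter() - start:.3f}s")

    # reduce collapses a sequence to a single value; accumulate keeps the
    # running partial results (here: running sums of the squares).
    total = reduce(lambda a, b: a + b, squares)
    running = list(accumulate(squares))
    print(total, running[-1])  # the final partial sum equals the total
```

Written as a standalone script, the `__main__` guard matters on platforms that spawn rather than fork worker processes, which is also why running such code from inside Jupyter needs the cross-platform approach covered in the workshop.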
Lectures
Workshop
Assessments
- Portfolio 10 of the full Portfolio.
- Block 10 on Noteable via Blackboard.
References
- Chapter 27 of Cormen et al. (2010), Introduction to Algorithms, covers some of these concepts.
- Numpy vectorisation
- MapReduce algorithm for matrix multiplication
- A Brief Overview of Parallel Algorithms
- Parallel computing concepts, e.g. Amdahl's Law for the overall speedup (a worked example follows this list)
- MISD/MIMD/SIMD/SISD (Flynn's taxonomy)
- Parallel time complexity
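As a pointer to the Amdahl's Law reference above: if a fraction p of the work parallelises perfectly over N workers, the overall speedup is S(N) = 1 / ((1 - p) + p / N). The snippet below is a small illustrative calculation; the values of p and N are made up.

```python
# Amdahl's Law: S(N) = 1 / ((1 - p) + p / N), where p is the parallelisable
# fraction of the work and N is the number of workers.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 8, 16):
    print(n, round(amdahl_speedup(0.9, n), 2))  # illustrative p = 0.9
```

With p = 0.9 the serial 10% of the work limits the speedup to at most 10x, however many workers are added.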