Practical Parallel Programming
PPP, academic year 2020/2021, summer semester, 5 credits
The course covers the architecture and programming of parallel systems with functional and data parallelism. First, parallel system theory and program parallelization are discussed. A detailed description of the most widespread supercomputing systems, interconnection network topologies and routing algorithms is followed by the architecture of parallel and distributed storage systems. The course continues with message-passing programming using the standardized MPI interface. Subsequently, techniques for parallel debugging and profiling are discussed. The last part of the course is devoted to parallel programming patterns and case studies from the areas of linear algebra, physical systems described by partial differential equations, N-body systems, and Monte Carlo methods.
Language of instruction
Subject specific learning outcomes and competences
Overview of the principles of current parallel system design, interconnection networks, communication techniques and algorithms. Survey of parallelization techniques for fundamental scientific problems, knowledge of parallel programming in MPI. Knowledge of basic parallel programming patterns. Practical experience with working on supercomputers, the ability to identify performance issues and propose solutions.
Generic learning outcomes and competences
Knowledge of capabilities and limitations of parallel processing, ability to estimate performance of parallel applications. Language means for process/thread communication and synchronization. Competence in hardware-software platforms for high-performance computing and simulations.
To get familiar with the architecture of distributed supercomputing systems, their interconnection networks and storage. To navigate the parallel systems available on the market, be able to assess the communication and computing capabilities of a particular architecture, and predict the performance of parallel applications. To learn how to write portable programs using standardized interfaces and languages, and to specify parallelism and process communication. To learn how to use supercomputers in practice for solving complex engineering problems.
Why is the course taught
This course will take you into the area of high-performance computing, where a single computer is far from powerful enough to satisfy application demands. The only solution in such cases is to distribute the computation across a supercomputing cluster. This course first examines the architecture of top machines, and then focuses on their software equipment. We will learn the MPI library, which is an industry standard in high-performance computing. Finally, we will introduce a few typical applications such as physical simulation of heat distribution, fluid dynamics, N-body gravitational and Coulomb systems (galaxies and molecules), and Monte Carlo methods.
Prerequisite knowledge and skills
Von-Neumann computer architecture, computer memory hierarchy, cache memories and their organization, programming in assembly and in C/C++. Knowledge gained in courses PRL and AVS.
- Pacheco, P.: An Introduction to Parallel Programming. Morgan Kaufmann Publishers, 2011, 392 pp., ISBN 9780123742605
- Hennessy, J.L., Patterson, D.A.: Computer Architecture: A Quantitative Approach. 5th edition, Morgan Kaufmann Publishers, Inc., 2012, 856 pp., ISBN 9780123838728
Syllabus of lectures
- Introduction to parallel processing.
- Parallel algorithm design and methodology.
- MPI: Point-to-point communications.
- MPI: Collective communications.
- MPI: Communicators and topologies.
- MPI: Data types.
- MPI: One-sided communications.
- Distributed file systems and MPI-IO.
- Libraries for parallel input and output.
- Performance analysis of parallel and distributed applications.
- Case studies (Matrix multiplications, BLAS).
- Case studies (Jacobi method, FDTD).
- Interconnection networks: topology and routing algorithms, switching, flow control.
Syllabus of computer exercises
- MPI: Point-to-point communications
- MPI: Collective communications
- MPI: Communicators
- MPI: Data types
- MPI: One-sided communications
- MPI: Parallel input and output with MPI-IO
- MPI: Parallel input and output with HDF5
- Profiling and tracing of parallel applications
Syllabus - others, projects and individual work of students
- A parallel program in MPI on the supercomputer.
Assessment of a project (10 hours in total) and a midterm examination.
- Missed labs can be made up at alternative individual dates.
- A slot for making up missed labs is reserved in the last week of the semester.
Students must obtain at least 20 of the 40 points awarded for the project and the midterm examination.
| Day | Type | Weeks | Room | Start | End | Study year | Groups |
|-----|------|-------|------|-------|-----|------------|--------|
| Mon | lecture | lectures | D0207 | 08:00 | 09:50 | 1MIT 2MIT | NBIO NEMB NHPC xx |
Course inclusion in study plans
- Programme IT-MSC-2, field MBI, MGM, any year of study, Compulsory-Elective group C
- Programme IT-MSC-2, field MBS, MIN, MIS, MMM, any year of study, Elective
- Programme IT-MSC-2, field MPV, MSK, 1st year of study, Compulsory
- Programme MITAI, specialisation NADE, NCPS, NGRI, NIDE, NISD, NISY, NMAL, NMAT, NNET, NSEC, NSEN, NSPE, NVER, NVIZ, any year of study, Elective
- Programme MITAI, specialisation NBIO, any year of study, Compulsory
- Programme MITAI, specialisation NEMB, 2nd year of study, Compulsory
- Programme MITAI, specialisation NHPC, 1st year of study, Compulsory