Faculty of Information Technology, BUT

Course details

Applications of Parallel Computers

PDD, Acad. year 2018/2019, Winter semester

The course gives an overview of usable parallel platforms and programming models, chiefly shared-memory programming (OpenMP), message passing (MPI) and data-parallel programming (CUDA, OpenCL). A parallelization methodology is complemented by performance studies and applied to a particular problem. Emphasis is placed on practical aspects and implementation.

Guarantor

Language of instruction

Czech

Completion

Examination (written+oral)

Time span

39 hrs lectures

Assessment points

Exam: 100 points

Department

Lecturer

Subject specific learning outcomes and competences

To learn how to parallelize various classes of problems and predict their performance. To be able to utilize parallelism and communication at the thread and process levels. To get acquainted with state-of-the-art standard interfaces, language extensions and other tools for parallel programming (MPI, OpenMP). To write and debug a parallel program for a selected task.

Generic learning outcomes and competences

Parallel architectures with distributed and shared memory, programming in C/C++ with MPI and OpenMP, GPGPU, parallelization of basic numerical methods.

Learning objectives

To clarify the possibilities of parallel programming on multi-core processors, on clusters and on GPGPUs. To give an overview of synchronization and communication techniques. To get to know a method of parallelization and performance prediction for selected real-world applications, the design of correct programs, and the use of parallel computing in practice.

Prerequisites

Prerequisite knowledge and skills

Types of parallel computers, programming in C/C++, basic numerical methods

Study literature

Fundamental literature

  • Pacheco, P.: An Introduction to Parallel Programming. Morgan Kaufmann Publishers, 2011, 392 p., ISBN 978-0-12-374260-5
  • Kirk, D., and Hwu, W.: Programming Massively Parallel Processors: A Hands-on Approach. Elsevier, 2010, 256 p., ISBN 978-0-12-381472-2

Syllabus of lectures

  • Parallel computer architectures, performance measures and their prediction
  • Patterns for parallel programming
  • Synchronization and communication techniques
  • Shared variable programming with OpenMP
  • Message-passing programming with MPI
  • Data parallel programming with CUDA/OpenCL
  • Examples of task parallelization and parallel applications

Controlled instruction

Defence of a software project based on a variant of parallel programming.

Schedule

Day | Type    | Weeks      | Room | Start | End
Mon | lecture | 2018-12-17 | L321 | 13:00 | 14:50

Course inclusion in study plans
