Course details

Applications of Parallel Computers

PDD Acad. year 2011/2012 Winter semester


The course gives an overview of usable parallel platforms and models of programming, mainly shared-memory programming (OpenMP) and message passing (MPI). A parallelization methodology is complemented by performance studies and applied to frequently encountered application areas such as dense and sparse linear algebra, graph partitioning, discrete optimization, PDEs, the N-body problem, simulation, graphics and visualization, knowledge mining, and the like. Emphasis is placed on practical aspects and implementation; numerous examples complete the treatment.

Guarantor

Language of instruction

Czech, English

Completion

Examination

Time span

  • 39 hrs lectures

Department

Subject specific learning outcomes and competences

To learn how to parallelize various classes of problems and predict their performance. To be able to utilize parallelism and communication at the thread and process levels. To get acquainted with state-of-the-art standard interfaces, language extensions, and other tools for parallel programming (MPI, OpenMP). To write and debug a parallel program for a selected task.

Parallel architectures with distributed and shared memory, programming in C/C++ with MPI and OpenMP, parallelization of basic numerical methods.

Learning objectives

To clarify the possibilities of parallel programming on clusters, SMPs, and multi-core processors. To get to know a method of parallelization and performance prediction for selected real-world applications, the design of correct programs, and the use of parallel computing in practice.

Prerequisite knowledge and skills

Types of parallel computers, programming in C/C++, basic numerical methods

Study literature

Fundamental literature

  • Pacheco, P.: An Introduction to Parallel Programming. Morgan Kaufmann Publishers, 2011, 392 pp., ISBN 978-0-12-374260-5
  • Kirk, D., and Hwu, W.: Programming Massively Parallel Processors: A Hands-on Approach. Morgan Kaufmann Publishers (Elsevier), 2010, 256 pp., ISBN 978-0-12-381472-2

Syllabus of lectures

  • Parallel computer architectures
  • Shared variable and message-passing programming: OpenMP and MPI.
  • Foster's method of parallelization, performance measures and prediction
  • Parallel linear algebra 1
  • Parallel linear algebra 2
  • Graph partitioning
  • Discrete optimization
  • PDEs
  • N-body problem
  • Parallel and distributed simulation
  • Graphics and visualization 
  • Data and knowledge mining
  • Languages, compilers, libraries, and tools.

Progress assessment

Study evaluation is based on marks obtained for specified items. The minimum number of marks required to pass is 50.

Controlled instruction

Defence of a software project based on a selected variant of parallel programming (e.g. OpenMP or MPI).
