Course details

Computation Systems Architectures

AVS, Academic year 2022/2023, Winter semester, 5 credits


The course covers the architecture of modern computational systems composed of general-purpose as well as special-purpose processors and their memory subsystems. Instruction-level parallelism is studied on scalar, superscalar and VLIW processors, followed by processors with thread-level parallelism. Data parallelism is illustrated on SIMD streaming instructions and on graphics processors. Programming for shared-memory systems in OpenMP follows, and then the most widespread multi-core multiprocessors and advanced NUMA systems are described. Finally, the generic architecture of graphics processing units and basic techniques for programming them with OpenMP are covered, and techniques used in low-power processors are explained.

Guarantor

Course coordinator

Language of instruction

Czech

Completion

Credit+Examination (written)

Time span

  • 26 hrs lectures
  • 12 hrs PC labs
  • 14 hrs projects

Assessment points

  • 60 pts final exam (written part)
  • 10 pts mid-term test (written part)
  • 30 pts projects

Department

Lecturer

Instructor

Subject specific learning outcomes and competences

  • Overview of the architecture of modern computational systems, their capabilities, limits and future trends.
  • The ability to estimate the performance of software applications on a given computer system, identify performance issues and propose their rectification.
  • Practical user experience with supercomputers.
  • Understanding of the hardware limitations that affect the efficiency of software solutions.

Learning objectives

To become familiar with the architecture of modern computational systems based on x86, ARM and RISC-V multi-core processors in configurations with uniform (UMA) and non-uniform (NUMA) shared memory, often accompanied by a GPU accelerator. To understand the hardware aspects of computational systems that have a significant impact on application performance and power consumption. To be able to assess the computing capabilities of a particular architecture and to predict the performance of applications. To clarify the role of the compiler and its cooperation with the processor. To be able to navigate the computational system market and to evaluate and compare various systems.

Why is the course taught

There is a wide range of problems and programming languages where the performance of the final application and the amount of memory or electric power it consumes are not significant. However, what should we do in situations where these aspects are of critical importance?
The purpose of the AVS course is to examine and analyze the architecture of current multi-core superscalar processors, memory subsystems and accelerator cards such as GPUs in order to understand their potential and limits. The practical part of the course is devoted to OpenMP, which allows efficient parallelization and vectorization on both CPUs and GPUs.
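
To give a concrete flavour of this practical part, the following is a minimal sketch (illustrative only, not course material) of how a single OpenMP directive can both parallelize a loop across cores and vectorize it with SIMD instructions; the SAXPY kernel, array size and compiler flags are assumptions, not taken from the course.

  // Minimal OpenMP sketch: split iterations across threads, vectorize each chunk.
  // Compile e.g. with: gcc -O2 -fopenmp saxpy.c -o saxpy
  #include <stdio.h>
  #include <stdlib.h>
  #include <omp.h>

  // SAXPY: y = a*x + y, a common example of a data-parallel loop.
  void saxpy(float a, const float *x, float *y, int n)
  {
      #pragma omp parallel for simd
      for (int i = 0; i < n; i++)
          y[i] = a * x[i] + y[i];
  }

  int main(void)
  {
      const int n = 1 << 20;
      float *x = malloc(n * sizeof(float));
      float *y = malloc(n * sizeof(float));
      for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

      double t = omp_get_wtime();
      saxpy(3.0f, x, y, n);
      printf("y[0] = %f, time = %f s\n", y[0], omp_get_wtime() - t);

      free(x);
      free(y);
      return 0;
  }

On a GPU-equipped node a similar loop could instead be offloaded with OpenMP target directives, which is one of the topics covered at the end of the course.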

Prerequisite knowledge and skills

Von Neumann computer architecture, the computer memory hierarchy, cache memories and their organization, programming in assembly and in C/C++, and the compiler's tasks and functions.

Study literature

  • Hennessy, J.L., Patterson, D.A.: Computer Architecture - A Quantitative Approach. 5th edition, Morgan Kaufmann Publishers, Inc., 2012, 1136 p., ISBN 1-55860-596-7.
  • Baer, J.L.: Microprocessor Architecture. Cambridge University Press, 2010, 367 p., ISBN 978-0-521-76992-1.
  • van der Pas, R., Stotzer, E., and Terboven, C.: Using OpenMP - The Next Step, MIT Press Ltd, ISBN 9780262534789, 2017.
  • Materials for the course Computer Science 152: Computer Architecture and Engineering. http://inst.eecs.berkeley.edu/~cs152/sp13/
  • Agner Fog: Software optimization resources.
  • Current PPT lecture presentations in the e-learning system.

Fundamental literature

  • Baer, J.L.: Microprocessor Architecture. Cambridge University Press, 2010, 367 p., ISBN 978-0-521-76992-1.
  • Hennessy, J.L., Patterson, D.A.: Computer Architecture - A Quantitative Approach. 5th edition, Morgan Kaufmann Publishers, Inc., 2012, 1136 p., ISBN 1-55860-596-7.
  • van der Pas, R., Stotzer, E., and Terboven, C.: Using OpenMP - The Next Step, MIT Press Ltd, ISBN 9780262534789, 2017.

Syllabus of lectures

  1. Scalar processors, pipelined instruction processing and compiler assistance.
  2. Superscalar processors, dynamic instruction scheduling.
  3. Data flow through the hierarchy of cache memories. 
  4. Branch prediction, optimization of instruction and data fetching.
  5. Processors with data level parallelism.
  6. Multi-threaded and multi-core processors.
  7. Loop parallelism and code vectorization.
  8. Functional parallelism and acceleration of recursive algorithms.
  9. Synchronization on systems with shared memory.
  10. Algorithms for cache coherency.
  11. Architectures with distributed shared memory.
  12. Architecture and programming of graphics processing units.
  13. Low power processors and techniques.

Syllabus of computer exercises

  1. Performance measurement of sequential code 
  2. Cache blocking, loop interchange and unrolling
  3. OpenMP 4.0 vectorization 
  4. OpenMP loops 
  5. OpenMP tasks (see the sketch after this list)
  6. OpenMP sections and mutual exclusion 
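
As a hedged illustration of exercise 5 (not the actual lab assignment), the sketch below uses the OpenMP task construct to parallelize a naive recursive Fibonacci computation; the cutoff value and the argument 40 are arbitrary choices for the example.

  // Minimal OpenMP tasking sketch (illustrative, not the lab assignment).
  // Compile e.g. with: gcc -O2 -fopenmp fib_tasks.c -o fib_tasks
  #include <stdio.h>

  #define CUTOFF 20   // below this size, recurse serially to limit task overhead

  long fib(int n)
  {
      if (n < 2)
          return n;
      if (n < CUTOFF)
          return fib(n - 1) + fib(n - 2);

      long a, b;
      // Each branch becomes an independent task; shared() makes the results
      // visible to the parent after the taskwait below.
      #pragma omp task shared(a)
      a = fib(n - 1);
      #pragma omp task shared(b)
      b = fib(n - 2);
      #pragma omp taskwait
      return a + b;
  }

  int main(void)
  {
      long result;
      #pragma omp parallel   // create the thread team
      #pragma omp single     // one thread starts the recursion, others run tasks
      result = fib(40);
      printf("fib(40) = %ld\n", result);
      return 0;
  }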

Syllabus - others, projects and individual work of students

  • Performance evaluation and code optimization using OpenMP.
  • Development of an application in OpenMP on a NUMA node.
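
For orientation only, and not as the project assignment itself, the sketch below shows the common first-touch technique used when developing OpenMP applications on a NUMA node: memory pages end up on the NUMA node of the thread that first writes them, so initializing the data with the same loop schedule as the later computation keeps most memory accesses local. The array names, sizes and compiler flags are illustrative assumptions.

  // Minimal NUMA first-touch sketch in OpenMP (illustrative only).
  // Compile e.g. with: gcc -O2 -fopenmp first_touch.c -o first_touch
  #include <stdio.h>
  #include <stdlib.h>
  #include <omp.h>

  int main(void)
  {
      const long n = 1L << 25;
      double *a = malloc(n * sizeof(double));
      double *b = malloc(n * sizeof(double));

      // First touch: each thread initializes the part of the arrays it will
      // later compute on, so those pages are allocated on its own NUMA node.
      #pragma omp parallel for schedule(static)
      for (long i = 0; i < n; i++) {
          a[i] = 0.0;
          b[i] = (double)i;
      }

      double t = omp_get_wtime();
      // Compute loop with the same static schedule: accesses stay node-local.
      #pragma omp parallel for schedule(static)
      for (long i = 0; i < n; i++)
          a[i] = 2.0 * b[i];
      printf("a[1] = %f, time = %f s\n", a[1], omp_get_wtime() - t);

      free(a);
      free(b);
      return 0;
  }

Whether the threads actually stay on their NUMA nodes depends on thread affinity, which can be controlled with the standard OMP_PROC_BIND and OMP_PLACES environment variables.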

Progress assessment

Assessment of two projects (14 hours in total), the computer laboratories, and a mid-term examination.

Controlled instruction

  • Missed labs can be made up on alternative dates.
  • A make-up slot for missed labs is reserved in the last week of the semester.

Exam prerequisites

To obtain at least 20 of the 40 points awarded for the projects and the mid-term examination.

Course inclusion in study plans
