Course details

Computation Systems Architectures

AVS Acad. year 2024/2025 Winter semester 5 credits

Current academic year

The course covers the architecture of modern computational systems composed of general-purpose as well as special-purpose processors and their memory subsystems. Instruction-level parallelism is studied on scalar and superscalar processors, followed by processors with thread-level parallelism. Data parallelism is illustrated on SIMD streaming instructions and on graphics processors. Programming for shared-memory systems in OpenMP follows, and then the most widespread multi-core multiprocessors and advanced NUMA systems are described. Finally, the generic architecture of graphics processing units and basic GPU programming techniques in OpenMP are covered, and techniques of low-power processors are explained.
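For orientation, the shared-memory OpenMP style used throughout the course can be sketched with a minimal example (the function and data are illustrative, not course material; when compiled without OpenMP support the pragma is simply ignored and the loop runs serially with the same result):

```c
/* Illustrative only: a dot product parallelized across threads with an
 * OpenMP reduction. Each thread accumulates a private partial sum; the
 * reduction(+:sum) clause combines the partial sums without a data race. */
double dot_product(const double *a, const double *b, int n)
{
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}
```

Compile with `-fopenmp` (GCC/Clang) to enable the parallel version.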

Guarantor

Course coordinator

Language of instruction

Czech

Completion

Credit+Examination (written)

Time span

  • 26 hrs lectures
  • 12 hrs pc labs
  • 14 hrs projects

Assessment points

  • 60 pts final exam (written part)
  • 10 pts mid-term test (written part)
  • 30 pts projects

Department

Lecturer

Instructor

Learning objectives

To familiarize students with the architecture of modern computational systems based on x86, ARM and RISC-V multicore processors in configurations with uniform (UMA) and non-uniform (NUMA) shared memory, often accompanied by a GPU accelerator. To understand the hardware aspects of computational systems that have a significant impact on application performance and power consumption. To be able to assess the computing capabilities of a particular architecture and to predict application performance. To clarify the role of the compiler and its cooperation with the processor. To be able to navigate the computational-system market and to evaluate and compare various systems.

Overview of the architecture of modern computational systems, their capabilities, limits and future trends. The ability to estimate the performance of software applications on a given computer system, identify performance issues and propose remedies. Practical user experience with supercomputers. Understanding of the hardware limitations that affect the efficiency of software solutions.

Prerequisite knowledge and skills

Von Neumann computer architecture, the computer memory hierarchy, cache memories and their organization, programming in assembly and in C/C++, and the tasks and functions of a compiler.

Study literature

  • Hennessy, J.L., Patterson, D.A.: Computer Architecture - A Quantitative Approach. 5th edition, Morgan Kaufmann Publishers, Inc., 2012, 1136 pp., ISBN 1-55860-596-7.
  • Baer, J.L.: Microprocessor Architecture. Cambridge University Press, 2010, 367 pp., ISBN 978-0-521-76992-1.
  • van der Pas, R., Stotzer, E., and Terboven, C.: Using OpenMP - The Next Step, MIT Press Ltd, ISBN 9780262534789, 2017.
  • Materials for the course Computer Science 152: Computer Architecture and Engineering. http://inst.eecs.berkeley.edu/~cs152/sp13/
  • Agner Fog: Software optimization resources.
  • Current PPT lecture presentations in the e-learning system.

Fundamental literature

  • Baer, J.L.: Microprocessor Architecture. Cambridge University Press, 2010, 367 pp., ISBN 978-0-521-76992-1.
  • Hennessy, J.L., Patterson, D.A.: Computer Architecture - A Quantitative Approach. 5th edition, Morgan Kaufmann Publishers, Inc., 2012, 1136 pp., ISBN 1-55860-596-7.
  • van der Pas, R., Stotzer, E., and Terboven, C.: Using OpenMP - The Next Step, MIT Press Ltd, ISBN 9780262534789, 2017.

Syllabus of lectures

  1. Scalar processors, pipelined instruction processing and compiler assistance.
  2. Superscalar processors, dynamic instruction scheduling.
  3. Data flow through the hierarchy of cache memories. 
  4. Branch prediction, optimization of instruction and data fetching.
  5. Processors with data level parallelism.
  6. Multi-threaded and multi-core processors.
  7. Loop parallelism and code vectorization.
  8. Functional parallelism and acceleration of recursive algorithms.
  9. Synchronization on systems with shared memory.
  10. Algorithms for cache coherency.
  11. Architectures with distributed shared memory.
  12. Architecture and programming of graphics processing units.
  13. Low power processors and techniques.
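Topic 7 (loop parallelism and code vectorization) can be illustrated with a short sketch. The SAXPY kernel below is a common textbook example, not taken from the lectures; when OpenMP is disabled, the `simd` pragma is ignored and correct serial code remains:

```c
/* Illustrative SAXPY kernel: #pragma omp simd asks the compiler to
 * vectorize the loop with SSE/AVX-style instructions. The restrict
 * qualifier promises y does not alias x, which the vectorizer needs. */
void saxpy(float a, const float *restrict x, float *restrict y, int n)
{
    #pragma omp simd
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

With GCC, `-O2 -fopenmp-simd -fopt-info-vec` reports whether the loop was actually vectorized.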

Syllabus of computer exercises

  1. Performance measurement of sequential code 
  2. Cache blocking, loop swapping and unrolling
  3. OpenMP 4.0 vectorization 
  4. OpenMP loops 
  5. OpenMP tasks
  6. OpenMP sections and mutual exclusion 
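As a sketch of what exercise 2 covers (the code itself is illustrative, not lab material): a cache-blocked matrix transpose traverses the matrix in small tiles so that both the read stream and the write stream stay resident in cache, unlike the naive row-by-column loop.

```c
enum { BLK = 32 };  /* tile size; in the lab this would be tuned to the cache */

/* Illustrative cache-blocked transpose of an n x n row-major matrix.
 * The two outer loops walk BLK x BLK tiles; the inner loops stay inside
 * one tile, so the column-wise writes into dst reuse cached lines. */
void transpose_blocked(const double *src, double *dst, int n)
{
    for (int ii = 0; ii < n; ii += BLK)
        for (int jj = 0; jj < n; jj += BLK)
            for (int i = ii; i < ii + BLK && i < n; ++i)
                for (int j = jj; j < jj + BLK && j < n; ++j)
                    dst[j * n + i] = src[i * n + j];
}
```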

Syllabus - others, projects and individual work of students

  • Performance evaluation and code optimization using OpenMP.
  • Development of an application in OpenMP on a NUMA node.
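One hardware aspect the NUMA project typically runs into is page placement: Linux maps a memory page to the NUMA node of the thread that first writes it. A common first-touch pattern, sketched below with an illustrative allocator (not project code), is to initialize the array in parallel with the same static schedule the later computation will use, so each thread's slice lands on its local node; without `-fopenmp` the loop simply runs serially.

```c
#include <stdlib.h>

/* Illustrative first-touch allocation: the parallel initialization,
 * not the malloc itself, decides which NUMA node each page lands on. */
double *alloc_first_touch(long n)
{
    double *a = malloc((size_t)n * sizeof *a);
    if (a == NULL)
        return NULL;
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < n; ++i)
        a[i] = 0.0;   /* first write places the page near this thread */
    return a;
}
```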

Progress assessment

Assessment of two projects (14 hours in total), the computer laboratories, and a midterm examination.

  • Missed labs can be made up on alternative dates.
  • Make-up slots for missed labs are reserved in the last week of the semester.

Schedule

Day | Type      | Weeks                   | Rooms            | Start | End   | Capacity | Lect. grp | Groups                                          | Info
Mon | comp. lab | lecture weeks 1-6, 8-13 | N104, N105       | 14:00 | 15:50 | 20       | 1MIT 2MIT | xx                                              |
Mon | comp. lab | lecture weeks 1-6, 8-13 | N104, N105       | 16:00 | 17:50 | 20       | 1MIT 2MIT | xx                                              |
Mon | comp. lab | lecture weeks 1-6, 8-13 | N104, N105       | 18:00 | 19:50 | 20       | 1MIT 2MIT | xx                                              |
Tue | comp. lab | lecture weeks           | N104, N105       | 08:00 | 09:50 | 20       | 1MIT 2MIT | xx                                              |
Tue | comp. lab | lecture weeks           | N104, N105       | 12:00 | 13:50 | 20       | 1MIT 2MIT | xx                                              |
Wed | comp. lab | lecture weeks           | N104, N105       | 14:00 | 15:50 | 20       | 1MIT 2MIT | xx                                              |
Thu | comp. lab | lecture weeks           | N104, N105       | 08:00 | 09:50 | 20       | 1MIT 2MIT | xx                                              |
Fri | lecture   | lecture weeks           | E104, E105, E112 | 08:00 | 09:50 | 294      | 1MIT 2MIT | NBIO - NSPE, NISD - NISY, NSEC - NGRI, NVER, xx | Jaroš

Course inclusion in study plans
