Course details

Convolutional Neural Networks

KNN, Academic year 2021/2022, Summer semester, 5 credits

Solutions based on machine learning are gradually replacing hand-designed solutions in many areas of software development, especially in perceptual tasks focused on extracting information from unstructured sources such as cameras and microphones. Today, the dominant methods in machine learning are neural networks and their convolutional variants. These approaches are at the core of many commercially successful applications, and they push forward the frontiers of artificial intelligence.



Classified Credit

Time span

26 hrs lectures, 26 hrs projects

Assessment points

35 pts mid-term tests, 65 pts projects




Subject specific learning outcomes and competences

Students will gain basic knowledge of convolutional neural networks, their training (optimization), their building blocks, and the tools and software frameworks used to implement them. Students will gain insight into the factors that determine network accuracy in real applications, including data sets, loss functions, network structure, regularization, optimization, overfitting and multi-task learning. They will receive an overview of state-of-the-art networks for a range of computer vision tasks (classification, object detection, segmentation, identification), speech recognition, language understanding, data generation and reinforcement learning.

Generic learning outcomes and competences

Students will acquire teamwork experience during project work, and they will gain basic knowledge of Python libraries for linear algebra and machine learning.

Learning objectives

Basic knowledge of convolutional neural networks, their capabilities and limitations. Practical applications mostly in computer vision tasks, complemented by tasks from speech recognition and language processing. To enable students to design complete solutions using convolutional networks in practical applications, including network architectures, optimization, data collection, testing and evaluation.

Why is the course taught

This course is for you whether your goal is to work as an AI expert in a large multinational corporation such as Google or Facebook, to push forward the frontiers of artificial intelligence as part of a top academic team, or simply to broaden your view of the state of the art in machine learning. Neural networks are at the core of many commercial applications, ranging from speech recognition, content-based image search and intelligent surveillance systems to question-answering systems and autonomous cars. At the same time, neural networks are the enabling factor behind the current rapid advances in artificial intelligence. This course will enable you to use this powerful tool in practical applications.

Prerequisite knowledge and skills

Basics of linear algebra (multiplication of vectors and matrices), differential calculus (partial derivatives, the chain rule), Python, and an intuitive understanding of probability (e.g. conditional probability). Any knowledge of machine learning and image processing is an advantage.
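The expected entry level can be illustrated with a short NumPy sketch (illustrative only, not part of the course materials): matrix-vector multiplication and a numerical check of a chain-rule derivative.

```python
import numpy as np

# Linear algebra prerequisite: y = W x (matrix-vector multiplication)
W = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([0.5, -1.0])
y = W @ x  # [1*0.5 + 2*(-1), 3*0.5 + 4*(-1)] = [-1.5, -2.5]

# Calculus prerequisite: chain rule for f(t) = (3t)^2,
# so df/dt = 2*(3t)*3 = 18t; verify with a central finite difference.
def f(t):
    return (3.0 * t) ** 2

t0 = 2.0
analytic = 18.0 * t0
numeric = (f(t0 + 1e-6) - f(t0 - 1e-6)) / 2e-6
```

A student comfortable reading and verifying this snippet has the assumed background for the course.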

Study literature

  • Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, 2016.
  • Li, Fei-Fei, et al.: CS231n: Convolutional Neural Networks for Visual Recognition. Stanford, 2018.

Fundamental literature

  • Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, 2016.
  • Li, Fei-Fei, et al.: CS231n: Convolutional Neural Networks for Visual Recognition. Stanford, 2018.
  • Bishop, C. M.: Pattern Recognition and Machine Learning. Springer Science + Business Media, LLC, 2006, ISBN 0-387-31073-8.

Syllabus of lectures

  1. Introduction, linear models, loss functions, learning algorithms and evaluation.
  2. Fully connected networks, loss functions for classification and regression. 
  3. Convolutional networks, locality and equivariance of computation.
  4. Generalization, regularization, data augmentation, multi-task learning and pre-training.
  5. Problems during network optimization and batch normalization. Existing image classification network architectures.
  6. Object detection: MTCNN face detector, R-CNN, Fast R-CNN, Faster R-CNN, YOLO, SSD.
  7. Semantic and instance segmentation. Connections to estimation of depth, surface normals, shading and motion.
  8. Learning similarity and embedding. Person identification. 
  9. Recurrent networks and sequence processing (text and speech). Connectionist Temporal Classification (CTC). Attention networks.
  10. Language models. Basic image captioning networks, question answering and language translation.
  11. Generative models. Autoregressive factorization. Generative Adversarial Networks (GAN, DCGAN, cycle GAN).
  12. Reinforcement learning. Deep Q-network (DQN) and policy gradients.
  13. Overview of emerging applications and cutting edge research.
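As a taste of the convolutional building block covered in lecture 3, a 2D convolution (in the cross-correlation form used by deep-learning frameworks) can be sketched in plain NumPy. This is an illustrative sketch, not course code; the function name `conv2d` and the example filter are our own.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a conv layer.
    Slides the kernel over every position of the image and takes a dot product.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)   # pixel values 0..15, row-major
edge = np.array([[1.0, -1.0]])          # horizontal difference filter
response = conv2d(image, edge)          # every entry is -1: neighbors differ by 1
```

The same small filter is applied at every spatial position, which is exactly the locality and shared-weight structure that makes convolutional layers translation-equivariant.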

Syllabus - others, projects and individual work of students

Team project (2-3 students).
Individual assignments are proposed by students and approved by the teacher. Components:

  • Problem formulation and team formation.
  • Research of existing solutions and useful tools.
  • Baseline solution and evaluation proposal.
  • Data collection.
  • Experiments, testing and gradual improvement.
  • Final report and presentation of the project.

Progress assessment

  • Project concluded by public presentation - 65 points.
  • Two tests during the semester - 35 points.

Exam prerequisites

Acquiring at least 50 points.


Thu, lectures, room E112, 17:00-18:50 (1MIT, 2MIT, NBIO, NMAL, NSPE, NVIZ)

Course inclusion in study plans
