Dissertation Topic

Application of Transformer Neural Networks for Processing of Domain-Oriented Knowledge in Computer Networks

Academic Year: 2024/2025

Supervisor: Matoušek Petr, doc. Ing., Ph.D., M.A.

Department: Department of Information Systems

Programs:
Information Technology (DIT) - full-time study
Information Technology (DIT-EN) - full-time study

Topic Description:

Transformers are deep neural network architectures that form the foundation of today's large language models. They are used mainly for natural language processing, but also for document analysis, information retrieval, and related tasks.

The research topic focuses on the application of large language models (LLMs) to information retrieval in technical documentation, e.g. technical reports, manuals, and domain-specific knowledge bases. The research involves processing the input domain-oriented documents and adapting a pre-trained language model to them using transfer learning.
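
As a rough illustration of this transfer-learning step, the Python sketch below continues the pretraining of a general-purpose transformer on a domain corpus using the Hugging Face libraries. The base model, the corpus file name, and the hyperparameters are illustrative assumptions, not part of the topic assignment.

    # Minimal sketch: adapt a general transformer to networking terminology
    # by continued masked-language-model pretraining on domain documents.
    # "bert-base-uncased" and "domain_corpus.txt" are assumed placeholders.
    from datasets import load_dataset
    from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "bert-base-uncased"  # assumed base model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)

    # Assumed corpus: one plain-text file of reports and manuals.
    dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=256)

    tokenized = dataset["train"].map(tokenize, batched=True,
                                     remove_columns=["text"])

    # Masked-LM objective: the model learns domain vocabulary by
    # predicting randomly masked tokens.
    collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="domain-model",
                               num_train_epochs=1,
                               per_device_train_batch_size=8),
        train_dataset=tokenized,
        data_collator=collator,
    )
    trainer.train()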

The goal of the research is to apply and optimize transformers for the efficient retrieval of domain-specific information, e.g. to support network administrators in handling security incidents, diagnosing network problems, etc.
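
The retrieval side can be sketched in a similarly minimal way: embed fragments of the documentation with a sentence-level transformer and answer an administrator's query by cosine similarity. The embedding model name and the example snippets below are assumptions chosen for illustration only.

    # Hedged sketch of domain-specific retrieval: rank documentation
    # snippets against an administrator's query by embedding similarity.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

    # Assumed snippets from manuals / incident reports.
    docs = [
        "Port 179 must be open for BGP peering between edge routers.",
        "A SYN flood is mitigated by enabling TCP SYN cookies on the firewall.",
        "DNS resolution failures often indicate a misconfigured resolver list.",
    ]
    doc_emb = model.encode(docs, convert_to_tensor=True)

    query = "How do I defend against a SYN flood attack?"
    query_emb = model.encode(query, convert_to_tensor=True)

    # Rank documents by cosine similarity and report the best match.
    scores = util.cos_sim(query_emb, doc_emb)[0]
    best = scores.argmax().item()
    print(f"Best match ({scores[best].item():.2f}): {docs[best]}")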

References:

  • Tang, Zineng, et al. "Unifying vision, text, and layout for universal document processing." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
  • Pilault, Jonathan, et al. "On extractive and abstractive neural document summarization with transformer language models." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020.
  • Rothman, Denis, and Antonio Gulli. Transformers for Natural Language Processing: Build, train, and fine-tune deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, and GPT-3. Packt Publishing Ltd, 2022.