Topics Overview

The schedule is preliminary and subject to change!

The reading for each lecture is given as references to the sources the respective lecture is based on. You are not obliged to read anything. However, you are strongly encouraged to read the references marked with a pin emoji 📌: those are comprehensive overviews of the topics or important works that are beneficial for a better understanding of the key concepts. For the pinned papers, I also specify the page span so you can focus on the most important fragments. Some of the sources are also marked with a popcorn emoji 🍿: that is miscellaneous material you might want to take a look at: blog posts, GitHub repos, leaderboards etc. (also a couple of LLM-based games). For each of the sources, I also leave my subjective estimate of how important the work is for the specific topic: from yellow 🟡 ‘partially useful’ through orange 🟠 ‘useful’ to red 🔴 ‘crucial findings / thoughts’. These estimates will be continuously updated as I revise the materials.

For the labs, you are provided with practical tutorials that the respective lab tasks will mostly derive from. The core tutorials are marked with a writing emoji ✍️; you are asked to inspect them in advance (better yet: try them out). In the lab sessions, we will only briefly recap them, so it is up to you to prepare in advance to keep up with the lab.

Disclaimer: the reading entries are not proper citations; the BibTeX references, as well as detailed information about the authors, publication date, etc., can be found under the entry links.


Block 1: Intro

Week 1

22.04. Lecture: LLMs as a Form of Intelligence vs LLMs as Statistical Machines

This is an introductory lecture in which I will briefly introduce the course, and we’ll have a warm-up discussion about different perspectives on the nature of LLMs. We will focus on two prominent outlooks: the LLM as a form of intelligence and the LLM as a complex statistical machine. We’ll discuss how LLMs differ from human intelligence and the degree to which LLMs exhibit (self-)awareness.

Key points:

  • Course introduction

  • Different perspectives on the nature of LLMs

  • Similarities and differences between human and artificial intelligence

  • LLMs’ (self-)awareness

Core Reading:

Additional Reading:

24.04. Lecture: LLM & Agent Basics

In this lecture, we’ll recap some basics about LLMs and LLM-based agents to make sure we’re on the same page.

Key points:

  • LLM recap

  • Prompting

  • Structured output

  • Tool calling

  • Piping & Planning

Core Reading:

Additional Reading:


Week 2

29.04. Lab: Intro to LangChain

The final introductory session will guide you through the most basic concepts of LangChain in preparation for the practical sessions that follow.
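
If you want a taste beforehand, here is a minimal sketch of the building blocks the lab will revolve around: a chat model, a prompt template and an output parser composed into a chain. The model name and the API key setup are my assumptions and may differ from what we use in the lab.

    # Minimal LangChain sketch: prompt -> chat model -> string output.
    # Assumes the packages langchain-core and langchain-openai plus an OpenAI
    # API key in the environment; the model name is a placeholder.
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # placeholder model name

    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a concise teaching assistant."),
        ("human", "{question}"),
    ])

    # LCEL: compose runnables with the | operator into a single chain.
    chain = prompt | llm | StrOutputParser()

    print(chain.invoke({"question": "What is LangChain in one sentence?"}))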

Reading:

01.05.

No session (cancelled)


Block 2: Core Topics

Part 1: Business Applications

Week 3

06.05. Lecture: Virtual Assistants Pt. 1: Chatbots

The first core topic concerns chatbots. We’ll discuss how chatbots are built, how they (should) handle harmful requests, and how you can tune them for your use case.

Key points:

  • LLM alignment

  • Memory

  • Prompting & automated prompt generation

  • Evaluation

Core Reading:

Additional Reading:

08.05. Lab: Basic LLM-based Chatbot

Based on the material of session 06.05

In this lab, we’ll build a chatbot and try out different prompts and settings to see how they affect the output.
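
As a rough preview (not the lab solution), the sketch below simply loops over a couple of system prompts and temperatures and prints the answers, so you can see how these settings change the output. The model name and the prompts are placeholders.

    # Sketch: compare how the system prompt and the temperature change the replies.
    # Assumes langchain-core and langchain-openai plus an OpenAI API key; the model
    # name and the prompts are placeholders, not the lab's actual settings.
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    system_prompts = [
        "You are a formal, concise assistant.",
        "You are a playful assistant who answers with analogies.",
    ]
    temperatures = [0.0, 1.0]
    question = "Explain what a context window is."

    for system_prompt in system_prompts:
        for temperature in temperatures:
            llm = ChatOpenAI(model="gpt-4o-mini", temperature=temperature)
            prompt = ChatPromptTemplate.from_messages([
                ("system", system_prompt),
                ("human", "{question}"),
            ])
            chain = prompt | llm | StrOutputParser()
            print(f"--- {system_prompt!r} @ temperature={temperature} ---")
            print(chain.invoke({"question": question}), "\n")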

Reading:


Week 4

13.05. Lecture: Virtual Assistants Pt. 2: RAG

Continuing the first part, this second lecture will be dedicated to techniques for providing LLMs with custom knowledge so that they can retrieve and use user-specific or up-to-date information.

Key points:

  • RAG workflow

  • RAG Techniques

  • Agentic RAG

  • RAG vs Long-Context LLMs

  • Evaluation

Core Reading:

Additional Reading:

15.05. Lab: RAG Chatbot Pt. 1

Based on the material of session 13.05

In this lab, we’ll start expanding the functionality of the chatbot built in the last lab to connect it to user-specific information. In this first part, we’ll preprocess our custom data for later retrieval.
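
For orientation, a minimal preprocessing sketch could look like the following: load the documents, split them into chunks, embed the chunks and store them in a vector store. The file path, chunk sizes and the FAISS/OpenAI choices are my assumptions, not the lab’s fixed setup.

    # Sketch: preprocess custom documents for retrieval.
    # Assumes langchain-community, langchain-openai, langchain-text-splitters and
    # faiss-cpu; the file path, chunk sizes and index location are placeholders.
    from langchain_community.document_loaders import TextLoader
    from langchain_community.vectorstores import FAISS
    from langchain_openai import OpenAIEmbeddings
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    # 1. Load the raw documents (placeholder path).
    docs = TextLoader("data/company_faq.txt").load()

    # 2. Split them into overlapping chunks suitable for retrieval.
    splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    chunks = splitter.split_documents(docs)

    # 3. Embed the chunks and persist them in a local vector store.
    vector_store = FAISS.from_documents(chunks, OpenAIEmbeddings())
    vector_store.save_local("faiss_index")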

Reading:


Week 5

20.05. Lab: RAG Chatbot Pt. 2

Based on the material of session 13.05

In this lab, we’ll complete the first part and move from data preprocessing to implementing the RAG chatbot.
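
Here is a compressed sketch of the retrieval step we will build, assuming the index from the previous lab was saved locally; the prompt wording and the model name are again placeholders.

    # Sketch: answer questions using retrieved chunks as context (basic RAG).
    # Assumes the FAISS index saved in the previous lab and the same placeholder
    # model/embedding choices; the prompt wording is illustrative only.
    from langchain_community.vectorstores import FAISS
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.runnables import RunnablePassthrough
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings

    retriever = FAISS.load_local(
        "faiss_index", OpenAIEmbeddings(), allow_dangerous_deserialization=True
    ).as_retriever()

    prompt = ChatPromptTemplate.from_template(
        "Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    )

    def format_docs(docs):
        # Concatenate the retrieved chunks into a single context string.
        return "\n\n".join(doc.page_content for doc in docs)

    rag_chain = (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt
        | ChatOpenAI(model="gpt-4o-mini")
        | StrOutputParser()
    )

    print(rag_chain.invoke("What does the FAQ say about refunds?"))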

Reading: see session 13.05 and session 15.05

22.05. Lecture: Virtual Assistants Pt. 3: Multi-agent Environment

This lecture concludes the Virtual Assistants cycle and turns its attention to automating everyday / business operations in a multi-agent environment. We’ll look at how agents communicate with each other, how their communication can be guided (both with and without the involvement of a human), and how all of this is used in real applications.

Key points:

  • Multi-agent environment

  • Agent communication

  • Examples of pipelines for business operations

Core Reading:

Additional Reading:


Week 6

27.05. Lab: Multi-agent Environment

Based on the material of session 22.05

This lab will provide a short walkthrough of creating a multi-agent environment for automated meeting scheduling and preparation. We will see how a coordinator agent communicates with two auxiliary agents to check time availability and prepare an agenda for the meeting.
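
To sketch the idea without committing to a particular multi-agent framework, the coordinator below is plain Python that calls two single-purpose LLM chains in turn; the prompts, the model name and the calendar data are invented for illustration.

    # Sketch: a coordinator orchestrating two single-purpose "agents" (LLM chains).
    # Plain-Python orchestration for illustration; the prompts, the model name and
    # the calendar data are invented placeholders, not the lab's actual setup.
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

    availability_agent = (
        ChatPromptTemplate.from_template(
            "Given these calendars:\n{calendars}\nSuggest one common 1-hour slot for a meeting."
        )
        | llm
        | StrOutputParser()
    )

    agenda_agent = (
        ChatPromptTemplate.from_template("Draft a short agenda for a meeting about: {topic}")
        | llm
        | StrOutputParser()
    )

    def coordinator(topic: str, calendars: str) -> str:
        """Ask the auxiliary agents in turn and combine their answers."""
        slot = availability_agent.invoke({"calendars": calendars})
        agenda = agenda_agent.invoke({"topic": topic})
        return f"Proposed slot:\n{slot}\n\nAgenda:\n{agenda}"

    print(coordinator(
        topic="Q3 roadmap review",
        calendars="Alice: free Tue 10-12; Bob: free Tue 11-13, Wed 9-10",
    ))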

Reading:

29.05.

No session (cancelled)


Week 7

03.06. Lecture: LLMs in Software Development

This lecture gives an overview of how LLMs are used to generate reliable code and how the generated code is tested and improved to deal with errors.

Key points:

  • Code generation & Refinement

  • Automated testing

  • End-to-end Software Development

  • Copilots

  • Generated code evaluation

  • Further considerations: reliability, sustainability etc.

Core Reading:

Additional Reading:

05.06. Lab: LLM-powered Website

Based on the material of session 03.06

In this lab, we’ll have the LLM make a website for us: it will generate both the contents of the website and all the code required for rendering, styling and navigation.
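
A rough sketch of the idea: one chain drafts the page content, a second chain turns it into a self-contained HTML page that is written to disk. The two-step split, the prompts, the model name and the output path are my assumptions; the lab’s actual pipeline may differ.

    # Sketch: have an LLM generate the website content and then the HTML/CSS for it.
    # The two-step split, the prompts, the model name and the output path are
    # illustrative assumptions; the lab's actual pipeline may differ.
    from pathlib import Path

    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

    content_chain = (
        ChatPromptTemplate.from_template(
            "Write the text content (title, sections, short paragraphs) for a website about {topic}."
        )
        | llm
        | StrOutputParser()
    )

    html_chain = (
        ChatPromptTemplate.from_template(
            "Turn this content into a single self-contained HTML page with inline CSS "
            "and a simple navigation bar. Return only the HTML.\n\n{content}"
        )
        | llm
        | StrOutputParser()
    )

    content = content_chain.invoke({"topic": "a student-run coffee bar"})
    html = html_chain.invoke({"content": content})

    out_file = Path("site/index.html")
    out_file.parent.mkdir(parents=True, exist_ok=True)
    out_file.write_text(html, encoding="utf-8")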

Reading:


Week 8: Having Some Rest

10.06.

No session (cancelled)

12.06.

No session (cancelled)


Week 9

17.06. Lecture: Other Business Applications: Game Design, Financial Analysis etc.

This lecture will serve as a small break and will briefly go over other business scenarios in which LLMs are used.

Key points:

  • Game design & narrative games

  • Financial applications

  • Content creation

Additional Reading:

19.06.

No session (cancelled)


Week 10: Unplanned Pause

24.06.

No session (cancelled)

26.06.

No session (cancelled)


Part 2: Applications in Science

Week 11

01.07. Lecture: LLMs for Hypothesis Generation & Literature Review

The first lecture dedicated to scientific applications shows how LLMs are used to plan experiments and generate hypotheses to accelerate research.

Key points:

  • Hypothesis generation

  • Literature Review

Core Reading:

Additional Reading:

03.07. Lab: Experiment Planning & Hypothesis Generation

Based on the material of session 01.07

In this lab, we’ll practice facilitating a researcher’s work with LLMs using a toy research project as an example.
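
As a small taste of the exercise, the sketch below asks an LLM for a structured list of hypotheses for a toy research question; the schema, the prompt and the model name are assumptions made for illustration.

    # Sketch: generate a structured list of hypotheses for a toy research question.
    # The schema, the prompt and the model name are illustrative assumptions, and
    # structured output via with_structured_output is one option among several.
    from pydantic import BaseModel, Field

    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    class Hypothesis(BaseModel):
        statement: str = Field(description="A testable hypothesis")
        rationale: str = Field(description="Why this hypothesis is plausible")

    class HypothesisList(BaseModel):
        hypotheses: list[Hypothesis]

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

    prompt = ChatPromptTemplate.from_template(
        "Research question: {question}\n"
        "Propose three distinct, testable hypotheses with a short rationale each."
    )

    chain = prompt | llm.with_structured_output(HypothesisList)

    result = chain.invoke(
        {"question": "Does spaced repetition improve retention of programming concepts?"}
    )
    for hypothesis in result.hypotheses:
        print("-", hypothesis.statement)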

Reading: see session 27.05


Week 12

08.07. Lecture: Role of AI in Recent Years

The last lecture of the course will turn to societal considerations around LLMs and AI in general and will investigate their role in and influence on humanity today.

Key points:

  • Studies on the influence of AI in recent years

  • Studies on AI integration rate

  • Ethical, legal & environmental aspects

Core Reading:

Additional Reading:

Block 3: Wrap-up

10.07. Pitch: RAG Chatbot

Based on the material of sessions 06.05 and 13.05

The first pitch will be dedicated to a custom RAG chatbot that the contractors (the presenting students, see the info about Pitches) will have prepared to present. The RAG chatbot will have to be able to retrieve specific information from the given documents (not from general knowledge!) and use it in its responses.

Reading: see session 06.05, session 08.05, session 13.05, session 15.05, and session 20.05


Week 13

15.07. Pitch: Handling Customer Requests in a Multi-agent Environment

Based on the material of session 22.05

In the second pitch, the contractors will present their solution for the automated handling of customer requests. The solution will have to introduce a multi-agent environment that takes workload off an imaginary support team: it will have to read and categorize tickets, generate replies, and (if needed) notify a human that their intervention is required.

Reading: see session 22.05 and session 27.05

17.07. Pitch: Agent for Web Resumes

Based on the material of session 03.06

The contractors will present their agent, which will have to make up a character biography from the input prompt and generate a minimalistic HTML resume for that character. For each resume, the agent will also have to generate its own style.

Reading: see session 03.06 and session 05.06


Week 14

22.07. Pitch: LLM-based Research Assistant

Based on the material of session 01.07

The last pitch will introduce an agent that will have to plan the research, generate hypotheses, find relevant literature, etc. for a given scientific problem. It will then have to present its results in the form of a TODO list or a guide for the researcher to start from. Specific requirements will be released on 03.07.

Reading: see session 01.07 and session 03.07

24.07. Debate: Role of AI in Recent Years

Based on the material of session 08.07

The course will be concluded by the final debates, after which a short Q&A session will be held.

Debate topics:

  • LLM Behavior: Evidence of Awareness or Illusion of Understanding?

  • Should We Limit the Usage of AI?

Reading: see session 08.07