Topics Overview

The schedule is preliminary and subject to change!

The reading for each lecture is given as references to the sources the respective lectures are based on. You are not obliged to read anything. However, you are strongly encouraged to read the references marked with pin emojis 📌: those are comprehensive overviews of the topics or important works that are beneficial for a better understanding of the key concepts. For the pinned papers, I also specify the page span so you can focus on the most important fragments. Some of the sources are also marked with a popcorn emoji 🍿: that is miscellaneous material you might want to take a look at: blog posts, GitHub repos, leaderboards etc. (also a couple of LLM-based games). For each of the sources, I also leave my subjective estimate of how important the work is for this specific topic: from yellow 🟡 ‘partially useful’ through orange 🟠 ‘useful’ to red 🔴 ‘crucial findings / thoughts’. These estimates will be continuously updated as I revise the materials.

For the labs, you are provided with practical tutorials from which the respective lab tasks will mostly derive. The core tutorials are marked with a writing emoji ✍️; you are asked to inspect them in advance (better yet: try them out). In the lab sessions, we will only briefly recap them, so it is up to you to prepare in advance to keep up with the lab.

Disclaimer: the reading entries are not proper citations; the BibTeX references as well as detailed information about the authors, publication date, etc. can be found under the entry links.


Block 1: Intro

Week 1

22.04. Lecture: LLMs as a Form of Intelligence vs LLMs as Statistical Machines

This is an introductory lecture, in which I will briefly introduce the course, and we'll have a warm-up discussion about different perspectives on the nature of LLMs. We will focus on two prominent outlooks: LLMs as a form of intelligence and LLMs as complex statistical machines. We'll discuss how LLMs differ from human intelligence and the degree to which LLMs exhibit (self-)awareness.

Key points:

  • Course introduction

  • Different perspectives on the nature of LLMs

  • Similarities and differences between human and artificial intelligence

  • LLMs’ (self-)awareness

Reading:

24.04. Lecture: LLM & Agent Basics

In this lecture, we’ll recap some basics about LLMs and LLM-based agents to make sure we’re on the same page (a small tool-calling sketch follows the key points below).

Key points:

  • LLM recap

  • Prompting

  • Structured output

  • Tool calling

  • Piping & Planning
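
To make the tool-calling and structured-output points above concrete, here is a minimal sketch of the pattern in Python. The `call_llm` function, the `get_weather` tool, and the city name are hypothetical placeholders (a real implementation would call an actual model API); the sketch only illustrates the loop of asking for a JSON tool call and dispatching it.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; returns a canned JSON tool call."""
    return json.dumps({"tool": "get_weather", "arguments": {"city": "Potsdam"}})

def get_weather(city: str) -> str:
    """Hypothetical tool the agent is allowed to call."""
    return f"18 °C and cloudy in {city}"

TOOLS = {"get_weather": get_weather}

def run_agent(user_request: str) -> str:
    # Ask the model to choose a tool and arguments as structured (JSON) output.
    call = json.loads(call_llm(f"Choose a tool for: {user_request}. Reply as JSON."))
    # Dispatch the call; a real agent would feed the result back to the model for a final answer.
    return TOOLS[call["tool"]](**call["arguments"])

print(run_agent("What's the weather in Potsdam?"))
```

Structured output (the JSON reply) is what makes the dispatch step reliable; real frameworks add schema validation and retries on top of this loop.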

Reading:


Week 2

29.04. Debates: LLMs as a Form of Intelligence vs LLMs as Statistical Machines

Based on the material of session 22.04

The first debate session of the course will revolve around the topic raised in the respective lecture and will draw on concrete evidence in support of the two outlooks on LLMs. There will be two debate rounds:

  • LLMs: a Form of Intelligence or a Complex Statistical Machine?

  • LLM Behavior: Evidence of Awareness or Illusion of Understanding?

Reading: see session 22.04

01.05.

No session


Block 2: Core Topics

Part 1: Business Applications

Week 3

06.05. Lecture: Virtual Assistants Pt. 1: Chatbots

The first core topic concerns chatbots. We’ll discuss how chatbots are built, how they (should) handle harmful requests, and how you can tune them for your use case (a small chat-memory sketch follows the key points below).

Key points:

  • LLMs under the hood: alignment, harmlessness, honesty

  • Prompting & automated prompt generation

  • Chat memory
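
As a taste of the chat-memory point, the sketch below keeps the whole conversation in a plain message list that is re-sent to the model on every turn. `generate` is a hypothetical placeholder for a real chat-completion call, so the script runs offline.

```python
def generate(messages: list[dict]) -> str:
    """Hypothetical stand-in for a chat-completion call."""
    return f"(reply conditioned on {len(messages)} messages)"

# The system prompt sets the assistant's behaviour; the rest is the running memory.
history = [{"role": "system", "content": "You are a helpful, harmless assistant."}]

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = generate(history)          # the model sees the full history on every turn
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Hi, I'm Alice.")
print(chat("What's my name?"))  # answerable only because earlier turns are re-sent
```

Because the history grows with every turn, real chatbots eventually truncate or summarize it, which is part of what the chat-memory discussion covers.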

Reading:

08.05. Lab: Chatbot

Based on the material of session 06.05

In this lab, we’ll build a chatbot and try different prompts and settings to see how they affect the output.
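
If you want to warm up before the lab, the snippet below shows the kind of experiment we will run: the same prompt sampled at different temperatures. It assumes the official `openai` Python client (v1+) and an `OPENAI_API_KEY` in the environment; the model name is only an example.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Explain retrieval-augmented generation in one sentence."

for temperature in (0.0, 0.7, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(f"T={temperature}: {response.choices[0].message.content}")
```

Lower temperatures give more deterministic answers; higher ones give more varied (and occasionally less reliable) output.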

Reading:


Week 4

13.05. Lecture: Virtual Assistants Pt. 2: RAG

Continuing the first part, this second lecture will expand the scope of chatbot functionality and teach the chatbot to refer to a custom knowledge base in order to retrieve and use user-specific information (a small retrieval sketch follows the key points below). Finally, the most widely used deployment methods will be briefly introduced.

Key points:

  • General knowledge vs context

  • Knowledge indexing, retrieval & ranking

  • Retrieval tools

  • Agentic RAG
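
To preview the indexing-and-retrieval idea, here is a deliberately simplified sketch: documents are scored by word overlap with the query and the top hit is pasted into the prompt. A real system would use embeddings and a vector index, but the control flow is the same; `call_llm` and the documents are hypothetical placeholders.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    return f"(answer based on a prompt of {len(prompt)} characters)"

DOCUMENTS = [
    "Our office is open Monday to Friday, 9:00-17:00.",
    "Support tickets are answered within two business days.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Toy relevance score: number of query words that appear in the document.
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(DOCUMENTS, key=score, reverse=True)[:k]

def rag_answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    # The retrieved context is injected into the prompt instead of relying on general knowledge.
    return call_llm(f"Context:\n{context}\n\nQuestion: {query}")

print(rag_answer("When is the office open?"))
```

An “agentic” variant would let the model decide when and what to retrieve instead of always retrieving once before answering.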

Reading:

15.05. Lab: RAG Chatbot

Based on the material of session 13.05

In this lab, we’ll expand the functionality of the chatbot built in the last lab and connect it to user-specific information.

Reading:


Week 5

20.05. Lecture: Virtual Assistants Pt. 3: Multi-agent Environment

This lecture concludes the Virtual Assistants cycle and turns its attention to automating everyday and business operations in a multi-agent environment. We’ll look at how agents communicate with each other, how their communication can be guided (both with and without the involvement of a human), and how all of this is used in real applications.

Key points:

  • Multi-agent environment

  • Human in the Loop

  • Examples of pipelines for business operations

Reading:

22.05. Lab: Multi-agent Environment

Based on the material of session 20.05

This lab will offer a short walkthrough of creating a multi-agent environment for automated meeting scheduling and preparation. We will see how a coordinator agent communicates with two auxiliary agents to check time availability and prepare an agenda for the meeting.
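
Here is a minimal sketch of that setup, with the coordinator and the two auxiliary agents reduced to plain functions. All names, calendars, and the scheduling logic are illustrative placeholders; in the lab, each agent would be backed by an LLM.

```python
# Toy calendars for two participants (working hours that are already taken).
CALENDARS = {"alice": {9, 10, 14}, "bob": {9, 11, 15}}

def availability_agent(participants: list[str]) -> int:
    """Auxiliary agent 1: find the first working hour free for everyone."""
    for hour in range(9, 17):
        if all(hour not in CALENDARS[p] for p in participants):
            return hour
    raise ValueError("no common slot")

def agenda_agent(topic: str) -> list[str]:
    """Auxiliary agent 2: draft an agenda (an LLM call in the real lab)."""
    return [f"Status update on {topic}", "Open questions", "Next steps"]

def coordinator(topic: str, participants: list[str]) -> str:
    """Coordinator agent: query both auxiliary agents and combine their results."""
    hour = availability_agent(participants)
    agenda = agenda_agent(topic)
    names = ", ".join(participants)
    return f"Meeting on '{topic}' at {hour}:00 with {names}. Agenda: " + "; ".join(agenda)

print(coordinator("course project", ["alice", "bob"]))
```

In the lab, each function becomes an LLM-backed agent and the function calls become messages between them.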

Reading:


Week 6

27.05. Lecture: Software Development Pt. 1: Code Generation, Evaluation & Testing

This lecture opens a new mini-cycle dedicated to software development. It gives an overview of how LLMs are used to generate reliable code and how the generated code is tested and refined to deal with errors (a small generate-test-refine sketch follows the key points below).

Key points:

  • Code generation & refining

  • Automated testing

  • Code evaluation & benchmarks
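
The sketch below illustrates the generate-test-refine loop in its simplest form. `generate_code` is a hypothetical placeholder for an LLM call; here it returns broken code first and a fix after feedback so the loop terminates, and the tests are run with Python’s built-in `exec` purely for illustration.

```python
TESTS = "assert add(2, 3) == 5"

def generate_code(feedback: str | None) -> str:
    """Hypothetical LLM call: returns buggy code first, a fix after feedback."""
    if feedback is None:
        return "def add(a, b):\n    return a - b"   # first attempt, buggy
    return "def add(a, b):\n    return a + b"        # refined after the error report

def run_tests(code: str) -> str | None:
    """Execute the candidate code plus the tests; return the error message, if any."""
    try:
        namespace: dict = {}
        exec(code + "\n" + TESTS, namespace)
        return None
    except Exception as error:
        return repr(error)

feedback = None
for attempt in range(3):
    code = generate_code(feedback)
    feedback = run_tests(code)
    if feedback is None:
        print(f"Tests passed on attempt {attempt + 1}:\n{code}")
        break
```

In practice, the feedback would be a model-readable error report, and the loop would be capped by a budget of attempts.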

Reading:

29.05.

No session


Week 7

03.06. Lecture: Software Development Pt. 2: Copilots, LLM-powered Websites

The second and last lecture of the software development cycle focuses on practical applications of LLM code generation, in particular on widely used copilots (real-time code generation assistants) and LLM-supported web development.

Key points:

  • Copilots & real-time hints

  • LLM-powered websites

  • LLM-supported deployment

  • Further considerations: reliability, sustainability etc.

Reading:

05.06. Lab: LLM-powered Website

Based on the material of session 03.06

In this lab, we’ll have the LLM make a website for us: it will generate both the contents of the website and all the code required for rendering, styling and navigation.
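
A minimal sketch of what the lab will automate: asking a model for a complete HTML page and writing it to disk. `call_llm` is a hypothetical placeholder that returns a tiny hard-coded page so the script runs offline; in the lab, the content, styling and navigation would actually come from the model.

```python
from pathlib import Path

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; returns a minimal page."""
    return (
        "<!DOCTYPE html><html><head><title>Demo</title>"
        "<style>body{font-family:sans-serif}</style></head>"
        "<body><nav><a href='index.html'>Home</a></nav><h1>Hello!</h1></body></html>"
    )

html = call_llm("Generate a complete, self-contained HTML page for a bakery website.")
Path("index.html").write_text(html, encoding="utf-8")
print("Wrote index.html - open it in a browser.")
```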

Reading:


Week 8: Having Some Rest

10.06.

No session

12.06.

No session


Week 9

17.06. Pitch: RAG Chatbot

Based on the material of sessions 06.05 and 13.05

The first pitch will be dedicated to a custom RAG chatbot that the contractors (the presenting students, see the info about Pitches) will have prepared to present. The RAG chatbot will have to be able to retrieve specific information from the given documents (not from general knowledge!) and use it in its responses. Specific requirements will be released on 22.05.

Reading: see session 06.05, session 08.05, session 13.05, and session 15.05

19.06.

No session


Week 10

24.06. Pitch: Handling Customer Requests in a Multi-agent Environment

Based on the material of session 20.05

In the second pitch, the contractors will present their solution for automated handling of customer requests. The solution will have to introduce a multi-agent environment that takes workload off an imagined support team: it will have to read and categorize tickets, generate replies and, if needed, notify a human that their intervention is required. Specific requirements will be released on 27.05.

Reading: see session 20.05 and session 22.05

26.06. Lecture: Other Business Applications: Game Design, Financial Analysis etc.

This lecture will serve as a small break and will briefly go over other business scenarios in which LLMs are used.

Key points:

  • Game design & narrative games

  • Financial applications

  • Content creation

Reading:


Part 2: Applications in Science

Week 11

01.07. Lecture: LLMs in Research: Experiment Planning & Hypothesis Generation

The first lecture dedicated to scientific applications shows how LLMs are used to plan experiments and generate hypotheses to accelerate research (a small hypothesis-generation sketch follows the key points below).

Key points:

  • Experiment planning

  • Hypothesis generation

  • Predicting possible results
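
As a preview, the sketch below shows the structured-output style that makes generated hypotheses usable by a script: the model is asked for JSON, which is then parsed into a small research plan. `call_llm`, the research question and the returned hypotheses are illustrative placeholders.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; returns canned JSON hypotheses."""
    return json.dumps([
        {"hypothesis": "Longer prompts reduce factual errors.",
         "experiment": "Vary prompt length and count errors."},
        {"hypothesis": "Few-shot examples help more on rare topics.",
         "experiment": "Compare accuracy across topics grouped by frequency."},
    ])

question = "How does prompt design affect the factual accuracy of LLM answers?"
hypotheses = json.loads(
    call_llm(f"Propose testable hypotheses for: {question}. Reply as a JSON list.")
)

for i, item in enumerate(hypotheses, start=1):
    print(f"H{i}: {item['hypothesis']}\n    planned experiment: {item['experiment']}")
```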

Reading:

03.07. Lab: Experiment Planning & Hypothesis Generation

Based on the material of session 01.07

In this lab, we’ll practice facilitating a researcher’s work with LLMs, using a toy research project as an example.

Reading: see session 22.05


Week 12

08.07. Pitch: Agent for Code Generation

Based on the material of session 27.05

This pitch will revolve around the contractors’ implementation of a self-improving code generator. The code generator will have to generate both scripts and test cases for a problem given in the input prompt, run the tests and refine the code if needed. Specific requirements will be released on 17.06.

Reading: see session 27.05 and session 05.06

10.07. Lecture: Other Applications in Science: Drug Discovery, Math etc. & Scientific Reliability

The final core topic will mention other scientific applications of LLMs that were not covered in the previous lectures and address the question of reliability of the results obtained with LLMs.

Key points:

  • Drug discovery, math & other applications

  • Scientific confidence & reliability

Reading:


Block 3: Wrap-up

Week 13

15.07. Pitch: Agent for Web Development

Based on the material of session 03.06

The contractors will present their agent that will have to generate full (minimalistic) websites from a prompt. For each website, the agent will have to generate its own style and a simple menu with working navigation, as well as the contents. Specific requirements will be released on 24.06.

Reading: see session 03.06 and session 05.06

17.07. Lecture: Role of AI in Recent Years

The last lecture of the course will turn to societal considerations regarding LLMs and AI in general and will investigate their role in and influence on humanity today.

Key points:

  • Studies on the influence of AI in recent years

  • Studies on AI integration rate

  • Ethical, legal & environmental aspects

Reading:


Week 14

22.07. Pitch: LLM-based Research Assistant

Based on the material of session 01.07

The last pitch will present an agent that will have to plan the research, generate hypotheses, find the literature, etc. for a given scientific problem. It will then have to present its results in the form of a to-do list or a guide for the researcher to start from. Specific requirements will be released on 01.07.

Reading: see session 01.07 and session 03.07

24.07. Debate: Role of AI in Recent Years + Wrap-up

Based on the material of session 17.07

The course will be concluded by the final debate (motions will be released on 17.07), after which a short course summary and a Q&A session will be held.

Reading: see session 17.07