October: INTRO
November–December: CORE
Week 3
Week 4
Week 5
Weeks 6-7
Week 8
Week 9
January–February: PROJECTS
Weeks 10-13
Week 14
See detailed descriptions of the sessions below.
Notes:
This schedule may be changed, should the need arise.
You are not required to read anything. However, you are strongly encouraged to read the sources marked with a pin emoji 📌: these are comprehensive overviews of the topics or important works that are beneficial for a better understanding of the key concepts.
Sources marked with a popcorn emoji 🍿 are miscellaneous material you might want to take a look at: blog posts, GitHub repos, leaderboards etc.
For the labs, you are provided with practical tutorials that the respective lab tasks will mostly derive from. The core tutorials are marked with a writing emoji ✍️; you are asked to inspect them in advance (better yet: try them out).
Disclaimer: the reading entries are not proper citations; detailed information about the authors, publication dates, venues etc. can be found under the entry links.
October: INTRO¶
Week 1¶
22.10. Intro¶
This is an introductory meeting in which we will cover the contents and schedule of the course, the class formats, and the formalities, and in which all your questions, suggestions etc. will be discussed.
Key points:
Course introduction
Q&A
23.10. Lecture: Ontological Status of LLMs¶
This lecture will open with a warm-up discussion about different perspectives on the nature of LLMs. We will focus on two prominent outlooks: the LLM as a complex statistical machine vs. the LLM as a form of intelligence. We’ll discuss the differences between LLM and human intelligence and the degree to which LLMs exhibit (self-)awareness.
Key points:
Different perspectives on the nature of LLMs
Similarities and differences between human and artificial intelligence
LLMs’ (self-)awareness
Sources:
📌 The Debate Over Understanding in AI’s Large Language Models (pages 1-7), Santa Fe Institute
Meaning without reference in large language models, UC Berkeley & DeepMind
Dissociating language and thought in large language models, The University of Texas at Austin et al.
Do Large Language Models Understand Us?, Google Research
Sparks of Artificial General Intelligence: Early experiments with GPT-4 (chapters 1-8 & 10), Microsoft Research
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜, University of Washington et al.
Large Language Models: The Need for Nuance in Current Debates and a Pragmatic Perspective on Understanding, Leiden Institute of Advanced Computer Science & Leiden University Medical Centre
Week 2¶
29.10. Lecture: LLM & Agent Basics¶
In this lecture, we’ll recap the basics of LLMs and LLM-based agents to make sure we’re on the same page.
Key points:
What makes an LLM
Instruction fine-tuning & alignment
LLM-based agents
Structured output
Tool calling
Piping & Planning
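The structured-output and tool-calling ideas above can be sketched without any framework: the model is asked to emit a JSON object naming a function and its arguments, and the runtime validates and dispatches the call. Below is a minimal sketch; the `get_weather` tool and the hard-coded model reply are made up for illustration:

```python
import json

# Registry of tools the "model" is allowed to call.
TOOLS = {}

def tool(fn):
    """Register a plain function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    # Hypothetical tool: a real one would query a weather API.
    return f"Sunny in {city}"

def dispatch(model_reply: str) -> str:
    """Parse the model's structured output and invoke the named tool."""
    call = json.loads(model_reply)   # structured output: JSON, not free prose
    fn = TOOLS[call["name"]]         # raises KeyError on unknown tools
    return fn(**call["arguments"])

# In practice this JSON would come from the LLM; here it is hard-coded.
reply = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
print(dispatch(reply))  # → Sunny in Berlin
```

Real frameworks add schema validation and error feedback loops on top of exactly this parse-and-dispatch core.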
Sources:
A Survey of Large Language Models, Renmin University of China et al.
Emergent Abilities of Large Language Models, Google Research, Stanford, UNC Chapel Hill, DeepMind
The Llama 3 Herd of Models, Meta AI
Self-Instruct: Aligning Language Models with Self-Generated Instructions, University of Washington et al.
Agent Instructs Large Language Models to be General Zero-Shot Reasoners, Washington University & UC Berkeley
📌 Aligning Large Language Models with Human: A Survey (pages 1-14), Huawei Noah’s Ark Lab
Training language models to follow instructions with human feedback, OpenAI
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback, Anthropic
“We Need Structured Output”: Towards User-centered Constraints on Large Language Model Output, Google Research & Google
Tool Learning with Large Language Models: A Survey, Renmin University of China et al.
ToolACE: Winning the Points of LLM Function Calling, Huawei Noah’s Ark Lab et al.
Toolformer: Language Models Can Teach Themselves to Use Tools, Meta AI
Granite-Function Calling Model: Introducing Function Calling Abilities via Multi-task Learning of Granular Tasks, IBM Research
🍿 Berkeley Function-Calling Leaderboard, UC Berkeley (leaderboard)
ReAct: Synergizing Reasoning and Acting in Language Models, Princeton University & Google Research
A Survey on Multimodal Large Language Models, University of Science and Technology of China & Tencent YouTu Lab
Evaluating Large Language Models. A Comprehensive Survey, Tianjin University
30.10. Lab: Intro to LangChain¶
This first lab will guide you through the basic concepts of LangChain in preparation for the further practical sessions.
Sources:
November-December: CORE¶
Week 3¶
05.11. Lecture: Virtual Assistants¶
The first core topic addresses single-LLM virtual assistants such as chatbots and RAG systems. We’ll discuss how these systems are built and how you can tune them for your use case.
Key points:
Recap: Prompting
Memory
RAG workflow & techniques
RAG vs long-context LLMs
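The "Memory" key point above boils down to a simple mechanic: the chat history is a list of messages, and something has to decide which of them still fit into the context window. A minimal sketch, assuming a toy budget of four recent messages (the number is an arbitrary illustrative choice):

```python
# A minimal sketch of conversational memory: keep the system prompt
# plus only the most recent turns so the context stays within budget.

def trim_history(messages, max_turns=4):
    """Return the system message(s) plus the last `max_turns` other messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]

history = [{"role": "system", "content": "You are a helpful assistant."}]
for i in range(5):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

window = trim_history(history)
print(len(window))            # 1 system message + 4 recent ones → 5
print(window[-1]["content"])  # answer 4
```

Long-term memory approaches replace the blunt "last N messages" rule with summarization or retrieval over past turns, but the trimming step itself stays this simple.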
Sources:
A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications, Indian Institute of Technology Patna, Stanford & Amazon AI
📌 Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (pages 1-9), Google Research
Automatic Prompt Selection for Large Language Models, Cinnamon AI, Hung Yen University of Technology and Education & Deakin University
PromptGen: Automatically Generate Prompts using Generative Models, Baidu Research
A Survey on the Memory Mechanism of Large Language Model based Agents, Renmin University of China & Huawei Noah’s Ark Lab
Augmenting Language Models with Long-Term Memory, UC Santa Barbara & Microsoft Research
From LLM to Conversational Agent: A Memory Enhanced Architecture with Fine-Tuning of Large Language Models, Beike Inc.
📌 Retrieval Augmented Generation or Long-Context LLMs? A Comprehensive Study and Hybrid Approach (pages 1-7), Google DeepMind & University of Michigan
A Survey on Retrieval-Augmented Text Generation for Large Language Models, York University
Don’t Do RAG: When Cache-Augmented Generation is All You Need for Knowledge Tasks, National Chengchi University & Academia Sinica
Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection, University of Washington, Allen Institute for AI & IBM Research AI
Auto-RAG: Autonomous Retrieval-Augmented Generation for Large Language Models, Chinese Academy of Sciences
Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity, Korea Advanced Institute of Science and Technology
Querying Databases with Function Calling, Weaviate, Contextual AI & Morningstar
06.11. Lab: LLM-based Chatbot¶
Based on the material of Lecture: Virtual Assistants
In this lab, we’ll build a chatbot and try out different prompts and settings to see how they affect the output.
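The lab builds on the graph idea behind LangGraph: nodes are functions that update a shared state, and edges determine which node runs next. The toy executor below sketches that concept only; it is not the LangGraph API, and the `persona` field stands in for the prompt settings we will vary:

```python
# A toy state-graph executor: nodes transform a state dict, edges chain them.

def chatbot_node(state):
    # Stand-in for an LLM call; echoes the input with the configured persona.
    reply = f"[{state['persona']}] You said: {state['input']}"
    return {**state, "output": reply}

def run_graph(nodes, edges, entry, state):
    """Follow edges from the entry node until there is no successor."""
    current = entry
    while current is not None:
        state = nodes[current](state)
        current = edges.get(current)   # None ends the run
    return state

nodes = {"chatbot": chatbot_node}
edges = {"chatbot": None}              # single-node graph: start -> chatbot -> end
result = run_graph(nodes, edges, "chatbot",
                   {"persona": "pirate", "input": "hello"})
print(result["output"])  # → [pirate] You said: hello
```

In the lab, the real LangGraph `StateGraph` plays the role of `run_graph`, with an actual model call inside the node.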
Sources:
✍️ Graph API overview, LangGraph
Use the graph API, LangGraph
Week 4¶
12.11 & 13.11. Labs: RAG¶
Based on the material of Lecture: Virtual Assistants
In this lab, we’ll start expanding the functionality of the chatbot built in the last lab by connecting it to our user-specific information. On the first day, we’ll preprocess our custom data for later retrieval. The following day, we’ll move on from data preprocessing to implementing the RAG workflow.
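The retrieval step at the heart of the workflow can be sketched without any framework: chunk the documents, embed each chunk, and fetch the chunk closest to the query. Here a toy bag-of-words counter stands in for a real embedding model, and the sample chunks are invented for illustration:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": word-count vector (a real system uses a neural model).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

chunks = [
    "Our office is open Monday to Friday.",
    "The cafeteria serves lunch at noon.",
    "Support tickets are answered within a day.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]  # the "vector store"

def retrieve(query, k=1):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

context = retrieve("When is the office open?")[0]
prompt = f"Answer using this context: {context}"
print(context)
```

In the lab, LangChain's document loaders, text splitters, embedding models and vector stores replace each of these hand-rolled pieces one for one.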
Sources:
✍️ Retrieval, LangChain
✍️ Build a custom RAG agent, LangChain
Document loaders, LangChain
Text splitters, LangChain
Embedding models, LangChain
Vector stores, LangChain
Retrievers, LangChain
✍️ Build a RAG agent with LangChain, LangGraph
Adaptive RAG (deprecated), LangGraph
Week 5¶
19.11. Lecture: Multi-agent Environment¶
This lecture directs its attention to automating everyday and business operations in a multi-agent environment. We’ll look at how agents communicate with each other, how their communication can be guided (both with and without the involvement of a human), and how this is used in real applications.
Key points:
Multi-agent environment
Agents communication
Examples of pipelines for business operations
Sources:
📌 LLM-based Multi-Agent Systems: Techniques and Business Perspectives (pages 1-8), Shanghai Jiao Tong University & OPPO Research Institute
Generative Agents: Interactive Simulacra of Human Behavior, Stanford, Google Research & DeepMind
Improving Factuality and Reasoning in Language Models through Multiagent Debate, MIT & Google Brain
Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View, Zhejiang University, National University of Singapore & DeepMind
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation, Microsoft Research et al.
🍿 How real-world businesses are transforming with AI — with more than 140 new stories, Microsoft (blog post)
🍿 Built with LangGraph, LangGraph (website page)
🍿 Your AI Companion, Microsoft (blog post)
Plan-Then-Execute: An Empirical Study of User Trust and Team Performance When Using LLM Agents As A Daily Assistant, Delft University of Technology & The University of Queensland
20.11. Lab: Multi-agent Environment¶
Based on the material of Lecture: Multi-agent Environment
This lab will provide a short walkthrough of creating a multi-agent environment for automated meeting scheduling and preparation. We will see how a coordinator agent communicates with two auxiliary agents to check time availability and prepare an agenda for the meeting.
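The communication topology of the lab can be sketched in a few lines: a coordinator queries the two auxiliary agents and merges their answers. The agents' replies below are hard-coded stand-ins for LLM and tool calls, and all names are illustrative, not the lab's actual code:

```python
# A sketch of the lab's setup: coordinator + two auxiliary agents.

def availability_agent(participants):
    # Pretend to check calendars; a real agent would call a calendar tool.
    return "Wednesday 10:00"

def agenda_agent(topic):
    # Pretend to draft an agenda; a real agent would prompt an LLM.
    return [f"Intro to {topic}", "Open discussion", "Next steps"]

def coordinator(participants, topic):
    """Delegate to both auxiliary agents and assemble the meeting plan."""
    slot = availability_agent(participants)
    agenda = agenda_agent(topic)
    return {"time": slot, "agenda": agenda}

meeting = coordinator(["Alice", "Bob"], "project kickoff")
print(meeting["time"])         # Wednesday 10:00
print(len(meeting["agenda"]))  # 3
```

In the actual lab, each of these functions becomes a LangGraph node with its own model and tools, and the coordinator's hard-wired sequence becomes routed graph edges.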
Sources:
✍️ Multi-agent, LangGraph
Human-in-the-loop, LangGraph
Plan-and-Execute (deprecated), LangGraph
Reflection (deprecated), LangGraph
Multi-agent supervisor (deprecated), LangGraph
Weeks 6-7¶
26.11 & 27.11 & 03.12 & 04.12. Labs: LLM-powered Website¶
This is a mini-cycle of labs in which you will individually build a multi-agent system. These labs give you the chance to practice the technical implementation of such systems, identify and work on your weak spots, and discuss any doubts and difficulties you encounter. Consider them preparation for the final project. As a result, you will create a workflow that generates websites with LLMs: the LLMs will produce both the contents and the code required for rendering, styling and navigation, communicating in a semi-centralized manner and using reasoning and critique to improve the output.
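The generate-and-critique loop described above can be sketched with stub agents: one drafts the content, one wraps it in HTML, and a critic triggers a revision round. All three functions are placeholders for LLM calls, and the one-rule critic is an invented simplification:

```python
# A sketch of the website workflow: content agent -> coder agent -> critic.

def content_agent(topic):
    return f"Welcome to a site about {topic}."

def coder_agent(text):
    return f"<html><body><p>{text}</p></body></html>"

def critic_agent(html):
    # Toy critique: demand a heading if one is missing.
    return "add a heading" if "<h1>" not in html else "ok"

def build_site(topic):
    """One drafting pass plus one critique-driven revision."""
    text = content_agent(topic)
    html = coder_agent(text)
    if critic_agent(html) != "ok":
        html = html.replace("<body>", f"<body><h1>{topic}</h1>")
    return html

page = build_site("LLM agents")
print("<h1>LLM agents</h1>" in page)  # True
```

In the labs, each stub becomes an LLM-backed agent, the critic's feedback is free text fed back to the coder, and the loop may run for several rounds instead of one.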
Sources:
✍️ HTML: Creating the content, MDN
✍️ Getting started with CSS, MDN
Week 8¶
10.12. Lecture: Role of AI in Recent Years¶
The last lecture of the course will turn to societal considerations regarding LLMs and AI in general and will investigate their role and influence on humanity today.
Key points:
Studies on the influence of AI in recent years
Studies on AI integration rate
Ethical, legal & environmental aspects
Sources:
📌 Protecting Human Cognition in the Age of AI (pages 1-5), The University of Texas at Austin et al.
📌 Artificial intelligence governance: Ethical considerations and implications for social responsibility (pages 1-12), University of Malta
Augmenting Minds or Automating Skills: The Differential Role of Human Capital in Generative AI’s Impact on Creative Tasks, Tsinghua University & Wuhan University of Technology
Human Creativity in the Age of LLMs: Randomized Experiments on Divergent and Convergent Thinking, University of Toronto
Empirical evidence of Large Language Model’s influence on human spoken communication, Max-Planck Institute for Human Development
🍿 The 2025 AI Index Report: Top Takeaways, Stanford
Growing Up: Navigating Generative AI’s Early Years – AI Adoption Report: Executive Summary, AI at Wharton
Ethical Implications of AI in Data Collection: Balancing Innovation with Privacy, AI Data Chronicles
Legal and ethical implications of AI-based crowd analysis: the AI Act and beyond, Vrije Universiteit
A Survey of Sustainability in Large Language Models: Applications, Economics, and Challenges, Cleveland State University et al.
11.12. Wrap-up¶
This informal meeting will give a small summary with key takeaways from the course. We will also discuss the next steps such as project requirements, proposal procedure etc.
Key points:
Summary
Project discussion
Q&A
Week 9¶
17.12. Debate: Role of AI in Recent Years¶
Based on the material of Lecture: Role of AI in Recent Years
The core block of the course will be concluded by a final debate on the role of AI in recent years. Debate topics will be announced on 10.12.
Sources: see Lecture: Role of AI in Recent Years
18.12. Project Proposals¶
In this meeting, you will introduce your project proposals. The goal of this session is to receive feedback on your idea from your peer students and me in order to adjust the idea if necessary. Additionally, the intermediate consultations for the project groups will be scheduled.
Key points:
Project proposals
Consultation scheduling
January-February: PROJECTS¶
Weeks 10-13¶
This time is given to you to implement your projects and to prepare a short presentation for the final week. During this period, there will be a few consultations for the project groups, where we will inspect intermediate progress and address any issues.
Week 14¶
04.02 & 05.02. Project Presentations¶
Finally, the last two sessions of the course will be dedicated to your project presentations.
- Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. 10.1145/3442188.3445922