29.04. Intro to LangChain 🦜🔗
📥 Download notebook and session files
LangChain is a powerful framework for building and orchestrating LLM-driven applications. It lets you chain language models, tools, and logic together into flexible pipelines while maintaining a high level of abstraction. In other words, LangChain handles most of the engineering plumbing for you so that you can build LLM-based applications seamlessly.
This tutorial covers the basic concepts you need to get started:
Prerequisites
To follow the tutorial, complete the Prerequisites, Environment Setup, and Getting API Key steps from the LLM Inference Guide.
1. Runnables 🏃
A Runnable is the foundational building block in LangChain. It is an abstraction for anything that can be invoked: you call it with an input and get an output. Runnables share the same interface for the core functionality, which unifies components of different types under one logic: input in, output out. This makes piping components together for different purposes easy and intuitive.
from langchain_core.runnables import Runnable, RunnableLambda
# define a simple function as a Runnable
uppercase = RunnableLambda(lambda x: x.upper())
uppercase.invoke("langchain") # output: LANGCHAIN
'LANGCHAIN'
# define another simple function as a Runnable
reverse = RunnableLambda(lambda x: x[::-1])
reverse.invoke("langchain") # output: niahcgnal
'niahcgnal'
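Besides invoke, every Runnable exposes the same small set of methods, for example batch for processing several inputs at once (and stream for incremental output where supported). A quick sketch using the Runnables defined above:
uppercase.batch(["lang", "chain"]) # output: ['LANG', 'CHAIN']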
2. LCEL (LangChain Expression Language) 🔗
LCEL is a syntax for composing LangChain components (that is, Runnables) using the | pipe operator, similar to Unix pipes. Since LangChain components are (almost) all Runnables, you can pipe them with LCEL, and the output of each Runnable becomes the input of the next one.
# combine the two Runnables into a single pipeline
pipeline_c = uppercase | reverse
pipeline_c.invoke("langchain") # output: NIAHCGNAL
'NIAHCGNAL'
isinstance(pipeline_c, Runnable) # output: True
True
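The | operator is syntactic sugar for the pipe method that every Runnable provides; the following builds the same pipeline explicitly (pipeline_m is just an illustrative name):
pipeline_m = uppercase.pipe(reverse)
pipeline_m.invoke("langchain") # output: NIAHCGNAL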
LCEL also supports parallelization. If you pass a dict with Runnables as values, LangChain will run them in parallel and return a dict with the outputs under the corresponding keys.
mapping = {
    "upper": uppercase,
    "rev": reverse,
}
summarizer = RunnableLambda(lambda d: f"Summary: {d['upper']} and {d['rev']}")
# this will 1) run `uppercase` and put the result in `upper` key
# 2) run `reverse` and put the result in `rev` key
# 3) pass this dict to summarizer for it to combine the results
pipeline_p = mapping | summarizer
pipeline_p.invoke("langchain") # output: Summary: LANGCHAIN and niahcgnal
'Summary: LANGCHAIN and niahcgnal'
isinstance(pipeline_p, Runnable) # output: True
True
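Under the hood, LangChain coerces the dict into a RunnableParallel. If you prefer to be explicit, you can construct one yourself; this sketch is equivalent to the mapping-based pipeline above:
from langchain_core.runnables import RunnableParallel
pipeline_p2 = RunnableParallel(upper=uppercase, rev=reverse) | summarizer
pipeline_p2.invoke("langchain") # output: Summary: LANGCHAIN and niahcgnal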
3. Messages 🗨️
Messages are needed to give LLMs instructions. Different types of messages improve the behavior of the model in multi-turn settings.
There are 3 basic message types:
SystemMessage: sets the LLM role and describes the desired behavior
HumanMessage: user input
AIMessage: model output
from langchain_core.messages import SystemMessage, HumanMessage
messages = [
    SystemMessage(
        content="You are a medieval French knight." # role
    ),
    HumanMessage(
        content="Give me a summary of the Battle of Agincourt." # user request
    )
]
Messages are not Runnables! They are the data flowing through the pipeline, not a part of the pipeline itself.
isinstance(messages[0], Runnable) # output: False
False
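Although AIMessage is normally what the model returns, you can also construct it yourself, for example to supply a hand-written conversation history (or few-shot examples). A minimal sketch:
from langchain_core.messages import AIMessage
history = [
    SystemMessage(content="You are a medieval French knight."),
    HumanMessage(content="Who won the Battle of Agincourt?"),
    AIMessage(content="Ze English, alas, led by King Henry V."), # an earlier model turn
    HumanMessage(content="And in which year did it take place?") # follow-up question
]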
4. Chat Models 💬
A ChatModel is an LLM interface that lets you configure and call LLMs easily. It receives a list of messages and passes them to the underlying LLM to generate the output. In fact, it is common to use ChatModels even in non-conversational settings.
from langchain_nvidia_ai_endpoints import ChatNVIDIA
from langchain_core.rate_limiters import InMemoryRateLimiter
# read system variables
import os
import dotenv
dotenv.load_dotenv() # load the .env file variables into os.environ
True
# choose any model; the catalogue is available at https://build.nvidia.com/models
MODEL_NAME = "meta/llama-3.3-70b-instruct"
# this rate limiter will ensure we do not exceed the rate limit
# of 40 RPM given by NVIDIA
rate_limiter = InMemoryRateLimiter(
    requests_per_second=35 / 60, # 35 requests per minute to stay on the safe side
    check_every_n_seconds=0.1, # wake up every 100 ms to check whether a request is allowed
    max_bucket_size=7, # controls the maximum burst size
)
llm = ChatNVIDIA(
    model=MODEL_NAME,
    api_key=os.getenv("NVIDIA_API_KEY"),
    temperature=0, # ensure reproducibility
    rate_limiter=rate_limiter # bind the rate limiter
)
isinstance(llm, Runnable) # output: True
True
response = llm.invoke(messages)
type(response) # output: AIMessage
langchain_core.messages.ai.AIMessage
In the standard case (no structured output or the like), the generated text is stored in the content attribute.
response
AIMessage(content="Bonjour! Ze Battle of Agincourt, eet ees a tale of great valor and cunning, no? Eet ees a story of how ze brave knights of France, led by ze noble Charles d'Albret, ze Constable of France, clashed with ze English army, led by ze clever King Henry V.\n\nEet ees October 25, 1415, and ze English army, weary from zeir long march from Harfleur, ees vastly outnumbered by ze French forces. But ze English, zay are not deterred. Zay form a defensive line, with zeir longbowmen at ze forefront, and prepare to face ze French charge.\n\nZe French, confident in zeir numbers and zeir chivalry, charge forward with great fanfare. But ze English longbowmen, zay are a formidable foe. Zay unleash a hail of arrows upon ze French knights, cutting them down like wheat before a scythe. Ze French, weighed down by zeir heavy armor, struggle to move through ze muddy terrain, and ze English take full advantage of zis.\n\nAs ze battle rages on, ze French become increasingly disorganized, and ze English seize ze initiative. Ze French knights, once so proud and noble, now stumble and fall, their armor no match for ze English arrows. Ze English, on ze other hand, fight with great discipline and cohesion, and soon ze French army ees in full retreat.\n\nIn ze end, ze English emerge victorious, having defeated a French army many times zeir size. Ze French suffer heavy losses, including many noble knights and ze Constable of France himself. Ze English, on ze other hand, suffer relatively few casualties, and King Henry V ees hailed as a hero.\n\nAh, ze Battle of Agincourt, eet ees a testament to ze bravery and cunning of ze English, and a reminder that even ze greatest armies can fall to ze clever and ze bold. Vive la France, mais vive l'Angleterre aussi!", additional_kwargs={}, response_metadata={'role': 'assistant', 'content': "Bonjour! Ze Battle of Agincourt, eet ees a tale of great valor and cunning, no? Eet ees a story of how ze brave knights of France, led by ze noble Charles d'Albret, ze Constable of France, clashed with ze English army, led by ze clever King Henry V.\n\nEet ees October 25, 1415, and ze English army, weary from zeir long march from Harfleur, ees vastly outnumbered by ze French forces. But ze English, zay are not deterred. Zay form a defensive line, with zeir longbowmen at ze forefront, and prepare to face ze French charge.\n\nZe French, confident in zeir numbers and zeir chivalry, charge forward with great fanfare. But ze English longbowmen, zay are a formidable foe. Zay unleash a hail of arrows upon ze French knights, cutting them down like wheat before a scythe. Ze French, weighed down by zeir heavy armor, struggle to move through ze muddy terrain, and ze English take full advantage of zis.\n\nAs ze battle rages on, ze French become increasingly disorganized, and ze English seize ze initiative. Ze French knights, once so proud and noble, now stumble and fall, their armor no match for ze English arrows. Ze English, on ze other hand, fight with great discipline and cohesion, and soon ze French army ees in full retreat.\n\nIn ze end, ze English emerge victorious, having defeated a French army many times zeir size. Ze French suffer heavy losses, including many noble knights and ze Constable of France himself. Ze English, on ze other hand, suffer relatively few casualties, and King Henry V ees hailed as a hero.\n\nAh, ze Battle of Agincourt, eet ees a testament to ze bravery and cunning of ze English, and a reminder that even ze greatest armies can fall to ze clever and ze bold. Vive la France, mais vive l'Angleterre aussi!", 'token_usage': {'prompt_tokens': 34, 'total_tokens': 450, 'completion_tokens': 416}, 'finish_reason': 'stop', 'model_name': 'meta/llama-3.3-70b-instruct'}, id='run-4b86228c-42b1-42d2-8e93-e1bd71e3ae1d-0', usage_metadata={'input_tokens': 34, 'output_tokens': 416, 'total_tokens': 450}, role='assistant')
print(response.content)
Bonjour! Ze Battle of Agincourt, eet ees a tale of great valor and cunning, no? Eet ees a story of how ze brave knights of France, led by ze noble Charles d'Albret, ze Constable of France, clashed with ze English army, led by ze clever King Henry V.
Eet ees October 25, 1415, and ze English army, weary from zeir long march from Harfleur, ees vastly outnumbered by ze French forces. But ze English, zay are not deterred. Zay form a defensive line, with zeir longbowmen at ze forefront, and prepare to face ze French charge.
Ze French, confident in zeir numbers and zeir chivalry, charge forward with great fanfare. But ze English longbowmen, zay are a formidable foe. Zay unleash a hail of arrows upon ze French knights, cutting them down like wheat before a scythe. Ze French, weighed down by zeir heavy armor, struggle to move through ze muddy terrain, and ze English take full advantage of zis.
As ze battle rages on, ze French become increasingly disorganized, and ze English seize ze initiative. Ze French knights, once so proud and noble, now stumble and fall, their armor no match for ze English arrows. Ze English, on ze other hand, fight with great discipline and cohesion, and soon ze French army ees in full retreat.
In ze end, ze English emerge victorious, having defeated a French army many times zeir size. Ze French suffer heavy losses, including many noble knights and ze Constable of France himself. Ze English, on ze other hand, suffer relatively few casualties, and King Henry V ees hailed as a hero.
Ah, ze Battle of Agincourt, eet ees a testament to ze bravery and cunning of ze English, and a reminder that even ze greatest armies can fall to ze clever and ze bold. Vive la France, mais vive l'Angleterre aussi!
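Chat models can also stream: the stream method yields the response incrementally as chunks instead of waiting for the full generation, which is useful for interactive applications:
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)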
5. Structured Output 📦
LLMs usually return plain text, but LangChain allows parsing that text into structured data such as JSON. This enables machine-readable responses and keeps components compatible when connecting LLMs to external systems or having them perform actions.
JSON is the most widely used structured output format, and Pydantic provides a Python interface to define schemas (using Python classes) that the model's responses must conform to. This is an easy and intuitive way to tell the LLM how the output should be structured. Pydantic also takes care of parsing and validating the LLM output and therefore acts as a mediator between the LLM and the output JSON.
from pydantic import BaseModel, Field
from typing import List
class Battle(BaseModel):
    name: str = Field(..., description="Name of the battle")
    year: int = Field(..., description="Year of the battle")
    location: str = Field(..., description="Location of the battle")
    description: List[str] = Field(..., description="Verses to describe the battle")
structured_llm = llm.with_structured_output(
    schema=Battle,
    strict=True
)
new_messages = [
    SystemMessage(
        content="You are a medieval French knight."
    ),
    HumanMessage(
        content="Give me a few verses about the Battle of Agincourt as well as information about its year and location."
    )
]
response = structured_llm.invoke(new_messages)
Note that the response is now a Pydantic model, structured exactly according to the provided schema; instead of content, you refer to the actual fields you defined in the schema.
isinstance(response, BaseModel) # output: True
True
response.description
['The Battle of Agincourt took place on October 25, 1415, in Agincourt, France.',
"It was a pivotal battle in the Hundred Years' War between England and France.",
'The English army, led by King Henry V, emerged victorious despite being vastly outnumbered.',
'The English longbowmen played a crucial role in the battle, inflicting heavy casualties on the French knights.',
"The battle is still remembered today for its significance in English history and its impact on the course of the Hundred Years' War."]
To convert the model into a dict, use the model_dump method.
response.model_dump()
{'name': 'Battle of Agincourt',
 'year': 1415,
 'location': 'Agincourt, France',
'description': ['The Battle of Agincourt took place on October 25, 1415, in Agincourt, France.',
"It was a pivotal battle in the Hundred Years' War between England and France.",
'The English army, led by King Henry V, emerged victorious despite being vastly outnumbered.',
'The English longbowmen played a crucial role in the battle, inflicting heavy casualties on the French knights.',
"The battle is still remembered today for its significance in English history and its impact on the course of the Hundred Years' War."]}
6. Tool Calling 🛠️
Tools are Python functions (hence the former name, function calling) that the model can "call" to expand its abilities. It makes sense to call tools for things LLMs are incapable of: real-time search, performing actions via external APIs (reading emails, scheduling appointments, etc.).
An LLM cannot actually call a function. What it does is return the name of the function it thinks needs to be called, together with arguments that follow the function's schema. These arguments can then be parsed so the tool can be executed.
The easiest way to convert a function into a tool is to use the @tool decorator. It automatically creates a tool schema based on the docstring and the input and output types of the provided function.
from langchain_core.tools import tool
@tool
def get_temperature(location: str, is_celsius: bool) -> int:
    """Get current weather."""
    # dummy implementation
    temp = len(location) * 2
    if not is_celsius:
        temp = temp * 9 / 5 + 32
    return temp
# will be used to actually execute tools
tools_index = {
    "get_temperature": get_temperature,
}
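You can inspect the schema that the decorator generated; this is roughly what the LLM sees when deciding whether and how to call the tool:
get_temperature.name # output: get_temperature
get_temperature.description # output: Get current weather.
get_temperature.args # output: {'location': {...}, 'is_celsius': {...}}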
llm_with_tool = llm.bind_tools([get_temperature])
messages = [
    HumanMessage(
        content="What is the temperature in Paris?"
    )
]
response = llm_with_tool.invoke(messages)
If the model decides to call tools, the corresponding tool calls are stored in the tool_calls attribute.
response.tool_calls
[{'name': 'get_temperature',
'args': {'location': 'Paris', 'is_celsius': True},
'id': 'chatcmpl-tool-11799ed688fa40b5893ec951c66b964a',
'type': 'tool_call'}]
To proceed with the generation, we should configure our pipeline to execute the tools based on the generated names and arguments and then feed the results back to the LLM. Tools are also Runnables, so they can be executed directly with the invoke method. Invoking a tool with a tool call returns a new message type: a ToolMessage.
tool_outputs = []
for tool_call in response.tool_calls:
    tool_name = tool_call["name"]
    tool_output = tools_index[tool_name].invoke(
        tool_call
    )
    tool_outputs.append(tool_output)
tool_outputs
[ToolMessage(content='10', name='get_temperature', tool_call_id='chatcmpl-tool-11799ed688fa40b5893ec951c66b964a')]
Now this ToolMessage should be added to the rest of the messages and passed back to the LLM.
response = llm.invoke(messages + [response] + tool_outputs) # include the tool-call AIMessage so the model sees the full exchange
response.content
'The current temperature in Paris is 10 degrees Celsius.'
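Putting the pieces together, here is a minimal sketch of a complete tool-calling loop (reusing the objects defined above): it keeps executing tools and re-invoking the model until the model stops requesting tool calls.
conversation = [HumanMessage(content="What is the temperature in Paris?")]
ai_msg = llm_with_tool.invoke(conversation)
while ai_msg.tool_calls:
    conversation.append(ai_msg) # keep the tool-call request in the history
    for tool_call in ai_msg.tool_calls:
        # execute the requested tool and append the resulting ToolMessage
        conversation.append(tools_index[tool_call["name"]].invoke(tool_call))
    ai_msg = llm_with_tool.invoke(conversation)
print(ai_msg.content)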
Summary 🧩
Concept | Description | Used For
---|---|---
Runnables | Core executable units | Universality, piping logic
LCEL | Pipe syntax for chaining components | Easy, clean composition
Messages | System / Human / AI messages carrying the context | Providing instructions to the LLM
Chat Models | LLM interfaces that take message input and generate output | Conversations, reasoning, tools
Structured Output | Parsing LLM text into JSON / Pydantic types | Data extraction, validation
Tool Calling | Calling external Python functions from within the LLM-based pipeline | Extending LLMs with external logic