This quickstart demonstrates how to build a calculator agent using the LangGraph Graph API or the Functional API.
Using an AI coding assistant?
- Install the LangChain Docs MCP server to give your agent access to up-to-date LangChain documentation and examples.
- Install LangChain Skills to improve your agent’s performance on LangChain ecosystem tasks.
- Use the Graph API if you prefer to define your agent as a graph of nodes and edges.
- Use the Functional API if you prefer to define your agent as a single function.
For this example, you will need to set up a Claude (Anthropic) account and get an API key. Then, set the ANTHROPIC_API_KEY environment variable in your terminal.
Both versions of the quickstart follow: the Graph API first, then the Functional API.
Graph API
1. Define tools and model
In this example, we’ll use the Claude Sonnet 4.5 model and define tools for addition, multiplication, and division.
import { ChatAnthropic } from "@langchain/anthropic";
import { tool } from "@langchain/core/tools";
import * as z from "zod";
const model = new ChatAnthropic({
model: "claude-sonnet-4-6",
temperature: 0,
});
// Define tools
const add = tool(({ a, b }) => a + b, {
name: "add",
description: "Add two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
const multiply = tool(({ a, b }) => a * b, {
name: "multiply",
description: "Multiply two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
const divide = tool(({ a, b }) => a / b, {
name: "divide",
description: "Divide two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
// Augment the LLM with tools
const toolsByName = {
[add.name]: add,
[multiply.name]: multiply,
[divide.name]: divide,
};
const tools = Object.values(toolsByName);
const modelWithTools = model.bindTools(tools);
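Before wiring anything into a graph, you can sanity-check the tool binding by invoking the tool-bound model directly. This is a quick sketch, not part of the agent; the exact tool call ids will vary between runs.
// Optional check: the tool-bound model should emit a structured tool call
// rather than answering in prose.
const checkMessage = await modelWithTools.invoke("What is 3 plus 4?");
console.log(checkMessage.tool_calls);
// Typically something like: [{ name: "add", args: { a: 3, b: 4 }, id: "..." }]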
2. Define state
The graph’s state stores the messages and the number of LLM calls. State in LangGraph persists throughout the agent’s execution. MessagesValue provides a built-in reducer for appending messages; the llmCalls field uses a ReducedValue with (x, y) => x + y to accumulate the count.
import {
StateGraph,
StateSchema,
MessagesValue,
ReducedValue,
GraphNode,
ConditionalEdgeRouter,
START,
END,
} from "@langchain/langgraph";
import * as z from "zod";
const MessagesState = new StateSchema({
messages: MessagesValue,
llmCalls: new ReducedValue(
z.number().default(0),
{ reducer: (x, y) => x + y }
),
});
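To see what the reducer buys you: when a node returns a partial update like { llmCalls: 1 }, LangGraph combines it with the existing channel value via the reducer instead of overwriting it. The sketch below mirrors that merge by hand in plain TypeScript; it is an illustration, not a LangGraph API call.
// Illustration only: how the llmCalls channel accumulates across node runs.
const reducer = (x: number, y: number) => x + y;
let llmCalls = 0;                 // default from z.number().default(0)
llmCalls = reducer(llmCalls, 1);  // first model call  -> 1
llmCalls = reducer(llmCalls, 1);  // second model call -> 2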
3. Define model node
The model node calls the LLM, which decides whether to call a tool or respond directly.
import { SystemMessage } from "@langchain/core/messages";
const llmCall: GraphNode<typeof MessagesState> = async (state) => {
const response = await modelWithTools.invoke([
new SystemMessage(
"You are a helpful assistant tasked with performing arithmetic on a set of inputs."
),
...state.messages,
]);
return {
messages: [response],
llmCalls: 1,
};
};
4. Define tool node
The tool node executes the tools requested by the LLM and returns the results as tool messages.
import { AIMessage, ToolMessage } from "@langchain/core/messages";
const toolNode: GraphNode<typeof MessagesState> = async (state) => {
const lastMessage = state.messages.at(-1);
if (lastMessage == null || !AIMessage.isInstance(lastMessage)) {
return { messages: [] };
}
const result: ToolMessage[] = [];
for (const toolCall of lastMessage.tool_calls ?? []) {
const tool = toolsByName[toolCall.name];
const observation = await tool.invoke(toolCall);
result.push(observation);
}
return { messages: result };
};
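You can exercise the tool node in isolation by handing it a state whose last message carries a tool call. A minimal sketch, assuming the node can be called as a plain async function; the hand-built tool call (with a made-up id) stands in for real model output.
// Sketch: drive toolNode by hand with a fabricated tool call.
const update = await toolNode({
  messages: [
    new AIMessage({
      content: "",
      tool_calls: [{ name: "add", args: { a: 3, b: 4 }, id: "call_demo" }],
    }),
  ],
  llmCalls: 1,
});
console.log(update.messages); // expected: one ToolMessage with content "7"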
5. Define end logic
The conditional edge function routes to the tool node or ends the run, based on whether the LLM made a tool call.
const shouldContinue: ConditionalEdgeRouter<typeof MessagesState, "toolNode"> = (state) => {
const lastMessage = state.messages.at(-1);
// Check if it's an AIMessage before accessing tool_calls
if (!lastMessage || !AIMessage.isInstance(lastMessage)) {
return END;
}
// If the LLM makes a tool call, then perform an action
if (lastMessage.tool_calls?.length) {
return "toolNode";
}
// Otherwise, we stop (reply to the user)
return END;
};
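Because the router is a plain function, both branches are easy to check directly. Same caveat as the previous sketch: the messages are hand-built stand-ins for model output.
// Sketch: exercise both routing branches.
const toolCallMsg = new AIMessage({
  content: "",
  tool_calls: [{ name: "add", args: { a: 1, b: 2 }, id: "call_demo" }],
});
const plainMsg = new AIMessage({ content: "The answer is 3." });
console.log(shouldContinue({ messages: [toolCallMsg], llmCalls: 1 }));  // "toolNode"
console.log(shouldContinue({ messages: [plainMsg], llmCalls: 1 }));     // END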
6. Build and compile the agent
The agent is built using the StateGraph class and compiled using the compile method.
const agent = new StateGraph(MessagesState)
.addNode("llmCall", llmCall)
.addNode("toolNode", toolNode)
.addEdge(START, "llmCall")
.addConditionalEdges("llmCall", shouldContinue, ["toolNode", END])
.addEdge("toolNode", "llmCall")
.compile();
// Invoke
import { HumanMessage } from "@langchain/core/messages";
const result = await agent.invoke({
messages: [new HumanMessage("Add 3 and 4.")],
});
for (const message of result.messages) {
console.log(`[${message.type}]: ${message.text}`);
}
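To watch intermediate steps instead of waiting for the final state, compiled graphs also support streaming. A minimal sketch using streamMode: "values", which emits the full state after each step; the two-step prompt is just an assumption to make the tool loop visible.
// Sketch: stream state snapshots as the agent works.
const stream = await agent.stream(
  { messages: [new HumanMessage("Add 3 and 4, then multiply the result by 2.")] },
  { streamMode: "values" }
);
for await (const step of stream) {
  const last = step.messages.at(-1);
  if (last) console.log(`[${last.type}]: ${last.text}`);
}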
Trace and debug your agent with LangSmith. Follow the tracing quickstart to get set up. When ready for production, see Deploy for hosting options.
Full code example
// Step 1: Define tools and model
import { ChatAnthropic } from "@langchain/anthropic";
import { tool } from "@langchain/core/tools";
import * as z from "zod";
const model = new ChatAnthropic({
model: "claude-sonnet-4-6",
temperature: 0,
});
// Define tools
const add = tool(({ a, b }) => a + b, {
name: "add",
description: "Add two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
const multiply = tool(({ a, b }) => a * b, {
name: "multiply",
description: "Multiply two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
const divide = tool(({ a, b }) => a / b, {
name: "divide",
description: "Divide two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
// Augment the LLM with tools
const toolsByName = {
[add.name]: add,
[multiply.name]: multiply,
[divide.name]: divide,
};
const tools = Object.values(toolsByName);
const modelWithTools = model.bindTools(tools);
// Step 2: Define state
import {
StateGraph,
StateSchema,
MessagesValue,
ReducedValue,
GraphNode,
ConditionalEdgeRouter,
START,
END,
} from "@langchain/langgraph";
const MessagesState = new StateSchema({
messages: MessagesValue,
llmCalls: new ReducedValue(
z.number().default(0),
{ reducer: (x, y) => x + y }
),
});
// Step 3: Define model node
import { SystemMessage, AIMessage, ToolMessage } from "@langchain/core/messages";
const llmCall: GraphNode<typeof MessagesState> = async (state) => {
return {
messages: [await modelWithTools.invoke([
new SystemMessage(
"You are a helpful assistant tasked with performing arithmetic on a set of inputs."
),
...state.messages,
])],
llmCalls: 1,
};
};
// Step 4: Define tool node
const toolNode: GraphNode<typeof MessagesState> = async (state) => {
const lastMessage = state.messages.at(-1);
if (lastMessage == null || !AIMessage.isInstance(lastMessage)) {
return { messages: [] };
}
const result: ToolMessage[] = [];
for (const toolCall of lastMessage.tool_calls ?? []) {
const tool = toolsByName[toolCall.name];
const observation = await tool.invoke(toolCall);
result.push(observation);
}
return { messages: result };
};
// Step 5: Define logic to determine whether to end
const shouldContinue: ConditionalEdgeRouter<typeof MessagesState, "toolNode"> = (state) => {
const lastMessage = state.messages.at(-1);
// Check if it's an AIMessage before accessing tool_calls
if (!lastMessage || !AIMessage.isInstance(lastMessage)) {
return END;
}
// If the LLM makes a tool call, then perform an action
if (lastMessage.tool_calls?.length) {
return "toolNode";
}
// Otherwise, we stop (reply to the user)
return END;
};
// Step 6: Build and compile the agent
import { HumanMessage } from "@langchain/core/messages";
const agent = new StateGraph(MessagesState)
.addNode("llmCall", llmCall)
.addNode("toolNode", toolNode)
.addEdge(START, "llmCall")
.addConditionalEdges("llmCall", shouldContinue, ["toolNode", END])
.addEdge("toolNode", "llmCall")
.compile();
// Invoke
const result = await agent.invoke({
messages: [new HumanMessage("Add 3 and 4.")],
});
for (const message of result.messages) {
console.log(`[${message.type}]: ${message.text}`);
}
Functional API
1. Define tools and model
In this example, we’ll use the Claude Sonnet 4.5 model and define tools for addition, multiplication, and division.
import { ChatAnthropic } from "@langchain/anthropic";
import { tool } from "@langchain/core/tools";
import * as z from "zod";
const model = new ChatAnthropic({
model: "claude-sonnet-4-6",
temperature: 0,
});
// Define tools
const add = tool(({ a, b }) => a + b, {
name: "add",
description: "Add two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
const multiply = tool(({ a, b }) => a * b, {
name: "multiply",
description: "Multiply two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
const divide = tool(({ a, b }) => a / b, {
name: "divide",
description: "Divide two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
// Augment the LLM with tools
const toolsByName = {
[add.name]: add,
[multiply.name]: multiply,
[divide.name]: divide,
};
const tools = Object.values(toolsByName);
const modelWithTools = model.bindTools(tools);
2. Define model node
The model node calls the LLM, which decides whether to call a tool or respond directly.
import { task, entrypoint } from "@langchain/langgraph";
import { SystemMessage, type BaseMessage } from "@langchain/core/messages";
const callLlm = task({ name: "callLlm" }, async (messages: BaseMessage[]) => {
return modelWithTools.invoke([
new SystemMessage(
"You are a helpful assistant tasked with performing arithmetic on a set of inputs."
),
...messages,
]);
});
3. Define tool node
The tool node executes a single tool call and returns the result.
import type { ToolCall } from "@langchain/core/messages/tool";
const callTool = task({ name: "callTool" }, async (toolCall: ToolCall) => {
const tool = toolsByName[toolCall.name];
return tool.invoke(toolCall);
});
4. Define agent
import { addMessages } from "@langchain/langgraph";
import { type BaseMessage } from "@langchain/core/messages";
const agent = entrypoint({ name: "agent" }, async (messages: BaseMessage[]) => {
let modelResponse = await callLlm(messages);
while (true) {
if (!modelResponse.tool_calls?.length) {
break;
}
// Execute tools
const toolResults = await Promise.all(
modelResponse.tool_calls.map((toolCall) => callTool(toolCall))
);
messages = addMessages(messages, [modelResponse, ...toolResults]);
modelResponse = await callLlm(messages);
}
return messages;
});
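The agent above is stateless between calls. If you want conversation memory, the sketch below assumes entrypoint accepts a checkpointer option (mirroring the Graph API) and uses the in-memory MemorySaver; the single-call body and thread id are placeholders, not part of the quickstart.
// Sketch: persistence via a checkpointer. Assumes `checkpointer` is a valid
// entrypoint option; MemorySaver keeps state in memory only.
import { MemorySaver } from "@langchain/langgraph";
import { HumanMessage } from "@langchain/core/messages";
const persistentAgent = entrypoint(
  { name: "persistentAgent", checkpointer: new MemorySaver() },
  async (messages: BaseMessage[]) =>
    addMessages(messages, [await callLlm(messages)])
);
// Each thread_id keeps its own history across invocations.
await persistentAgent.invoke([new HumanMessage("Add 3 and 4.")], {
  configurable: { thread_id: "thread-1" },
});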
// Invoke
import { HumanMessage } from "@langchain/core/messages";
const result = await agent.invoke([new HumanMessage("Add 3 and 4.")]);
for (const message of result) {
console.log(`[${message.type}]: ${message.text}`);
}
Trace and debug your agent with LangSmith. Follow the tracing quickstart to get set up. When ready for production, see Deploy for hosting options.
Full code example
import { ChatAnthropic } from "@langchain/anthropic";
import { tool } from "@langchain/core/tools";
import {
task,
entrypoint,
addMessages,
} from "@langchain/langgraph";
import {
SystemMessage,
HumanMessage,
type BaseMessage,
} from "@langchain/core/messages";
import type { ToolCall } from "@langchain/core/messages/tool";
import * as z from "zod";
// Step 1: Define tools and model
const model = new ChatAnthropic({
model: "claude-sonnet-4-6",
temperature: 0,
});
// Define tools
const add = tool(({ a, b }) => a + b, {
name: "add",
description: "Add two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
const multiply = tool(({ a, b }) => a * b, {
name: "multiply",
description: "Multiply two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
const divide = tool(({ a, b }) => a / b, {
name: "divide",
description: "Divide two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
// Augment the LLM with tools
const toolsByName = {
[add.name]: add,
[multiply.name]: multiply,
[divide.name]: divide,
};
const tools = Object.values(toolsByName);
const modelWithTools = model.bindTools(tools);
// Step 2: Define model node
const callLlm = task({ name: "callLlm" }, async (messages: BaseMessage[]) => {
return modelWithTools.invoke([
new SystemMessage(
"You are a helpful assistant tasked with performing arithmetic on a set of inputs."
),
...messages,
]);
});
// Step 3: Define tool node
const callTool = task({ name: "callTool" }, async (toolCall: ToolCall) => {
const tool = toolsByName[toolCall.name];
return tool.invoke(toolCall);
});
// Step 4: Define agent
const agent = entrypoint({ name: "agent" }, async (messages: BaseMessage[]) => {
let modelResponse = await callLlm(messages);
while (true) {
if (!modelResponse.tool_calls?.length) {
break;
}
// Execute tools
const toolResults = await Promise.all(
modelResponse.tool_calls.map((toolCall) => callTool(toolCall))
);
messages = addMessages(messages, [modelResponse, ...toolResults]);
modelResponse = await callLlm(messages);
}
return messages;
});
// Invoke
const result = await agent.invoke([new HumanMessage("Add 3 and 4.")]);
for (const message of result) {
console.log(`[${message.type}]: ${message.text}`);
}