William Fu-Hinthorn
2024-09-13 17:06:33 -07:00
parent ef0d1af289
commit 609708eafe
8 changed files with 60 additions and 248 deletions

README.md

@@ -1,25 +1,22 @@
-# LangGraph ReAct Agent Template
+# LangGraph Simple Chatbot Template

-[![CI](https://github.com/langchain-ai/react-agent/actions/workflows/unit-tests.yml/badge.svg)](https://github.com/langchain-ai/react-agent/actions/workflows/unit-tests.yml)
+[![CI](https://github.com/langchain-ai/new-langgraph-project/actions/workflows/unit-tests.yml/badge.svg)](https://github.com/langchain-ai/new-langgraph-project/actions/workflows/unit-tests.yml)
-[![Integration Tests](https://github.com/langchain-ai/react-agent/actions/workflows/integration-tests.yml/badge.svg)](https://github.com/langchain-ai/react-agent/actions/workflows/integration-tests.yml)
+[![Integration Tests](https://github.com/langchain-ai/new-langgraph-project/actions/workflows/integration-tests.yml/badge.svg)](https://github.com/langchain-ai/new-langgraph-project/actions/workflows/integration-tests.yml)

-This template showcases a [ReAct agent](https://arxiv.org/abs/2210.03629) implemented using [LangGraph](https://github.com/langchain-ai/langgraph), designed for [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio). ReAct agents are uncomplicated, prototypical agents that can be flexibly extended to many tools.
+This template demonstrates a simple chatbot implemented using [LangGraph](https://github.com/langchain-ai/langgraph), designed for [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio). The chatbot maintains persistent chat memory, allowing for coherent conversations across multiple interactions.

-![Graph view in LangGraph studio UI](./static/studio_ui.png)
-The core logic, defined in `src/agent/graph.py`, demonstrates a flexible ReAct agent that iteratively reasons about user queries and executes actions, showcasing the power of this approach for complex problem-solving tasks.
+The core logic, defined in `src/agent/graph.py`, showcases a straightforward chatbot that responds to user queries while maintaining context from previous messages.

## What it does

-The ReAct agent:
+The simple chatbot:

-1. Takes a user **query** as input
+1. Takes a user **message** as input
-2. Reasons about the query and decides on an action
+2. Maintains a history of the conversation
-3. Executes the chosen action using available tools
+3. Generates a response based on the current message and conversation history
-4. Observes the result of the action
+4. Updates the conversation history with the new interaction
-5. Repeats steps 2-4 until it can provide a final answer

-By default, it's set up with a basic set of tools, but can be easily extended with custom tools to suit various use cases.
+This template provides a foundation that can be easily customized and extended to create more complex conversational agents.

## Getting Started
@@ -33,8 +30,6 @@ cp .env.example .env
2. Define required API keys in your `.env` file.

-The primary [search tool](./src/agent/tools.py) [^1] used is [Tavily](https://tavily.com/). Create an API key [here](https://app.tavily.com/sign-in).

<!--
Setup instruction auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
-->
@@ -45,29 +40,34 @@ Set up your LLM API keys. This repo defaults to using [Claude](https://console.a
End setup instructions
-->

-3. Customize whatever you'd like in the code.
+3. Customize the code as needed.
-4. Open the folder LangGraph Studio!
+4. Open the folder in LangGraph Studio!

## How to customize

-1. **Add new tools**: Extend the agent's capabilities by adding new tools in [tools.py](./src/agent/tools.py). These can be any Python functions that perform specific tasks.
+1. **Modify the system prompt**: The default system prompt is defined in [configuration.py](./src/agent/configuration.py). You can easily update this via configuration in the studio to change the chatbot's personality or behavior.
2. **Select a different model**: We default to Anthropic's Claude 3 Sonnet. You can select a compatible chat model using `provider/model-name` via configuration. Example: `openai/gpt-4-turbo-preview`.
-3. **Customize the prompt**: We provide a default system prompt in [configuration.py](./src/agent/configuration.py). You can easily update this via configuration in the studio.
+3. **Extend the graph**: The core logic of the chatbot is defined in [graph.py](./src/agent/graph.py). You can modify this file to add new nodes, edges, or change the flow of the conversation.

You can also quickly extend this template by:

-- Modifying the agent's reasoning process in [graph.py](./src/agent/graph.py).
+- Adding custom tools or functions to enhance the chatbot's capabilities.
-- Adjusting the ReAct loop or adding additional steps to the agent's decision-making process.
+- Implementing additional logic for handling specific types of user queries or tasks.
+- Integrating external APIs or databases to provide more dynamic responses.

## Development

-While iterating on your graph, you can edit past state and rerun your app from past states to debug specific nodes. Local changes will be automatically applied via hot reload. Try adding an interrupt before the agent calls tools, updating the default system message in `src/agent/configuration.py` to take on a persona, or adding additional nodes and edges!
+While iterating on your graph, you can edit past state and rerun your app from previous states to debug specific nodes. Local changes will be automatically applied via hot reload. Try experimenting with:
+
+- Modifying the system prompt to give your chatbot a unique personality.
+- Adding new nodes to the graph for more complex conversation flows.
+- Implementing conditional logic to handle different types of user inputs.

-Follow up requests will be appended to the same thread. You can create an entirely new thread, clearing previous history, using the `+` button in the top right.
+Follow-up requests will be appended to the same thread. You can create an entirely new thread, clearing previous history, using the `+` button in the top right.

-You can find the latest (under construction) docs on [LangGraph](https://github.com/langchain-ai/langgraph) here, including examples and other references. Using those guides can help you pick the right patterns to adapt here for your use case.
+For more advanced features and examples, refer to the [LangGraph documentation](https://github.com/langchain-ai/langgraph). These resources can help you adapt this template for your specific use case and build more sophisticated conversational agents.

-LangGraph Studio also integrates with [LangSmith](https://smith.langchain.com/) for more in-depth tracing and collaboration with teammates.
+LangGraph Studio also integrates with [LangSmith](https://smith.langchain.com/) for more in-depth tracing and collaboration with teammates, allowing you to analyze and optimize your chatbot's performance.

<!--
Configuration auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
@@ -78,7 +78,7 @@ Configuration auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
"properties": {
"system_prompt": {
"type": "string",
-"default": "You are a helpful AI assistant.\n\nSystem time: {system_time}"
+"default": "You are a helpful (if not sassy) personal assistant.\n\nSystem time: {system_time}"
},
"model_name": {
"type": "string",
@@ -265,196 +265,6 @@ Configuration auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
"variables": "OPENAI_API_KEY"
}
]
-},
-"scraper_tool_model_name": {
-"type": "string",
-"default": "accounts/fireworks/models/firefunction-v2",
-"environment": [
-{
-"value": "anthropic/claude-1.2",
-"variables": "ANTHROPIC_API_KEY"
-},
-{
-"value": "anthropic/claude-2.0",
-"variables": "ANTHROPIC_API_KEY"
-},
-{
-"value": "anthropic/claude-2.1",
-"variables": "ANTHROPIC_API_KEY"
-},
-{
-"value": "anthropic/claude-3-5-sonnet-20240620",
-"variables": "ANTHROPIC_API_KEY"
-},
-{
-"value": "anthropic/claude-3-haiku-20240307",
-"variables": "ANTHROPIC_API_KEY"
-},
-{
-"value": "anthropic/claude-3-opus-20240229",
-"variables": "ANTHROPIC_API_KEY"
-},
-{
-"value": "anthropic/claude-3-sonnet-20240229",
-"variables": "ANTHROPIC_API_KEY"
-},
-{
-"value": "anthropic/claude-instant-1.2",
-"variables": "ANTHROPIC_API_KEY"
-},
-{
-"value": "fireworks/gemma2-9b-it",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/llama-v3-70b-instruct",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/llama-v3-70b-instruct-hf",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/llama-v3-8b-instruct",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/llama-v3-8b-instruct-hf",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/llama-v3p1-405b-instruct",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/llama-v3p1-405b-instruct-long",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/llama-v3p1-70b-instruct",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/llama-v3p1-8b-instruct",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/mixtral-8x22b-instruct",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/mixtral-8x7b-instruct",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/mixtral-8x7b-instruct-hf",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/mythomax-l2-13b",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/phi-3-vision-128k-instruct",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/phi-3p5-vision-instruct",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/starcoder-16b",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "fireworks/yi-large",
-"variables": "FIREWORKS_API_KEY"
-},
-{
-"value": "openai/gpt-3.5-turbo",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-3.5-turbo-0125",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-3.5-turbo-0301",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-3.5-turbo-0613",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-3.5-turbo-1106",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-3.5-turbo-16k",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-3.5-turbo-16k-0613",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-4",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-4-0125-preview",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-4-0314",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-4-0613",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-4-1106-preview",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-4-32k",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-4-32k-0314",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-4-32k-0613",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-4-turbo",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-4-turbo-preview",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-4-vision-preview",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-4o",
-"variables": "OPENAI_API_KEY"
-},
-{
-"value": "openai/gpt-4o-mini",
-"variables": "OPENAI_API_KEY"
-}
-]
-},
-"max_search_results": {
-"type": "integer",
-"default": 10
-}
}
}
}
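The four "What it does" steps in the updated README amount to a read-history → generate → append loop. A minimal, dependency-free sketch of that flow (purely illustrative, not the template's actual code; `fake_llm` is an invented stand-in for a real chat model):

```python
# Illustrative only: a dependency-free sketch of the four-step chatbot loop.
# The real template keeps this state in a LangGraph StateGraph and calls an
# actual LLM; fake_llm is a stub so the flow runs anywhere.
from typing import Callable, Dict, List, Tuple

Message = Tuple[str, str]  # (role, content)


def fake_llm(history: List[Message]) -> str:
    # Stand-in for a chat model call; echoes the latest user message.
    last_user = next(c for r, c in reversed(history) if r == "user")
    return f"You said: {last_user}"


def chat_turn(
    state: Dict[str, List[Message]],
    user_message: str,
    llm: Callable[[List[Message]], str] = fake_llm,
) -> str:
    state["messages"].append(("user", user_message))   # 1. take the message as input
    history = state["messages"]                        # 2. maintain conversation history
    reply = llm(history)                               # 3. generate from message + history
    state["messages"].append(("assistant", reply))     # 4. update history with the turn
    return reply


state: Dict[str, List[Message]] = {"messages": []}
chat_turn(state, "hello")
chat_turn(state, "how are you?")
# state["messages"] now holds four messages spanning both turns
```

Because the whole history is re-read each turn, follow-up questions see prior context, which is the "persistent chat memory" the README advertises.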

pyproject.toml

@@ -9,6 +9,7 @@ readme = "README.md"
license = { text = "MIT" }
requires-python = ">=3.9"
dependencies = [
+"langchain>=0.3.0",
"langchain-anthropic>=0.2.0",
"langgraph>=0.2.6",
"python-dotenv>=1.0.1",

src/agent/configuration.py

@@ -27,7 +27,7 @@ class Configuration:
"kind": "llm",
}
},
-] = "claude-3-5-sonnet-20240620"
+] = "anthropic/claude-3-5-sonnet-20240620"
"""The name of the language model to use for our chatbot."""
@classmethod
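The default is now a fully specified `provider/model` string, matching the new `load_chat_model` helper. As context, the overall Configuration pattern can be sketched in a self-contained way (a simplified stand-in: the field names and defaults come from this diff, but the `from_runnable_config` body below is an assumption about the common LangGraph pattern, not the template's exact code):

```python
# Simplified sketch of the Configuration pattern used in this diff. The real
# class lives in src/agent/configuration.py; the from_runnable_config body
# here is an assumed reading of the usual LangGraph "configurable" pattern.
from dataclasses import dataclass, fields
from typing import Any, Dict, Optional


@dataclass
class Configuration:
    system_prompt: str = (
        "You are a helpful (if not sassy) personal assistant.\n\n"
        "System time: {system_time}"
    )
    model_name: str = "anthropic/claude-3-5-sonnet-20240620"

    @classmethod
    def from_runnable_config(cls, config: Optional[Dict[str, Any]] = None) -> "Configuration":
        # LangGraph passes per-run overrides under config["configurable"];
        # unknown keys are ignored, known keys override the defaults above.
        configurable = (config or {}).get("configurable", {})
        known = {f.name for f in fields(cls)}
        return cls(**{k: v for k, v in configurable.items() if k in known})


cfg = Configuration.from_runnable_config({})
prompt = cfg.system_prompt.format(system_time="2024-09-13T17:06:33+00:00")
```

This also shows how the `{system_time}` placeholder in the default prompt gets filled via `str.format` at call time.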

src/agent/graph.py

@@ -6,18 +6,17 @@ Works with a chat model with tool calling support.
from datetime import datetime, timezone
from typing import Any, Dict, List

-import anthropic
-from agent.configuration import Configuration
-from agent.state import State
from langchain_core.runnables import RunnableConfig
from langgraph.graph import StateGraph

+from agent.configuration import Configuration
+from agent.state import State
+from agent.utils import load_chat_model

# Define the function that calls the model
-async def call_model(
-    state: State, config: RunnableConfig
-) -> Dict[str, List[Dict[str, Any]]]:
+async def call_model(state: State, config: RunnableConfig) -> Dict[str, List[Any]]:
"""Call the LLM powering our "agent".

This function prepares the prompt, initializes the model, and processes the response.
@@ -33,23 +32,11 @@ async def call_model(
system_prompt = configuration.system_prompt.format(
    system_time=datetime.now(tz=timezone.utc).isoformat()
)
-toks = []
-async with anthropic.AsyncAnthropic() as client:
-    async with client.messages.stream(
-        model=configuration.model_name,
-        max_tokens=1024,
-        system=system_prompt,
-        messages=state.messages,
-    ) as stream:
-        async for text in stream.text_stream:
-            toks.append(text)
+model = load_chat_model(configuration.model_name)
+res = await model.ainvoke([("system", system_prompt), *state.messages])
# Return the model's response as a list to be added to existing messages
-return {
-    "messages": [
-        {"role": "assistant", "content": [{"type": "text", "text": "".join(toks)}]}
-    ]
-}
+return {"messages": [res]}

# Define a new graph
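The refactor replaces a hand-rolled Anthropic streaming call with a provider-agnostic `load_chat_model(...)` plus `ainvoke(...)`. The control flow of the new `call_model` can be sketched runnably with a stub model (the `StubModel` class is invented here so the sketch needs no langchain install; a real `BaseChatModel.ainvoke` returns an `AIMessage`, not a dict):

```python
# Runnable sketch of the new call_model shape. StubModel is an invented
# stand-in for load_chat_model(...)'s return value; only the control flow
# mirrors the diff above.
import asyncio
from typing import Any, Dict, List


class StubModel:
    async def ainvoke(self, messages: List[Any]) -> Dict[str, str]:
        # A real chat model would return an AIMessage; a dict stands in here.
        return {"role": "assistant", "content": f"saw {len(messages)} messages"}


async def call_model(state: Dict[str, List[Any]]) -> Dict[str, List[Any]]:
    system_prompt = "You are a helpful AI assistant."
    model = StubModel()  # real code: load_chat_model(configuration.model_name)
    # Prepend the system prompt, then pass the full conversation history.
    res = await model.ainvoke([("system", system_prompt), *state["messages"]])
    # The returned list is merged into state via the add_messages reducer.
    return {"messages": [res]}


out = asyncio.run(call_model({"messages": [("user", "hi")]}))
# out["messages"][-1]["content"] == "saw 2 messages"
```

Returning `{"messages": [res]}` rather than mutating state is what lets the state's reducer decide how the new message is merged in.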

src/agent/state.py

@@ -2,10 +2,11 @@
from __future__ import annotations

-import operator
from dataclasses import dataclass, field
-from typing import Sequence
+from typing import List

+from langchain_core.messages import AnyMessage
+from langgraph.graph import add_messages
from typing_extensions import Annotated
@@ -16,7 +17,7 @@ class State:
This class is used to define the initial state and structure of incoming data.
"""

-messages: Annotated[Sequence[dict], operator.add] = field(default_factory=list)
+messages: Annotated[List[AnyMessage], add_messages] = field(default_factory=list)
"""
Messages tracking the primary execution state of the agent.
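The reducer swap matters: `operator.add` blindly concatenates lists, while langgraph's `add_messages` also upserts by message id, so an edited message replaces its earlier version instead of duplicating it. A simplified illustration of that difference (this reducer is a sketch, not langgraph's actual implementation, which also coerces message types):

```python
# Why add_messages instead of operator.add: concatenation vs id-based upsert.
# add_messages_sketch is a simplified illustration, not langgraph's code.
import operator
from typing import Dict, List


def add_messages_sketch(left: List[Dict], right: List[Dict]) -> List[Dict]:
    merged = list(left)
    index = {m["id"]: i for i, m in enumerate(merged)}
    for msg in right:
        if msg["id"] in index:
            merged[index[msg["id"]]] = msg  # same id: replace in place
        else:
            index[msg["id"]] = len(merged)
            merged.append(msg)              # new id: append
    return merged


a = [{"id": "1", "content": "hi"}]
b = [{"id": "1", "content": "hi, edited"}, {"id": "2", "content": "there"}]

assert operator.add(a, b) == a + b          # naive concat: 3 entries, duplicate id "1"
assert len(add_messages_sketch(a, b)) == 2  # upsert: edit applied, no duplicate
```

This id-aware merge is also what makes Studio's "edit past state and rerun" workflow behave sensibly.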

src/agent/utils.py (new file)

@@ -0,0 +1,14 @@
+"""Utility & helper functions."""
+
+from langchain.chat_models import init_chat_model
+from langchain_core.language_models import BaseChatModel
+
+
+def load_chat_model(fully_specified_name: str) -> BaseChatModel:
+    """Load a chat model from a fully specified name.
+
+    Args:
+        fully_specified_name (str): String in the format 'provider/model'.
+    """
+    provider, model = fully_specified_name.split("/", maxsplit=1)
+    return init_chat_model(model, model_provider=provider)
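The `maxsplit=1` detail is deliberate: only the first `/` separates provider from model, so model names that themselves contain slashes survive intact. A pure-stdlib demo of just that parsing step (the helper name below is ours; the real function goes on to call `init_chat_model`):

```python
# Demo of the "provider/model" parsing used by load_chat_model above.
# parse_fully_specified_name is a local illustration of the split only.
from typing import Tuple


def parse_fully_specified_name(fully_specified_name: str) -> Tuple[str, str]:
    provider, model = fully_specified_name.split("/", maxsplit=1)
    return provider, model


assert parse_fully_specified_name("anthropic/claude-3-5-sonnet-20240620") == (
    "anthropic",
    "claude-3-5-sonnet-20240620",
)
# maxsplit=1 keeps any further slashes inside the model name:
assert parse_fully_specified_name("fireworks/accounts/fireworks/models/firefunction-v2") == (
    "fireworks",
    "accounts/fireworks/models/firefunction-v2",
)
```

Without `maxsplit=1`, the second example would raise a `ValueError` from unpacking more than two pieces.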


@@ -1,12 +1,11 @@
import pytest
-from agent import graph
from langsmith import expect, unit

+from agent import graph

@pytest.mark.asyncio
@unit
async def test_agent_simple_passthrough() -> None:
-    res = await graph.ainvoke(
-        {"messages": [{"role": "user", "content": "What's 62 - 19?"}]}
-    )
-    expect(res["messages"][-1]["content"][0]["text"]).to_contain("43")
+    res = await graph.ainvoke({"messages": ["user", "What's 62 - 19?"]})
+    expect(str(res["messages"][-1].content)).to_contain("43")


@@ -1,5 +1,5 @@
from agent.configuration import Configuration


-def test_configuration_empty():
+def test_configuration_empty() -> None:
    Configuration.from_runnable_config({})