This article walks through a simple LangGraph example. I hope it offers developers a useful reference for solving similar problems — let's work through it together!
While learning about agents I came across LangGraph, and following the documentation I tried out a simple LangGraph demo.
1. Environment setup:
pip install langchain
pip install langchain_openai
pip install langgraph
2. Code:
from typing import TypedDict, Annotated, Sequence
import operator
import json

from langchain_core.messages import BaseMessage, FunctionMessage, HumanMessage
from langchain.tools.render import format_tool_to_openai_function
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import ToolExecutor, ToolInvocation
from langgraph.graph import StateGraph, END
from langchain.tools import tool

# Load .env into environment variables, so that the OPENAI_API_KEY and
# OPENAI_BASE_URL settings in the .env file can be read.
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())

# Custom tools
@tool
def search(query: str) -> str:
    """Look up things online."""
    print(f"search: {query}")
    return "sunny"

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

tools = [search, multiply]
tool_executor = ToolExecutor(tools)

# We will set streaming=True so that we can stream tokens.
# See the streaming section for more information on this.
model = ChatOpenAI(temperature=0, streaming=True)
functions = [format_tool_to_openai_function(t) for t in tools]
model = model.bind_functions(functions)

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]

# Define the function that determines whether to continue or not
def should_continue(state):
    messages = state['messages']
    last_message = messages[-1]
    # If there is no function call, then we finish
    if "function_call" not in last_message.additional_kwargs:
        return "end"
    # Otherwise if there is, we continue
    else:
        return "continue"

# Define the function that calls the model
def call_model(state):
    messages = state['messages']
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}

# Define the function to execute tools
def call_tool(state):
    messages = state['messages']
    # Based on the continue condition
    # we know the last message involves a function call
    last_message = messages[-1]
    # We construct a ToolInvocation from the function_call
    action = ToolInvocation(
        tool=last_message.additional_kwargs["function_call"]["name"],
        tool_input=json.loads(last_message.additional_kwargs["function_call"]["arguments"]),
    )
    # We call the tool_executor and get back a response
    response = tool_executor.invoke(action)
    print(f"response:{response}")
    # We use the response to create a FunctionMessage
    function_message = FunctionMessage(content=str(response), name=action.tool)
    print(f"function_message:{function_message}")
    # We return a list, because this will get added to the existing list
    return {"messages": [function_message]}

# Define a new graph
workflow = StateGraph(AgentState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)

# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.set_entry_point("agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call `should_continue`, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If `continue`, then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END
    }
)

# We now add a normal edge from `action` to `agent`.
# This means that after `action` is called, the `agent` node is called next.
workflow.add_edge('action', 'agent')

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()

# inputs = {"messages": [HumanMessage(content="what is the weather in Beijing?")]}
inputs = {"messages": [HumanMessage(content="3乘以5等于多少,输出最终的结果")]}
response = app.invoke(inputs)
print(type(response))
print(f"last result:{response}")
# The output looks like this:
# {'messages': [HumanMessage(content='3乘以5等于多少'), AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n "a": 3,\n "b": 5\n}', 'name': 'multiply'}}, response_metadata={'finish_reason': 'function_call'}, id='run-bbf18160-747f-48ac-9a81-6c1ee3b70b07-0'), FunctionMessage(content='15', name='multiply'), AIMessage(content='3乘以5等于15。', response_metadata={'finish_reason': 'stop'}, id='run-0d1403cf-4ddb-4db2-8cfa-d0965666e62d-0')]}
For concepts such as state machines, nodes, edges, and directed graphs, please refer to the relevant documentation; I won't repeat them here.
The code above adds two nodes, named agent and action, along with one conditional edge and one normal edge.
3. Explanation of a few functions:
3.1. add_node(key, action):
Adds a node. Nodes do the actual processing.
key is the node's name; it is used later to refer to this node.
action is a function or an LCEL runnable. It should accept a dict shaped like the state object as input,
and its output should be a dict whose keys are attributes of the state object, which is used to update the corresponding values in the state.
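This dict-in, partial-dict-out contract can be sketched without langgraph at all. The hypothetical my_node below is made up for illustration, and the operator.add merge is a simplified picture of what the framework does for a key annotated like messages above — not langgraph's actual implementation:

```python
import operator

def my_node(state: dict) -> dict:
    # Read from the state...
    count = len(state["messages"])
    # ...and return only the keys you want to update.
    return {"messages": [f"node saw {count} message(s)"]}

# How an operator.add-annotated key is conceptually merged:
# the returned list is appended to the existing one, not substituted.
state = {"messages": ["hello"]}
update = my_node(state)
state["messages"] = operator.add(state["messages"], update["messages"])
print(state["messages"])  # ['hello', 'node saw 1 message(s)']
```

This is why call_model and call_tool in the demo return {"messages": [response]} rather than the whole message list.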
3.2. add_edge(start_key, end_key)
Adds an edge between two nodes, directed from the first to the second.
start_key: name of the start node
end_key: name of the end node
3.3. add_conditional_edges(source, path, path_map=None, then=None)
Adds conditional edges.
source (str) – name of the start node
path (Union[Callable, Runnable]) – callback that decides the next node
path_map (Optional[dict[str, str]], default: None) – dict mapping the callback's return value to the name of the next node
then (Optional[str], default: None) – name of the node to run after the selected node finishes
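The routing essentially boils down to: call path on the current state, then look its return value up in path_map. A toy sketch of that idea (the state key pending_call is invented for this example; this is not langgraph's real routing code):

```python
END = "__end__"  # stand-in for langgraph's END sentinel

def should_continue(state: dict) -> str:
    # The `path` callback: inspect the state, return a routing key.
    return "continue" if state.get("pending_call") else "end"

# The `path_map`: translate the routing key into a node name.
path_map = {"continue": "action", "end": END}

def route(state: dict) -> str:
    # This lookup is what add_conditional_edges wires up for you.
    return path_map[should_continue(state)]

print(route({"pending_call": True}))   # action
print(route({"pending_call": False}))  # __end__
```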
3.4. set_entry_point(key)
Sets the start node; the graph begins running from this node.
3.5. compile(checkpointer=None, interrupt_before=None, interrupt_after=None, debug=False)
Compiles the state graph into a CompiledGraph object.
That wraps up this article on a simple LangGraph example — I hope it proves helpful!