In LangChain, how to save the verbose output to a variable?

I tried executing a LangChain agent. I want to save the verbose output into a variable, but all I can access from agent.run is the final answer.

How can I save the verbose output to a variable so that I can use it later?

My code:

import json
from langchain.agents import load_tools, initialize_agent, AgentType, Tool
from langchain.llms import OpenAI
from langchain.utilities import PythonREPL

llm = OpenAI(temperature=0.1)

## Define Tools
python_repl = PythonREPL()

tools = load_tools(["python_repl", "llm-math"], llm=llm)

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

response = agent.run("What is 3^2. Use calculator to solve.")

I tried accessing the response from the agent, but it contains only the final answer instead of the verbose output.

Printing response gives only 9, but I would like the verbose trace, like:

> Entering new AgentExecutor chain...
 I need to use the calculator to solve this.
Action: Calculator
Action Input: 3^2
Observation: Answer: 9
Thought: I now know the final answer.
Final Answer: 9
Garratt answered 5/6, 2023 at 11:31 Comment(0)

I couldn't find an API for saving the verbose output to a variable.

However, I think an alternative solution is to access the intermediate steps, as described in this link.

That is, set return_intermediate_steps=True:

agent = initialize_agent(
  tools, 
  llm,
  agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, 
  verbose=True,
  return_intermediate_steps=True
)

and call response = agent({"input": "What is 3^2. Use calculator to solve"}) instead of agent.run.

Finally, you can access the intermediate steps in response["intermediate_steps"].
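
If you want the trace back as a single string, a minimal sketch like the one below can rebuild it from the intermediate steps. It assumes each entry in response["intermediate_steps"] is an (AgentAction, observation) tuple and that AgentAction.log holds the thought/action text, which is how the agent returns them in this LangChain version.

# Rebuild a verbose-style trace from the intermediate steps (sketch)
verbose_lines = []
for action, observation in response["intermediate_steps"]:
    verbose_lines.append(action.log.strip())            # thought + action + action input
    verbose_lines.append(f"Observation: {observation}")
verbose_lines.append(f"Final Answer: {response['output']}")

verbose_output = "\n".join(verbose_lines)
print(verbose_output)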

Hope this helps.

Empire answered 12/6, 2023 at 8:12 Comment(3)
Thanks. This gets the verbose process indeed, though I guess I'll have to format the intermediate steps myself.Garratt
Also, response["output"] has the final answer string!Freeliving
@Tyler2P I have a doubt: how can we store the intermediate steps as an AI message in memory (ConversationBufferMemory) if the intermediate steps value is JSON/dict and not a string? Thanks in advance.Wariness

Coming to this problem about a year later, I'm getting the message:

UserWarning: Received additional kwargs {'return_intermediate_steps': True} which are no longer supported.

So the above solution is out of date.

I haven't been able to find a perfect solution for saving the "verbosity", which would be very useful for my project, but for now I'm saving the output with the shell's output-redirection operator ">".

Example: python run_langchainmodel.py > file.txt

This saves the output to a file called file.txt.
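
Alternatively, since verbose=True just prints the trace, you can capture it in-process with Python's contextlib.redirect_stdout instead of redirecting to a file. A rough sketch (it assumes the verbose logging goes to stdout; the captured text may also contain ANSI colour codes):

import io
from contextlib import redirect_stdout

buffer = io.StringIO()
with redirect_stdout(buffer):
    response = agent.run("What is 3^2. Use calculator to solve.")

verbose_output = buffer.getvalue()   # the full verbose trace as a string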

If anyone has a better solution, I, and I'm sure others, would love to hear it.

Ger answered 14/6 at 18:38 Comment(1)
Which version of the langchain package are you using?Mattias

I just found a way to do this. In my case, I was using the SQL agent.

agent = create_sql_agent(
    llm=llm,
    db=db,
    # extra_tools=[retriever_tool],
    prompt=prompt,
    agent_type="openai-tools",
    verbose=True,
)

After obtaining the AgentExecutor instance, which I named agent, set its return_intermediate_steps parameter to True:

agent.return_intermediate_steps = True

This is the equivalent for anyone looking to obtain the agent's thoughts and actions, i.e. the agent_scratchpad.
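
As a rough usage sketch (the question string is just a placeholder): with return_intermediate_steps=True, the result dictionary returned by invoke also carries the steps.

result = agent.invoke({"input": "How many rows are in the users table?"})

print(result["output"])                  # final answer
for action, observation in result["intermediate_steps"]:
    print(action.log)                    # the agent's thought/action text
    print(f"Observation: {observation}")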

Arequipa answered 11/7 at 4:29 Comment(0)
