In LangChain, why is ConversationalRetrievalChain not remembering the chat history, and why does the log show Entering new ConversationalRetrievalChain chain for each chat?

I am trying to create a customer support system using LangChain. I am using text documents as an external knowledge source via TextLoader.

In order to remember the chat, I am using ConversationalRetrievalChain with a list of chats.

My problem is that each time I execute conv_chain({"question": prompt, "chat_history": chat_history}),

it creates a new ConversationalRetrievalChain; that is, in the log I get the Entering new ConversationalRetrievalChain chain > message.

And the chat_history array looks like multiple nested arrays:

[[ "Hi I am Ragesh", "Hi Ragesh, How are your"] , ["What is my name?", "Sorry, As an AI....., " ]]

So it cannot remember my previous chat.

Why is this happening?

I am very new to the AI field. Please help me.

My code:

https://gist.github.com/RageshAntony/79a9050b76e74f5ea868888cd57c6705

Studdingsail answered 16/5, 2023 at 14:26 Comment(3)
Please post a minimal reproducible example inline, in your actual post, instead of linking to GitHub.Blend
"By default, Chains and Agents are stateless, meaning that they treat each incoming query independently" - the LangChain docs highlight that Chains are stateless by nature - they do not preserve memory. However there are a number of Memory objects that can be added to conversational chains to preserve state/chat history. Have a look at this documentation on how to add memory to a ConversatoinalRetrievalChain.Blend
Check whether this helps -> Similar problem.Kanal
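
A minimal sketch of the Memory approach mentioned in the comment above, assuming the classic LangChain API; llm and vectorstore stand in for the objects defined in the question's gist:

from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

# Attach a memory object so the chain tracks history internally.
memory = ConversationBufferMemory(
    memory_key="chat_history",  # must match the input key the chain expects
    return_messages=True,
    output_key="answer",        # needed when return_source_documents=True
)

support_qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
    return_source_documents=True,
)

# With memory attached, only the question is passed in; the history no
# longer has to be threaded through each call by hand.
support_qa({"question": "Hi, I am Ragesh"})
support_qa({"question": "What is my name?"})  # the chain now sees the earlier turn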

You can change this:

def generate_response(support_qa: BaseConversationalRetrievalChain, prompt):
    response = support_qa({"question": prompt, "chat_history": chat_history})
    chat_history.append((prompt, response["answer"]))
    print(json.dumps(chat_history))
    return response["answer"]

to the code below, for the times when you need your history:

def generate_response(support_qa: BaseConversationalRetrievalChain, prompt):
    # Seed the history the chain should see; previous_question and
    # previous_answer are placeholders for your stored earlier exchange.
    chat_history = [(previous_question, previous_answer)]
    response = support_qa({"question": prompt, "chat_history": chat_history})
    chat_history.append((prompt, response["answer"]))
    print(json.dumps(chat_history))
    return response["answer"]

This will ensure that the chat_history passed to support_qa is the history you were looking for. Be careful with maximum-token issues; you may need a map-reduce step to summarize your history.
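
On the token-limit point, a minimal sketch assuming LangChain's ConversationSummaryBufferMemory, which condenses older turns once the buffered history exceeds a token budget; llm and vectorstore are again placeholders for the question's objects:

from langchain.memory import ConversationSummaryBufferMemory
from langchain.chains import ConversationalRetrievalChain

# Older exchanges are folded into a running summary once the history grows
# past max_token_limit, keeping the prompt inside the context window.
summary_memory = ConversationSummaryBufferMemory(
    llm=llm,
    max_token_limit=1000,
    memory_key="chat_history",
    return_messages=True,
    output_key="answer",
)

support_qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    memory=summary_memory,
    return_source_documents=True,
)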

Phosphor answered 18/5, 2023 at 0:13 Comment(1)
What if chat_history contains string literals, something like {abc}?Headship
from langchain.chains import ConversationalRetrievalChain

# llm, vectorstore, SUPPORT_PROMPT and CONDENSE_QUESTION_PROMPT are defined
# in the question's gist.
support_qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    # retriever=support_store.as_retriever(search_kwargs={"k": 3}),
    verbose=True,
    return_source_documents=True,
    combine_docs_chain_kwargs={"prompt": SUPPORT_PROMPT},
    # qa_prompt=SUPPORT_PROMPT,
    condense_question_prompt=CONDENSE_QUESTION_PROMPT,
)

This can solve the problem.
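
To clarify the two prompts: CONDENSE_QUESTION_PROMPT rewrites a follow-up question into a standalone question using the chat history, while the combine-docs prompt (SUPPORT_PROMPT here) answers that standalone question from the retrieved documents. A minimal sketch of how such prompts might be defined with PromptTemplate; the wording is illustrative, and LangChain also ships a default CONDENSE_QUESTION_PROMPT in langchain.chains.conversational_retrieval.prompts:

from langchain.prompts import PromptTemplate

# Turns a follow-up like "What is my name?" plus the history into a
# standalone question the retriever can handle.
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(
    "Given the following conversation and a follow up question, rephrase "
    "the follow up question to be a standalone question.\n\n"
    "Chat History:\n{chat_history}\n"
    "Follow Up Input: {question}\n"
    "Standalone question:"
)

# Answers the standalone question from the retrieved support documents.
SUPPORT_PROMPT = PromptTemplate.from_template(
    "You are a customer support assistant. Use the following context to "
    "answer the question.\n\n{context}\n\n"
    "Question: {question}\n"
    "Helpful answer:"
)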

Thyrsus answered 10/11, 2023 at 9:45 Comment(2)
Can you please post the full code? I could not understand the difference between SUPPORT_PROMPT and CONDENSE_QUESTION_PROMPT.Orthodoxy
@Orthodoxy Reference the original question and you will find the full code... This answer references the information provided in the question.Phono

You need to update your LangChain version.

Retinoscopy answered 10/6, 2023 at 9:53 Comment(1)
How will it resolve the issue?Enthetic
