ValueError: `run` not supported when there is not exactly one input key, got ['question', 'documents']

I am getting the following error while trying to run some LangChain code:

ValueError: `run` not supported when there is not exactly one input key, got ['question', 'documents'].
Traceback:
File "c:\users\aviparna.biswas\appdata\local\programs\python\python37\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "D:\Python Projects\POC\Radium\Ana\app.py", line 49, in <module>
    answer = question_chain.run(formatted_prompt)
File "c:\users\aviparna.biswas\appdata\local\programs\python\python37\lib\site-packages\langchain\chains\base.py", line 106, in run
    f"`run` not supported when there is not exactly one input key, got ['question', 'documents']."

My code is as follows.

import os
from apikey import apikey

import streamlit as st
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SequentialChain
#from langchain.memory import ConversationBufferMemory
from docx import Document

os.environ['OPENAI_API_KEY'] = apikey

# App framework
st.title('🦜🔗 Colab Ana Answering Bot..')
prompt = st.text_input('Plug in your question here')


# Upload multiple documents
uploaded_files = st.file_uploader("Choose your documents (docx files)", accept_multiple_files=True, type=['docx'])
document_text = ""

# Read and combine Word documents
def read_docx(file):
    doc = Document(file)
    full_text = []
    for paragraph in doc.paragraphs:
        full_text.append(paragraph.text)
    return '\n'.join(full_text)

for file in uploaded_files:
    document_text += read_docx(file) + "\n\n"

with st.expander('Contextual Prompt'):
    st.write(document_text)

# Prompt template
question_template = PromptTemplate(
    input_variables=['question', 'documents'],
    template='Given the following documents: {documents}. Answer the question: {question}'
)

# Llms
llm = OpenAI(temperature=0.9)
question_chain = LLMChain(llm=llm, prompt=question_template, verbose=True, output_key='answer')

# Show answer if there's a prompt and documents are uploaded
if prompt and document_text:
    formatted_prompt = question_template.format(question=prompt, documents=document_text)
    answer = question_chain.run(formatted_prompt)
    st.write(answer['answer'])

I have gone through the documentation and I am still getting the same error. I have already seen demos where LangChain takes multiple inputs in a prompt.

Penn answered 8/5, 2023 at 10:24 Comment(2)
There is some inconsistency in the error message. It says: File "D:\Python Projects\POC\Radium\Ana\app.py", line 49, in <module> answer = question_chain.run(input_variables), but on line 49 of the code you posted the argument to run isn't input_variables, it is formatted_prompt. – Sunshine
Extremely sorry. Allow me to correct this. – Penn

For a prompt with multiple inputs, use predict() instead of run(), or just call the chain directly with a dict of inputs. (Note: LangChain requires Python 3.8+.)

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt_template = "Tell me a {adjective} joke and make it include a {profession}"
llm_chain = LLMChain(
    llm=OpenAI(temperature=0.5),
    prompt=PromptTemplate.from_template(prompt_template)
)

# Option 1: call the chain directly with a dict of inputs
llm_chain(inputs={"adjective": "corny", "profession": "plumber"})

# Option 2: pass each input as a keyword argument to predict()
llm_chain.predict(adjective="corny", profession="plumber")

Also note that you only assign the PromptTemplate when you instantiate the LLMChain; after that you pass in just the template variables, in your case question and documents, rather than the already formatted template string, as your code does currently.
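
Applied to the code in the question, a minimal sketch of the fix would be (same names as in the question; note that predict() returns the answer string directly, so there is no ['answer'] lookup):

# Pass the raw inputs and let the chain apply the template itself
if prompt and document_text:
    answer = question_chain.predict(question=prompt, documents=document_text)
    st.write(answer)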

Estheresthesia answered 8/5, 2023 at 20:1 Comment(9)
Hello, I followed your solution, but unfortunately the error kept coming. Then I switched to Python 3.10 (I was previously on 3.7) and it worked. It seems LangChain needs to run on a Python version > 3.8. Have they stated this in any documentation? – Penn
Ah, that would be part of the problem for sure. There isn't anything in the main documentation I can see about Python versioning. Does this solution work for you with the upgraded Python? – Estheresthesia
Yup, it only started working after I changed the Python version. Could you edit to add the specific Python version as well, i.e. > 3.8, so I can accept the solution? – Penn
When using ConversationBufferMemory as the chain's memory, I get the error ValueError(f"One input key expected got {prompt_input_keys}"). It only allows one input key in the PromptTemplate, not more. – Miscellaneous
@Miscellaneous, I have the same error. Using Python 3.11. Have you found a solution using memory? – Teasley
@Teasley Until the bug is fixed, I'm just storing my chat history in a list and then using a different LLM instance to summarize it within the given limit. I then pass this summary to the prompt so that the main conversation LLM/chain takes it into account. – Miscellaneous
@Miscellaneous I found a solution that worked for me and logged it here. Basically, in the ConversationBufferMemory you need to set an input_key, which has to be among the input_variables of every prompt. Hope it helps; a sketch of this fix follows these comments. – Teasley
Does calling the chain directly run asynchronously? @Estheresthesia – Dissentient
What if I have multiple input_variables? – Riata
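
For reference, a minimal sketch of that input_key fix, applied to the question/documents prompt from the question (the history variable name is illustrative):

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory

question_template = PromptTemplate(
    input_variables=['history', 'question', 'documents'],
    template='{history}\nGiven the following documents: {documents}. Answer the question: {question}'
)

# input_key tells the memory which input is the user's message;
# without it, a multi-input prompt raises "One input key expected"
memory = ConversationBufferMemory(input_key='question', memory_key='history')

question_chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=question_template, memory=memory)
answer = question_chain.predict(question='What is covered?', documents='...document text...')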

I got the same error on Python 3.7.1, but after upgrading Python to 3.10 and LangChain to the latest version the error went away. I noticed this because the same code ran fine on Colab but not locally.
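
To confirm which versions you are running before upgrading, a quick check (standard library plus the package's __version__ attribute) looks like this:

import sys
import langchain

# LangChain requires Python 3.8+; older interpreters fail in odd ways
print(sys.version)            # e.g. 3.10.12
print(langchain.__version__)  # upgrade with: pip install -U langchain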

Shumway answered 12/5, 2023 at 5:33 Comment(1)
Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center. – Thaumatology
