OpenAI API - asynchronous API calls
I work with the OpenAI API. I have extracted the text from each slide of a PowerPoint presentation and written a prompt for each slide. Now I want to make asynchronous API calls so that all the slides are processed at the same time.

This is the code from the async main function:

tasks = []
for prompt in prompted_slides_text:
    task = asyncio.create_task(api_manager.generate_answer(prompt))
    tasks.append(task)
results = await asyncio.gather(*tasks)

and this is the generate_answer function:

@staticmethod
async def generate_answer(prompt):
    """
    Send a prompt to the OpenAI API and get the answer.
    :param prompt: the prompt to send.
    :return: the answer.
    """
    completion = await openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

the problem is:

object OpenAIObject can't be used in 'await' expression

and I don't know how to await the response in the generate_answer function.

Would appreciate any help!

Hydromedusa answered 22/5, 2023 at 10:23 Comment(1)
Does this answer your question? Using asyncio for Non-async Functions in Python? – Bountiful
15

For those landing here: the error was probably in the instantiation of the client object. It has to be:

client = AsyncOpenAI(api_key=api_key)

Then you can use:

response = await client.chat.completions.create(
    model="gpt-4",
    messages=custom_prompt,
    temperature=0.9,
)
Conk answered 23/11, 2023 at 21:20 Comment(1)
Can this be handled somehow by langchain? For me I still have issues with langchain only being able to handle openai<1.0 – Peag
15

Note: With version v1 the API has changed and this answer is no longer valid; see Graciela's answer for the new API.


You have to use openai.ChatCompletion.acreate to use the API asynchronously.

It's documented on their GitHub - https://github.com/openai/openai-python#async-usage

Waxbill answered 22/5, 2023 at 10:38 Comment(5)
The API has a limit of 3 requests per minute, and my program crashes because of it. Do you know how I can overcome this issue? – Hydromedusa
@Hydromedusa I wrote another answer that limits concurrent requests - https://mcmap.net/q/1013715/-using-aiohttp-fire-off-an-api-request-exactly-every-0-1s-if-a-condition-is-met-in-one-of-the-returned-results-exit-the-function-early. See if that helps. You will find more answers at - https://mcmap.net/q/158893/-how-to-limit-concurrency-with-python-asyncio/3007402 – Waxbill
Still, it does not work. – Mcdowell
@Mcdowell What specifically doesn't work? Can you share your code? You may want to ask a new question. – Waxbill
I think this used to be correct, but now the answer by Graciela is correct. – Scapegrace
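On the rate-limit crashes discussed in these comments: one common pattern is to cap the number of in-flight requests with `asyncio.Semaphore`. A minimal sketch, with the OpenAI call replaced by a stub so it runs offline; `MAX_CONCURRENT = 3` and the bookkeeping globals are illustrative, not from the original thread:

```python
import asyncio

MAX_CONCURRENT = 3  # illustrative cap, matching the 3 req/min example
in_flight = 0       # bookkeeping to demonstrate the cap is respected
peak = 0

async def fake_api_call(prompt: str) -> str:
    # Stand-in for the real OpenAI request.
    global in_flight, peak
    in_flight += 1
    peak = max(peak, in_flight)
    await asyncio.sleep(0.01)  # simulate network latency
    in_flight -= 1
    return f"ok: {prompt}"

async def limited(sem: asyncio.Semaphore, prompt: str) -> str:
    async with sem:  # block here until one of the slots is free
        return await fake_api_call(prompt)

async def main(prompts: list[str]) -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    return await asyncio.gather(*(limited(sem, p) for p in prompts))

results = asyncio.run(main([f"slide {i}" for i in range(10)]))
print(peak)  # never exceeds MAX_CONCURRENT
```

A semaphore bounds concurrency, not requests per minute; for a strict per-minute quota you would additionally space out the calls (e.g. a small sleep after each acquire) or use retry-with-backoff on 429 responses.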

© 2022 - 2024 — McMap. All rights reserved.