OpenAI Chat Completions API error: "InvalidRequestError: Unrecognized request argument supplied: messages"

I am currently trying to use OpenAI's most recent model, gpt-3.5-turbo, following a very basic tutorial.

I am working from a Google Colab notebook. I have to make a request for each prompt in a list of prompts, which for the sake of simplicity looks like this:

prompts = ['What are your functionalities?', 'what is the best name for an ice-cream shop?', 'who won the premier league last year?']

I defined a function to do so:

import openai

# Load your API key from an environment variable or secret management service
openai.api_key = 'my_API'

def get_response(prompts: list, model="gpt-3.5-turbo"):
    responses = []

    for prompt in prompts:
        response = openai.Completion.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
            max_tokens=20,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0,
        )
        responses.append(response['choices'][0]['message']['content'])

    return responses

However, when I call responses = get_response(prompts=prompts[0:3]) I get the following error:

InvalidRequestError: Unrecognized request argument supplied: messages

Any suggestions?

Replacing the messages argument with prompt leads to the following error:

InvalidRequestError: [{'role': 'user', 'content': 'What are your functionalities?'}] is valid under each of {'type': 'array', 'minItems': 1, 'items': {'oneOf': [{'type': 'integer'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}]}, 'example': '[1, 1313, 451, {"buffer": "abcdefgh", "shape": [1024], "dtype": "float16"}]'}, {'type': 'array', 'minItems': 1, 'maxItems': 2048, 'items': {'oneOf': [{'type': 'string'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}], 'default': '', 'example': 'This is a test.', 'nullable': False}} - 'prompt'
Woolfell answered 2/3, 2023 at 15:55 Comment(7)
messages isn't the correct argument. Guess you need prompt: []Bib
@Bib the messages argument is the one provided in the documentation. However, implementing your solution leads to another error message (check the most recent edit)Woolfell
But the prompt just needs to be your question: prompt: itemBib
@Bib This leads to a different error which I believe has to do with the model (your solution would work, e.g., with a davinci model. InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?Woolfell
Okay, I made some code myself and can't reproduce your problem. Works fine over here.Bib
Sure you are using the latest version of the openai package?Bib
error I get is openai.NotFoundError: Error code: 404 - {'error': {'message': 'This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?', 'type': 'invalid_request_error', 'param': 'model', 'code': None}}Kittle

Problem

You used the wrong method to get a completion. Whether you use the Python or the Node.js OpenAI SDK, you need to call the right method.

Which method is the right one? That depends on the OpenAI model you want to use.

Solution

The tables below will help you figure out which method is the right one for a given OpenAI model.

STEP 1: Find in the table below which API endpoint is compatible with the OpenAI model you want to use.

API endpoint (model group): model names

• /v1/chat/completions (GPT-4, GPT-3.5): gpt-4 and dated model releases, gpt-4-32k and dated model releases, gpt-4-1106-preview, gpt-4-vision-preview, gpt-3.5-turbo and dated model releases, gpt-3.5-turbo-16k and dated model releases, fine-tuned versions of gpt-3.5-turbo
• /v1/completions (Legacy) (GPT-3.5, GPT base): gpt-3.5-turbo-instruct, babbage-002, davinci-002
• /v1/assistants: all models except gpt-3.5-turbo-0301 are supported. The Retrieval tool requires gpt-4-1106-preview or gpt-3.5-turbo-1106.
• /v1/audio/transcriptions (Whisper): whisper-1
• /v1/audio/translations (Whisper): whisper-1
• /v1/audio/speech (TTS): tts-1, tts-1-hd
• /v1/fine_tuning/jobs (GPT-3.5, GPT base): gpt-3.5-turbo, babbage-002, davinci-002
• /v1/embeddings (Embeddings): text-embedding-ada-002
• /v1/moderations (Moderations): text-moderation-stable, text-moderation-latest
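As a side note, the endpoint-per-model mapping above can be sketched as a small Python helper. This is my own illustration, not part of the OpenAI SDK; the function name and data structures are hypothetical, and only the two endpoints relevant to this question are covered:

```python
# Hypothetical helper (not part of the OpenAI SDK): map a model name to the
# API endpoint it is served from, following the table above.
LEGACY_COMPLETION_MODELS = {"gpt-3.5-turbo-instruct", "babbage-002", "davinci-002"}
CHAT_MODEL_PREFIXES = ("gpt-4", "gpt-3.5-turbo")

def endpoint_for_model(model: str) -> str:
    # Check the legacy set first: gpt-3.5-turbo-instruct starts with the
    # chat prefix "gpt-3.5-turbo" but belongs to /v1/completions.
    if model in LEGACY_COMPLETION_MODELS:
        return "/v1/completions"
    if model.startswith(CHAT_MODEL_PREFIXES):
        return "/v1/chat/completions"
    raise ValueError(f"Unknown model: {model}")
```

The order of the checks matters because the legacy instruct model shares a prefix with the chat models.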

STEP 2: Find in the table below which method you need to use for the API endpoint you selected in the table above.

Note: Make sure you use the method that matches your OpenAI SDK version.

API endpoint: method per SDK version

• /v1/chat/completions: openai.ChatCompletion.create (Python v0.28.1), openai.chat.completions.create (Python >=v1.0.0), openai.createChatCompletion (Node.js v3.3.0), openai.chat.completions.create (Node.js >=v4.0.0)
• /v1/completions (Legacy): openai.Completion.create (Python v0.28.1), openai.completions.create (Python >=v1.0.0), openai.createCompletion (Node.js v3.3.0), openai.completions.create (Node.js >=v4.0.0)
• /v1/assistants: openai.beta.assistants.create (Python >=v1.0.0), openai.beta.assistants.create (Node.js >=v4.0.0); not available in the older SDK versions
• /v1/audio/transcriptions: openai.Audio.transcribe (Python v0.28.1), openai.audio.transcriptions.create (Python >=v1.0.0), openai.createTranscription (Node.js v3.3.0), openai.audio.transcriptions.create (Node.js >=v4.0.0)
• /v1/audio/translations: openai.Audio.translate (Python v0.28.1), openai.audio.translations.create (Python >=v1.0.0), openai.createTranslation (Node.js v3.3.0), openai.audio.translations.create (Node.js >=v4.0.0)
• /v1/audio/speech: openai.audio.speech.create (Python >=v1.0.0), openai.audio.speech.create (Node.js >=v4.0.0); not available in the older SDK versions
• /v1/fine_tuning/jobs: openai.fine_tuning.jobs.create (Python >=v1.0.0), openai.fineTuning.jobs.create (Node.js >=v4.0.0); not available in the older SDK versions
• /v1/embeddings: openai.Embedding.create (Python v0.28.1), openai.embeddings.create (Python >=v1.0.0), openai.createEmbedding (Node.js v3.3.0), openai.embeddings.create (Node.js >=v4.0.0)
• /v1/moderations: openai.Moderation.create (Python v0.28.1), openai.moderations.create (Python >=v1.0.0), openai.createModeration (Node.js v3.3.0), openai.moderations.create (Node.js >=v4.0.0)
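For the two endpoints relevant to this question, the method names above can also be captured as plain data for quick lookup. The dictionary and function below are my own illustration, not an SDK feature:

```python
# Method names per endpoint and SDK version, transcribed from the table above.
METHODS = {
    "/v1/chat/completions": {
        "python-0.28.1": "openai.ChatCompletion.create",
        "python->=1.0.0": "openai.chat.completions.create",
        "node-3.3.0": "openai.createChatCompletion",
        "node->=4.0.0": "openai.chat.completions.create",
    },
    "/v1/completions": {
        "python-0.28.1": "openai.Completion.create",
        "python->=1.0.0": "openai.completions.create",
        "node-3.3.0": "openai.createCompletion",
        "node->=4.0.0": "openai.completions.create",
    },
}

def method_for(endpoint: str, sdk: str) -> str:
    """Look up the method name for an endpoint in a given SDK version."""
    return METHODS[endpoint][sdk]
```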

Python SDK v1.0.0 working example for the gpt-3.5-turbo model

If you run test.py, the OpenAI API will return the following completion:

Hello! How can I assist you today?

test.py

import os
from openai import OpenAI

client = OpenAI(
    api_key = os.getenv("OPENAI_API_KEY"),
)

completion = client.chat.completions.create(
  model = "gpt-3.5-turbo",
  messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
  ]
)

print(completion.choices[0].message.content.strip())
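Applied to the question's loop, here is a sketch of get_response adapted to the Python SDK >=v1.0.0. I pass the client in as a parameter (my own choice, to keep the function easy to test); the other parameter values mirror the question's:

```python
def get_responses(client, prompts: list, model: str = "gpt-3.5-turbo") -> list:
    """Return one chat completion per prompt, using the >=1.0.0 Python SDK.

    `client` is an OpenAI client instance (or anything with the same
    chat.completions.create interface), injected rather than created here.
    """
    responses = []
    for prompt in prompts:
        completion = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
            max_tokens=20,
        )
        # v1 SDK responses are objects, not dicts: use attribute access.
        responses.append(completion.choices[0].message.content)
    return responses
```

Create the client with client = OpenAI(api_key=os.getenv("OPENAI_API_KEY")) as in test.py above, then call get_responses(client, prompts).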

Node.js SDK v4.0.0 working example for the gpt-3.5-turbo model

If you run test.js, the OpenAI API will return the following completion:

Hello! How can I assist you today?

test.js

const OpenAI = require("openai");

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Hello!" },
    ],
  });

  console.log(completion.choices[0].message.content.trim());
}

main();
Cheri answered 2/3, 2023 at 18:50 Comment(4)
I think the naming practices at OpenAI made it a bit confusing. Why would you have this in the introductory example: import openai openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=[ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who won the world series in 2020?"}, {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."}, {"role": "user", "content": "Where was it played?"} ] )Woolfell
I agree, it's a bit confusing. I think they should copy-paste the example from the documentation.Cheri
This is Python, right? What's the Node.js equivalent? I had const completion = await openai.createCompletion({....}) but I get the error "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?"Igenia
Super helpful! In my case I used the wrong class: instead of openai.ChatCompletion.create(), I used openai.Completion.create(). OpenAI happens to have both ChatCompletion and Completion, which makes this issue harder to spot.Westcott
With the Python SDK v0.28.1, use openai.ChatCompletion.create for chat models:

response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        {"role": "user", "content": "What is openAI?"}
    ],
    max_tokens=193,
    temperature=0,
)

print(response)
print(response["choices"][0]["message"]["content"])
Alcine answered 5/3, 2023 at 17:52 Comment(1)
This answer could benefit from literally any explanation at all. Code-only answers are rarely useful, especially as time goes on.Ultrasonics

For gpt-4, this is what worked for me

        generated_texts: list[str] = []
        for prompt in prompts:
            # With the >=v1.0.0 SDK you can equivalently use a client instance:
            # client = OpenAI(api_key=openai.api_key); client.chat.completions.create(...)
            response = openai.chat.completions.create(
                model=self.model_name,
                messages=[
                    {"role": "user", "content": prompt},
                ],
                temperature=temperature,
            )
            generated_text: str = response.choices[0].message.content.strip()
            generated_texts.append(generated_text)

when the error

openai.NotFoundError: Error code: 404 - {'error': {'message': 'This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?', 'type': 'invalid_request_error', 'param': 'model', 'code': None}}

appeared.
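As a rough, unofficial sketch: given an error payload shaped like the 404 body above, one might detect this specific mistake programmatically. The helper name is hypothetical, not part of the openai package:

```python
from typing import Optional

# Hypothetical helper (not part of the openai SDK): inspect an error payload
# like the 404 body above and suggest the chat endpoint when a chat model
# was mistakenly sent to /v1/completions.
def wrong_endpoint_hint(error_body: dict) -> Optional[str]:
    err = error_body.get("error", {})
    message = err.get("message") or ""
    if err.get("type") == "invalid_request_error" and "v1/chat/completions" in message:
        return "Use the chat completions method (e.g. openai.chat.completions.create) instead."
    return None
```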

Kittle answered 2/3, 2023 at 15:55 Comment(0)

You can define messages=[{"role": "user", "content": prompt}] outside of the call and pass that variable in, like this:

messages = [{"role": "user", "content": prompt}]
for item in prompts:
    response = openai.Completion.create(
        model=model,
        messages=messages,
        temperature=0,
        max_tokens=20,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
    )
Amalee answered 1/6, 2023 at 12:32 Comment(0)
