OpenAI GPT-3 API error: "Invalid URL (POST /v1/chat/completions)"

Here is my code snippet:

const { Configuration, OpenAI, OpenAIApi } = require ("openai");
const configuration = new Configuration({
    apiKey: 'MY KEY'
})

const openai = new OpenAIApi(configuration)

async function start() {
    const response = await openai.createChatCompletion({
        model:"text-davinci-003",
        prompt: "Write a 90 word essay about Family Guy",
        temperature: 0,
        max_tokens: 1000
    })

    console.log(response.data.choices[0].text)
}

start()

when I run: node index

I run into this issue:

data: {
      error: {
        message: 'Invalid URL (POST /v1/chat/completions)',
        type: 'invalid_request_error',
        param: null,
        code: null
      }
    }
  },
  isAxiosError: true,
  toJSON: [Function: toJSON]
}

Node.js v18.15.0

I've looked all over the internet and tried some solutions but nothing seems to work. Please help!

Usually others have some link attached to their code when I look up this problem online. I'm very much a beginner at this stuff, so any help would be much appreciated.

Callipygian answered 29/3, 2023 at 23:32 Comment(3)
The OpenAI API isn't very good with its response statuses. You need to check the actual response text. Try start().catch((err) => console.error(err.response?.data ?? err.toJSON?.() ?? err)) to get more details (see the sketch right after these comments). - Kurth
The output of start().catch((err) => console.error(err.response?.data ?? err.toJSON?.() ?? err)) seems to be the same as what I already get: PS C:\Users\Dasa\Desktop\Node-ChatGPT> node index { error: { message: 'Invalid URL (POST /v1/chat/completions)', type: 'invalid_request_error', param: null, code: null } } - Callipygian
Could you add a label to that so that it's clear you're actually running the right code... console.error("start failed", err.response?.data ?? err.toJSON?.() ?? err). Please also edit your question to show the current state of your code and any changes to the error message. - Kurth
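
For reference, here is a sketch of how that handler from the comments would sit at the bottom of the question's index.js; it only changes the bare start() call so the API's error payload gets printed instead of the full Axios error object:

// Replace the bare start() call at the end of index.js with this:
start().catch((err) =>
  console.error('start failed', err.response?.data ?? err.toJSON?.() ?? err)
);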

TL;DR: Treat the text-davinci-003 as a GPT-3 model (i.e., Completions API). See the code under OPTION 1.

Introduction

At first glance, as someone who's been using the OpenAI API for the past few months, I thought the answer was straightforward if you read the official OpenAI documentation. Well, I read the documentation once again, and now I understand why you're confused.

Confusion number 1

You want to use the text-davinci-003 model. This model is originally from the GPT-3 model family. But if you take a look at the OpenAI models overview and click GPT-3, you won't find text-davinci-003 listed as a GPT-3 model. This is unexpected.

(Screenshot 1: the GPT-3 section of the OpenAI models overview, with no text-davinci-003 listed)

Confusion number 2

Moreover, the text-davinci-003 is listed as a GPT-3.5 model.

(Screenshot 2: the GPT-3.5 section of the OpenAI models overview, listing text-davinci-003)

Confusion number 3

As if this isn't confusing enough, if you take a look at the OpenAI model endpoint compatibility, you'll find the text-davinci-003 listed under the /v1/completions endpoint. This API endpoint is used for the GPT-3 model family.

(Screenshot 3: the model endpoint compatibility table, with text-davinci-003 listed under /v1/completions)


Wait, what?

The text-davinci-003 isn't listed as a GPT-3 model (i.e., Completions API). It's listed as a GPT-3.5 model (i.e., Chat Completions API) but is compatible with the GPT-3 API endpoint. This doesn't make any sense.
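
For context, these are the two raw endpoints behind the SDK calls discussed in this thread, written out with Node 18's built-in fetch purely for illustration (this is not code from the question, and it assumes OPENAI_API_KEY is set in the environment). The "Invalid URL" error comes from posting to /v1/chat/completions for a model that is only served by /v1/completions:

// Rough sketch using Node 18's built-in fetch, just to show the two URLs
// the SDK wraps. Assumes OPENAI_API_KEY is set in the environment.
const headers = {
  'Content-Type': 'application/json',
  Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
};

async function compareEndpoints() {
  // /v1/completions is where text-davinci-003 is served (prompt-based).
  const completion = await fetch('https://api.openai.com/v1/completions', {
    method: 'POST',
    headers,
    body: JSON.stringify({
      model: 'text-davinci-003',
      prompt: 'Say this is a test',
      max_tokens: 7,
    }),
  }).then((res) => res.json());

  // /v1/chat/completions expects a chat model and a messages array.
  const chatCompletion = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers,
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: 'Hello!' }],
    }),
  }).then((res) => res.json());

  console.log(completion.choices?.[0]?.text);
  console.log(chatCompletion.choices?.[0]?.message?.content);
}

compareEndpoints();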


Test

Could the text-davinci-003 be treated as a GPT-3 model, as a GPT-3.5 model, or perhaps both? Let's run a test.

Note: OpenAI NodeJS SDK v4 was released on August 16, 2023, and is a complete rewrite of the SDK. The code below differs depending on the version you currently have. See the v3 to v4 migration guide.

OPTION 1: Treat the text-davinci-003 as a GPT-3 model --> IT WORKS ✓

If you treat the text-davinci-003 as a GPT-3 model and run test-1.js, the OpenAI API will return the following completion:

This is indeed a test

• If you have the OpenAI NodeJS SDK v3:

test-1.js

const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

const openai = new OpenAIApi(configuration);

async function getCompletionFromOpenAI() {
  const completion = await openai.createCompletion({
    model: 'text-davinci-003',
    prompt: 'Say this is a test',
    max_tokens: 7,
    temperature: 0,
  });

  console.log(completion.data.choices[0].text);
}

getCompletionFromOpenAI();

• If you have the OpenAI NodeJS SDK v4:

test-1.js

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function getCompletionFromOpenAI() {
  const completion = await openai.completions.create({
    model: 'text-davinci-003',
    prompt: 'Say this is a test',
    max_tokens: 7,
    temperature: 0,
  });

  console.log(completion.choices[0].text);
}

getCompletionFromOpenAI();

OPTION 2: Treat the text-davinci-003 as a GPT-3.5 model --> IT DOESN'T WORK ✗

If you treat the text-davinci-003 as a GPT-3.5 model and run test-2.js, the OpenAI API will return the following error, the same one reported in the question, because the /v1/chat/completions endpoint doesn't serve text-davinci-003:

data: {
  error: {
    message: 'Invalid URL (POST /v1/chat/completions)',
    type: 'invalid_request_error',
    param: null,
    code: null
  }
}

• If you have the OpenAI NodeJS SDK v3:

test-2.js

const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

const openai = new OpenAIApi(configuration);

async function getChatCompletionFromOpenAI() {
  const chatCompletion = await openai.createChatCompletion({
    model: 'text-davinci-003',
    messages: [{ role: 'user', content: 'Hello!' }],
    temperature: 0,
  });

  console.log(chatCompletion.data.choices[0].message.content);
}

getChatCompletionFromOpenAI();

• If you have the OpenAI NodeJS SDK v4:

test-2.js

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function getChatCompletionFromOpenAI() {
  const chatCompletion = await openai.chat.completions.create({
    model: 'text-davinci-003',
    messages: [{ role: 'user', content: 'Hello!' }],
    temperature: 0,
  });

  console.log(chatCompletion.choices[0].message.content);
}

getChatCompletionFromOpenAI();

Conclusion

Treat the text-davinci-003 as a GPT-3 model. See the code under OPTION 1.
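
Applied to the exact snippet from the question (SDK v3, which the Configuration/OpenAIApi imports imply), a minimal fix might look like the following; only the method changes from createChatCompletion to createCompletion, and the hard-coded key is swapped for an environment variable:

const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY, // avoid hard-coding the key
});

const openai = new OpenAIApi(configuration);

async function start() {
  // text-davinci-003 goes through the Completions API, not Chat Completions
  const response = await openai.createCompletion({
    model: 'text-davinci-003',
    prompt: 'Write a 90 word essay about Family Guy',
    temperature: 0,
    max_tokens: 1000,
  });

  console.log(response.data.choices[0].text);
}

start().catch((err) => console.error(err.response?.data ?? err));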

Waynant answered 30/3, 2023 at 9:58 Comment(1)
Thanks! That solution is so simple and I feel dumb lol but the documentation is kinda a pain to follow and your explanation of completions vs chat completions was super helpful! - Callipygian

You are mixing two capabilities of the OpenAI API.

You can either create a one-time completion from a prompt, which they call Completions, or create a completion from a conversation between an agent and a user, which they call Chat Completions.

Depending on which one you want to use, the parameters are not the same.

In the first case, it should be:

const response = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: "Write a 90 word essay about Family Guy",
  temperature: 0,
  max_tokens: 1000,
});

In the other case, you need to specify messages, each with a role:

const completion = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [{role: "user", content: "Hello world"}],
});

Take a look at the API documentation to understand the difference between the two APIs.

Synergist answered 30/3, 2023 at 6:35 Comment(0)
