TL;DR: Treat text-davinci-003 as a GPT-3 model (i.e., use the Completions API). See the code under OPTION 1.
Introduction
As someone who has been using the OpenAI API for the past few months, I thought at first glance that the answer was straightforward if you read the official OpenAI documentation. Well, I read the documentation once again, and now I understand why you're confused.
Confusion number 1
You want to use the text-davinci-003 model. This model originally belongs to the GPT-3 model family. But if you take a look at the OpenAI models overview and click GPT-3, you won't find text-davinci-003 listed as a GPT-3 model. This is unexpected.
Confusion number 2
Moreover, text-davinci-003 is listed as a GPT-3.5 model.
Confusion number 3
As if this isn't confusing enough, if you take a look at the OpenAI model endpoint compatibility, you'll find text-davinci-003 listed under the /v1/completions endpoint. This API endpoint is used by the GPT-3 model family.
Wait, what?
text-davinci-003 isn't listed as a GPT-3 model (i.e., Completions API). It's listed as a GPT-3.5 model (i.e., Chat Completions API), yet it's compatible with the GPT-3 API endpoint. This doesn't make any sense.
Test
Either text-davinci-003 can be treated as a GPT-3 model or as a GPT-3.5 model, or perhaps both? Let's run a test.
Note: OpenAI NodeJS SDK v4 was released on August 16, 2023, and is a complete rewrite of the SDK. The code below differs depending on the version you currently have (you can check which one is installed with npm ls openai). See the v3 to v4 migration guide.
OPTION 1: Treat text-davinci-003 as a GPT-3 model --> IT WORKS ✓
If you treat text-davinci-003 as a GPT-3 model and run test-1.js, OpenAI will return the following completion:
This is indeed a test
• If you have the OpenAI NodeJS SDK v3:
test-1.js
const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

async function getCompletionFromOpenAI() {
  // text-davinci-003 goes to the Completions endpoint (/v1/completions)
  const completion = await openai.createCompletion({
    model: 'text-davinci-003',
    prompt: 'Say this is a test',
    max_tokens: 7,
    temperature: 0,
  });

  // With SDK v3, the response body is on the .data property
  console.log(completion.data.choices[0].text);
}

getCompletionFromOpenAI();
• If you have the OpenAI NodeJS SDK v4:
test-1.js
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function getCompletionFromOpenAI() {
  // text-davinci-003 goes to the Completions endpoint (/v1/completions)
  const completion = await openai.completions.create({
    model: 'text-davinci-003',
    prompt: 'Say this is a test',
    max_tokens: 7,
    temperature: 0,
  });

  // With SDK v4, the response body is returned directly (no .data wrapper)
  console.log(completion.choices[0].text);
}

getCompletionFromOpenAI();
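You can also see why OPTION 1 works by skipping the SDK entirely and calling the /v1/completions endpoint directly. The sketch below uses Node's built-in fetch (Node 18+); the request body mirrors test-1.js, and the file name raw-completions.js is just an example, not part of the original tests.
raw-completions.js
// Sketch: POST directly to the Completions endpoint using Node 18+ global fetch
async function rawCompletionFromOpenAI() {
  const response = await fetch('https://api.openai.com/v1/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'text-davinci-003',
      prompt: 'Say this is a test',
      max_tokens: 7,
      temperature: 0,
    }),
  });

  const data = await response.json();
  console.log(data.choices[0].text);
}

rawCompletionFromOpenAI();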
OPTION 2: Treat text-davinci-003 as a GPT-3.5 model --> IT DOESN'T WORK ✗
If you treat text-davinci-003 as a GPT-3.5 model and run test-2.js, OpenAI will return the following error:
data: {
  error: {
    message: 'Invalid URL (POST /v1/chat/completions)',
    type: 'invalid_request_error',
    param: null,
    code: null
  }
}
• If you have the OpenAI NodeJS SDK v3:
test-2.js
const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

async function getChatCompletionFromOpenAI() {
  // Passing text-davinci-003 to the Chat Completions endpoint fails:
  // it's not a chat model, so the request is rejected
  const chatCompletion = await openai.createChatCompletion({
    model: 'text-davinci-003',
    messages: [{ role: 'user', content: 'Hello!' }],
    temperature: 0,
  });

  console.log(chatCompletion.data.choices[0].message.content);
}

getChatCompletionFromOpenAI();
• If you have the OpenAI NodeJS SDK v4:
test-2.js
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function getChatCompletionFromOpenAI() {
  // Passing text-davinci-003 to the Chat Completions endpoint fails:
  // it's not a chat model, so the request is rejected
  const chatCompletion = await openai.chat.completions.create({
    model: 'text-davinci-003',
    messages: [{ role: 'user', content: 'Hello!' }],
    temperature: 0,
  });

  console.log(chatCompletion.choices[0].message.content);
}

getChatCompletionFromOpenAI();
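If all you see in your console is a generic stack trace, wrap the request in a try/catch so the error body shown above is easier to inspect. This is just a sketch on top of the v3 version of test-2.js (with SDK v3, which is axios-based, the API error body is on err.response?.data; SDK v4 instead throws an APIError whose status and message you can log directly):
async function getChatCompletionWithLogging() {
  try {
    // Same request as test-2.js (SDK v3)
    const chatCompletion = await openai.createChatCompletion({
      model: 'text-davinci-003',
      messages: [{ role: 'user', content: 'Hello!' }],
      temperature: 0,
    });
    console.log(chatCompletion.data.choices[0].message.content);
  } catch (err) {
    // SDK v3 (axios-based): the full API error body lives on err.response.data
    console.error(err.response?.data ?? err);
  }
}

getChatCompletionWithLogging();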
Conclusion
Treat text-davinci-003 as a GPT-3 model. See the code under OPTION 1.
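If what you actually need is the Chat Completions API, use a chat model such as gpt-3.5-turbo instead of text-davinci-003. A minimal sketch with SDK v4:
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function getChatCompletionFromOpenAI() {
  // gpt-3.5-turbo is a chat model, so it belongs on /v1/chat/completions
  const chatCompletion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello!' }],
    temperature: 0,
  });

  console.log(chatCompletion.choices[0].message.content);
}

getChatCompletionFromOpenAI();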