How to maintain context with OpenAI gpt-3.5-turbo API?
I thought the user parameter is doing this job. But it doesn’t work.

https://platform.openai.com/docs/api-reference/chat


Delk answered 12/3, 2023 at 3:33 Comment(0)

You need to refeed your previous responses to maintain context. (The user param is only for OpenAI to monitor abuse.) Remember, it is a completion AI, meaning it can only take input and give output. To maintain context, you need to include the context in the input.

Also, keep in mind that the new model, gpt-3.5-turbo, processes info differently than the Davinci models.

Davinci input is like this:

//import and configure...

const response = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: "Say this is a test",
  temperature: 0,
  max_tokens: 7,
});

while the gpt-3.5-turbo model is like this:

//import and configure...

const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [
    {role: "user", content: "Say this is a test"},
  ],
  temperature: 0,
});

So it's a little different. If you want to refeed for context, you need to make an input field in the "messages" - something like this...

//import and configure...

const message = "<user input>";
const context = "<user's previous messages, if any, else empty>";

const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [
    {role: "system", content: context},
    {role: "user", content: message},
  ],
  temperature: 0,
});

The "system" role is for the context, so GPT knows to respond primarily to the user input rather than to the system input. It can also be a useful field for prefacing user prompts, fyi.

Hope that helps.
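To make the refeeding concrete, here is a minimal sketch of managing a running conversation. The helper name `buildMessages` and the `history` variable are illustrative, not part of the SDK; the idea is simply that each turn you append the user's message and the model's reply to an array and send the whole array on the next request.

```javascript
// Build the messages array for the next request from prior turns.
// history: array of {role, content} objects from earlier exchanges.
function buildMessages(history, userInput) {
  return [...history, { role: "user", content: userInput }];
}

// Hedged usage sketch with the v3 Node SDK, assuming `openai` is
// already configured with an API key:
//
// let history = [];
// const messages = buildMessages(history, "What did I just ask you?");
// const response = await openai.createChatCompletion({
//   model: "gpt-3.5-turbo",
//   messages,
//   temperature: 0,
// });
// const reply = response.data.choices[0].message; // {role: "assistant", ...}
// history = [...messages, reply]; // carry full context into the next turn
```

Note that every turn resends the whole history, so token usage grows with conversation length; trimming or summarizing old turns is a common way to keep costs down.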

Synectics answered 18/3, 2023 at 3:15 Comment(7)
Does OpenAI provide access to those previous user messages, or is that something we have to store and populate? – Ashil
Has to be populated. – Synectics
This will easily consume the tokens and become expensive, right? – Sigridsigsmond
Yes, if you add context it will consume additional tokens. – Hepsiba
It might also save tokens, because you may have to ask fewer questions to get the info you truly want. It's a balancing act: feeding too much context can confuse it, but let's say I want it to find a bunch of genes and then ask a bunch of questions about them. The context will be the list of genes; that way the engine doesn't have to solve the same problem of figuring out the right genes over and over. The ten genes are in the context. If you only intend to ask one or two questions to get a final answer, you shouldn't need context; you can provide it in the query itself. – Fleet
Should I send only the messages sent by the user, or also the responses from the GPT model? – Oxide
