gpt-3 Questions
6
import tiktoken
tokenizer = tiktoken.get_encoding("cl100k_base")
tokenizer = tiktoken.encoding_for_model("gpt-3.5-turbo")
text = "Hello, nice to meet you"
tokenizer...
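A minimal sketch of counting tokens with tiktoken, assuming the package is installed and that gpt-3.5-turbo maps to the cl100k_base encoding:

import tiktoken

# encoding_for_model picks the right encoding for the given model name
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

text = "Hello, nice to meet you"
tokens = enc.encode(text)    # list of integer token ids
print(len(tokens))           # number of tokens in the string
print(enc.decode(tokens))    # round-trips back to the original text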
4
I am experimenting with the GPT API by OpenAI and am learning how to use the GPT-3.5-Turbo model. I found a quickstart example on the web:
def generate_chat_completion(messages, model="gpt-3.5...
Benighted asked 16/4, 2023 at 3:42
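A hedged sketch of what such a quickstart helper typically looks like, assuming the pre-1.0 openai Python SDK (openai.ChatCompletion); the generate_chat_completion wrapper itself is hypothetical:

import openai

openai.api_key = "sk-..."  # placeholder API key

def generate_chat_completion(messages, model="gpt-3.5-turbo", temperature=0.7):
    # messages is a list of {"role": ..., "content": ...} dicts
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,
    )
    # the assistant's reply lives in the first choice's message
    return response["choices"][0]["message"]["content"]

reply = generate_chat_completion(
    [{"role": "user", "content": "Say hello in one sentence."}]
)
print(reply)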
4
On occasion I'm getting a rate limit error without being over my rate limit. I'm using the text completions endpoint on the paid API, which has a rate limit of 3,000 requests per minute. I am using ...
Williamswilliamsburg asked 18/2, 2023 at 17:5
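A common mitigation for intermittent rate-limit errors is retrying with exponential backoff; a sketch assuming the pre-1.0 SDK's openai.error.RateLimitError:

import time
import openai

def complete_with_backoff(prompt, retries=5):
    delay = 1.0
    for attempt in range(retries):
        try:
            return openai.Completion.create(
                model="text-davinci-003", prompt=prompt, max_tokens=100
            )
        except openai.error.RateLimitError:
            # wait progressively longer before retrying
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("still rate limited after retries")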
2
Solved
I am trying to put together a simple "Q&A with sources" using Langchain and a specific URL as the source data. The URL consists of a single page with quite a lot of information on it....
Crore asked 15/6, 2023 at 11:31
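A rough sketch of a "Q&A with sources" pipeline over a single URL, assuming a mid-2023 LangChain release (WebBaseLoader, FAISS, RetrievalQAWithSourcesChain) with faiss-cpu installed; class paths may differ in newer versions, and the URL is a placeholder:

from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQAWithSourcesChain

# load the page and chunk it so each chunk fits in the model's context
docs = WebBaseLoader("https://example.com/page").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# embed the chunks and build a retriever
store = FAISS.from_documents(chunks, OpenAIEmbeddings())

chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    chain_type="stuff",
    retriever=store.as_retriever(),
)
result = chain({"question": "What is the page about?"})
print(result["answer"], result["sources"])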
2
Solved
I am using the SQL Database Agent to query a postgres database. I want to use gpt 4 or gpt 3.5 models in the OpenAI llm passed to the agent, but it says I must use ChatOpenAI. Using ChatOpenAI thro...
Pasadis asked 7/6, 2023 at 9:40
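A sketch of wiring the SQL Database Agent to a chat model, assuming a mid-2023 LangChain release; the connection string and question are placeholders:

from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.chat_models import ChatOpenAI

db = SQLDatabase.from_uri("postgresql+psycopg2://user:pass@localhost:5432/mydb")
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)  # or "gpt-4"

agent = create_sql_agent(
    llm=llm,
    toolkit=SQLDatabaseToolkit(db=db, llm=llm),
    verbose=True,
)
agent.run("How many rows are in the users table?")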
4
Solved
OpenAI's text models have a context length, e.g.: Curie has a context length of 2049 tokens.
They provide max_tokens and stop parameters to control the length of the generated sequence. Therefore t...
Equilibrium asked 21/3, 2023 at 17:35
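The practical constraint is that prompt tokens plus max_tokens must fit inside the context window, so max_tokens has to be computed from the prompt length. A sketch using tiktoken, taking the 2049-token Curie context length from the excerpt above and assuming the r50k_base encoding used by the older GPT-3 models:

import tiktoken

CONTEXT_LENGTH = 2049  # Curie's context window

enc = tiktoken.get_encoding("r50k_base")
prompt = "Once upon a time"
prompt_tokens = len(enc.encode(prompt))

# leave room for the prompt; the completion can use whatever is left
max_tokens = CONTEXT_LENGTH - prompt_tokens
print(prompt_tokens, max_tokens)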
3
Solved
I'm trying to use the stream=true property as follows.
completion = openai.Completion.create(
model="text-davinci-003",
prompt="Write me a story about dogs.",
temperature=0.7...
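A sketch of consuming a streamed completion with the pre-1.0 SDK, where stream=True turns the response into an iterator of chunks:

import openai

stream = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write me a story about dogs.",
    temperature=0.7,
    max_tokens=256,
    stream=True,  # the call now returns a generator of partial completions
)

for chunk in stream:
    # each chunk carries a small piece of text in choices[0].text
    print(chunk["choices"][0]["text"], end="", flush=True)
print()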
2
I'm new to OpenAI API. I work with GPT-3.5-Turbo, using this code:
messages = [
{"role": "system", "content": "You’re a helpful assistant"}
]
while True:...
Sweepstakes asked 30/8, 2023 at 10:35
4
I want ChatGPT to remember past conversations and have a consistent (stateful) conversation.
I have seen several code examples of ChatGPT prompt engineering.
There were two ways to design the prompt shown b...
Beanfeast asked 28/2, 2023 at 0:6
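The chat endpoint itself is stateless, so the usual pattern is to keep the running messages list yourself and resend it on every turn; a minimal sketch assuming the pre-1.0 SDK:

import openai

messages = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_input):
    # append the user's turn, call the API with the full history,
    # then append the assistant's reply so the next call remembers it
    messages.append({"role": "user", "content": user_input})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    reply = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    return reply

print(chat("My name is Ada."))
print(chat("What is my name?"))  # the model sees the earlier turns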
2
Solved
I have three questions:
Given the number of LLM parameters in billions, how can you figure out how much GPU RAM you need to run the model?
If you have enough CPU RAM (i.e. no GPU), can you run the...
Avon asked 15/5, 2023 at 14:57
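For the first question, a rough rule of thumb is that the weights alone need roughly (parameter count x bytes per parameter), plus extra headroom for activations and the KV cache; a back-of-the-envelope sketch:

def weight_memory_gb(params_billion, bytes_per_param):
    # fp32 = 4 bytes, fp16/bf16 = 2, int8 = 1, 4-bit quantization ~ 0.5
    return params_billion * 1e9 * bytes_per_param / 1e9

for dtype, nbytes in [("fp32", 4), ("fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
    print(f"7B model, {dtype}: ~{weight_memory_gb(7, nbytes):.0f} GB for weights")
# fp16 is ~14 GB, so a 7B model fits on a 16-24 GB GPU with some headroom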
5
Solved
I would like to get the text inside this data structure that is output by the OpenAI GPT-3 API. I'm using Python. When I print the object I get:
<OpenAIObject text_completion id=cmpl-6F7ScZDu2UKKJGPX...
Clubman asked 21/11, 2022 at 20:26
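With the pre-1.0 SDK the returned OpenAIObject behaves like a dict, so the generated text is reachable by key or by attribute; a sketch:

import openai

response = openai.Completion.create(
    model="text-davinci-003", prompt="Say hello.", max_tokens=20
)

# equivalent ways to pull out the completion text
print(response["choices"][0]["text"])
print(response.choices[0].text)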
5
Solved
I'm trying OpenAI.
I have prepared the training data, and used fine_tunes.create. Several minutes later, it showed Stream interrupted (client disconnected).
$ openai api fine_tunes.create -t data_p...
Keen asked 3/12, 2022 at 11:24
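The "Stream interrupted (client disconnected)" message only means the CLI stopped following the event stream; the job keeps running server-side and can be polled again (the CLI equivalent is `openai api fine_tunes.follow -i <job_id>`). A sketch with the pre-1.0 SDK, where the job id is a placeholder:

import openai

job_id = "ft-..."  # printed by `openai api fine_tunes.create`

# check whether the job is still pending, running, or succeeded
job = openai.FineTune.retrieve(id=job_id)
print(job["status"])

# list the events emitted so far (the same information the stream would show)
for event in openai.FineTune.list_events(id=job_id)["data"]:
    print(event["message"])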
2
I keep getting the error below
Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600)
when I run the code below
def generate_gpt3_response(user_te...
Zecchino asked 20/3, 2023 at 7:43
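Long generations can exceed the HTTP read timeout, so common mitigations are capping the completion length and retrying on the timeout error. A sketch assuming the pre-1.0 SDK, whose create calls accept a request_timeout argument and raise openai.error.Timeout:

import time
import openai

def generate_gpt3_response(user_text, retries=3):
    for attempt in range(retries):
        try:
            response = openai.Completion.create(
                model="text-davinci-003",
                prompt=user_text,
                max_tokens=256,       # smaller completions finish well before the timeout
                request_timeout=60,   # fail fast instead of waiting the default 600 s
            )
            return response["choices"][0]["text"]
        except (openai.error.Timeout, openai.error.APIConnectionError):
            time.sleep(2 ** attempt)  # brief backoff, then retry
    raise RuntimeError("request kept timing out")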
0
We're prototyping a chatbot application using the Azure OpenAI gpt-3.5-turbo model on the standard tier.
We're facing random latency bursts, which sometimes last between 3 and 20 minutes. Below I have scr...
Archduke asked 28/7, 2023 at 8:58
2
Solved
I'm trying to use functions when calling Azure OpenAI GPT, as documented in https://platform.openai.com/docs/api-reference/chat/create#chat/create-functions
I use:
import openai
openai.api_type = "...
Highhat asked 24/7, 2023 at 18:35
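A sketch of function calling against Azure OpenAI with the pre-1.0 SDK; the endpoint, deployment name, and api_version are placeholders, and the deployment must be a model version that supports functions:

import json
import openai

openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"  # placeholder endpoint
openai.api_version = "2023-07-01-preview"                  # a version that supports functions
openai.api_key = "..."

functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = openai.ChatCompletion.create(
    engine="my-gpt35-deployment",  # Azure uses the deployment name, not `model`
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    functions=functions,
    function_call="auto",
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    print(message["function_call"]["name"],
          json.loads(message["function_call"]["arguments"]))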
1
I am fairly new to using the llama-index library for training GPT-3, as well as to using ChatGPT through the standard API (both in Python). I have noticed that with the standard ChatGPT API I could simply do the fo...
Informed asked 10/4, 2023 at 18:45
6
I am building an app around GPT-3, and I would like to know how many tokens every request I make uses. Is this possible, and how?
Amylolysis asked 18/5, 2022 at 19:11
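Every non-streamed API response includes a usage object with exact token counts, which is usually the simplest way to meter an app; a sketch with the pre-1.0 SDK:

import openai

response = openai.Completion.create(
    model="text-davinci-003", prompt="Hello there", max_tokens=20
)

usage = response["usage"]
print(usage["prompt_tokens"])      # tokens in the prompt
print(usage["completion_tokens"])  # tokens generated
print(usage["total_tokens"])       # what you are billed for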
4
Solved
I'm new to APIs and I'm trying to understand how to get a response from a prompt using OpenAI's GPT-3 API (using api.openai.com/v1/completions). I'm using Postman to do so.
The documentation says t...
Spica asked 21/9, 2022 at 8:49
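The Postman setup boils down to a POST with a Bearer token and a JSON body; the same request expressed with the requests library (the API key is a placeholder):

import requests

resp = requests.post(
    "https://api.openai.com/v1/completions",
    headers={
        "Authorization": "Bearer sk-...",  # your API key
        "Content-Type": "application/json",
    },
    json={
        "model": "text-davinci-003",
        "prompt": "Say this is a test",
        "max_tokens": 20,
    },
    timeout=30,
)
print(resp.json()["choices"][0]["text"])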
2
Solved
I'm testing the different OpenAI models, and I noticed that not all of them are developed or trained enough to give a reliable response.
The models I tested are the following:
model_engine = "...
Supereminent asked 4/5, 2023 at 12:51
3
Solved
I am making a request to the completions endpoint. My prompt is 1360 tokens, as verified by the Playground and the Tokenizer. I won't show the prompt as it's a little too long for this question.
He...
Dina asked 9/2, 2023 at 9:18
2
I'm testing a couple of the widely published GPT models, just trying to get my feet wet, and I am running into an error that I cannot solve.
I am running this code:
from llama_index import SimpleDire...
Hygienist asked 3/4, 2023 at 22:27
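A sketch of the typical llama_index flow from that era (roughly the 0.5.x releases of spring 2023); the class names have changed in later versions, so treat the imports as assumptions:

from llama_index import SimpleDirectoryReader, GPTSimpleVectorIndex

# read every file in ./data into Document objects
documents = SimpleDirectoryReader("data").load_data()

# build an embedding index (calls the OpenAI API under the hood)
index = GPTSimpleVectorIndex.from_documents(documents)

response = index.query("What do these documents talk about?")
print(response)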
2
Solved
Here is my code snippet:
const { Configuration, OpenAI, OpenAIApi } = require ("openai");
const configuration = new Configuration({
apiKey: 'MY KEY'
})
const openai = new OpenAIApi(conf...
Callipygian asked 29/3, 2023 at 23:32
1
Is there a way to train a Large Language Model (LLM) to store a specific context? For example, I have a long story I want to ask questions about, but I don't want to put the whole story in every pro...
Succory asked 19/2, 2023 at 15:38
1
I'm using customized text with 'Prompt' and 'Completion' to train a new model.
Here's the tutorial I used to create a customized model from my data:
beta.openai.com/docs/guides/fine-tuning/advanced-usa...
Tresa asked 8/10, 2022 at 19:58
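The legacy fine-tuning flow from that tutorial expects a JSONL file with one prompt/completion pair per line; a sketch of producing it (the separator and stop-sequence conventions shown are the commonly recommended ones, not requirements):

import json

examples = [
    {"prompt": "What is the capital of France?", "completion": "Paris"},
    {"prompt": "What is the capital of Japan?", "completion": "Tokyo"},
]

with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps({
            # a fixed separator at the end of the prompt, plus a leading space
            # and a stop marker in the completion, help the model learn boundaries
            "prompt": ex["prompt"] + "\n\n###\n\n",
            "completion": " " + ex["completion"] + " END",
        }) + "\n")

The file can then be checked with `openai tools fine_tunes.prepare_data -f training_data.jsonl` before passing it to fine_tunes.create.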
1
I'd like to produce a 3-6 sentence summary of a 2-3 page article, using OpenAI's TLDR. I've pasted in the article text, but the output seems to stay at only one or two sentences.
Clownery asked 7/9, 2022 at 1:53
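The "Tl;dr" suffix alone gives no length control; being explicit about the target length in the instruction and raising max_tokens usually helps. A sketch assuming the pre-1.0 SDK, with the article text as a placeholder:

import openai

article = "..."  # the 2-3 page article text

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=article + "\n\nSummarize the article above in 3 to 6 full sentences:",
    max_tokens=300,   # leave enough room for several sentences
    temperature=0.5,
)
print(response["choices"][0]["text"].strip())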