How to get headers from OpenAI API
I am relatively new to the OpenAI API and am trying to obtain my rate limits from the HTTP headers of the response, as discussed at https://platform.openai.com/docs/guides/rate-limits/usage-tiers?context=tier-free. However, none of my calls to OpenAI return a response with a headers attribute.

I have tried both the [openai package](https://platform.openai.com/docs/libraries/python-library) (code below) and llama-index.

from openai import OpenAI
client = OpenAI(
    # Defaults to os.environ.get("OPENAI_API_KEY")
    # Otherwise use: api_key="Your_API_Key",
)

chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}]
)

I'm clearly missing something obvious, so if anyone can point it out it would be hugely appreciated!

Taxi answered 5/12, 2023 at 11:2 Comment(7)
Isn't it chat_completion.headers? – Smallman
It doesn't seem to be. I get an AttributeError stating 'ChatCompletion' object has no attribute 'headers'. – Taxi
The docs say "Conversations can be as short as one message or many back and forth turns." platform.openai.com/docs/guides/text-generation/… – Radically
Is response_format={ "type": "json_object" } needed after the model, before the messages, to get JSON-formatted output? – Radically
Isn't it chat_completion.usage? – Radically
chat_completion.usage gives me the token counts for this particular request, but no information on my rate limits or my usage in the last minute. – Taxi
I tried changing the response format: chat_completion = client.chat.completions.create(response_format={"type": "json_object"}, model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world"}]) but got a BadRequestError: Error code: 400 - {'error': {'message': "Invalid parameter: 'response_format' of type 'json_object' is not supported with this model." ... – Taxi

Request the raw response object, which exposes the HTTP headers alongside the parsed completion:

from openai import OpenAI
client = OpenAI(
    # Defaults to os.environ.get("OPENAI_API_KEY")
    # Otherwise use: api_key="Your_API_Key",
)

raw_response = client.chat.completions.with_raw_response.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}]
)

chat_completion = raw_response.parse()
response_headers = raw_response.headers 
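Once you have the headers, you can pull out the x-ratelimit-* fields OpenAI documents (x-ratelimit-limit-requests, x-ratelimit-remaining-requests, x-ratelimit-reset-requests, and their token counterparts). A minimal sketch of a helper that collects them from any headers mapping; the example values below are made up, in a real run you would pass raw_response.headers:

```python
def rate_limit_info(headers):
    """Collect the x-ratelimit-* headers into a plain dict."""
    return {
        k: v
        for k, v in headers.items()
        if k.lower().startswith("x-ratelimit-")
    }

# Made-up example values; in practice use rate_limit_info(raw_response.headers)
example_headers = {
    "x-ratelimit-limit-requests": "200",
    "x-ratelimit-remaining-requests": "199",
    "x-ratelimit-reset-requests": "7m12s",
    "content-type": "application/json",
}
print(rate_limit_info(example_headers))
```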
Binder answered 20/12, 2023 at 13:1 Comment(0)

Alternatively, call the API directly with the requests library and read the rate-limit headers from the raw HTTP response:

import requests

api_key = 'YOUR-API-KEY'  # Replace with your OpenAI API key

headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}

model = "gpt-3.5-turbo-1106"
messages = [
    {"role": "user", "content": "j"}
]
data = {
    'model': model,  # For example, "gpt-3.5-turbo"
    'messages': messages,  # List of message dictionaries
    'max_tokens': 1,
}

response = requests.post('https://api.openai.com/v1/chat/completions', headers=headers, json=data)
sleep_time_unconverted = response.headers['x-ratelimit-reset-tokens']  # one example rate-limit header
print(sleep_time_unconverted)
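Note that the reset headers come back as a compact duration string such as "6m0s" or "28ms", so they need parsing before you can sleep on them. A small sketch of such a parser (my own helper, not part of any OpenAI library):

```python
import re

def parse_reset(duration):
    """Convert a reset duration string like '1h6m20s' or '28ms' to seconds."""
    units = {"h": 3600, "m": 60, "s": 1, "ms": 0.001}
    total = 0.0
    # 'ms' must come before 'm' and 's' in the alternation so it wins the match
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|h|m|s)", duration):
        total += float(value) * units[unit]
    return total

print(parse_reset("6m0s"))  # 360.0
```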
Leeland answered 8/12, 2023 at 6:38 Comment(0)