I am relatively new to the OpenAI API and am trying to obtain my rate limits from the HTTP headers of the response, as discussed at https://platform.openai.com/docs/guides/rate-limits/usage-tiers?context=tier-free. However, none of the calls to OpenAI return a response with a headers attribute.
I have tried with both the [openai package](https://platform.openai.com/docs/libraries/python-library) (code below) and llama-index.
from openai import OpenAI

client = OpenAI(
    # Defaults to os.environ.get("OPENAI_API_KEY")
    # Otherwise use: api_key="Your_API_Key",
)

chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}]
)
I'm clearly missing something obvious, so if anyone can point it out, it would be hugely appreciated!
Comments:

chat_completion.headers? – Smallman

chat_completion.usage gives me the tokens for this particular request, but no information on my rate limits or my usage in the last minute. – Taxi

chat_completion = client.chat.completions.create(
    response_format={"type": "json_object"},
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}]
)
but got a BadRequestError: Error code: 400 - {'error': {'message': "Invalid parameter: 'response_format' of type 'json_object' is not supported with this model." ... – Taxi
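For context on what I am looking for: the ChatCompletion object returned by create is only the parsed response body, so the HTTP headers are not attributes of it. Below is a minimal sketch of pulling the rate-limit fields out of a header mapping, assuming the x-ratelimit-* header names described in the rate-limits guide; the example header values are made up for illustration:

```python
def summarize_rate_limits(headers):
    """Extract the documented x-ratelimit-* fields from a header mapping.

    `headers` is any dict-like mapping of lowercase header names to values,
    e.g. what an HTTP client exposes on its raw response object.
    Missing headers come back as None.
    """
    keys = [
        "x-ratelimit-limit-requests",
        "x-ratelimit-remaining-requests",
        "x-ratelimit-limit-tokens",
        "x-ratelimit-remaining-tokens",
    ]
    return {k: headers.get(k) for k in keys}


# Hypothetical header values, for illustration only
example_headers = {
    "x-ratelimit-limit-requests": "200",
    "x-ratelimit-remaining-requests": "199",
    "x-ratelimit-limit-tokens": "40000",
    "x-ratelimit-remaining-tokens": "39950",
}
print(summarize_rate_limits(example_headers))
```

With the openai Python package, the raw response (and hence its headers) should be reachable via `client.chat.completions.with_raw_response.create(...)`, whose `.headers` you could feed to a helper like this and whose `.parse()` returns the usual ChatCompletion — though I haven't verified that against every client version.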