is there any way to stream response word by word of chatgpt api directly in react native (with javascript)
I want to use the ChatGPT Turbo API directly in React Native (Expo) with a word-by-word stream. Here is a working example without streaming:

fetch(`https://api.openai.com/v1/chat/completions`, {
  body: JSON.stringify({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'hello' }],
    temperature: 0.3,
    max_tokens: 2000,
  }),
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    Authorization: 'Bearer ' + API_KEY,
  },
}).then((response) => {
  console.log(response); //If you want to check the full response
  if (response.ok) {
    response.json().then((json) => {
      console.log(json); //If you want to check the response as JSON
      console.log(json.choices[0].message.content); //HERE'S THE CHATBOT'S RESPONSE
    });
  }
});

What can I change to stream the data word by word?

Sherrer answered 23/3, 2023 at 17:37 Comment(6)
What do you mean by stream data word by word?Amphitrite
@Amphitrite Like on the ChatGPT website; it streams word by word.Sherrer
I tried passing stream: true in the body and it does not work. I also asked GPT; it gives answers that can work on the web but not in React Native.Sherrer
The snippet is a REST-based API call.Cano
You may be better off with this: css-tricks.com/snippets/css/typewriter-effect. Implement a REST endpoint that takes your prompts and keeps your API key secret.Cano
We've used this React Hook and it seems to work: github.com/XD2Sketch/react-chat-streamBeslobber
OpenAI APIs rely on SSE (Server-Sent Events) to stream the response back to you. If you pass the stream parameter in your API request, you will receive chunks of data as they are generated by OpenAI. This creates the illusion of a real-time response that mimics someone typing.

The hardest part to figure out might be how to connect your frontend with your backend: every time the backend receives a new chunk, you want to display it in the frontend.
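The parsing involved can be sketched with a small, hypothetical helper (not part of the project below) that turns one raw SSE chunk from the completions stream into an array of text deltas, assuming OpenAI-style `data: {...}` lines terminated by the `[DONE]` sentinel:

```javascript
// Hypothetical helper: parse one raw SSE chunk into text deltas.
// Assumes the completions API shape where each event carries choices[0].text.
function parseSseChunk(chunk) {
  const deltas = [];
  const lines = chunk.split("\n").filter((line) => line.trim() !== "");
  for (const line of lines) {
    const message = line.replace(/^data: /, "");
    if (message === "[DONE]") break; // OpenAI's end-of-stream sentinel
    try {
      const { choices } = JSON.parse(message);
      deltas.push(choices[0].text ?? "");
    } catch {
      // ignore partial JSON split across chunks (a real parser would buffer it)
    }
  }
  return deltas;
}

// parseSseChunk('data: {"choices":[{"text":"Hel"}]}\n' +
//               'data: {"choices":[{"text":"lo"}]}\n' +
//               'data: [DONE]')
// → ["Hel", "lo"]
```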

I created a simple Next.js project on Replit that demonstrates just that. Live demo

You will need to install the better-sse package:

npm install better-sse

Server side (in an API route file)

import { createSession } from "better-sse";

const session = await createSession(req, res);
if (!session.isConnected) throw new Error('Not connected');

const { data } = await openai.createCompletion({
  model: 'text-davinci-003',
  n: 1,
  max_tokens: 2048,
  temperature: 0.3,
  stream: true,
  prompt: `CHANGE TO YOUR OWN PROMPTS`
}, {
  timeout: 1000 * 60 * 2,
  responseType: 'stream'
});

//what to do when receiving data from the API
data.on('data', text => {
  const lines = text.toString().split('\n').filter(line => line.trim() !== '');
  for (const line of lines) {
    const message = line.replace(/^data: /, '');
    if (message === '[DONE]') { //OpenAI sends [DONE] to say it's over
      session.push('DONE', 'error');
      return;
    }
    try {
      const { choices } = JSON.parse(message);
      session.push({text:choices[0].text});
    } catch (err) {
      console.log(err);
    }
  }
});

// the connection is closed
data.on('close', () => { 
  console.log("close")
  res.end();
});

data.on('error', (err) => {
  console.error(err);
});

On your frontend you can now call this API route:

let [result, setResult] = useState("");

//create the sse connection
const sse = new EventSource(`/api/completion?prompt=${inputPrompt}`);

//listen to incoming messages
sse.addEventListener("message", ({ data }) => {
  let msgObj = JSON.parse(data)
  setResult((r) => r + msgObj.text)
});

Hope this makes sense and helps others with a similar issue.

Chiastic answered 1/4, 2023 at 7:31 Comment(7)
I might change that to gpt-3.5-turbo or gpt-4 if you have access. But the request format would change for that.Cano
The model currently set is for GPT-3, not ChatGPT.Cano
SSE does not work in React Native.Sherrer
This is a separate codebase that you can set up on any platform of your choice; it will interact with ChatGPT for you, and you can use it to stream responses to your server.Cano
There are a few workarounds to support SSE in React Native. Did you try the following libraries? npmjs.com/package/react-native-sse There is also a discussion on the react-native repo github.com/facebook/react-native/issues/28835 with people that successfully implemented it.Chiastic
You can't change the model to GPT-3 or GPT-4 with this REST API. This is an incorrect answer.Tieratierce
@Tieratierce You can change it on the server-side part on this line: model: 'text-davinci-003'Chiastic
Visit this GitHub repository: https://github.com/jonrhall/openai-streaming-hooks. I recommend exploring this library as it offers React hooks that function solely on the client-side, requiring no server support.

Renae answered 2/4, 2023 at 6:15 Comment(6)
It's not working in React Native.Sherrer
I didn't downvote, but would it not be a security risk exposing the API key on the client side? The better approach IMO would be setting up a proxy like @Chiastic has, but that would come at the cost of some added latency.Cano
True, to protect the API key, requests should be proxied through a server. A better fit for this library would be an application processing info with the user's own OpenAI key, where it is OK to expose the API key to the client since it belongs to the user.Renae
Yes, that depends on the needs of the application. In order to have the user's OpenAI key, you would need to connect with their OpenAI account. If you are trying to build a ChatGPT clone without any login (as some people are nowadays), then I wouldn't recommend this.Cano
Connecting an OpenAI account only needs an API key. It's true that there are many apps nowadays which allow users to use their own API key so that they pay the least depending on their use. I am building a free Chrome extension that allows users to use ChatGPT on any website, interacting with web content. I'm using the library for this case myself, so I don't have to set up infrastructure to handle requests with OpenAI.Renae
I think this library might be useful to you as well, its a very simple React hook, that does exactly what you're looking for: github.com/XD2Sketch/react-chat-streamBeslobber
React native streaming is now possible with https://github.com/react-native-community/fetch.

This was actually a bug that went unaddressed by the RN team for a while, and this repo emerged to provide a better fetch that complies with the WHATWG spec.

This is a fork of GitHub's fetch polyfill, the fetch implementation React Native currently provides. This project features an alternative fetch implementation built directly on top of React Native's Networking API instead of XMLHttpRequest, for performance gains. At the same time, it aims to fill in some gaps of the WHATWG specification for fetch, namely support for text streaming.

Install

$ npm install react-native-fetch-api --save

Usage

fetch('https://jsonplaceholder.typicode.com/todos/1', { reactNative: { textStreaming: true } })
  .then(response => response.body)
  .then(stream => ...)

You can use the stream object like the normal browser fetch.
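As a sketch of what consuming that stream looks like, here is how you could pull text out of a response body chunk by chunk with a reader. Since this isn't tied to a real network call, a hand-built ReadableStream stands in for the `response.body` you would get from fetch with the textStreaming option:

```javascript
// Sketch: read a text stream chunk by chunk via a reader, the same way
// you would consume response.body from a streamed fetch.
async function readAll(stream, onChunk) {
  const reader = stream.getReader();
  const decoder = new TextDecoder("utf-8");
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters intact across chunk boundaries
    onChunk(decoder.decode(value, { stream: true }));
  }
}

// Stand-in for a streamed fetch response body
const demoBody = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("Hello, "));
    controller.enqueue(new TextEncoder().encode("world"));
    controller.close();
  },
});

let out = "";
const finished = readAll(demoBody, (chunk) => (out += chunk));
```

In a component, `onChunk` would typically be a state setter appending the chunk to the rendered text.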

Hope this helps!

Sig answered 12/9, 2023 at 12:28 Comment(1)
And if it doesn't work, try to polyfill ReadableStream and the others as stated here: https://mcmap.net/q/1470739/-how-to-achieve-text-streaming-in-react-native-using-openai-apiInsider
If you want to use the ChatGPT API directly in React Native (Expo) with word-by-word streaming (it's better to say chunk-by-chunk streaming), then you may take a look at the examples in their documentation on streaming:
https://platform.openai.com/docs/api-reference/streaming

Here is an example for TS/JS. Note that we use the openai library, which you need to install and configure for your project with your API_KEY. More details here

Also note that we are just passing the stream: true parameter to make the response stream

import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const stream = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{
      role: "user",
      content: "Say this is a test"
    }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
}

main();

If you want to use fetch directly, then take a look at this example.

const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer YOUR_OPENAI_API_KEY`, // Replace with your API key
  },
  body: JSON.stringify({
    model: "gpt-4o-mini", // Replace with your desired model
    messages: [{
      role: "user",
      content: "Say this is a test"
    }],
    stream: true, // Enable streaming
  }),
});

const reader = response.body.getReader();
const decoder = new TextDecoder("utf-8");

while (true) {
  const {
    done,
    value
  } = await reader.read();
  if (done) break; // Exit the loop if stream is done

  const chunk = decoder.decode(value, {
    stream: true
  });
  // Process each chunk of the streamed response
  const parsedChunks = chunk.split("\n").filter(Boolean); // Split response into lines

  for (const line of parsedChunks) {
    if (line.startsWith("data: ")) {
      const json = line.slice("data: ".length);
      if (json === "[DONE]") return; // End of the stream

      try {
        const parsed = JSON.parse(json);
        const content = parsed.choices[0]?.delta?.content || "";
        process.stdout.write(content);
      } catch (err) {
        console.error("Error parsing JSON: ", err);
      }
    }
  }
}

!!!!IMPORTANT!!!!

Note: for streaming to work in React Native environments, you should polyfill the missing ReadableStream and TextEncoder.

For Expo you need to:

Create an index.js file and make your polyfill the first import:

import './polyfill'
import 'expo-router/entry'

Then change the main field in the package.json to point to the "main": "./index" as stated here
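For example, the relevant field in package.json (other fields here are placeholders) would look like:

```json
{
  "name": "my-app",
  "main": "./index"
}
```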

Here is how I polyfilled in my expo project

// /index.js
import { polyfillGlobal } from "react-native/Libraries/Utilities/PolyfillFunctions";
import { ReadableStream } from "web-streams-polyfill";
import { fetch, Headers, Request, Response } from "react-native-fetch-api";

polyfillGlobal("ReadableStream", () => ReadableStream);
polyfillGlobal(
  "fetch",
  () =>
    (...args) =>
      fetch(args[0], {
        ...args[1],
        reactNative: { textStreaming: true },
      }),
);
polyfillGlobal("Headers", () => Headers);
polyfillGlobal("Request", () => Request);
polyfillGlobal("Response", () => Response);

import "expo-router/entry";

If you receive a "TextEncoder doesn't exist" error

Try to also polyfill TextEncoder, or use the package directly. I used import encoding from "text-encoding"; and then const decoder = new encoding.TextDecoder("utf-8"); to decode the values from response.body.getReader().read().
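The { stream: true } option passed to the decoder matters because a network chunk can end in the middle of a multi-byte UTF-8 character. A minimal sketch using the built-in TextDecoder (the text-encoding package exposes the same interface):

```javascript
// Sketch: decode streamed bytes chunk by chunk. In React Native, `value`
// would come from response.body.getReader().read(); here we simulate two
// chunks that split the two-byte "é" (0xC3 0xA9).
const decoder = new TextDecoder("utf-8");

const bytes = new TextEncoder().encode("héllo");
const chunk1 = bytes.slice(0, 2); // "h" + first byte of "é"
const chunk2 = bytes.slice(2);    // the rest

let text = "";
text += decoder.decode(chunk1, { stream: true }); // buffers the partial char
text += decoder.decode(chunk2, { stream: true });
// text === "héllo"
```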

Hope this helps; feel free to comment and ask additional questions if it doesn't work for you :)

Insider answered 14/9, 2024 at 8:38 Comment(0)

© 2022 - 2025 — McMap. All rights reserved.