How to handle weirdly combined websocket messages?

I'm connecting to an external websocket API using the node ws library (node 10.8.0 on Ubuntu 16.04). I've got a listener which simply parses the JSON and passes it to the callback:

this.ws.on('message', (rawdata) => {
    let data = null;
    try {
        data = JSON.parse(rawdata);
    } catch (e) {
        console.log('Failed parsing the following string as json: ' + rawdata);
        return;
    }
    mycallback(data);
});

I now receive errors in which the rawdata looks as follows (I formatted it and removed irrelevant contents):

�~A
{
    "id": 1,
    etc..
}�~�
{
    "id": 2,
    etc..

I then wondered: what are these characters? Seeing the structure, I initially thought that the first weird sign must be an opening bracket of an array ([) and the second one a comma (,), so that together they form an array of objects.

I then investigated the problem further by writing the rawdata to a file whenever it encounters a JSON parsing error. In an hour or so it has saved about 1500 of these error files, meaning this happens a lot. I cated a couple of these files in the terminal, of which I uploaded an example below:
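To identify those signs precisely, the saved files can also be inspected as raw bytes instead of rendered characters. A small diagnostic sketch (hexPreview is an illustrative helper, not part of my code):

```javascript
// Diagnostic sketch: dump the first bytes of a failing payload in hex
// so the "weird signs" can be identified as concrete byte values.
function hexPreview(rawdata, n = 16) {
    const buf = Buffer.isBuffer(rawdata) ? rawdata : Buffer.from(String(rawdata));
    return buf.slice(0, n).toString('hex').match(/../g).join(' ');
}

console.log(hexPreview('{"id": 1}')); // 7b 22 69 64 22 3a 20 31 7d
```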

[screenshot of one of the saved error files, cat-ed in the terminal]

A few things are interesting here:

  1. The files always start with one of these weird signs.
  2. The files appear to exist out of multiple messages which should have been received separately. The weird signs separate those individual messages.
  3. The files always end with an unfinished JSON object.
  4. The files are of varying lengths. They are not always the same size and are thus not cut off on a specific length.

I'm not very experienced with websockets, but could it be that my websocket somehow receives a stream of messages that it concatenates together, with these weird signs as separators, and then randomly cuts off the last message? Maybe because I'm getting a constant, very fast stream of messages?

Or could it be because of an error (or functionality) server side in that it combines those individual messages?

What's going on here?

Edit

@bendataclear suggested interpreting it as UTF-8. So I did, and I pasted a screenshot of the results below. The first print is the data as-is, and the second one is interpreted as UTF-8. To me this doesn't look like anything meaningful. I could of course convert to UTF-8 and then split by those characters. Although the last message is always cut off, this would at least make some of the messages readable. Other ideas are still welcome though.
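The split-by-separator idea could be sketched as follows (parseConcatenated is just an illustrative helper; this recovers the complete fragments but still silently drops the truncated tail):

```javascript
// Workaround sketch, not a real fix: split the decoded text on the
// U+FFFD replacement character and parse each fragment as JSON,
// skipping any fragment (typically the last) that is cut off.
function parseConcatenated(text) {
    const out = [];
    for (const piece of text.split('\ufffd')) {
        if (!piece.trim()) continue;
        try {
            out.push(JSON.parse(piece));
        } catch (e) {
            // truncated or malformed fragment; ignore it
        }
    }
    return out;
}

// Two messages separated by a replacement character, plus a cut-off third:
console.log(parseConcatenated('{"id":1}\ufffd{"id":2}\ufffd{"id'));
// [ { id: 1 }, { id: 2 } ]
```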

[screenshot: the raw print above, the UTF-8-interpreted print below]

Deltoro answered 6/8, 2018 at 9:10 Comment(13)
Since JSON is supposed to be encoded with one of the UTFs (8, 16, or 32), it's probably a good idea to decode the input properly. However, the characters expected at these positions all belong to the ASCII subset of UTF-8, so I doubt decoding would help you with this particular problem.Janice
The �~ character is "Replacement character" so if you're seeing this it's already too late to fix it. Can you try and convert to utf8 with a utf8 module (npm install utf8) then convert the string (utf8.encode(string))?Carbolated
@Carbolated - I tried and added the results to the question above. Does this give you any hints?Deltoro
@Deltoro - It looks like this is coming through as some other encoding (binary?), are you using the standard node websocket client (require('websocket').client)?Carbolated
@Carbolated - No I'm using the ws library: github.com/websockets/ws . I was also thinking binary, but why? And what to do with it? I tried a split after a utf8 conversion based on that weird string, but to my surprise that doesn't seem to work. Any other ideas?Deltoro
@Deltoro - I ask because the websocket module has message.type which returns the encoding: github.com/theturtle32/WebSocket-Node - Binary streaming is foreign to me but it seems like the object you are getting back is a UTF8 string with buffer data interspersed, it might need to be buffered before being read.Carbolated
@Deltoro what OS are you running on?Seasickness
@Seasickness - Ubuntu 16.04. I'll also add it to the question.Deltoro
The ws library recommends to npm install --save-optional utf-8-validate to check for spec compliance. Have you done that?Evered
Just a wild guess... perhaps the 3rd-party server is to blame. I'm a bit interested in the fact that you don't receive the whole JSON at the end. Could they bundle JSON messages, with those characters actually being the binary length of the message to be read? So you can't assemble the last JSON until you read the next message... It's a bit old-school, but who knows... So, in fact, you don't have JSON but binary len - string - binary len - string - binary len - part of a string - next message to be appended.Ambur
@kramer65, Hi, please put the cated text in a Stack Overflow code box, I wanna examine them. thanks.Livvi
I guess it's something with chunked encoded responses. Could you capture traffic with tcpdump or wireshark? Sharing the third-party service you consume, or its documentation, would be helpful too.Maneuver
@Deltoro Is it possible to switch to other websocket client? This will eliminate the problem either on the server side or a problem with the client library or how you use it.Salk

My assumption is that you're working only with English/ASCII characters and that something has corrupted the stream. (NOTE: I am assuming there are no special characters.) If that is the case, I suggest passing the entire JSON string through this function:

function cleanString(input) {
    var output = "";
    for (var i = 0; i < input.length; i++) {
        // keep only ASCII characters (code points 0-127)
        if (input.charCodeAt(i) <= 127) {
            output += input.charAt(i);
        }
    }
    return output;
}

//example
console.log(cleanString("�~�"));

For reference, see How to remove invalid UTF-8 characters from a JavaScript string?

EDIT

From an article by Internet Engineering Task Force (IETF),

A common class of security problems arises when sending text data using the wrong encoding. This protocol specifies that messages with a Text data type (as opposed to Binary or other types) contain UTF-8-encoded data. Although the length is still indicated and applications implementing this protocol should use the length to determine where the frame actually ends, sending data in an improper […]


The "Payload data" is text data encoded as UTF-8. Note that a particular text frame might include a partial UTF-8 sequence; however, the whole message MUST contain valid UTF-8. Invalid UTF-8 in reassembled messages is handled as described in Handling Errors in UTF-8-Encoded Data, which states that When an endpoint is to interpret a byte stream as UTF-8 but finds that the byte stream is not, in fact, a valid UTF-8 stream, that endpoint MUST Fail the WebSocket Connection. This rule applies both during the opening handshake and during subsequent data exchange.

I really believe that your error (or functionality) is coming from the server side, which combines your individual messages. So I suggest adding logic to ensure that all your characters are converted from Unicode to ASCII by first encoding them as UTF-8. You might also want to install npm install --save-optional utf-8-validate to efficiently check whether a message contains valid UTF-8 as required by the spec.
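If you don't want the native utf-8-validate addon, a dependency-free approximation is a decode/re-encode round trip (my own sketch, not part of ws: invalid bytes become U+FFFD on decode, so re-encoding no longer matches the original buffer):

```javascript
// Sketch: a buffer is valid UTF-8 iff decoding and re-encoding it
// reproduces the exact same bytes; invalid sequences are replaced
// with U+FFFD during decoding and therefore change the byte content.
function isLikelyValidUTF8(buf) {
    return Buffer.compare(Buffer.from(buf.toString('utf8'), 'utf8'), buf) === 0;
}

console.log(isLikelyValidUTF8(Buffer.from('hello')));      // true
console.log(isLikelyValidUTF8(Buffer.from([0xff, 0xfe]))); // false
```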

You might also want to add a check on the message type. Note that message.type comes from the websocket (WebSocket-Node) module rather than ws; with ws you can instead check whether the payload arrived as a text frame (a string) or a binary frame (a Buffer):

this.ws.on('message', (rawdata) => {
    if (typeof rawdata === 'string') { // accept only text frames
        // handle the message here
    }
});

I hope this gets to help.

Seasickness answered 10/8, 2018 at 8:0 Comment(2)
removing some random bytes won't make the string valid JSONManeuver
Definitely. The answers about changing the JSON are not correct, because removing or adding characters in the callback function is too late; you should fix it on the server side.Livvi

The problem you have is that one side sends the JSON in a different encoding than the one the other side interprets it with.

Try solving it with the following code:

const { StringDecoder } = require('string_decoder');

this.ws.on('message', (rawdata) => {
    const decoder = new StringDecoder('utf8');
    const buffer = Buffer.from(rawdata); // new Buffer() is deprecated
    console.log(decoder.write(buffer));
});

Or with UTF-16 (note the encoding name is 'utf16le'; 'utf16' is not a valid Node encoding):

const { StringDecoder } = require('string_decoder');

this.ws.on('message', (rawdata) => {
    const decoder = new StringDecoder('utf16le');
    const buffer = Buffer.from(rawdata); // new Buffer() is deprecated
    console.log(decoder.write(buffer));
});

Please read: String Decoder Documentation

Galvanize answered 14/8, 2018 at 20:25 Comment(0)

That character is known as the "REPLACEMENT CHARACTER", used to replace an unknown, unrecognized or unrepresentable character.

From: https://en.wikipedia.org/wiki/Specials_(Unicode_block)

The replacement character � (often a black diamond with a white question mark or an empty square box) is a symbol found in the Unicode standard at code point U+FFFD in the Specials table. It is used to indicate problems when a system is unable to render a stream of data to a correct symbol. It is usually seen when the data is invalid and does not match any character
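You can see this substitution directly in Node (a minimal demo, not from the question's code):

```javascript
// Invalid UTF-8 bytes decode to U+FFFD (the replacement character):
const bad = Buffer.from([0x48, 0x69, 0xff]); // "Hi" followed by an invalid byte
console.log(bad.toString('utf8')); // "Hi�"
```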

Checking the section 8 of the WebSocket protocol Error Handling:

8.1. Handling Errors in UTF-8 from the Server

When a client is to interpret a byte stream as UTF-8 but finds that the byte stream is not in fact a valid UTF-8 stream, then any bytes or sequences of bytes that are not valid UTF-8 sequences MUST be interpreted as a U+FFFD REPLACEMENT CHARACTER.

8.2. Handling Errors in UTF-8 from the Client

When a server is to interpret a byte stream as UTF-8 but finds that the byte stream is not in fact a valid UTF-8 stream, behavior is undefined. A server could close the connection, convert invalid byte sequences to U+FFFD REPLACEMENT CHARACTERs, store the data verbatim, or perform application-specific processing. Subprotocols layered on the WebSocket protocol might define specific behavior for servers.


How to deal with this depends on the implementation or library in use; for example, from this post Implementing Web Socket servers with Node.js:

socket.ondata = function(d, start, end) {
    //var data = d.toString('utf8', start, end);
    var original_data = d.toString('utf8', start, end);
    var data = original_data.split('\ufffd')[0].slice(1);
    if (data == "kill") {
        socket.end();
    } else {
        sys.puts(data);
        socket.write("\u0000", "binary");
        socket.write(data, "utf8");
        socket.write("\uffff", "binary");
    }
};

In this case, if a � (U+FFFD) is found it will do:

var data = original_data.split('\ufffd')[0].slice(1);
if (data == "kill") {
    socket.end();
} 

Another thing that you could do is to update node to the latest stable, from this post OpenSSL and Breaking UTF-8 Change (fixed in Node v0.8.27 and v0.10.29):

As of these releases, if you try and pass a string with an unmatched surrogate pair, Node will replace that character with the unknown unicode character (U+FFFD). To preserve the old behavior set the environment variable NODE_INVALID_UTF8 to anything (even nothing). If the environment variable is present at all it will revert to the old behavior.
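This replacement is easy to observe (a small demo, assuming a modern Node version where Buffer.from performs the substitution):

```javascript
// An unmatched surrogate in a string is replaced with U+FFFD when
// the string is encoded into a Buffer as UTF-8:
const lone = '\ud800'; // high surrogate with no matching low surrogate
console.log(Buffer.from(lone, 'utf8').toString('utf8')); // "�"
```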

Timehonored answered 8/8, 2018 at 16:55 Comment(0)

It seems your output contains some unexpected characters. If you find any spaces or special characters, use their Unicode code points to identify and handle them.

Here is the list of Unicode characters

This might help I think.

Plumbiferous answered 13/8, 2018 at 6:32 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.