I'll assume you're talking about https://github.com/pgriess/node-msgpack.
Just looking at the source, I'm not sure how it could be faster. For example, in src/msgpack.cc
they have the following:
Buffer *bp = Buffer::New(sb._sbuf.size);                 // allocate a fresh SlowBuffer
memcpy(Buffer::Data(bp), sb._sbuf.data, sb._sbuf.size);  // copy the packed bytes into it
In Node terms, they are allocating and filling a new SlowBuffer
for every request. You can benchmark just the allocation part with the following:
var msgpack = require('msgpack');
var SB = require('buffer').SlowBuffer;
var tmpl = {'abcdef' : 1, 'qqq' : 13, '19' : [1, 2, 3, 4]};
console.time('SlowBuffer');
for (var i = 0; i < 1e6; i++)
// 20 is the resulting size of their "DATA_TEMPLATE"
new SB(20);
console.timeEnd('SlowBuffer');
console.time('msgpack.pack');
for (var i = 0; i < 1e6; i++)
msgpack.pack(tmpl);
console.timeEnd('msgpack.pack');
console.time('stringify');
for (var i = 0; i < 1e6; i++)
JSON.stringify(tmpl);
console.timeEnd('stringify');
// result - SlowBuffer: 915ms
// result - msgpack.pack: 5144ms
// result - stringify: 1524ms
So just allocating memory for the message already costs about 60% of the
JSON.stringify time, and that's just one reason why it's so much slower.
Also take into account that JSON.stringify
has gotten a lot of love from Google. It's highly optimized and would be difficult to beat.