How can I convert a NodeJS binary buffer into a JavaScript ArrayBuffer?
Instances of Buffer are also instances of Uint8Array in Node.js 4.x and higher. Thus, the most efficient solution is to access the buffer's own .buffer property directly, as per https://mcmap.net/q/117961/-convert-a-binary-nodejs-buffer-to-javascript-arraybuffer. The Buffer constructor also accepts an ArrayBufferView argument if you need to go the other direction.
Note that this will not create a copy, which means that writes to any ArrayBufferView will write through to the original Buffer instance.
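For illustration, a minimal sketch of both directions (my own example, not part of the original answer; note that the ArrayBuffer obtained this way may be Node's shared pool, as discussed further down):
// Buffer -> ArrayBuffer: the Buffer's backing store, shared memory (no copy)
const buf = Buffer.from('hello');
const ab = buf.buffer;
// ArrayBuffer -> Buffer: Buffer.from(arrayBuffer[, byteOffset[, length]]) also shares the memory
const buf2 = Buffer.from(ab, buf.byteOffset, buf.length);
// Writes through one view are visible through the other
buf2[0] = 0x48;
console.log(buf.toString()); // 'Hello'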
In older versions of Node.js, ArrayBuffer is already available as part of V8, but the Buffer class provides a more flexible API. In order to read from or write to an ArrayBuffer, you only need to create a view and copy across.
From Buffer to ArrayBuffer:
function toArrayBuffer(buffer) {
const arrayBuffer = new ArrayBuffer(buffer.length);
const view = new Uint8Array(arrayBuffer);
for (let i = 0; i < buffer.length; ++i) {
view[i] = buffer[i];
}
return arrayBuffer;
}
From ArrayBuffer to Buffer:
function toBuffer(arrayBuffer) {
const buffer = Buffer.alloc(arrayBuffer.byteLength);
const view = new Uint8Array(arrayBuffer);
for (let i = 0; i < buffer.length; ++i) {
buffer[i] = view[i];
}
return buffer;
}
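A quick round-trip check of the two helpers above (my own usage example, not part of the original answer):
const original = Buffer.from([1, 2, 3, 4]);
const ab = toArrayBuffer(original); // copies into a fresh ArrayBuffer
const copy = toBuffer(ab);          // copies back into a new Buffer
console.log(copy.equals(original)); // true
console.log(ab.byteLength);         // 4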
A possible optimization: take size & 0xfffffffe, copy 32-bit integers, then, if there's 1 byte remaining, copy an 8-bit integer; if 2 bytes, copy a 16-bit integer; and if 3 bytes, copy a 16-bit and an 8-bit integer. – Context
DataView accessors (such as getInt32) are exceptionally slow compared to accessing array views because of a ton of type-checking that goes on. Here, reading from a DataView is 98% slower, which you won't make up for through the 4x fewer iterations you'd get from your approach: jsperf.com/haakons-test – Maguire
You could use buffer.readUInt32LE instead. This is still 80% slower than Martin's method, see here for the benchmark. As for aligning offsets, I don't know what you mean. Typed array views must always be memory-aligned or the engine will throw an error. The user has no control here. (Node seems to allow non-aligned access to its Buffers.) – Maguire
Why is ab returned? There is nothing done with ab. I always get {} as a result. – Gongorism
"The slice() method returns a new ArrayBuffer whose contents are a copy of this ArrayBuffer's bytes from begin, inclusive, up to end, exclusive." – MDN, ArrayBuffer.prototype.slice() – Proulx
slice()-ing an ArrayBuffer DOES make a copy. – Proulx
Buffers have a buffer property, which is an ArrayBuffer, so there is no need to iterate through each index. – Invariable
I puzzled over ++i for a moment. It turns out to have the same effect as i++ here, so I wonder why you're not incrementing the "usual" way? – Imposing
There is also blob.arrayBuffer(), which returns a copy of the blob data. – Aurignacian
"From ArrayBuffer to Buffer" could be done this way: var buffer = Buffer.from(new Uint8Array(arrayBuffer));
No dependencies, fastest, Node.js 4.x and later
Buffers are Uint8Arrays, so you just need to slice (copy) the Buffer's region of the backing ArrayBuffer.
// Original Buffer
let b = Buffer.alloc(512);
// Slice (copy) its segment of the underlying ArrayBuffer
let ab = b.buffer.slice(b.byteOffset, b.byteOffset + b.byteLength);
The slice and offset stuff is required because small Buffers (less than 4 kB by default, half the pool size) can be views on a shared ArrayBuffer. Without slicing, you can end up with an ArrayBuffer containing data from another Buffer. See explanation in the docs.
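To see why this matters, here is a small check (my own illustration; the exact number depends on Node's default 8 KiB pool):
let small = Buffer.from('abc');
console.log(small.byteLength);        // 3: the Buffer's own view
console.log(small.buffer.byteLength); // typically 8192: the whole shared pool, not just these 3 bytes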
If you ultimately need a TypedArray, you can create one without copying the data:
// Create a new view of the ArrayBuffer without copying
let ui32 = new Uint32Array(b.buffer, b.byteOffset, b.byteLength / Uint32Array.BYTES_PER_ELEMENT);
No dependencies, moderate speed, any version of Node.js
Use Martin Thomson's answer, which runs in O(n) time. (See also my replies to comments on his answer about non-optimizations. Using a DataView is slow. Even if you need to flip bytes, there are faster ways to do so.)
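One such faster way to flip bytes, assuming a Node.js version that has Buffer.prototype.swap32 (a sketch of mine, not part of the answer):
// Swap byte order in place, treating the buffer as 32-bit values,
// then view the reordered bytes without copying.
let buf = Buffer.from([1, 0, 0, 0, 2, 0, 0, 0]);
buf.swap32(); // throws if buf.length is not a multiple of 4
let ints = new Uint32Array(buf.buffer, buf.byteOffset, buf.length / 4);
console.log(ints); // Uint32Array [ 16777216, 33554432 ] on a little-endian machine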
Dependency, fast, Node.js ≤ 0.12 or iojs 3.x
You can use https://www.npmjs.com/package/memcpy to go in either direction (Buffer to ArrayBuffer and back). It's faster than the other answers posted here and is a well-written library. Node 0.12 through iojs 3.x require ngossen's fork (see this).
Where are .byteLength and .byteOffset documented? – Proulx
var ab = b.buffer.slice(b.byteOffset, b.byteOffset + b.byteLength); saved my day – Jorgan
slice() is not O(1), it's O(n) because it creates a copy: developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/… – Reimburse
A quicker way to write it:
var arrayBuffer = new Uint8Array(nodeBuffer).buffer;
However, this appears to run roughly 4 times slower than the suggested toArrayBuffer function on a buffer with 1024 elements.
Instances of Buffer are also instances of Uint8Array in Node.js 4.x and higher. For lower Node.js versions you have to implement a toArrayBuffer function. – Compatriot
1. A Buffer is just a view for looking into an ArrayBuffer.
A Buffer, in fact, is a FastBuffer, which extends (inherits from) Uint8Array, which is an octet-unit view (“partial accessor”) of the actual memory, an ArrayBuffer.
/lib/buffer.js#L65-L73
Node.js 9.4.0
class FastBuffer extends Uint8Array {
constructor(arg1, arg2, arg3) {
super(arg1, arg2, arg3);
}
}
FastBuffer.prototype.constructor = Buffer;
internalBuffer.FastBuffer = FastBuffer;
Buffer.prototype = FastBuffer.prototype;
2. The size of an ArrayBuffer and the size of its view may vary.
Reason #1: Buffer.from(arrayBuffer[, byteOffset[, length]]).
With Buffer.from(arrayBuffer[, byteOffset[, length]]), you can create a Buffer by specifying its underlying ArrayBuffer and the view's position and size.
const test_buffer = Buffer.from(new ArrayBuffer(50), 40, 10);
console.info(test_buffer.buffer.byteLength); // 50; the size of the memory.
console.info(test_buffer.length); // 10; the size of the view.
Reason #2: FastBuffer's memory allocation.
It allocates the memory in two different ways depending on the size.
- If the size is less than half the size of a memory pool and is not 0 (“small”): it makes use of a memory pool to prepare the required memory.
- Else: it creates a dedicated ArrayBuffer that exactly fits the required memory.
/lib/buffer.js#L306-L320
Node.js 9.4.0
function allocate(size) {
if (size <= 0) {
return new FastBuffer();
}
if (size < (Buffer.poolSize >>> 1)) {
if (size > (poolSize - poolOffset))
createPool();
var b = new FastBuffer(allocPool, poolOffset, size);
poolOffset += size;
alignPool();
return b;
} else {
return createUnsafeBuffer(size);
}
}
📜/lib/buffer.js#L98-L100
Node.js 9.4.0
function createUnsafeBuffer(size) {
return new FastBuffer(createUnsafeArrayBuffer(size));
}
What do you mean by a “memory pool”?
A memory pool is a fixed-size, pre-allocated memory block for keeping small memory chunks for Buffers. Using it keeps the small chunks tightly together, which prevents the fragmentation caused by managing (allocating and deallocating) many small chunks separately.
In this case, the memory pools are ArrayBuffers whose size is 8 KiB by default, as specified in Buffer.poolSize. When it has to provide a small memory chunk for a Buffer, it checks whether the last memory pool has enough available memory to handle this; if so, it creates a Buffer that “views” the given partial chunk of the memory pool; otherwise, it creates a new memory pool, and so on.
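As a small illustration of this pooling (my own sketch; it assumes both allocations fit in the same, not-yet-exhausted pool):
const a = Buffer.allocUnsafe(16); // "small", so served from the shared pool
const b = Buffer.allocUnsafe(16); // usually placed right after a in the same pool
console.log(a.buffer === b.buffer);      // true: the very same underlying ArrayBuffer
console.log(a.byteOffset, b.byteOffset); // two different offsets into that pool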
You can access the underlying ArrayBuffer of a Buffer: the Buffer's buffer property (inherited from Uint8Array) holds it. A “small” Buffer's buffer property is an ArrayBuffer that represents the entire memory pool. So in this case, the ArrayBuffer and the Buffer differ in size.
const zero_sized_buffer = Buffer.allocUnsafe(0);
const small_buffer = Buffer.from([0xC0, 0xFF, 0xEE]);
const big_buffer = Buffer.allocUnsafe(Buffer.poolSize >>> 1);
// A `Buffer`'s `length` property holds the size, in octets, of the view.
// An `ArrayBuffer`'s `byteLength` property holds the size, in octets, of its data.
console.info(zero_sized_buffer.length); /// 0; the view's size.
console.info(zero_sized_buffer.buffer.byteLength); /// 0; the memory's size.
console.info(Buffer.poolSize); /// 8192; a memory pool's size.
console.info(small_buffer.length); /// 3; the view's size.
console.info(small_buffer.buffer.byteLength); /// 8192; the memory pool's size.
console.info(Buffer.poolSize); /// 8192; a memory pool's size.
console.info(big_buffer.length); /// 4096; the view's size.
console.info(big_buffer.buffer.byteLength); /// 4096; the memory's size.
console.info(Buffer.poolSize); /// 8192; a memory pool's size.
3. So we need to extract the memory it “views.”
An ArrayBuffer is fixed in size, so we need to extract the relevant part by making a copy of it. To do this, we use the Buffer's byteOffset property and length property, which are inherited from Uint8Array, and the ArrayBuffer.prototype.slice method, which makes a copy of a part of an ArrayBuffer. The slice()-ing method herein was inspired by @ZachB.
const test_buffer = Buffer.from(new ArrayBuffer(10));
const zero_sized_buffer = Buffer.allocUnsafe(0);
const small_buffer = Buffer.from([0xC0, 0xFF, 0xEE]);
const big_buffer = Buffer.allocUnsafe(Buffer.poolSize >>> 1);
function extract_arraybuffer(buf)
{
// You may use the `byteLength` property instead of the `length` one.
return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.length);
}
// A copy -
const test_arraybuffer = extract_arraybuffer(test_buffer); // of the memory.
const zero_sized_arraybuffer = extract_arraybuffer(zero_sized_buffer); // of the... void.
const small_arraybuffer = extract_arraybuffer(small_buffer); // of the part of the memory.
const big_arraybuffer = extract_arraybuffer(big_buffer); // of the memory.
console.info(test_arraybuffer.byteLength); // 10
console.info(zero_sized_arraybuffer.byteLength); // 0
console.info(small_arraybuffer.byteLength); // 3
console.info(big_arraybuffer.byteLength); // 4096
4. Performance improvement
If you are only going to use the results as read-only, or if it is okay to modify the input Buffers' contents, you can avoid unnecessary memory copying.
const test_buffer = Buffer.from(new ArrayBuffer(10));
const zero_sized_buffer = Buffer.allocUnsafe(0);
const small_buffer = Buffer.from([0xC0, 0xFF, 0xEE]);
const big_buffer = Buffer.allocUnsafe(Buffer.poolSize >>> 1);
function obtain_arraybuffer(buf)
{
if(buf.length === buf.buffer.byteLength)
{
return buf.buffer;
} // else:
// You may use the `byteLength` property instead of the `length` one.
return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.length);
}
// Its underlying `ArrayBuffer`.
const test_arraybuffer = obtain_arraybuffer(test_buffer);
// Just a zero-sized `ArrayBuffer`.
const zero_sized_arraybuffer = obtain_arraybuffer(zero_sized_buffer);
// A copy of the part of the memory.
const small_arraybuffer = obtain_arraybuffer(small_buffer);
// Its underlying `ArrayBuffer`.
const big_arraybuffer = obtain_arraybuffer(big_buffer);
console.info(test_arraybuffer.byteLength); // 10
console.info(zero_sized_arraybuffer.byteLength); // 0
console.info(small_arraybuffer.byteLength); // 3
console.info(big_arraybuffer.byteLength); // 4096
In obtain_arraybuffer: buf.buffer.subarray does not seem to exist. Did you mean buf.buffer.slice here? – Ardisardisj
I originally used ArrayBuffer.prototype.slice and later modified it to Uint8Array.prototype.subarray. Oh, and I did it wrong; I probably got a bit confused back then. It's all good right now, thanks to you. – Proulx
Use the following excellent npm package: to-arraybuffer.
Or, you can implement it yourself. If your buffer is called buf, do this:
buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength)
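Wrapped into a reusable helper with a quick check (my own sketch around the one-liner above):
function bufferToArrayBuffer(buf) {
  // Copy only the region of the (possibly pooled) ArrayBuffer that this Buffer views
  return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);
}
const ab = bufferToArrayBuffer(Buffer.from([1, 2, 3]));
console.log(ab instanceof ArrayBuffer, ab.byteLength); // true 3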
A Buffer is a view for an ArrayBuffer. You can get to the internal wrapped ArrayBuffer using the buffer property.
This is shared memory, and no copying is required.
const arrayBuffer = theBuffer.buffer
If you want a copy of the data, create another Buffer from the original Buffer (NOT from the wrapped ArrayBuffer), and then reference its wrapped ArrayBuffer. (Note that if the copy is small, Node may allocate it from its shared pool, so the wrapped ArrayBuffer can be larger than the copied data; see the pool discussion in the other answers if you need an exactly sized ArrayBuffer.)
const newArrayBuffer = Buffer.from(theBuffer).buffer
For reference, going the other direction, from an ArrayBuffer to a Buffer:
const arrayBuffer = getArrayBuffer()
const sharedBuffer = Buffer.from(arrayBuffer)
const copiedBuffer = Buffer.from(sharedBuffer)
const copiedArrayBuffer = copiedBuffer.buffer
You can think of an ArrayBuffer as a typed Buffer.
An ArrayBuffer therefore always needs a type (the so-called "Array Buffer View"). Typically, the Array Buffer View has a type of Uint8Array or Uint16Array.
There is a good article from Renato Mangini on converting between an ArrayBuffer and a String.
I have summarized the essential parts in a code example (for Node.js). It also shows how to convert between the typed ArrayBuffer and the untyped Buffer.
function stringToArrayBuffer(string) {
const arrayBuffer = new ArrayBuffer(string.length);
const arrayBufferView = new Uint8Array(arrayBuffer);
for (let i = 0; i < string.length; i++) {
arrayBufferView[i] = string.charCodeAt(i);
}
return arrayBuffer;
}
function arrayBufferToString(buffer) {
return String.fromCharCode.apply(null, new Uint8Array(buffer));
}
const helloWorld = stringToArrayBuffer('Hello, World!'); // "ArrayBuffer" (Uint8Array)
const encodedString = Buffer.from(helloWorld).toString('base64'); // "string"
const decodedBuffer = Buffer.from(encodedString, 'base64'); // "Buffer"
const decodedArrayBuffer = new Uint8Array(decodedBuffer).buffer; // "ArrayBuffer" (Uint8Array)
console.log(arrayBufferToString(decodedArrayBuffer)); // prints "Hello, World!"
Now there is a very useful npm package for this: buffer
https://github.com/feross/buffer
It tries to provide an API that is 100% identical to Node's Buffer API and allows you to:
- convert typed array to buffer: https://github.com/feross/buffer#convert-typed-array-to-buffer
- convert buffer to typed array: https://github.com/feross/buffer#convert-buffer-to-typed-array
and a few more.
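A minimal sketch of those two conversions with that package (my own example; require('buffer/') with the trailing slash loads the npm package explicitly instead of Node's built-in module, as described in its README):
const { Buffer } = require('buffer/'); // the npm package, not the built-in
// Typed array -> Buffer: copies when given the array, shares memory when given its .buffer
const arr = new Uint8Array([1, 2, 3]);
const copied = Buffer.from(arr);
const shared = Buffer.from(arr.buffer);
// Buffer -> typed array: a Buffer is a Uint8Array, so a view over its bytes is enough
const view = new Uint8Array(copied.buffer, copied.byteOffset, copied.byteLength);
console.log(view); // Uint8Array [ 1, 2, 3 ]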
Surprisingly, in my case (Electron: sending a buffer to the renderer as an ArrayBuffer), this simply worked:
function bufferToArrayBuffer(buffer: Buffer): ArrayBuffer {
return buffer.buffer as ArrayBuffer;
}
and is like 100x faster than
function bufferToArrayBuffer(buf) {
const ab = new ArrayBuffer(buf.length);
const view = new Uint8Array(ab);
for (let i = 0; i < buf.length; ++i) {
view[i] = buf[i];
}
return ab;
}
I tried the above for a Float64Array and it just did not work.
I ended up realising that the data really needed to be read "into" the view in correct chunks. This means reading 8 bytes at a time from the source Buffer.
Anyway, this is what I ended up with...
var buff = Buffer.from("40100000000000004014000000000000", "hex");
var ab = new ArrayBuffer(buff.length);
var view = new Float64Array(ab);
var viewIndex = 0;
for (var bufferIndex = 0; bufferIndex < buff.length; bufferIndex += 8) {
view[viewIndex] = buff.readDoubleLE(bufferIndex);
viewIndex++;
}
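For comparison, a zero-copy alternative that yields the same values as the readDoubleLE loop on a little-endian machine (my own sketch; it relies on the Buffer's byteOffset being 8-byte aligned, which holds for Node's pooled allocations):
var buff2 = Buffer.from("40100000000000004014000000000000", "hex");
// Interpret the same bytes directly as 64-bit floats, without copying
var doubles = new Float64Array(buff2.buffer, buff2.byteOffset, buff2.length / 8);
console.log(doubles.length); // 2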
Buffer.read* methods are all slow, also. – Maguire
This Proxy will expose the buffer as any of the TypedArrays, without any copy:
https://www.npmjs.com/package/node-buffer-as-typedarray
It only works on LE, but it can easily be ported to BE. Also, I never got around to actually testing how efficient this is.
NodeJS, at one point (I think it was v0.6.x), had ArrayBuffer support. I created a small library for base64 encoding and decoding here, but since updating to v0.7, the tests (on NodeJS) fail. I'm thinking of creating something that normalizes this, but till then, I suppose Node's native Buffer should be used.
I have already updated my Node to version 5.0.0, and it works with this:
function toArrayBuffer(buffer){
var array = [];
var json = buffer.toJSON();
var list = json.data
for(var key in list){
array.push(fixcode(list[key].toString(16)))
}
function fixcode(key){
if(key.length==1){
return '0'+key.toUpperCase()
}else{
return key.toUpperCase()
}
}
return array
}
I use it to check my vhd disk image.
toArrayBuffer(new Buffer([1,2,3])) -> ['01', '02', '03'] — this is returning an array of strings, not integers/bytes. – Maguire
Plain numbers take more space in an Array. So to store many floats you need a Float32Array, where each takes 4 bytes. And if you want quick serialization of those floats to a file, you need a Buffer, as serializing to JSON takes ages. – Fractionate
I have const file = fs.readFileSync(filePath);, so how do I use this?... 30 minutes later: wow, I miss C. – Readability