I have a bytearray of length 2*n:
a1 a2 b1 b2 c1 c2
I need to swap the byte order (endianness) within each 2-byte word, producing:
a2 a1 b2 b1 c2 c1
Currently I use the following approach, but it is very slow for my task:
converted = bytearray()
for i in range(len(chunk) // 2):
    converted += bytearray([chunk[i*2 + 1], chunk[i*2]])
Is it possible to swap the endianness of a bytearray by calling some system/libc function?
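Not a libc call, but close: the standard-library `array` module can do the swap in C via `array.byteswap()`, which should be much faster than a Python-level loop. A minimal sketch:

```python
import array

chunk = bytearray([1, 2, 3, 4, 5, 6])  # a1 a2 b1 b2 c1 c2

# Interpret the buffer as unsigned 16-bit words, then byteswap()
# reverses the bytes of every word in C code.
a = array.array('H', chunk)
a.byteswap()
converted = bytearray(a.tobytes())
print(converted)  # bytearray(b'\x02\x01\x04\x03\x06\x05')
```

Note that `'H'` assumes the length of `chunk` is even, matching the 2*n precondition in the question.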
OK, thanks to all. I timed some of the suggestions:
import timeit
test = [
"""
converted = bytearray([])
for i in range(int(len(chunk)/2)):
    converted += bytearray([ chunk[i*2+1], chunk[i*2] ])
""",
"""
for i in range(0, len(chunk), 2):
    chunk[i], chunk[i+1] = chunk[i+1], chunk[i]
""",
"""
byteswapped = bytearray([0]) * len(chunk)
byteswapped[0::2] = chunk[1::2]
byteswapped[1::2] = chunk[0::2]
""",
"""
chunk[0::2], chunk[1::2] = chunk[1::2], chunk[0::2]
"""
]
for t in test:
    print(timeit.timeit(t, setup='chunk = bytearray([1]*10)'))
and the result is:
$ python ti.py
11.6219761372
2.61883187294
3.47194099426
1.66421198845
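For reference, the fastest variant (in-place slice assignment with a step of 2) applied to the example bytes from the question:

```python
chunk = bytearray([0xA1, 0xA2, 0xB1, 0xB2, 0xC1, 0xC2])  # a1 a2 b1 b2 c1 c2

# Swap the even- and odd-indexed bytes in place using extended slices;
# the slicing work happens in C, not in a Python loop.
chunk[0::2], chunk[1::2] = chunk[1::2], chunk[0::2]
print(chunk.hex())  # a2a1b2b1c2c1
```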
So the in-place slice assignment with a step of 2 is now the fastest. Thanks also to Mr. F for the detailed explanation; I have not tried his numpy byteswapped-array approach yet.
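A sketch of the numpy route, assuming it uses `ndarray.byteswap()` (requires numpy, and like the other approaches assumes an even-length buffer):

```python
import numpy as np

chunk = bytearray([0xA1, 0xA2, 0xB1, 0xB2, 0xC1, 0xC2])  # a1 a2 b1 b2 c1 c2

# View the buffer as 16-bit words; byteswap() reverses the bytes of
# every word in compiled code and returns a swapped copy.
words = np.frombuffer(bytes(chunk), dtype=np.uint16)
swapped = bytearray(words.byteswap().tobytes())
print(swapped.hex())  # a2a1b2b1c2c1
```

For large buffers this should easily beat any explicit Python loop, at the cost of the numpy dependency.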