I have used TensorFlow's `DecodeJpeg` to read images while training a model. In order to use the same method on an Android device, I compiled TensorFlow for Android with Bazel, with the `DecodeJpeg` op included.
I tried reading the same image on my desktop, an x86_64 machine running Windows. I ran the `DecodeJpeg` method on the image with default values, with `dct_method` set to `''`, `INTEGER_FAST`, and `INTEGER_ACCURATE`.
I did the same on an arm64 device for the same image, but the pixel values were significantly different under the same settings. For instance, at (100, 100, 1) the value on the desktop is 213, while it is 204 on arm64.
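Roughly, this is what I'm running on the desktop (a minimal sketch in TF eager mode; `test.jpg` is a placeholder for the image below):

```python
import tensorflow as tf

IMAGE_PATH = "test.jpg"  # placeholder for the actual test image

data = tf.io.read_file(IMAGE_PATH)

# Decode with each dct_method and inspect the pixel at row 100, col 100.
for method in ("", "INTEGER_FAST", "INTEGER_ACCURATE"):
    img = tf.io.decode_jpeg(data, channels=3, dct_method=method)
    print(method or "(default)", img[100, 100].numpy())
```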
How can I make sure that the pixel values are the same across these two devices?

[![This is the image I have used][1]][1]
Update:
In GIMP, at (100, 100) the pixel values are (179, 203, 190).
With `dct_method` set to `INTEGER_FAST`, the value at (100, 100) on x86_64 is (171, 213, 165); on arm it is (180, 204, 191).
With `dct_method` set to `INTEGER_ACCURATE`, the value at (100, 100) on x86_64 is (170, 212, 164); on arm it is (179, 203, 190).
It is (170, 212, 164) with PIL, which is what I get with `cv2.imread` as well.
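The PIL/OpenCV cross-check, for reference (sketch; note that `cv2.imread` returns BGR, so the channels are reversed before comparing):

```python
import cv2
from PIL import Image

IMAGE_PATH = "test.jpg"  # placeholder for the same image

# PIL: getpixel takes (x, y) and returns an RGB tuple.
print(Image.open(IMAGE_PATH).getpixel((100, 100)))

# OpenCV: imread returns BGR indexed as [row, col]; reverse to RGB.
print(tuple(cv2.imread(IMAGE_PATH)[100, 100][::-1]))
```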
These are the results for both `dct` methods. On both x86_64 and arm, I'm using default values for `ratio` and `fancy_upscaling`, and `channels` is set to `3`.
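With those defaults spelled out explicitly, the call I'm comparing on both machines is equivalent to this sketch (`ratio=1` and `fancy_upscaling=True` are the documented defaults):

```python
import tensorflow as tf

data = tf.io.read_file("test.jpg")  # placeholder path
img = tf.io.decode_jpeg(
    data,
    channels=3,            # decode to RGB
    ratio=1,               # default: no downscaling during decode
    fancy_upscaling=True,  # default: smoother chroma upsampling
    dct_method="INTEGER_ACCURATE",
)
```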