How to set an image as input for TensorFlow Lite in C++?

I am trying to move our TensorFlow model from the Python + Keras version to TensorFlow Lite with C++ on an embedded platform.

It looks like I don't know how to properly set the input for the interpreter.

Input shape should be (1, 224, 224, 3).
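For reference, the shape a given .tflite model actually expects can be queried at runtime; a minimal sketch, assuming an interpreter built as in the code below:

// Inspect the input tensor's metadata (available once the interpreter is built).
const TfLiteTensor* input =
    interpreter_stage1->tensor(interpreter_stage1->inputs()[0]);
for (int d = 0; d < input->dims->size; ++d)
    printf("dim %d = %d\n", d, input->dims->data[d]);  // expect 1, 224, 224, 3
printf("type = %d\n", input->type);  // e.g. kTfLiteUInt8 vs. kTfLiteFloat32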

As input I am taking an image with OpenCV and converting it to RGB with CV_BGR2RGB.


std::unique_ptr<tflite::FlatBufferModel> model_stage1 =
    tflite::FlatBufferModel::BuildFromFile("model1.tflite");
TFLITE_MINIMAL_CHECK(model_stage1 != nullptr);

// Build the interpreter
tflite::ops::builtin::BuiltinOpResolver resolver_stage1;
std::unique_ptr<Interpreter> interpreter_stage1;
tflite::InterpreterBuilder(*model_stage1, resolver_stage1)(&interpreter_stage1);
TFLITE_MINIMAL_CHECK(interpreter_stage1 != nullptr);

cv::Mat cvimg = cv::imread(imagefile);
if (cvimg.data == NULL) {
    printf("=== IMAGE READ ERROR ===\n");
    return 0;
}

cv::cvtColor(cvimg, cvimg, CV_BGR2RGB);

uchar* input_1 = interpreter_stage1->typed_input_tensor<uchar>(0);

memcpy( ... );

I am having trouble setting up the memcpy properly for this uchar type.

When I do it like this, I get a segfault at runtime:

memcpy(input_1, cvimg.data, cvimg.total() * cvimg.elemSize());

How should I properly fill the input in this case?
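For comparison, the usual end-to-end sequence looks like the sketch below (names mirror the snippet above; the resize to 224x224 is an assumption based on the stated input shape, and a uint8 input tensor is assumed; a float model would need a convertTo first). Note the AllocateTensors() call, which is absent above: TF Lite allocates the input buffer only at that point, so writing through typed_input_tensor() before calling it is a classic cause of this kind of segfault.

std::unique_ptr<tflite::FlatBufferModel> model_stage1 =
    tflite::FlatBufferModel::BuildFromFile("model1.tflite");
TFLITE_MINIMAL_CHECK(model_stage1 != nullptr);

tflite::ops::builtin::BuiltinOpResolver resolver_stage1;
std::unique_ptr<Interpreter> interpreter_stage1;
tflite::InterpreterBuilder(*model_stage1, resolver_stage1)(&interpreter_stage1);
TFLITE_MINIMAL_CHECK(interpreter_stage1 != nullptr);

// Allocate all tensor buffers; without this, typed_input_tensor()
// points at unallocated memory.
TFLITE_MINIMAL_CHECK(interpreter_stage1->AllocateTensors() == kTfLiteOk);

cv::Mat cvimg = cv::imread(imagefile);
cv::cvtColor(cvimg, cvimg, CV_BGR2RGB);
cv::resize(cvimg, cvimg, cv::Size(224, 224));  // match (1, 224, 224, 3)

// Copy only when the layouts really match: contiguous uchar HWC/RGB data
// of exactly the size the tensor expects.
const TfLiteTensor* input =
    interpreter_stage1->tensor(interpreter_stage1->inputs()[0]);
if (cvimg.isContinuous() && cvimg.total() * cvimg.elemSize() == input->bytes) {
    memcpy(interpreter_stage1->typed_input_tensor<uchar>(0), cvimg.data,
           input->bytes);
}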

Crore asked 20/5, 2019 at 14:16. Comments (5):
Instead of memcpy, why not loop over all values in cvimg and set them like interpreter_stage1->typed_input_tensor<uchar>(0)[i] = x, where i is the index and x the value? – Midvictorian
OK, but how should the RGB pixels be placed in memory? { (0,0)R (0,0)G (0,0)B (0,1)R (0,1)G (0,1)B ... (n,m)R (n,m)G (n,m)B }? – Crore
The answer is in your question: since you use cv::cvtColor(cvimg, cvimg, CV_BGR2RGB);, your cvimg contains them in RGB order, just as in your previous comment. – Midvictorian
Thank you. This puts the data into the input_1 array properly, but I am not sure it is correct. No matter what data I load there, I get the same answers. – Crore
I think it might actually be common to put images in a 1-dimensional array like this. OpenCV does it, and FLTK is another library I know that does it. – Midvictorian

To convert my comments into an answer: memcpy might not be the right approach here. OpenCV stores an image as a 1-dimensional array of per-pixel color values, in RGB order here (or BGR, or yet another channel combination, depending on the conversion applied). It is possible to iterate over these RGB chunks like this:

for (auto it = cvimg.begin<cv::Vec3b>(); it != cvimg.end<cv::Vec3b>(); ++it) {
    const cv::Vec3b& rgb = *it;
    // now rgb[0] is the red value, rgb[1] green and rgb[2] blue (after BGR2RGB).
}

And writing a value to a TensorFlow Lite typed_input_tensor is done like this, where i is the index and x the value being assigned:

interpreter->typed_input_tensor<uchar>(0)[i] = x;

So, for a CV_8UC3 image, the loop could look like this:

for (int i = 0; i < cvimg.rows * cvimg.cols; ++i) {
    const cv::Vec3b& rgb = cvimg.at<cv::Vec3b>(i / cvimg.cols, i % cvimg.cols);
    interpreter->typed_input_tensor<uchar>(0)[3 * i + 0] = rgb[0];
    interpreter->typed_input_tensor<uchar>(0)[3 * i + 1] = rgb[1];
    interpreter->typed_input_tensor<uchar>(0)[3 * i + 2] = rgb[2];
}
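As a small refinement (same logic, just fetching the tensor pointer once instead of on every assignment; still assumes a CV_8UC3 Mat):

uchar* input = interpreter->typed_input_tensor<uchar>(0);
int i = 0;
for (auto it = cvimg.begin<cv::Vec3b>(); it != cvimg.end<cv::Vec3b>(); ++it, ++i) {
    input[3 * i + 0] = (*it)[0];  // R
    input[3 * i + 1] = (*it)[1];  // G
    input[3 * i + 2] = (*it)[2];  // B
}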
Midvictorian answered 22/5, 2019 at 7:45. Comment (1):
OK, currently I cannot assign anything to interpreter->typed_input_tensor<uchar>(0)[i]. Every attempted assignment gives me a segfault. – Crore

This is how you can do it, at least for the single-channel case. This assumes that the OpenCV buffer is contiguous. So, in this case, the tensor dims are (1, x, y, 1).

int input = interpreter->inputs()[0];  // index of the model's input tensor
float* out = interpreter->typed_tensor<float>(input);
TfLiteType input_type = interpreter->tensor(input)->type;

// Convert to float and apply mean/std normalization.
img.convertTo(img, CV_32F, 255.f / input_std);
cv::subtract(img, cv::Scalar(input_mean / input_std), img);

float* in = img.ptr<float>(0);
memcpy(out, in, img.rows * img.cols * sizeof(float));

OpenCV version: 4.3.0, TF Lite version: 2.0.0

Midvictorian's approach is also correct. Pick whichever suits your programming style; the memcpy version, however, will be comparatively faster.
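For the 3-channel (1, 224, 224, 3) input from the question, the same pattern can be extended; a sketch, assuming a continuous CV_8UC3 RGB image already resized to 224x224, a float input tensor, and the same input_mean/input_std normalization constants as above (the 1.f scale is an assumption; the right factor depends on the input range the model expects):

// img: RGB, 224x224, CV_8UC3 at this point.
img.convertTo(img, CV_32F, 1.f / input_std);  // float depth, still 3 channels
cv::subtract(img, cv::Scalar::all(input_mean / input_std), img);
float* out = interpreter->typed_input_tensor<float>(0);
memcpy(out, img.ptr<float>(0), img.total() * img.elemSize());  // 224*224*3 floats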

Gulgee answered 10/5, 2020 at 8:24. Comment (1):
That's very helpful. I am new to C++ and was hoping you could help me out here: stackoverflow.com/questions/71392038/tflite-inference-using-cpp – Brotherinlaw
