Can TensorFlow models run object detection on OpenGL framebuffers/textures without reading back to the CPU?

Background:

  1. I have a pipeline that uses a series of OpenGL shaders to process webcam footage and locate a feature (it is always the same feature, and there is only ever one feature I am looking for).
  2. The only thing read back to the CPU is the 4 coordinates of the bounding box.

I am interested in training an object detection NN to see if I can get better performance and accuracy when extracting my feature from the footage.


The Question:

Is it possible to run the trained model in the OpenGL environment (using a framebuffer/texture as input) without copying textures back and forth between the CPU and GPU?

Example:

  1. Run my preprocessing OpenGL shader programs
  2. Run the feature detection model (trained with TensorFlow), using the framebuffer as input
  3. Extract the bounding box coordinates
Walther asked 1/3, 2018 at 4:52
You can share (frame)buffers between OpenCL and OpenGL, and TensorFlow has some experimental support for OpenCL, but I guess it's a lot of work to get it working. – Retrogressive
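
The buffer-sharing route the comment mentions relies on the `cl_khr_gl_sharing` OpenCL extension, which lets an OpenCL context wrap an existing GL texture without a CPU round trip. The outline below is a sketch only, not a runnable program: it assumes a live OpenGL context on Linux/GLX, omits all error handling, and the handles `platform`, `device`, `queue`, and `gl_tex` are placeholders you would obtain from your own setup code.

```c
/* Sketch: zero-copy hand-off of a GL texture to OpenCL (cl_khr_gl_sharing).
 * Assumes a current GL context; platform, device, queue, gl_tex are
 * placeholders. Error handling omitted for brevity. */
#include <CL/cl.h>
#include <CL/cl_gl.h>

cl_int err;

/* Create an OpenCL context that shares state with the current GL context
 * (property keys shown are the Linux/GLX variants). */
cl_context_properties props[] = {
    CL_GL_CONTEXT_KHR,   (cl_context_properties)glXGetCurrentContext(),
    CL_GLX_DISPLAY_KHR,  (cl_context_properties)glXGetCurrentDisplay(),
    CL_CONTEXT_PLATFORM, (cl_context_properties)platform,
    0
};
cl_context ctx = clCreateContext(props, 1, &device, NULL, NULL, &err);

/* Wrap the GL texture that holds the preprocessed frame -- no copy is made. */
cl_mem img = clCreateFromGLTexture(ctx, CL_MEM_READ_ONLY,
                                   GL_TEXTURE_2D, /*miplevel=*/0,
                                   gl_tex, &err);

/* Per frame: finish GL work, then hand the texture to OpenCL. */
glFinish();
clEnqueueAcquireGLObjects(queue, 1, &img, 0, NULL, NULL);
/* ... run the OpenCL-backed inference with img as network input ... */
clEnqueueReleaseGLObjects(queue, 1, &img, 0, NULL, NULL);
clFinish(queue);
```

The catch, as the comment notes, is the inference step itself: stock TensorFlow's GPU path is CUDA, and its OpenCL support (via SYCL) was experimental at the time, so wiring the wrapped image into the network is where most of the work lies.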
