Background:
- I have a pipeline that uses a series of OpenGL shaders to process webcam footage and locate a feature (it is always the same feature, and it is the only feature I am ever looking for).
- The only thing read back to the CPU is the 4 coordinates of the bounding box.
- I am interested in training an object-detection neural network to see whether I can get better performance/accuracy at extracting this feature from the footage.
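For context, the readback at the end of my current pipeline is tiny; it looks roughly like the sketch below (names such as `result_fbo` are illustrative, and it assumes the final shader pass writes the box into a 1x1 RGBA32F attachment on that FBO):

```
# Illustrative sketch only: assumes a current OpenGL context and that the
# final shader pass has written (x_min, y_min, x_max, y_max) into a 1x1
# RGBA32F colour attachment on `result_fbo` (a hypothetical name).
import numpy as np
from OpenGL.GL import (
    glBindFramebuffer, glReadPixels,
    GL_FRAMEBUFFER, GL_RGBA, GL_FLOAT,
)

def read_bounding_box(result_fbo):
    """Read back the only data that ever leaves the GPU: four floats."""
    glBindFramebuffer(GL_FRAMEBUFFER, result_fbo)
    pixel = glReadPixels(0, 0, 1, 1, GL_RGBA, GL_FLOAT)  # 16 bytes total
    x_min, y_min, x_max, y_max = np.asarray(pixel, dtype=np.float32).ravel()
    return x_min, y_min, x_max, y_max
```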
The Question:
Is it possible to run the trained model inside the OpenGL environment (using a framebuffer/texture as the input) without shuttling the textures back and forth between the GPU and CPU?
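To make the question concrete, this is the round trip I am trying to avoid: a full-frame readback just to hand the preprocessed texture to TensorFlow, which would then copy it straight back onto the GPU (sketch only; `preprocess_fbo` and `model` are hypothetical names):

```
# What I want to avoid (illustrative): pulling the whole preprocessed frame
# back to host memory just so TensorFlow can re-upload it to the GPU.
import numpy as np
import tensorflow as tf
from OpenGL.GL import (
    glBindFramebuffer, glReadPixels,
    GL_FRAMEBUFFER, GL_RGBA, GL_UNSIGNED_BYTE,
)

def detect_with_readback(preprocess_fbo, width, height, model):
    glBindFramebuffer(GL_FRAMEBUFFER, preprocess_fbo)
    raw = glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE)    # GPU -> CPU
    frame = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, 4)
    batch = tf.cast(tf.convert_to_tensor(frame[None, ..., :3]), tf.float32) / 255.0  # CPU -> GPU again
    return model(batch)  # hypothetical TensorFlow detector returning box coordinates
```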
Example:
- Run my preprocessing OpenGL shader programs
- Run the feature-detection model (trained with TensorFlow), using the framebuffer as its input
- Extract the bounding-box coordinates
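Roughly, the per-frame loop I am hoping for would look like the sketch below, where `run_preprocess_shaders` and `texture_to_tensor_on_gpu` are hypothetical placeholders; the second one is exactly the hand-off I don't know how to do without a CPU round trip:

```
# Hypothetical sketch of the per-frame loop I would like to end up with.
# `run_preprocess_shaders` and `texture_to_tensor_on_gpu` are placeholders;
# the latter is the GL-texture-to-TensorFlow hand-off this question is about.
def frame_loop(webcam_texture, model):
    feature_texture = run_preprocess_shaders(webcam_texture)  # existing GL passes, stays on GPU
    batch = texture_to_tensor_on_gpu(feature_texture)         # <-- the missing piece
    boxes = model(batch)                                       # TensorFlow detector, also on GPU
    x_min, y_min, x_max, y_max = boxes.numpy().ravel()[:4]     # only 4 floats come back to the CPU
    return x_min, y_min, x_max, y_max
```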