How to map Frame coordinates to Overlay in vision
I have a feeling this question has been answered many times already, but I cannot figure it out. I was basically following this little tutorial about Mobile Vision and completed it. After that I tried to detect objects myself, starting with a color blob and drawing its borders.

The idea is to start in the middle of the frame (deliberately holding the object in the middle of the camera) and detect the edges of that object by its color. It works as long as I hold the phone in landscape mode (Frame.ROTATION_0). As soon as I'm in portrait mode (Frame.ROTATION_90), the bounding Rect gets drawn rotated, so an object that is taller than it is wide gets drawn wider than it is tall, and also a bit off.

The docs say that a detector always delivers coordinates relative to the unrotated, upright frame, so how am I supposed to calculate the bounding rectangle coordinates relative to its rotation? I don't think it matters much, but here is how I find the color Rect:

public Rect getBounds(Frame frame) {
  int w = frame.getMetadata().getWidth();
  int h = frame.getMetadata().getHeight();
  int scale = 50;
  int scaleX = w / scale;  // horizontal step size
  int scaleY = h / scale;  // vertical step size
  int midX = w / 2;
  int midY = h / 2;
  float ratio = 10.0f;     // allowed hue deviation from the center pixel
  Rect mBoundary = new Rect();
  float[] hsv = new float[3];
  Bitmap bmp = frame.getBitmap();

  // Sample the reference hue at the center of the frame.
  int px = bmp.getPixel(midX, midY);
  Color.colorToHSV(px, hsv);
  Log.d(TAG, "detect: mid hsv: " + hsv[0] + ", " + hsv[1] + ", " + hsv[2]);
  float hue = hsv[0];
  float nhue;
  int x, y;

  // Walk right from the center until the hue leaves the tolerance band.
  for (x = midX + scaleX; x < w; x += scaleX) {
      px = bmp.getPixel(x, midY);
      Color.colorToHSV(px, hsv);
      nhue = hsv[0];
      if (nhue <= (hue + ratio) && nhue >= (hue - ratio)) {
          mBoundary.right = x;
      } else {
          break;
      }
  }

  // Walk left.
  for (x = midX - scaleX; x >= 0; x -= scaleX) {
      px = bmp.getPixel(x, midY);
      Color.colorToHSV(px, hsv);
      nhue = hsv[0];
      if (nhue <= (hue + ratio) && nhue >= (hue - ratio)) {
          mBoundary.left = x;
      } else {
          break;
      }
  }

  // Walk down.
  for (y = midY + scaleY; y < h; y += scaleY) {
      px = bmp.getPixel(midX, y);
      Color.colorToHSV(px, hsv);
      nhue = hsv[0];
      if (nhue <= (hue + ratio) && nhue >= (hue - ratio)) {
          mBoundary.bottom = y;
      } else {
          break;
      }
  }

  // Walk up.
  for (y = midY - scaleY; y >= 0; y -= scaleY) {
      px = bmp.getPixel(midX, y);
      Color.colorToHSV(px, hsv);
      nhue = hsv[0];
      if (nhue <= (hue + ratio) && nhue >= (hue - ratio)) {
          mBoundary.top = y;
      } else {
          break;
      }
  }
  return mBoundary;
}
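(Aside: one subtlety in the tolerance check above is that hue is circular, 0–360°, so a red object near hue 0 can fail the linear `hue ± ratio` test even when the colors match. A minimal sketch of a circular-distance helper that could replace the comparison — the `hueDistance` name is my own, not from any API:)

```java
public class HueDistance {
    // Returns the shortest angular distance between two hues, in degrees.
    // E.g. 355 and 5 are 10 degrees apart, not 350.
    static float hueDistance(float a, float b) {
        float d = Math.abs(a - b) % 360f;
        return d > 180f ? 360f - d : d;
    }
}
```

With this helper, the loop condition would become `hueDistance(nhue, hue) <= ratio`.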

Then I simply draw it on the canvas in the GraphicOverlay.Graphic draw method. I already use the transformX/transformY methods on the Graphic and thought they would also account for the rotation. I also use the CameraSource and CameraSourcePreview classes provided with the samples.
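Since the detector reports coordinates relative to the unrotated frame, one option is to remap the Rect into upright coordinates before handing it to the overlay. The following is a minimal, self-contained sketch of that remapping, not the Mobile Vision API itself: the nested `Rect` and the `ROTATION_*` constants are local stand-ins for `android.graphics.Rect` and `Frame.ROTATION_*`, and it assumes ROTATION_90 means "rotate the frame 90° clockwise to make it upright", so the direction should be verified against the actual device.

```java
// Hedged sketch: remap a bounding Rect from unrotated frame coordinates
// into upright coordinates for a given rotation, assuming ROTATION_90
// means "rotate 90 degrees clockwise to become upright".
public class RotateBounds {

    // Minimal stand-in for android.graphics.Rect, so this compiles off-device.
    static class Rect {
        int left, top, right, bottom;
        Rect(int l, int t, int r, int b) { left = l; top = t; right = r; bottom = b; }
    }

    // Stand-ins for Frame.ROTATION_* (the real constants count quadrants).
    static final int ROTATION_0 = 0;
    static final int ROTATION_90 = 1;
    static final int ROTATION_180 = 2;
    static final int ROTATION_270 = 3;

    // frameW/frameH are the dimensions reported by frame.getMetadata().
    static Rect toUpright(Rect r, int rotation, int frameW, int frameH) {
        switch (rotation) {
            case ROTATION_90:   // (x, y) -> (frameH - y, x): width and height swap
                return new Rect(frameH - r.bottom, r.left, frameH - r.top, r.right);
            case ROTATION_180:  // (x, y) -> (frameW - x, frameH - y)
                return new Rect(frameW - r.right, frameH - r.bottom,
                                frameW - r.left, frameH - r.top);
            case ROTATION_270:  // (x, y) -> (y, frameW - x)
                return new Rect(r.top, frameW - r.right, r.bottom, frameW - r.left);
            default:            // ROTATION_0: already upright
                return new Rect(r.left, r.top, r.right, r.bottom);
        }
    }
}
```

For a 640×480 frame in portrait, a Rect of (100, 50, 300, 150) would come out as (330, 100, 430, 300): the 200-wide, 100-tall box becomes 100 wide and 200 tall, which is exactly the width/height swap described above. The remapped Rect can then be passed through transformX/transformY for the usual preview scaling and mirroring.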

Hoagy answered 14/9, 2016 at 6:49 Comment(2)
Have you called this method to indicate the extent and camera direction? github.com/googlesamples/android-vision/blob/master/… — Lonlona
No, but what has a facing to do with a rotation? — Hoagy

© 2022 - 2024 — McMap. All rights reserved.