You have to create your own detector that extends com.google.android.gms.vision.Detector&lt;Face&gt; and delegates to the stock FaceDetector. In your MainActivity or FaceTrackerActivity class (in the Google tracking sample), create your version of the detector as follows:
import java.io.ByteArrayOutputStream;

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.util.SparseArray;

import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;

class MyFaceDetector extends Detector<Face> {
    private final Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        int width = frame.getMetadata().getWidth();
        int height = frame.getMetadata().getHeight();
        // Convert the NV21 frame data to a JPEG, then decode it to a Bitmap.
        YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(),
                ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, byteArrayOutputStream);
        byte[] jpegArray = byteArrayOutputStream.toByteArray();
        Bitmap tempBitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
        // tempBitmap is a Bitmap version of the frame that is currently being
        // captured by your CameraSource in real time, so you can process it
        // for your own purposes by adding extra code here.
        return mDelegate.detect(frame);
    }

    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
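If you need that bitmap elsewhere in your app, one option is to hand it off through a callback. This is just a sketch; FrameListener, onFrameBitmap() and setFrameListener() are names invented here, not part of the Mobile Vision API:
// Hypothetical callback interface for consuming the per-frame bitmap.
public interface FrameListener {
    void onFrameBitmap(Bitmap bitmap);
}

// In MyFaceDetector: hold a listener and invoke it from detect(),
// right after tempBitmap has been decoded.
private FrameListener mFrameListener;

public void setFrameListener(FrameListener listener) {
    mFrameListener = listener;
}

// ...inside detect():
// if (mFrameListener != null) {
//     mFrameListener.onFrameBitmap(tempBitmap);
// }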
Then you have to join your own detector with the CameraSource by modifying your createCameraSource() method as follows:
private void createCameraSource() {
    Context context = getApplicationContext();

    // You can use your own settings for the detector.
    FaceDetector detector = new FaceDetector.Builder(context)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .setProminentFaceOnly(true)
            .build();

    // This is how you wrap the google.vision detector in MyFaceDetector.
    MyFaceDetector myFaceDetector = new MyFaceDetector(detector);

    // You can use your own processor.
    myFaceDetector.setProcessor(
            new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                    .build());

    if (!myFaceDetector.isOperational()) {
        Log.w(TAG, "Face detector dependencies are not yet available.");
    }

    // You can use your own settings for the CameraSource.
    mCameraSource = new CameraSource.Builder(context, myFaceDetector)
            .setRequestedPreviewSize(640, 480)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(30.0f)
            .build();
}
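After that, the camera is started the same way as in the Google tracking sample. A minimal sketch, assuming the sample's mPreview (CameraSourcePreview) and mGraphicOverlay (GraphicOverlay) views:
private void startCameraSource() {
    if (mCameraSource != null) {
        try {
            // CameraSourcePreview.start() opens the camera and begins
            // feeding frames to myFaceDetector through the CameraSource.
            mPreview.start(mCameraSource, mGraphicOverlay);
        } catch (IOException e) {
            Log.e(TAG, "Unable to start camera source.", e);
            mCameraSource.release();
            mCameraSource = null;
        }
    }
}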
FaceDetector has SparseArray&lt;Face&gt; detect(Frame var1). Once you get hold of a Frame object you have getBitmap(), which sounds very promising. Unfortunately that class is final, which means intercepting the Frame would only be possible using reflection. – Plash
(Note that getBitmap() only returns a bitmap for frames that were built from one; frames coming from a CameraSource are built from NV21 byte buffers, which is why the code above converts the YUV data to a JPEG and decodes it.)