Use TensorFlow model with Swift for iOS
We are trying to use TensorFlow Face Mesh model within our iOS app. Model details: https://drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view.

I followed the official TensorFlow tutorial for setting up the model: https://firebase.google.com/docs/ml-kit/ios/use-custom-models and also printed the model's input and output shapes using the Python script in the tutorial, and got this:

INPUT
[  1 192 192   3]
<class 'numpy.float32'>
OUTPUT
[   1    1    1 1404]
<class 'numpy.float32'> 

At this point, I'm pretty lost trying to understand what those numbers mean, and how to pass the input image and get the output face mesh points using the model Interpreter. Here's my Swift code so far:

import TensorFlowLite

var interpreter: Interpreter
// The Core ML delegate is only created on devices with a Neural Engine;
// CoreMLDelegate() returns nil otherwise, so fall back to the CPU.
if let coreMLDelegate = CoreMLDelegate() {
  interpreter = try Interpreter(modelPath: modelPath,
                                delegates: [coreMLDelegate])
} else {
  interpreter = try Interpreter(modelPath: modelPath)
}

Any help will be highly appreciated!

Benzyl answered 19/8, 2020 at 11:55 Comment(0)

What those numbers mean completely depends on the model you're using. It's unrelated to both TensorFlow and Core ML.

The output is a 1x1x1x1404 tensor, which is effectively a flat list of 1404 numbers. How to interpret those numbers depends on what the model was designed to do.
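For this particular model, the MediaPipe Face Mesh documentation describes the output as 468 facial landmarks with x, y, z coordinates each, and 468 × 3 = 1404 matches the tensor size. A minimal NumPy sketch of that interpretation (the zero-filled tensor stands in for the interpreter's real output):

```python
import numpy as np

# Hypothetical raw output: a (1, 1, 1, 1404) float32 tensor, as reported
# by the model inspection script. A real run would fill this from the
# interpreter's output buffer.
raw = np.zeros((1, 1, 1, 1404), dtype=np.float32)

# 1404 = 468 landmarks x 3 coordinates (x, y, z), assuming the MediaPipe
# Face Mesh layout; x and y are typically in the 192x192 input-pixel scale.
landmarks = raw.reshape(468, 3)
print(landmarks.shape)  # (468, 3)
```

The z coordinate is what lets you take depth into account, as the comment below notes.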

If you didn't design the model yourself, you'll have to find documentation for it.
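On the input side, the [1, 192, 192, 3] shape means one batch of a 192x192 RGB image as float32. A hedged preprocessing sketch; the [-1, 1] normalization is a common MediaPipe convention, not something confirmed by this model's documentation, so check the model card before relying on it:

```python
import numpy as np

def preprocess(rgb_image: np.ndarray) -> np.ndarray:
    """Turn a 192x192 RGB uint8 image into the [1, 192, 192, 3] float32
    tensor the model expects. The [-1, 1] scaling is an assumption."""
    x = rgb_image.astype(np.float32)
    x = x / 127.5 - 1.0           # scale [0, 255] -> [-1, 1]
    return x[np.newaxis, ...]     # add the leading batch dimension

batch = preprocess(np.zeros((192, 192, 3), dtype=np.uint8))
print(batch.shape, batch.dtype)  # (1, 192, 192, 3) float32
```

In Swift you would build the equivalent flat float buffer and copy it into the interpreter's input tensor before invoking it.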

Aindrea answered 20/8, 2020 at 8:58 Comment(2)
Hey Matthijs! Thank you for commenting. I added the model description here: drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view – Benzyl
The output should describe a 3D face mesh (I think it gives you X, Y, Z points of the face landmarks, so you can take depth into account if you want) – Benzyl

© 2022 - 2024 — McMap. All rights reserved.