We are trying to use the TensorFlow Lite Face Mesh model within our iOS app. Model details: https://drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view.
I followed the official TensorFlow tutorial for setting up the model: https://firebase.google.com/docs/ml-kit/ios/use-custom-models and also printed the model's input/output shapes using the Python script from the tutorial, which gave me this:
INPUT
[ 1 192 192 3]
<class 'numpy.float32'>
OUTPUT
[ 1 1 1 1404]
<class 'numpy.float32'>
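From reading around, my guess is that the input shape means a batch of one 192×192 RGB image as float32, and the 1404 output floats are 468 face landmarks × 3 coordinates (x, y, z) — but I'm not sure. Here's a minimal sketch of how I'd map the flat output back into points, assuming that layout (the `Landmark` type is just my own):

```swift
// Assuming the 1404 output floats are 468 consecutive (x, y, z)
// triples -- this layout is my guess, not confirmed by the docs.
struct Landmark {
    let x: Float
    let y: Float
    let z: Float
}

func landmarks(from output: [Float]) -> [Landmark] {
    precondition(output.count == 1404, "Expected 468 * 3 floats")
    return stride(from: 0, to: output.count, by: 3).map { i in
        Landmark(x: output[i], y: output[i + 1], z: output[i + 2])
    }
}
```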
At this point, I'm pretty lost trying to understand what those numbers mean, and how to pass the input image and get the output face mesh points using the model Interpreter. Here's my Swift code so far:
let coreMLDelegate = CoreMLDelegate()
var interpreter: Interpreter

// The Core ML delegate is only created on devices with a Neural Engine;
// otherwise fall back to the default CPU interpreter.
if let delegate = coreMLDelegate {
    interpreter = try Interpreter(modelPath: modelPath, delegates: [delegate])
} else {
    interpreter = try Interpreter(modelPath: modelPath)
}
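Based on the TensorFlow Lite Swift API docs, here's a sketch of what I think the inference flow should look like once the interpreter is created — I haven't confirmed this is correct. `rgbData` is assumed to be the input image already resized to 192×192 and converted to float32 RGB bytes (1 × 192 × 192 × 3 × 4 bytes); the preprocessing itself is the part I'm unsure about:

```swift
import TensorFlowLite

// My attempt at the inference flow, assuming `rgbData` already holds
// the preprocessed 1 x 192 x 192 x 3 float32 image bytes.
func runInference(interpreter: Interpreter, rgbData: Data) throws -> [Float] {
    try interpreter.allocateTensors()
    // Copy the preprocessed image into the model's single input tensor.
    try interpreter.copy(rgbData, toInputAt: 0)
    try interpreter.invoke()
    // Read back the output tensor and reinterpret its raw bytes as
    // Float32 values (should be the 1404 landmark coordinates).
    let outputTensor = try interpreter.output(at: 0)
    return outputTensor.data.withUnsafeBytes {
        Array($0.bindMemory(to: Float.self))
    }
}
```

Is this roughly the right approach, and how should the input image be normalized (e.g. to [0, 1] or [-1, 1])?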
Any help will be highly appreciated!