Is there any way to detect feet in 3D using ARCore?

We are working on an app that detects feet and then puts a 3D model of shoes on them, a kind of AR shoe try-on experience. We are working in Java on Android. Is there any library or framework that does this?

Tinner answered 5/3, 2020 at 9:58 Comment(4)
There may not be a direct library, but you can start off with TensorFlow Lite. Did you find any solution? I am looking for the same. – Evangelina
I started with TensorFlow, but there is no proper dataset available to detect feet from a first-person point of view. The only way I see to do this is to start from scratch. – Tinner
How's it going? Do you have any tips? I'm trying to learn and this would be an awesome project! @BilalKhan – Schnapp
We actually didn't have the resources to pull something like this off, so we had to drop the project. – Tinner

Achieving what you are asking for requires two things:

  1. Detecting the feet (AI)
  2. Placing an object over the feet (AR)

Ideally you can begin with PoseNet for point 1, which is based on TensorFlow Lite, and for point 2 you can look into Sceneform, which is based on ARCore. You may not find a complete package as a library; you will have to build your own according to your needs.
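As a rough Java sketch of point 1 (not a complete solution), the snippet below loads a keypoint-detection TFLite model and runs it on a camera frame. The model file name, input size, and output shape are assumptions for illustration only; no ready-made foot-keypoint model ships with TensorFlow Lite, so you would have to train or source one yourself.

```java
import android.content.Context;
import android.graphics.Bitmap;

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.support.common.FileUtil;
import org.tensorflow.lite.support.image.ImageProcessor;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.support.image.ops.ResizeOp;

import java.io.IOException;

/** Runs a keypoint-detection TFLite model on a camera frame (illustrative sketch). */
public class FootKeypointDetector {

    // Placeholder asset name: there is no off-the-shelf foot model in TFLite.
    private static final String MODEL_FILE = "foot_keypoints.tflite";
    private static final int INPUT_SIZE = 257;   // assumption: model expects 257x257 RGB input
    private static final int NUM_KEYPOINTS = 17; // assumption: PoseNet-style keypoint count

    private final Interpreter interpreter;
    private final ImageProcessor preprocessor;

    public FootKeypointDetector(Context context) throws IOException {
        interpreter = new Interpreter(FileUtil.loadMappedFile(context, MODEL_FILE));
        preprocessor = new ImageProcessor.Builder()
                // Depending on the model, a NormalizeOp/CastOp may also be required here.
                .add(new ResizeOp(INPUT_SIZE, INPUT_SIZE, ResizeOp.ResizeMethod.BILINEAR))
                .build();
    }

    /** Returns one row per keypoint: {x, y, score}, in normalized image coordinates. */
    public float[][] detect(Bitmap cameraFrame) {
        TensorImage input = preprocessor.process(TensorImage.fromBitmap(cameraFrame));
        // Assumption: the model outputs a single tensor of shape [1][NUM_KEYPOINTS][3].
        float[][][] output = new float[1][NUM_KEYPOINTS][3];
        interpreter.run(input.getBuffer(), output);
        return output[0];
    }
}
```

For point 2, Sceneform can then anchor a ModelRenderable at 3D positions derived from these keypoints. Note that Google has since archived Sceneform, so on recent ARCore versions you may need a community-maintained fork.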

Evangelina answered 14/3, 2020 at 11:4 Comment(1)
@dr-andro PoseNet does not work for a first-person perspective; PoseNet needs the full body to produce results. – Len

There are several ready-made solutions for detecting parts of your body (legs and feet included). But if you are looking for a solution to try a 3D model of a shoe on your foot, you will likely face an R&D phase.

The angle from which the user holds the device is unusual: the camera is not pointed at a person standing in front of it, but down at the user's own feet. So you need to:

  1. Identify the feet with all control points (not only detecting whether a left or right foot is in the frame, but also understanding how it is positioned).
  2. Render a 3D model of the shoe, using the control points from step 1 to position it correctly on the user's feet.

Step 2 is not a big deal: using, for example, native ARKit (the iOS way), you can render a 3D model and even position it by control points. The main problem is step 1: finding the feet with all control points at an atypical angle, when you scan your own feet rather than a whole person standing in front of you.

So you may need an ML engineer (probably working in Python) to create an ML model first. Then use your native AR framework (ARKit on iOS, for example) or Unity (if you go the non-native route) to position and render the model on the detected feet.
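For the Android side that the original question targets, here is a hedged Java sketch of the positioning half: it takes one 2D keypoint produced by whatever foot-detection model you end up training, uses ARCore's hit test to lift it into 3D, and attaches a Sceneform renderable there. The class and method names are illustrative, not from an existing library, and this does nothing to solve the detection problem described above.

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import com.google.ar.sceneform.AnchorNode;
import com.google.ar.sceneform.Scene;
import com.google.ar.sceneform.rendering.ModelRenderable;

import java.util.List;

/** Lifts a 2D foot keypoint into 3D and parents a shoe model there (illustrative sketch). */
public class ShoePlacer {

    private final Scene scene;               // e.g. arFragment.getArSceneView().getScene()
    private final ModelRenderable shoeModel; // built elsewhere via ModelRenderable.builder()

    public ShoePlacer(Scene scene, ModelRenderable shoeModel) {
        this.scene = scene;
        this.shoeModel = shoeModel;
    }

    /**
     * @param frame current ARCore frame (e.g. from arSceneView.getArFrame())
     * @param xPx   keypoint x coordinate in screen pixels, from your detection model
     * @param yPx   keypoint y coordinate in screen pixels, from your detection model
     */
    public void placeShoeAt(Frame frame, float xPx, float yPx) {
        // Hit-test the keypoint against the geometry ARCore has detected so far.
        List<HitResult> hits = frame.hitTest(xPx, yPx);
        if (hits.isEmpty()) {
            return; // no 3D estimate for this pixel yet
        }
        // Anchor the shoe at the closest hit and attach it to the scene graph.
        Anchor anchor = hits.get(0).createAnchor();
        AnchorNode anchorNode = new AnchorNode(anchor);
        anchorNode.setRenderable(shoeModel);
        anchorNode.setParent(scene);
    }
}
```

A production try-on would also need the foot's orientation and scale, which a single hit test does not give you; that is exactly the R&D part described above.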

Selmner answered 21/9, 2023 at 10:56 Comment(0)

You can use the TFLite model provided by MediaPipe (its Objectron solution includes a shoe model). It gives the coordinates of a cuboid along with the cuboid's centre. The coordinates are in (x, y), so you have to map those points to 3D yourself.

You can also try the MediaPipe API, which gives the bounding box as well as the orientation and transformation of the bounding box.
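Once you have converted that pose into world space, applying it to a shoe node on the Android side is straightforward. A minimal Sceneform sketch, assuming the cuboid's centre and orientation have already been transformed into world coordinates (that conversion depends on the exact MediaPipe model output and the camera intrinsics):

```java
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.math.Quaternion;
import com.google.ar.sceneform.math.Vector3;

/** Applies a detected cuboid pose to the shoe node (illustrative sketch). */
public final class CuboidPoseApplier {

    private CuboidPoseApplier() {}

    /**
     * @param shoeNode node that already holds the shoe ModelRenderable
     * @param centre   cuboid centre in world coordinates (metres)
     * @param rotation cuboid orientation as a world-space quaternion
     */
    public static void apply(Node shoeNode, Vector3 centre, Quaternion rotation) {
        shoeNode.setWorldPosition(centre);
        shoeNode.setWorldRotation(rotation);
        // The cuboid's extents could be applied similarly via setWorldScale(...)
        // to match the rendered shoe's size to the detected foot.
    }
}
```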

Tollhouse answered 4/12, 2023 at 7:15 Comment(0)
