Is it possible to track objects in ARKit like in Vuforia?
I couldn't find any information on whether Apple's ARKit supports 3D object tracking (or even image tracking) the way Vuforia does. I don't want to place a 3D model just anywhere in the world; instead, I want to detect a specific 3D object and place AR objects in front of and on top of it.

Simple example: I want to track a specific toy car in the real world and add a spoiler on top of it in the AR scene.

Can someone tell me whether that is possible or not?

Zacek answered 29/6, 2017 at 6:3 Comment(5)
So far you can do that with ARKit only by using some realtime image processing like TensorFlow or Watson to detect the object. – Incorporation
@matloob Hasnain currently Vuforia works fine, but the customer wants to know if we could use ARKit instead. – Zacek
Yes, I know about Vuforia marker scanning. – Incorporation
So there is nothing similar in ARKit out of the box? – Zacek
I don't think so. It just gives us real-world positions with anchors for placing objects. – Incorporation

Update for iOS 12: In "ARKit 2" (aka ARKit on iOS 12 or later)...

  • Image detection is extended to image tracking, so up to four images don't just get detected once, they get updated "live" every frame even if they're moving relative to world space. So you can attach a recognizable 2D image to your toy, and have virtual AR content follow the toy around on-screen.

  • There's also object detection — in your development process you can use one ARKit app to scan a real-world 3D object and produce a "reference object" file. Then you can ship that file in your app and use it to recognize that object in the user's environment. This might fit your "toy car" case... but be aware that the 3D object recognition feature is detection, not tracking: ARKit won't follow the toy car as it moves.

See the WWDC18 talk on ARKit 2 for details.
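A minimal Swift sketch of the two ARKit 2 features described above. The asset group name "AR Resources" and the `sceneView` (an `ARSCNView`) are assumptions; the reference images and reference objects must exist in your Xcode asset catalog.

```swift
import ARKit

// ARKit 2 (iOS 12+): live image tracking plus one-shot 3D object detection.
let configuration = ARWorldTrackingConfiguration()

if let images = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                 bundle: nil) {
    configuration.detectionImages = images
    // Promote detection to per-frame tracking for up to four images.
    configuration.maximumNumberOfTrackedImages = 4
}

if let objects = ARReferenceObject.referenceObjects(inGroupNamed: "AR Resources",
                                                    bundle: nil) {
    // Detection only: ARKit reports where the object was seen,
    // but does not follow it as it moves.
    configuration.detectionObjects = objects
}

sceneView.session.run(configuration)
```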


Update for iOS 11.3: In "ARKit 1.5" (aka ARKit on iOS 11.3 or later), there's a new image detection feature in ARKit. If you have a known image (like a poster or playing card or some such), you can include it in your Xcode project and/or load it from elsewhere as an ARReferenceImage and put it in your session configuration's detectionImages array. Then, when ARKit finds those images in the user environment, it gives you ARImageAnchor objects telling you where they are.

Note that this isn't quite like the "marker-based AR" you see from some other toolkits — ARKit finds a reference image only once, it doesn't tell you how it's moving over time. So it's good for "triggering" AR content experiences (like those promos where you point your phone at a Star Wars poster in a store and a character walks out of it), but not for, say, AR board games where virtual characters stay attached to game pieces.
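For the "triggering" use case above, a sketch of reacting to a found image, assuming your class is the scene view's `ARSCNViewDelegate` (the highlight plane is just an illustration):

```swift
import ARKit
import SceneKit

// ARKit 1.5 (iOS 11.3+): called once when a detection image is found.
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    // Attach content to `node` to place it on the detected image.
    let size = imageAnchor.referenceImage.physicalSize
    let plane = SCNPlane(width: size.width, height: size.height)
    plane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.4)
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = -.pi / 2  // lay the plane flat on the image
    node.addChildNode(planeNode)
}
```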


Otherwise...

It is possible to access the camera image in each captured ARFrame, so if you have other software that can help with such tasks you could use them in conjunction with ARKit. For example, the Vision framework (also new in iOS 11) offers several of the building blocks for such tasks — you can detect barcodes and find their four corners, and after manually identifying a region of interest in an image, track its movement between frames.
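A rough sketch of that combination, assuming your class is the session's `ARSessionDelegate` and that `initialBox` (a normalized `CGRect` region of interest you identified yourself) is supplied elsewhere:

```swift
import ARKit
import Vision

// Feed ARKit's camera image into Vision (iOS 11+) to track a
// manually chosen region of interest across frames.
let handler = VNSequenceRequestHandler()
var lastObservation = VNDetectedObjectObservation(boundingBox: initialBox)

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation)
    request.trackingLevel = .accurate
    try? handler.perform([request], on: frame.capturedImage)
    if let observation = request.results?.first as? VNDetectedObjectObservation {
        // boundingBox is the region's new position in this frame.
        lastObservation = observation
    }
}
```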

Tedra answered 30/6, 2017 at 3:40 Comment(1)
So does this mean that you can find an object, wait for some period, find the object again, and then get the distance, allowing you to calculate the speed and/or direction of the object? I would love to see an app which could tell you how fast an object is moving. Living close to a road, I sometimes see cars going far too fast... – Lindsaylindsey

Note: this is definitely a hack, but it adds persistent image tracking to the ARKit Unity plugin. The same idea can be applied to the native lib as well.

Download ARKit 1.5 beta https://bitbucket.org/Unity-Technologies/unity-arkit-plugin/branch/spring2018_update

In ARSessionNative.mm, add this block of code:

extern "C" void SessionRemoveAllAnchors(void* nativeSession) {
    UnityARSession* session = (__bridge UnityARSession*)nativeSession;
    // currentFrame.anchors is a snapshot, so removing while iterating is safe.
    for (ARAnchor* a in session->_session.currentFrame.anchors)
    {
        [session->_session removeAnchor:a];
    }
}

In UnityARSessionNativeInterface.cs, add this code under SessionRemoveUserAnchor:

[DllImport("__Internal")]
private static extern void SessionRemoveAllAnchors (IntPtr nativeSession);

And this under RemoveUserAnchor:

public void RemoveAllAnchors() {
    #if !UNITY_EDITOR
    SessionRemoveAllAnchors(m_NativeARSession);
    #endif
}

Then call this from an Update or Coroutine:

UnityARSessionNativeInterface.GetARSessionNativeInterface().RemoveAllAnchors ();

When the anchor is removed, the image can be recognized once again. It's not super smooth but it definitely works.
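For reference, the native (non-Unity) equivalent of this hack is a sketch like the following: removing an `ARImageAnchor` lets ARKit 1.5 detect the same reference image again, giving crude "re-detection" in place of true tracking.

```swift
import ARKit

// Remove existing image anchors so the session can re-detect the image.
func rearmImageDetection(in session: ARSession) {
    guard let anchors = session.currentFrame?.anchors else { return }
    for anchor in anchors where anchor is ARImageAnchor {
        session.remove(anchor: anchor)
    }
}
```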

Hope you found this useful! Let me know if you need further assistance.

Telegraphese answered 4/3, 2018 at 22:36 Comment(0)

ARKit 2.0 for iOS 12 supports not only camera (world) tracking but also:

  • 3D Object Tracking
  • Face Tracking
  • Image Tracking
  • Image Detection
  • 3D Object Scanning

ARKit 3.0 for iOS 13 supports even more sophisticated features:

  • People Occlusion (RGBAZ realtime compositing)
  • Body Tracking (a.k.a. Motion Capture)
  • Multiple Face Tracking (up to 3 faces)
  • Simultaneous Front and Back Camera Tracking
  • Collaborative Sessions
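As one concrete example from the ARKit 3 list above, a sketch of opting into People Occlusion (the `sceneView` is an assumed `ARSCNView`); the capability check is required because person segmentation needs an A12 chip or newer:

```swift
import ARKit

// ARKit 3 (iOS 13+): composite virtual content behind detected people.
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
sceneView.session.run(configuration)
```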
Spasm answered 1/9, 2018 at 11:4 Comment(3)
Hi ARGeo, I am a newbie to ARKit. I want to detect an object (table, bottle, a special shape, or anything) from a camera/gallery image with ARKit. I have seen some examples which provide a predefined image in the project and can then detect whether that exact image exists. But I don't want to do that; I want it to learn so that it could detect any table/bottle. Do I need to use Core ML with ARKit? Any libraries/help for Swift? Thanks. – Strohben
@JamshedAlam, publish it as a question, please. Comments are not for this. – Spasm
I got my answer after studying some questions and blogs. I will post if I run into further confusion. Thanks anyway. – Strohben
