Area learning after Google Tango
Area learning was a key feature of Google Tango which allowed a Tango device to locate itself in a known environment and save/load a map file (ADF).

Since then, Google has announced that it is shutting down Tango and putting its effort into ARCore, but I don't see anything related to area learning in the ARCore documentation.

What is the future of area learning on Android? Is it possible to achieve it on a non-Tango, ARCore-enabled device?

Heraclid answered 30/1, 2018 at 19:31 Comment(2)
Somewhat related is this post on ARCore: medium.com/super-ventures-blog/… (this article links to another article on ARKit which is interesting as well). Do realize that Google doesn't announce future features. I suspect that future Android versions may have a manifest feature indicating ARCore compliance. – Trimester
Thanks for the link, it's very interesting. The author mentions map pre-loading, so I assume area learning should be possible but is not available yet. – Heraclid
Currently, Tango's area learning is not supported by ARCore, and ARCore's offerings are not nearly as functional. First, Tango was able to take precise measurements of its surroundings, whereas ARCore uses mathematical models to make approximations. ARCore's modeling is nowhere near competitive with Tango's measurement capabilities; at the moment it appears to model only certain flat surfaces. [1]
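To give a feel for what "modeling flat surfaces from approximations" means, here is a toy sketch (not ARCore's actual algorithm, and all names are my own): fitting a plane z = a·x + b·y + c to a cloud of tracked feature points by least squares. Real plane detection also clusters points and rejects outliers; this only shows the fitting step.

```python
# Toy plane fit: z = a*x + b*y + c over (x, y, z) feature points.
# Solves the 3x3 normal equations with Cramer's rule (pure Python).

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(m, v):
    """Solve the 3x3 system m @ x = v by Cramer's rule."""
    d = det3(m)
    out = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = v[r]
        out.append(det3(mi) / d)
    return out

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c; returns (a, b, c)."""
    sxx = sxy = syy = sx = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y; n += 1.0
        sxz += x * z; syz += y * z; sz += z
    m = [[sxx, sxy, sx],
         [sxy, syy, sy],
         [sx,  sy,  n ]]
    return solve3(m, [sxz, syz, sz])

# Synthetic feature points lying on the plane z = 0.1*x - 0.2*y + 1.5
pts = [(x, y, 0.1 * x - 0.2 * y + 1.5) for x in range(5) for y in range(5)]
a, b, c = fit_plane(pts)  # recovers roughly (0.1, -0.2, 1.5)
```

With noisy points the fit is only approximate, which is exactly why the detected planes drift compared with Tango's depth-sensor measurements.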

Second, area learning on Tango allowed a program to load previously captured ADF files, but ARCore does not currently support this -- meaning the developer has to hard-code the initial starting position. [2]

Google is working on a Visual Positioning Service that would live in the cloud and allow a client to compare local point maps with ground-truth point maps to determine indoor position [3]. I suspect that this functionality will only work reliably if the original point map is generated using a rig with a depth sensor (i.e., not in your own house with your smartphone), although mobile visual SLAM has had some success. This also seems like a perfect task for deep learning, so there might be robust solutions on the horizon. [4]
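The relocalization idea behind such a service can be sketched in miniature. This is a deliberately simplified toy (my own names, not any Google API): real systems match visual features and solve for a full 6-DoF pose, whereas here we assume the point correspondences are already known and the device differs from the map only by a 2D translation, so aligning centroids recovers the position.

```python
# Toy relocalization: recover a device's position by comparing an
# observed landmark map against a stored ground-truth map, assuming
# known correspondences and a pure 2D translation.

def estimate_offset(ground_truth, observed):
    """Translation mapping observed points onto the ground-truth map,
    i.e. the device's position in map coordinates."""
    n = len(ground_truth)
    gx = sum(p[0] for p in ground_truth) / n
    gy = sum(p[1] for p in ground_truth) / n
    ox = sum(p[0] for p in observed) / n
    oy = sum(p[1] for p in observed) / n
    return (gx - ox, gy - oy)

# Stored "ground truth" landmarks (map frame) ...
map_points = [(0.0, 0.0), (2.0, 1.0), (4.0, 3.0), (1.0, 5.0)]
# ... and the same landmarks as seen by a device standing at (3, -2):
seen = [(x - 3.0, y + 2.0) for x, y in map_points]

dx, dy = estimate_offset(map_points, seen)  # ~ (3.0, -2.0)
```

Everything hard about a real VPS (finding the correspondences robustly, handling rotation and scale, noise, and outliers) is elided here; the sketch only shows why a ground-truth map makes indoor positioning possible at all.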

[1] ARCore official docs https://developers.google.com/ar/discover/concepts#environmental_understanding

[2] ARCore, ARKit: Augmented Reality for everyone, everywhere! https://www.cologne-intelligence.de/blog/arcore-arkit-augmented-reality-for-everyone-everywhere/

[3] Google 'Visual Positioning Service' AR Tracking in Action https://www.youtube.com/watch?v=L6-KF0HPbS8

[4] Announcing the Matterport3D Research Dataset. https://matterport.com/blog/2017/09/20/announcing-matterport3d-research-dataset/

Oriane answered 8/2, 2018 at 23:17 Comment(0)
The Google Developers channel on YouTube now hosts a set of Google ARCore videos.

These videos teach users how to create shared AR experiences across Android and iOS devices and how to build apps using the new APIs revealed in the Google Keynote: Cloud Anchors, Augmented Images, Augmented Faces and Sceneform. You'll come away understanding how to implement them, how they work in each environment, and what opportunities they unlock for your users.


Hope this helps.

Carniola answered 7/4, 2019 at 6:10 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.