Leap Motion point cloud

How can we access the point cloud in the Leap Motion API? One feature that led me to purchase it was the point cloud demo from their promo video, but I can't seem to locate documentation regarding it and user replies on the forums seem mixed. Am I just missing something?

I'm looking to use the Leap Motion as a sort of cheap 3D scanner.

Firebrat answered 30/7, 2013 at 5:25 Comment(0)

That demo was clearly a mockup that simulated a 3-D model of the human hand, not actual point cloud data. You can tell by the fact that it displayed points that could not possibly have been read by the sensor, due to obstruction.

orion78fr points to one forum post on this, but the transcript of an interview with the founders provides more information straight from the source:

  1. Can you please allow access to cloud points in SDK?

David: So I think sometimes people have a misperception as to really how things work in our hardware. It’s very different from other things like the Kinect, and in normal device operation we have very different priorities than most other technologies. Our priority is precision, small movements, very low latency, very low CPU usage - so in order to do that we will often be making sacrifices that make what the device is doing completely not applicable to what I think you’re getting at, which is 3D scanning.

What we’re working on are sort of alternative device modes that will let you use it for those sorts of purposes, but that’s not what it was originally built for. You know, it’s our goal to let it be able to do those things, and the hardware can do many things. But our priority right now is of course human computer interaction, which we think is really the missing component in technology, and that’s our core passion.

Michael: We really believe in trying to squeeze every ounce of optimization and performance out of the devices for the purpose they were built for. So in this case the Leap today is intended to be a great human computer interface. And we have made thousands of little optimizations along the way to make it better that might sacrifice things in the process that might be useful for things like 3D scanning objects. Those are intentional decisions, but they don’t mean that we think 3D scanning isn’t exciting and isn’t a good use case. There will be other things we build as a company in the future, and other devices that might be able to do both, or maybe there will be two different devices: one that is fully optimized for 3D scanning, and one that continues to be optimized and as great as it can be at tracking fingers and hands.

If we haven’t done a good job communicating that the device isn’t about 3D scanning or isn’t going to be able to 3D scan, that’s unfortunate and it’s a mistake on our part - but that’s something that we’ve had to sacrifice. The good news is that those sacrifices have made the main device really exceptional at tracking hands and fingers.

I have developed with the Leap Motion Controller as well as several other 3-D scanning systems, and from what I've seen I seriously doubt that we're ever going to get point cloud data out of the currently shipping hardware. If we do, the fidelity will be far below what we see for gross finger and hand tracking from that device.

There are some low-cost alternatives for 3-D scanning that have started to emerge. SoftKinetic has their DepthSense 325 camera for $250 (effectively the same hardware as the Creative Gesture Camera, which is only $150 right now). The DS 325 is a time-of-flight IR camera that gives you a 320x240 point cloud map of the 3-D space in front of it. In my tests it worked well with opaque materials, but anything with a little gloss or shininess gave it trouble.

The PrimeSense Carmine 1.09 ($200) uses structured light to get point cloud data in front of it, an advancement of the technology they supplied for the original Kinect. It has a lower effective spatial resolution than the SoftKinetic cameras, but it seems to produce less depth noise and to work on a wider variety of materials.

The DUO was also a promising project, but unfortunately its Kickstarter campaign failed. It used stereoscopic imaging with an IR source across a couple of PS3 Eye cameras to return a point cloud. They may restart that project at some point in the future.

While the Leap may not do what you want, it looks like more and more devices are coming out in the consumer price range to enable 3-D scanning.

Democratize answered 30/7, 2013 at 15:38 Comment(7)
Do you know how well the DepthSense or PrimeSense cameras function outdoors or in bright light, including IR? Thanks - Bevins
@Bevins - Both struggle with strong light sources that contain IR in the range they sense. I believe the ToF IR cameras have the most problems with this, as I think the Carmine sensor has an overall stronger IR signal within the structured light dots it projects. This might be why that sensor showed less depth noise than the DepthSense cameras in my tests. I haven't done a lot of work with them outdoors, though. It seems like a non-IR stereoscopic 3-D approach would work best under those conditions. - Democratize
I voted your answer down, because this is clearly not a mock-up: youtube.com/watch?v=MYgsAMKLu7s#t=40s - Lordling
@Lordling - What isn't, the hand display portion of that? That is still a simulation of the hand, not actual point cloud data. Again, note that it is displaying points that it cannot possibly see due to obstructed vision. That's not real. It's most likely a representation of their computational model for the hand and arm, reconstructed from the relatively low fidelity input they get from their sensors. You get impressive results from the Leap Motion controller for what it's designed to do - track a few important characteristics of hands and tools - but it's not giving you point clouds. - Democratize
@BradLarson Watch here: the data is jaggy but certainly "point-cloudish". It is not a depth picture, which is fascinating because the point cloud seems to contain data that would normally be occluded. There is more to see here: youtube.com/watch?v=353p4IozW-4 youtube.com/watch?v=vukBCXLG_u0 - Lordling
@Lordling - Yeah, that's clearly from their computational model of the hand. It's going to be a little noisy, given the input data, but it's only going to be useful for modeling the objects they've tuned it around (hands, fingers, tools) and not for generalized point cloud imaging. You can get the raw stereo images from their cameras, but at that point you're better served with higher-res imagers like those in the DUO or a manually built stereo rig. You're going to need to do the stereo matching yourself to generate your depth image. - Democratize
@BradLarson I know what you mean, but at least they have a point cloud or depth image to reconstruct from, even if the reconstruction is only a greedy one. There must be some point cloud or depth image from stereo matching that they started reconstructing from. - Lordling

See this link

It says that yes, the Leap Motion can theoretically produce a point cloud, and it was temporarily part of the visualiser during the beta, but no, you can't access it using the Leap Motion APIs right now.

It may appear in the future, but it's not a priority for the Leap Motion team.

Canthus answered 30/7, 2013 at 14:19 Comment(1)
One can see the point cloud data here: youtube.com/watch?v=MYgsAMKLu7s#t=40s - Lordling

With Leap Motion SDK 2.x you can at least access the raw stereo camera images. In my experience this is a convenient substitute for many of the tasks the point cloud data was being asked for, which is why I mention it here, even though it does not expose the point cloud the driver generates internally to extract the pointer metadata. It does give you what you need to generate your own point cloud yourself, which is why I think it is strongly related to the question. A rough sketch of that pipeline follows.
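For illustration, here is a minimal sketch of grabbing the two raw IR images and running them through OpenCV stereo block matching, assuming the SDK 2.x Python bindings (Leap.py) plus NumPy and OpenCV. The to_array helper, the connection-polling loop, and the block-matching parameters are my own illustrative choices, and the disparity-to-depth step is left as a comment because it requires calibration:

    import ctypes
    import time

    import cv2
    import numpy as np
    import Leap  # SDK 2.x Python bindings

    controller = Leap.Controller()
    # Raw camera images are disabled by default; request them explicitly.
    controller.set_policy(Leap.Controller.POLICY_IMAGES)

    # Give the service a moment to connect before polling for a frame.
    while not controller.is_connected:
        time.sleep(0.1)

    frame = controller.frame()
    left, right = frame.images[0], frame.images[1]

    def to_array(image):
        # Wrap the raw 8-bit IR brightness buffer in a NumPy array.
        buf = (ctypes.c_ubyte * (image.width * image.height)).from_address(
            int(image.data_pointer))
        return np.ctypeslib.as_array(buf).reshape(image.height, image.width)

    if left.is_valid and right.is_valid:
        # Block-matching disparity between the two IR views
        # (numDisparities/blockSize are illustrative, not tuned).
        stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = stereo.compute(to_array(left), to_array(right))
        # disparity -> depth would be depth = focal_length * baseline / disparity,
        # but only after undistorting both views with image.distortion (the
        # Leap optics are strongly warped) and calibrating f and the baseline.

Note that running block matching directly on the raw images is only a demo: the Leap's wide-angle optics are heavily distorted, so you would rectify both views with the per-camera distortion maps before the epipolar assumption behind StereoBM holds.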

Lordling answered 1/10, 2014 at 17:49 Comment(0)

Currently there is no access to the point cloud in the public API. But I don't think this video is a mock-up, so it should be possible: http://www.youtube.com/watch?v=MYgsAMKLu7s#t=40s

Lordling answered 26/3, 2014 at 3:21 Comment(0)

Roadtovr recently reviewed the Nimble Sense Kickstarter, which delivers point cloud data.

It’s the same technology that the Kinect 2 uses, and it’s supposed to have some advantages over the Leap Motion.

Because it’s a depth-sensing camera, you can point it top-down like the Touch+, although the product will not ship until next year.

Osterman answered 2/11, 2014 at 4:19 Comment(0)
