Generate a real-time 3D (mesh) model in Unity using Kinect

I'm currently developing an application with the initial goal of obtaining, in real time, a 3D model of the environment "seen" by a Kinect device. This information would later be used for projection mapping, but that's not an issue for the moment.

There are a couple of challenges to overcome, namely the fact that the Kinect will be mounted on a mobile platform (robot) and the model generation has to be in real-time (or close to it).

After a lot of research on this topic, I came up with several possible (?) architectures:

1) Use the depth data obtained from Kinect, convert it into a point cloud (using PCL for this step), then into a Mesh, and then export it into Unity for further work.

2) Use the depth data obtained from Kinect, convert it into a point cloud (using PCL for this step), export it into Unity and then convert it into a Mesh.

3) Use KinectFusion, which already has the option of creating a Mesh model, and (somehow) automatically load the created Mesh model into Unity.

4) Use OpenNI+ZDK (+ wrapper) to obtain the depth map and generate the Mesh directly in Unity (a rough sketch of this follows right after this list).
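
To make option 4 a bit more concrete, here is a rough sketch of what the Unity side might look like. `GetDepthFrame()` and the frame dimensions below are placeholders for whatever the OpenNI/ZDK wrapper actually exposes, and the depth-to-position conversion ignores the camera intrinsics entirely:

```csharp
using UnityEngine;

// Rough sketch: rebuild a grid mesh from one depth frame per Update().
// GetDepthFrame(), Width and Height are hypothetical stand-ins for the
// Kinect/OpenNI wrapper's real API.
public class DepthMeshBuilder : MonoBehaviour
{
    const int Width = 160;   // downsampled so the mesh stays well under Unity's 65k vertex limit
    const int Height = 120;

    MeshFilter meshFilter;

    void Start()
    {
        meshFilter = GetComponent<MeshFilter>();
        meshFilter.mesh = new Mesh();
    }

    void Update()
    {
        ushort[] depth = GetDepthFrame();                  // placeholder: raw depth in millimetres
        var vertices = new Vector3[Width * Height];
        for (int y = 0; y < Height; y++)
            for (int x = 0; x < Width; x++)
            {
                float z = depth[y * Width + x] * 0.001f;   // mm -> m, no intrinsics applied
                vertices[y * Width + x] = new Vector3(x, y, z);
            }

        // Two triangles per grid cell.
        var triangles = new int[(Width - 1) * (Height - 1) * 6];
        int t = 0;
        for (int y = 0; y < Height - 1; y++)
            for (int x = 0; x < Width - 1; x++)
            {
                int i = y * Width + x;
                triangles[t++] = i;     triangles[t++] = i + Width; triangles[t++] = i + 1;
                triangles[t++] = i + 1; triangles[t++] = i + Width; triangles[t++] = i + Width + 1;
            }

        Mesh mesh = meshFilter.mesh;
        mesh.Clear();
        mesh.vertices = vertices;
        mesh.triangles = triangles;
        mesh.RecalculateNormals();
    }

    ushort[] GetDepthFrame()
    {
        // Hypothetical: replace with a copy of the wrapper's depth buffer.
        return new ushort[Width * Height];
    }
}
```

Rebuilding ~19,000 vertices every frame like this is brute force; in practice the pixel-to-point mapping would have to use the Kinect's intrinsics, and holes in the depth map would need to be masked out.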

Quite honestly, I'm kind of lost here. My main issue is that the real-time requirement, along with being forced to integrate several software components, makes this a tricky problem. I don't know which, if any, of these solutions are viable, and information/tutorials on these issues aren't exactly abundant like they are for, say, skeleton tracking.

Any sort of help would be greatly appreciated.

Regards, Nuno

Trisect answered 10/1, 2014 at 17:53 Comment(4)
Unity does allow you to generate and assign mesh data at runtime, but it doesn't offer much assistance. I'd recommend generating the mesh data using one of those other libraries you mentioned. If your team can afford the licensing, it might be worth contacting Unity support about help writing a native plugin. – Kopp
Interesting and daunting. One idea for loading a 3D model not included inside Unity is to use asset bundles (see the sketch after these comments); hope that helps. If the Unity app will be running on a PC you will have more horsepower, but on mobile it will be difficult because point clouds are usually heavy. I would try to process and clean the data as much as possible outside Unity. – Lammond
After further research I found out a couple of things: the library I previously mentioned (PCL) does have a wrapper for Unity, but it's only for iOS applications, and loading a mesh created by KinectFusion isn't really an option since the output is very heavy (40–80 MB file size). – Trisect
I'd like to do the same thing as a component of my thesis work involving VR control systems: Unity + realtime models generated by Kinect. I can't provide much help beyond pointing you to the Kintinuous guys: cs.nuim.ie/research/vision/data/rgbd2012/citation.html. My hope was that their software would be able to build the polygon mesh in realtime and Unity could be pointed to it. I'd reached out to them 6 months ago hoping for access to their latest build, but they weren't ready to share. Maybe they are now? – Longmire
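
Regarding the asset-bundle suggestion above: bundles are built ahead of time in the editor, so they fit pre-scanned geometry rather than meshes generated on the fly, but loading one at runtime is simple. A minimal sketch using the current `AssetBundle` API (older Unity versions used `WWW.LoadFromCacheOrDownload` instead); the bundle file name and asset name here are made up:

```csharp
using UnityEngine;

// Loads a prebuilt AssetBundle from StreamingAssets and instantiates a mesh prefab from it.
// "environment.bundle" and "ScannedRoom" are placeholder names.
public class BundleMeshLoader : MonoBehaviour
{
    void Start()
    {
        string path = System.IO.Path.Combine(Application.streamingAssetsPath, "environment.bundle");
        AssetBundle bundle = AssetBundle.LoadFromFile(path);
        if (bundle == null)
        {
            Debug.LogError("Failed to load AssetBundle at " + path);
            return;
        }

        GameObject prefab = bundle.LoadAsset<GameObject>("ScannedRoom");
        Instantiate(prefab);
        bundle.Unload(false);   // release the bundle, keep the instantiated copy
    }
}
```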

Sorry, I might not be providing a solution for realtime mesh creation within Unity, but the process discussed here was interesting enough for me to reply.

In the hard science novel Memories with Maya there is a discussion of exactly such a scenario:

“Point taken,” he said. “So… Satish showed me a demo of the Quad [Quad=Drone] acquiring real-time depth and texture maps.”

“Nothing new in that,” I said. “Yeah, but look above us.” I tilted my head up. The crude shape of the Quad came into view.

“The Quad is here, but you can't see it because the FishEye [Fisheye=Kinect 2] is on it aimed straight ahead.”

“So it's mapping video texture over live geometry? Cool,” I said.

“Yeah, the breakthrough is I can freeze a frame… freeze real life as it were, step out of the scene and study it.”

“All you do is block out the live world with the cross polarizers?”

“Yeah,” he said. “It's a big deal for AYREE to be able to use such data-sets.”

“The resolution has improved,” I said.

“Good observation,” he said. “So has the range sensing. The lens optics have also been upgraded.”

“I noticed that if I turn around I don't see the live feed, just the empty street,” I said.

“Yes, of course,” he replied. “The Quad is facing the other way around. It's why I'm standing in front of you. The whole street, however, is a 3D model done by a standard laser scan taken from the top of that high tower.” Krish pointed to a building block at the far end of the street. I turned back to the live 3D view again. He walked in front of me.

“This is uber cool. Everyone looks so real.”

“Haha. You should see how cool it is when you're here in person with the Wizer on,” he said. “I'm here watching these real people pass by, only they have a mesh of themselves mapped onto them.”

“Ahhh! Yes.”

“Yeah, it's like they have living paint on them. I feel like reaching out and touching, just to feel the texture.”...


The work that you're thinking of doing in this area, and this use of a live mesh, goes far beyond projection mapping for events, for sure!

Wishing you the best on the project, and I will be following your updates. Some of the science behind the story is on www.dirrogate.com if the topic interests you. Kind Regards.

Lebensraum answered 4/2, 2014 at 8:0 Comment(0)

I would use Kinect Fusion, as it has a sample with the ability to export to .obj, which Unity supports. You can save the file automatically and import it into Unity to generate a mesh. If you have multiple Kinects, Microsoft even has a sample showing the basics of Kinect Fusion with multiple sensors. Also, since Fusion is already pre-written, there is not much code you will have to write.
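
One caveat: Unity's .obj support is an editor-time import, so pulling a freshly exported Fusion mesh into a running application needs a small runtime parser (or a third-party importer). A very stripped-down sketch that only handles `v` lines and triangulated `f` lines and ignores normals, texture coordinates and materials:

```csharp
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using UnityEngine;

// Bare-bones runtime .obj reader: vertex positions and triangular faces only.
// Real Fusion exports also contain normals and may need more robust parsing.
public static class SimpleObjLoader
{
    public static Mesh Load(string path)
    {
        var vertices = new List<Vector3>();
        var triangles = new List<int>();

        foreach (string line in File.ReadLines(path))
        {
            string[] parts = line.Trim().Split(' ');
            if (parts[0] == "v" && parts.Length >= 4)
            {
                vertices.Add(new Vector3(
                    float.Parse(parts[1], CultureInfo.InvariantCulture),
                    float.Parse(parts[2], CultureInfo.InvariantCulture),
                    float.Parse(parts[3], CultureInfo.InvariantCulture)));
            }
            else if (parts[0] == "f" && parts.Length >= 4)
            {
                // "f 1 2 3" or "f 1/1/1 2/2/2 3/3/3": keep the vertex index, make it 0-based.
                for (int i = 1; i <= 3; i++)
                    triangles.Add(int.Parse(parts[i].Split('/')[0]) - 1);
            }
        }

        var mesh = new Mesh();
        mesh.vertices = vertices.ToArray();
        mesh.triangles = triangles.ToArray();
        mesh.RecalculateNormals();
        return mesh;
    }
}
```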

Here is an example of a mesh from Fusion with one camera:

[Image: mesh generated by Kinect Fusion with a single camera]

I do want you to notice how many vertices there are though... This could cause performance problems later on.
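
On the vertex count: Unity's default 16-bit index buffers cap a single mesh at 65,535 vertices (newer Unity versions can opt into 32-bit indices), so a dense Fusion scan usually has to be decimated or split across several Mesh objects. A rough sketch of the splitting approach; the names here are mine, not from any SDK:

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class MeshSplitter
{
    // Splits a large triangle soup into several Unity meshes, each kept
    // under the 65,535-vertex limit of 16-bit index buffers.
    public static List<Mesh> SplitIntoChunks(Vector3[] vertices, int[] triangles)
    {
        const int maxVerts = 65000;                    // safety margin below 65,535
        var meshes = new List<Mesh>();
        var chunkVerts = new List<Vector3>();
        var chunkTris = new List<int>();
        var remap = new Dictionary<int, int>();        // original vertex index -> chunk index

        for (int i = 0; i < triangles.Length; i += 3)
        {
            // Start a new chunk if the next triangle could overflow this one.
            if (chunkVerts.Count > maxVerts - 3)
            {
                meshes.Add(BuildMesh(chunkVerts, chunkTris));
                chunkVerts.Clear(); chunkTris.Clear(); remap.Clear();
            }
            for (int j = 0; j < 3; j++)
            {
                int original = triangles[i + j];
                int local;
                if (!remap.TryGetValue(original, out local))
                {
                    local = chunkVerts.Count;
                    chunkVerts.Add(vertices[original]);
                    remap[original] = local;
                }
                chunkTris.Add(local);
            }
        }
        if (chunkTris.Count > 0)
            meshes.Add(BuildMesh(chunkVerts, chunkTris));
        return meshes;
    }

    static Mesh BuildMesh(List<Vector3> verts, List<int> tris)
    {
        var mesh = new Mesh();
        mesh.vertices = verts.ToArray();
        mesh.triangles = tris.ToArray();
        mesh.RecalculateNormals();
        return mesh;
    }
}
```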

Good luck!

Constantina answered 7/2, 2014 at 3:32 Comment(0)
