Hi
I'm trying to develop an app that measures an object placed on the floor, with my Kinect mounted on the ceiling looking straight down (see image 1).
I want to get the height, width and depth of any object in front of the Kinect, just like a tape measure but faster.
I tried to build it using the Kinect v2 + MS SDK + OpenCV (Emgu), but I think my method was wrong. Someone suggested I should use a point cloud library (PCL), but I couldn't find one for C#, so maybe openFrameworks is the way to go.
Hope you guys can guide me; moving from C# to openFrameworks is not a problem.
This is what my C# app does (with errors):
- Filter the depth image by a constant threshold (to remove the floor)
- Detect objects in the filtered image using OpenCV
- Get the corners of the detected object and convert them to real-world coordinates
- Measure the distance between the world-coordinate corners to get the object's width and depth
- Measure the distance between the highest corner and the floor to get the object's height
In other words, I detect objects in 2D (the flattened depth image), then convert to 3D and measure; a simplified sketch of that pipeline is below.
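This is roughly what the code looks like (a stripped-down sketch, not my exact app, assuming the Kinect v2 SDK and Emgu CV; `FloorDepthMm`, `MarginMm` and the naive frame polling are placeholder values for illustration):

```csharp
using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;
using Microsoft.Kinect;

class ObjectMeasurer
{
    const ushort FloorDepthMm = 2800;   // assumed camera-to-floor distance, measured for my setup
    const ushort MarginMm     = 50;     // ignore anything within 5 cm of the floor

    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        FrameDescription fd = sensor.DepthFrameSource.FrameDescription;
        ushort[] depthData = new ushort[fd.Width * fd.Height];

        // Grab one depth frame (naive polling, just for the sketch).
        using (DepthFrameReader reader = sensor.DepthFrameSource.OpenReader())
        {
            DepthFrame frame = null;
            while (frame == null) frame = reader.AcquireLatestFrame();
            using (frame) frame.CopyFrameDataToArray(depthData);
        }

        // 1) Threshold: keep only pixels clearly above the floor.
        Image<Gray, byte> mask = new Image<Gray, byte>(fd.Width, fd.Height);
        for (int i = 0; i < depthData.Length; i++)
        {
            ushort d = depthData[i];
            bool isObject = d > 0 && d < FloorDepthMm - MarginMm;
            mask.Data[i / fd.Width, i % fd.Width, 0] = (byte)(isObject ? 255 : 0);
        }

        // 2) Find the largest contour and its rotated bounding box.
        VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
        CvInvoke.FindContours(mask, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);
        if (contours.Size == 0) { Console.WriteLine("no object found"); return; }
        int best = 0;
        for (int i = 1; i < contours.Size; i++)
            if (CvInvoke.ContourArea(contours[i]) > CvInvoke.ContourArea(contours[best])) best = i;
        PointF[] corners = CvInvoke.MinAreaRect(contours[best]).GetVertices();

        // 3) Map the four corner pixels into camera space (metres).
        CameraSpacePoint[] world = new CameraSpacePoint[4];
        for (int i = 0; i < 4; i++)
        {
            int px = Math.Min(fd.Width - 1, Math.Max(0, (int)corners[i].X));
            int py = Math.Min(fd.Height - 1, Math.Max(0, (int)corners[i].Y));
            ushort d = depthData[py * fd.Width + px];
            DepthSpacePoint dp = new DepthSpacePoint { X = px, Y = py };
            world[i] = sensor.CoordinateMapper.MapDepthPointToCameraSpace(dp, d);
        }

        // 4) Width/depth from adjacent corners, height from the corner closest to the camera.
        double width  = Dist(world[0], world[1]);
        double depth  = Dist(world[1], world[2]);
        double height = FloorDepthMm / 1000.0 - Math.Min(Math.Min(world[0].Z, world[1].Z),
                                                         Math.Min(world[2].Z, world[3].Z));
        Console.WriteLine($"w={width:F3} m  d={depth:F3} m  h={height:F3} m");

        sensor.Close();
    }

    static double Dist(CameraSpacePoint a, CameraSpacePoint b)
    {
        double dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
        return Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }
}
```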
"The problem" im having is when i place an object away from the center of my image i get huge error like 15cm on the "width or depth " because opencv encloses sides of my object as if it were part of the top surface (see image 2 ) so instead of getting the bounding box of the surface of the object im getting the top+visible side of the object.
Any ideas?