While trying to make the robot approach a given position, I am stuck on a simple 2D depth-map scan to identify obstacles.

The Kinect provides a distance value in a 640 by 480 matrix. To create a top view of the obstacles in front of the robot, I want to take the lowest distance value in each column and add it to an obstacle list as (col, distance).

This works in principle, but the problem is that my i7 CPU spends around 7 seconds simply walking through the 640x480 array:

i = 0
for col in range(640):
    for row in range(480):
        i = i + 1   # one Python-level step per pixel

I tried to use nditer, but got no speedup. I need this to be at least 10 times faster.

Any hint would be welcome.

calamity

6 years 9 months ago

Hi Juerg

Skip some pixels. 

First, I notice that the first and last few rows/cols are full of artefacts, so it's better not to use them anyway.

If you remember the guy that made a video about using Kinect data in Processing, he skips 4 pixels on each iteration (thus using something like a 160 x 120 resolution).

In the integratedMovement service, I did some complex depth-map transformations to put the depth data into 3D coordinates, so that the points can be grouped into clusters that form distinct objects. To get a decent processing time, I need to skip 8 pixels on each iteration. I lose a lot of precision, but it's still good enough to avoid items or to get the hand into position close to an item (like +/- 1 inch, i.e. about 25 mm).
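Pixel skipping needs no explicit loop either: if the frame is a NumPy array, a strided slice does it in one step. A sketch with a made-up frame and a step of 4, as in the video mentioned above:

```python
import numpy as np

# Made-up 640 x 480 depth frame (columns along the first axis).
depth = np.random.randint(400, 4000, size=(640, 480))

# Keep every 4th pixel in both directions: 640 x 480 -> 160 x 120.
coarse = depth[::4, ::4]
print(coarse.shape)  # (160, 120)
```

The slice is a view, not a copy, so this costs essentially nothing; all later operations on `coarse` then touch 16x fewer pixels.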

Hi Christian

Thanks for your reply. Unfortunately not too many members try to help ...

I investigated some more because I could not believe that it can't be sped up, and found this solution:

I load my 640x480 depth map

There is a function nanmin() which finds the lowest value in a column. It needs NaNs for values that should be ignored, and NaNs are floats, so I convert my array to float first:

b = depth.astype(float)

Then I replace the distances below 800 mm and above 2800 mm with NaNs (2800 is where the floor starts to show; I think it should be possible to "filter out" the floor and maybe also mark abysses in the depth map, but that is something for the TODO list):

b[b < 800] = np.nan
b[b > 2800] = np.nan

The evaluation of the closest point per column is then done with this:

obstacles = np.nanmin(b, axis=1)

and I can show this on my screen with

    for index, i in enumerate(obstacles):
        if not np.isnan(i):
            # integer division so set_at gets an int pixel coordinate
            screen.set_at((index, int(i) // 10), WHITE)
 
The distance/10 is to map onto a nice range: 480 pixels = 4.8 m, which is about the range of the Kinect.
 
NOW THE BEST: this takes almost no time, 0.05 seconds on my "fast PC" to get the closest points and about 0.3 seconds for the whole path evaluation including the map update.
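Putting the pieces above together, the whole per-column scan can be written as one small function. This is a sketch: `depth` is assumed to be the 640 x 480 array with columns along the first axis, matching the `axis=1` call above, and the two threshold comparisons are combined into a single boolean mask.

```python
import warnings
import numpy as np

def closest_obstacle_per_column(depth, near=800, far=2800):
    """For each of the 640 columns, return the closest distance (mm)
    inside [near, far], or NaN if the column has no valid reading."""
    b = depth.astype(float)
    b[(b < near) | (b > far)] = np.nan   # mask too-near and floor readings
    with warnings.catch_warnings():
        # nanmin warns on all-NaN columns; NaN is the answer we want there
        warnings.simplefilter("ignore", RuntimeWarning)
        return np.nanmin(b, axis=1)

# Stand-in frame for demonstration.
depth = np.random.randint(0, 5000, size=(640, 480))
obstacles = closest_obstacle_per_column(depth)
print(obstacles.shape)  # (640,)
```

Every step is a whole-array NumPy operation, which is why the 7-second loop collapses to a few hundredths of a second.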