Using a marker, Processing, Kinect4WinSDK, nyar4psg and java.net.URL on my PC, I wrote some code that positions the InMoov in a defined position in front of a door.

As the Kinect's field of view is a bit limited, I had to fix the marker 20 cm above the door handle axis.

Unfortunately, in my case I had to do the movements myself (an omniwheel base like Robyn's is still on my Marvin's wishlist). The distance to the door is 55–60 cm and could be held more precisely with a properly positionable base.

The intention is that, with the MRL IK service, the bot will be able to open the door afterwards (be patient, calamity first has to work on that).

It might be of interest to some that the positioning commands you hear in the video come from the Processing app running on my PC and are picked up by the Acapela speech service on my bot-controlling Odroid-XU4 running MRL, by means of:

http://10.1.1.20:8888/api/services/i01.mouth/speakBlocking/"do this now"
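For anyone who wants to reproduce this, here is a minimal sketch of how such a call can be made from Processing/Java with java.net.URL (IP, port and service name as in my setup above; error handling kept to a bare minimum):

import java.net.URL;
import java.io.InputStream;

// have the bot speak a text by calling the MRL REST API over HTTP
void speakOnBot(String text) {
  try {
    // spaces are not allowed in a URL path, so encode them
    String encoded = "\"" + text.replace(" ", "%20") + "\"";
    URL url = new URL("http://10.1.1.20:8888/api/services/i01.mouth/speakBlocking/" + encoded);
    InputStream in = url.openStream();  // issuing the GET triggers the call
    in.close();
  } catch (Exception e) {
    println("MRL call failed: " + e);
  }
}

// e.g. speakOnBot("do this now");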

It is also possible to call an MRL Python function from a local or external browser, e.g.

http://10.1.1.20:8888/api/services/python/exec/about()
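The same mechanism works from the sketch, assuming about() is defined in the Python script loaded on the bot (same imports as above):

// run a snippet of code in the MRL python service on the bot
void execOnBot(String pythonCode) {
  try {
    URL url = new URL("http://10.1.1.20:8888/api/services/python/exec/" + pythonCode.replace(" ", "%20"));
    url.openStream().close();
  } catch (Exception e) {
    println("MRL call failed: " + e);
  }
}

// e.g. execOnBot("about()");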

You can find the video here:

https://www.youtube.com/watch?v=wmpB6Om-RkE

and the code here:

https://github.com/MyRobotLab/pyrobotlab/blob/master/home/juerg/findMar…

 

calamity

7 years 7 months ago

Nice work juerg!!!

 

If I understand correctly, Marvin wants to have the marker centered in the Kinect's field of view, is that right?

From that position, I think Marvin can open the door with the current state of the IK service.

Next weekend I will probably be able to physically test the IK service, as it seems to work well virtually. But the computer running my InMoov is away for a few days.

I'm sure it won't be hard to have a mobile base position itself the right way from the data you get from the Kinect.

I think we are not far from seeing Marvin open that door :)

Thanks, calamity, for your response!

Right, currently InMoov commands me to position him with the Kinect camera x-centred in front of the door, at a distance of 55 to 60 cm.
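The decision logic behind those spoken commands is roughly like the sketch below; the tolerance values are illustrative assumptions, not the exact numbers from my code linked above:

// turn the marker pose from the kinect into a spoken positioning command;
// markerX  = horizontal offset of the marker from the image centre (cm),
// distance = kinect depth reading to the marker (cm)
String positioningCommand(float markerX, float distance) {
  if (markerX < -3)  return "move left";      // +/- 3 cm tolerance is an
  if (markerX > 3)   return "move right";     // assumption for this sketch
  if (distance > 60) return "move forward";
  if (distance < 55) return "move backward";
  return "position reached";
}

// the result then goes to the bot, e.g.
// speakOnBot(positioningCommand(markerX, distance));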

I thought about my approach and think it would be worth giving the head camera a try, as the Kinect V1's depth info is not very good at close range and the camera resolution is only 640×480 in a fixed setup. Also, the nyar4psg library is mostly documented in Japanese, which makes it hard to make full use of it.

I will try to use OpenCV and the ArUco library. This will also allow the marker to tell whether the handle is to the left or the right and whether the door opens toward the bot or away from it. It would also allow printing markers for different doors.
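A sketch of that idea with OpenCV's Java bindings and the aruco contrib module (the detectMarkers call is the OpenCV 3.x/4.x contrib API, which has moved around between versions, and the bit layout of the marker id is a made-up convention of mine):

import org.opencv.aruco.Aruco;
import org.opencv.aruco.Dictionary;
import org.opencv.core.Mat;
import java.util.ArrayList;
import java.util.List;

// door properties encoded in the marker id itself
class DoorInfo {
  boolean handleLeft;      // handle on the left (true) or right (false)
  boolean opensTowardBot;  // door swings toward the bot or away from it
}

// detect an ArUco marker in a camera frame and decode the door properties
DoorInfo detectDoor(Mat frame) {
  Dictionary dict = Aruco.getPredefinedDictionary(Aruco.DICT_4X4_50);
  List<Mat> corners = new ArrayList<Mat>();
  Mat ids = new Mat();
  Aruco.detectMarkers(frame, dict, corners, ids);
  if (ids.rows() == 0) return null;      // no marker visible
  int id = (int) ids.get(0, 0)[0];
  DoorInfo door = new DoorInfo();
  door.handleLeft = (id & 1) != 0;       // bit 0: handle side
  door.opensTowardBot = (id & 2) != 0;   // bit 1: opening direction
  return door;                           // higher bits could index the door
}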

I imagine it will be a challenge to get the fingers in between the door and the handle and to get feedback about the actual position. Maybe a mechanism like the one shown in Alan Timm's InMoov blog could help: as I understand it, the Arduino would know its current position, and that could be compared to the wanted position to maybe give a "second try"?
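In code, that "second try" could be as simple as the loop below; moveTo() and readPotPosition() are placeholders for whatever the Arduino/MRL side actually offers:

// hypothetical retry: command a position, read back the pot value reported
// by the Arduino, and try once more if the servo did not get there
boolean moveWithRetry(int wantedDeg, int toleranceDeg) {
  for (int attempt = 0; attempt < 2; attempt++) {
    moveTo(wantedDeg);               // placeholder for the real move call
    int actual = readPotPosition();  // placeholder for the pot feedback
    if (abs(actual - wantedDeg) <= toleranceDeg) return true;
  }
  return false;  // still off target after the second try
}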

Nice to hear that you are still so optimistic about this task!

Hope to reach you through this:

I never can figure out how to contact a member of InMoov or MyRobotLab :-(

Any help with collision detection (e.g. the hand hitting the body)?

I'm also trying to use a point on the servo board to feed back the internal pot position; first tries looked promising, but the devil, as always, lurks around the corner.

juerg  

calamity

5 years 5 months ago

In reply to by juerg

Hi juerg

You successfully reached me :)

 

The collision detection I'm talking about is part of the IntegratedMovement service.

How does it work?

It uses the inverse kinematics engine to know the position of each joint of each 'arm' of the robot. Each joint is also described as a geometric shape (currently I use only cylinder shapes, for simplicity). In each loop of the IntegratedMovement (IM) engine, the closest distance between the shapes representing the joints is computed. When there is movement, the computation is also done with the expected position of each joint a short time ahead (in order to give some time to respond before the collision actually happens). If the distance between any two joints drops below 0, the IM engine will make a move away from the collision before resuming the movement toward its goal.
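For readers curious about the geometry: with cylinder shapes, the test boils down to the closest distance between the two axis segments minus the two radii. A minimal sketch of that check (my own illustration of the idea, using the standard segment-segment distance algorithm, not the actual IM code):

// closest distance between segments p1-q1 and p2-q2
// (standard algorithm, cf. Ericson, "Real-Time Collision Detection")
float segSegDistance(PVector p1, PVector q1, PVector p2, PVector q2) {
  PVector d1 = PVector.sub(q1, p1);  // direction of segment 1
  PVector d2 = PVector.sub(q2, p2);  // direction of segment 2
  PVector r  = PVector.sub(p1, p2);
  float a = d1.dot(d1), e = d2.dot(d2), f = d2.dot(r);
  float s, t;
  if (a <= 1e-8 && e <= 1e-8) {      // both segments degenerate to points
    s = 0; t = 0;
  } else if (a <= 1e-8) {
    s = 0; t = constrain(f / e, 0, 1);
  } else {
    float c = d1.dot(r);
    if (e <= 1e-8) {
      t = 0; s = constrain(-c / a, 0, 1);
    } else {
      float b = d1.dot(d2);
      float denom = a * e - b * b;   // 0 when the segments are parallel
      s = (denom != 0) ? constrain((b * f - c * e) / denom, 0, 1) : 0;
      t = (b * s + f) / e;
      if (t < 0)      { t = 0; s = constrain(-c / a, 0, 1); }
      else if (t > 1) { t = 1; s = constrain((b - c) / a, 0, 1); }
    }
  }
  PVector c1 = PVector.add(p1, PVector.mult(d1, s));
  PVector c2 = PVector.add(p2, PVector.mult(d2, t));
  return PVector.dist(c1, c2);
}

// two cylinder joints collide when the axis distance drops below the sum
// of the radii: the "distance below 0" in the description above
boolean cylindersCollide(PVector p1, PVector q1, float r1,
                         PVector p2, PVector q2, float r2) {
  return segSegDistance(p1, q1, p2, q2) - (r1 + r2) < 0;
}

In the IM loop the same test would then also be run on the predicted joint positions a short time ahead, as described above.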

 

How to use IM?

I think I will write a tutorial about it soon; many people have asked me the question. It's really easy, but it needs some configuration to fit your robot.

 

Your modified servo with feedback looks promising. That can for sure improve the precision of the collision detection computation.

 

hairygael

7 years 6 months ago

Great work Juerg!

Didn't know InMoov could be SO stressful when he needs to get to the door. A bit like an old man who needs help to get to the toilet!

:)

I posted your video on my Google+ page.