Ahoy !

by Bretzel_59


Servo calibration

by moz4r

This is now working and ready for testing. The goals are:

- One place for servo settings and the calibration process (mapping, etc.)
- Share settings automatically (or not) between different scripts


The Cookie Factory - aka deployer

by GroG

http://build.myrobotlab.org:8888/ 

Ahoy !
In order to provide a stable platform and continuously build the new Nixie release of MyRobotLab, I made a new Node.js application called the "deployer". The source code is posted on GitHub.


New MRL Networking System

by AutonomicPerfectionist

Hey y'all

This is just me trying to organize my thoughts around the new networking system so I can figure out the best way to implement client libraries. The first thing I'll do once this system is finished is update mrlpy and create a C++ client library using the information here, so if anything's wrong, feel free to correct me.


could use some help please

by harland

I would like to use the 2nd serial port on the Mega in Azul. The Mega is currently used for the left-arm and head servos, but its 3 extra serial ports are not being used. How do I set the baud rate for, say, TX2 on the Mega? I don't need to receive any data on TX2; I only want to send serial commands out. Can someone post an example that sets the baud rate and sends a text string out the port? The Mega is already set up by Python in the InMoov routine, so the commands should be in Python.
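For reference, on the Arduino side the Mega's second hardware UART is driven with the standard `Serial2` API. This fragment only illustrates the baud-rate and write calls; it is a plain sketch, not a drop-in for the InMoov setup, since the Mega in that setup is already running the MRLComm firmware and would need this wired into it (or exposed through MRL) rather than replacing it:

```cpp
// Minimal Arduino Mega sketch: open the second hardware UART
// (TX2 = pin 16, RX2 = pin 17) and send a text string out of it.
void setup() {
  Serial2.begin(9600);       // set the baud rate for TX2/RX2
}

void loop() {
  Serial2.println("hello");  // send a text line out TX2
  delay(1000);               // once per second
}
```

Receiving is not needed here, so nothing ever reads from `Serial2`; only TX2 is used.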

Kinect 360, Google Chrome, and WebKitRecognition

by hairygael
Hello, hello, hello,
I personally do not use the mics of the Kinect, but someone on the InMoov forum is having the same issue.
So I am reporting an issue regarding the use of the Kinect 360's microphone with Chrome and WebKitRecognition.
Using the Kinect 360 with Windows 10, SDK 1.8, and MyRobotLab 2693 gets a "no microphone found" error in Chrome,
even though the Kinect microphone is correctly detected and selected in Google Chrome.

Setting the deep learning UI to start within MyRobotLab

by mimorikay

I've been reading over the DL4J documentation for the past couple of days; it's not the easiest for me, coming from MATLAB and TensorFlow, and my Java is many years in the past.

Has any thought been given to following https://deeplearning4j.org/visualization and having the training/visualisation UI start up upon initialisation of DL4J in OpenCV?

The example model from the zoo is sub-optimal for a lot of objects.

Hibernating and waking up MRL?

by juerg

Running MRL on my laptop, it takes quite some time to start up all of MRL's services.

If my laptop hibernates (W10), the servos and maybe some other things will no longer work after it wakes up, so I have to terminate and restart MRL.

Could MRL be modified to survive a W10 hibernate and reconnect everything it needs?

InMoov Script

by Antonio18

In order to add new gestures to my own InMoov, I wanted to use the GestureCreator tool.

I started START_INMOOV.bat and opened the Creator manually (the "capture gesture" command is not working). When I say "load script", it gives me "X not supported" instead of my gestures.

How can I run my own script, with my limits, in the GestureCreator?

Thank you for any help!