Learning ROS for Robotics Programming – Second Edition

As may be obvious by now given my posting frequency, writing is not something that comes easily to me, especially not in English, so this post will probably come as a surprise. In August, a couple of friends and I published a book about the Robot Operating System (ROS), a robotics framework we have been using for a long time and which formed the basis of our software stack.

This is the second edition of the original Learning ROS for Robotics Programming, which was written by two of the same authors and also reviewed by one of the authors of the second edition. Unfortunately, as life would have it, I couldn't be involved with the first edition, so I couldn't pass up the opportunity to participate in this iteration.

This second edition improves upon the first by providing updated content, as the latest versions of ROS are significantly different from those used in the first edition. Be aware that even though the book was originally written to support ROS Hydro, we also provide support for Indigo and Jade in our GitHub repository. We have also improved the content of the existing chapters in general, with up-to-date examples and better explanations. Finally, we have replaced the last chapter with two new chapters covering point clouds and robotic arms, which we consider a great addition to an already extensive book. The layout of this second edition is as follows:

  • Chapter 1 – Getting Started with ROS Hydro: as the name suggests, this first chapter goes through the installation process of ROS Hydro on Ubuntu, also covering how to install it in a VirtualBox virtual machine as well as on a BeagleBone Black.
  • Chapter 2 – ROS Architecture and Concepts: the second chapter covers most of the bits and pieces ROS is made of, including practical examples in which the user will learn how to create nodes and packages that interact with each other, as well as with the famous turtlesim.
  • Chapter 3 – Visualization and Debug Tools: there are many situations in software development where we have to deal with the unexpected; this chapter explains how to use common debugging tools, as well as other tools provided by ROS, to debug and visualize our nodes, the data, and the interactions between the different elements of our system.
  • Chapter 4 – Using Sensors and Actuators with ROS: a very important part of robotics is dealing with hardware; this chapter covers the usage of common and inexpensive sensors and actuators supported by ROS, as well as more complex, and not as cheap, devices such as the Kinect or laser rangefinders. Finally, this chapter also covers how to use an Arduino with ROS to expand our hardware possibilities even further.
  • Chapter 5 – Computer Vision: from connecting a USB or FireWire camera and publishing images, to performing visual odometry with stereo cameras or RGBD sensors, passing through the image pipeline to perform corrections and transformations on our images, this chapter provides an overview of the computer vision tools provided by ROS and OpenCV.
  • Chapter 6 – Point Clouds: this chapter explores a different approach to 3D sensor data communication and processing using the Point Cloud Library (PCL), a library tailored to 3D data (point cloud) processing that is well integrated with the latest versions of ROS, providing message abstractions and other facilities.
  • Chapter 7 – 3D Modelling and Simulation: in many situations, working in robotics means working without the robots themselves, and in others the number of tests required to validate a system makes it impossible to use the real robot for that purpose; in those situations the roboticist's best bet is simulation with accurate 3D models. Since simulations are an indispensable tool for any serious robotics project, this chapter covers the whole process, from creating an accurate 3D model of our robot to simulating it and its environment with Gazebo.
  • Chapter 8 – The Navigation Stack – Robot Setups: this chapter introduces the navigation stack, which is an incredibly powerful set of tools provided by ROS to combine sensor and actuator information to navigate a robot through the world. This introduction goes through the basics of the navigation stack, explains how to understand and create our own transformations and covers odometry with the use of a laser rangefinder or Gazebo.
  • Chapter 9 – The Navigation Stack – Beyond Setups: as a continuation to the previous chapter and using all the concepts explained throughout the book, this chapter finalises the configuration of our robot to make full use of the navigation stack.
  • Chapter 10 – Manipulation with MoveIt!: the final chapter of the book covers the integration between ROS and MoveIt!, which provides a set of tools to control robotic arms in order to perform manipulation tasks such as grasping, picking and placing, or simple motion planning with inverse kinematics.

The authors of the book, whom I consider amongst my best friends and most trusted colleagues, are Enrique Fernández, Ph.D. in Computer Engineering from the University of Las Palmas de Gran Canaria and currently a Senior Autonomy Engineer at Clearpath Robotics; Aaron Martínez, M.Sc. in Computer Engineering and co-founder of Subsea Mechatronics; Luis Sánchez, M.Sc. in Electronics and Telecommunications and also co-founder of Subsea Mechatronics; and of course yours truly, M.Sc. in Computer Science and currently a Software Engineer at Dell SecureWorks (I know, unrelated to robotics).

Come on, stop talking and tell us where we can buy the book…

I know, I know, you're an impatient bunch; right after this paragraph I've included a non-exhaustive list of places where the book is currently sold. If you're still not sure, remember that Christmas is very close and books are always a great gift for friends and family, and who doesn't want a grandma who programs robots* as a hobby instead of knitting?

Barnes & Noble | O'Reilly | Safari Books

We’d like to hear your opinions, so don’t forget to comment if you’ve already read the book or even if you haven’t, and spread the word!

* The authors do not claim this book can teach your grandma to program robots.

A few weeks ago a colleague of mine posed me an interesting problem which, years ago, he had found very challenging and a lot of fun. Of course, you can't just tell this kind of thing to a computer scientist and expect no reaction whatsoever, so I went ahead and started thinking of a solution. The first thing to say about this problem is that it can have more than one solution, and that the only way to compare them is by calculating the average time it would take for the problem to be solved following each particular approach. The goal of this post is not to do a mathematical examination of all the possible solutions, but only to talk about the solution I came up with, leaving the problem open to the reader's imagination. If you come up with more solutions, please don't hesitate to comment about them! The problem statement is the following:

There is a prison with 100 prisoners isolated from each other, and a very bored prison warden who proposes a challenge: each day he will select a prisoner at random and put him in a special room fitted with nothing more than a lamp, and he will give that particular prisoner the opportunity to tell him when all of the prisoners have been in that room at least once. If the prisoner is right, all of them will be pardoned; if he is wrong, all of them die. Since the prisoners are isolated, he will give them one hour to devise a plan together, after which they won't be able to communicate again until the game is over.

At first glance, it's obvious that the only link between each prisoner in the room and the next one is the lamp, so it has to be used as a method for communicating some message to the next prisoner. Defining the lamp as the medium for a message seems trivial, but given that it can only hold one bit of information, it seems we need something more for this message to have a meaning; we will call this extra information the context of the message.

Now, the main difficulty of the problem is defining a context with which to assign a meaning to the message, or rather defining how the prisoner going into the room will interpret the lamp being on or off. One way to look at it is that we need something to accumulate information, so that whenever a prisoner goes into the room he has a bigger picture than just one bit. I propose this accumulator to be another prisoner, who will interpret the message left in the room and accumulate the information, thus acting as a 100-bit accumulator himself.

Given that one of our prisoners is a 100-bit accumulator, who does this job has to be decided during the planning hour, as well as how and what the other prisoners should convey through the lamp. In this particular approach, each of the other prisoners informs the accumulator that they have been in the room by turning the lamp on, but only if the lamp was off when they entered and they have not turned it on before; this way, the lamp being on tells the accumulator that one more of the other prisoners has visited the room. Finally, the accumulator has to turn off the lamp every time he goes into the room and finds it on, so that a new cycle can start.

With this solution, the accumulator prisoner (or counter, as I called it in the code below) will have to visit the room at least 99 times before he can tell the warden with complete certainty that all of the prisoners have already been there. In the best case this would take 198 days: the counter would enter the room every other day, right after each prisoner's first visit. If we instead assume that in every 100-day cycle only one new prisoner visits the room and that the accumulator is always the last, it would take 9900 days, which is quite a bad prospect for the prisoners; most of them would probably be out by then anyway.
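
Spelling out the arithmetic behind those two figures (T_{best} and T_{cycle} are just labels I am using here):

T_{best} = 2 \cdot 99 = 198 \text{ days} \qquad T_{cycle} = 100 \cdot 99 = 9900 \text{ days}

In the best case each of the 99 counts costs two days (a first-time visitor one day, the counter the next), whereas under the pessimistic assumption each count costs a full 100-day cycle.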

The following code attempts to simulate the situation by generating random numbers, as if it were the warden picking a random prisoner. It also implements the solution by means of the accumulator (counter) prisoner and a lookup table for the rest of the prisoners (to record whether they have already signalled their visit or not).

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    // ID of the prisoner in charge of counting
    const int prisoner_counter_id = 0;
    // Prisoner state (has already turned the lamp on or not)
    int prisoner_lookup[100];
    // Number of counts by prisoner in charge of counting
    int prisoner_counter = 0;
    // State of the lamp
    int lamp = 0;
    // Current prisoner in the room
    int prisoner_room;

    memset(prisoner_lookup, 0, 100*sizeof(int));

    while (prisoner_counter < 99)
    {
        // Select a random prisoner
        prisoner_room = rand() % 100;

        if (prisoner_room == prisoner_counter_id)
        {
            // Random prisoner is the counter
            // If the lamp is on, count it
            prisoner_counter += lamp;
            // Turn off the lamp
            lamp = 0;
        }
        else if ((lamp == 0) && (prisoner_lookup[prisoner_room] == 0))
        {
            // If the lamp is off and this prisoner hasn't signalled yet,
            // turn on the lamp
            lamp = 1;
            // He has now completed his visit to the room
            prisoner_lookup[prisoner_room] = 1;
        }
    }
    
    printf("Counter counts %d\n", prisoner_counter + 1);

    return 0;
}
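
Since the natural way to compare different strategies is the average number of days they need (as mentioned at the beginning of the post), the simulation above can also be extended to count days and repeat the experiment many times. Here is a rough sketch of that idea; the helper name simulate_once and the number of trials are arbitrary choices of mine, not part of the original code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

// Run one game and return the number of days until the counter
// (prisoner 0, as above) has registered all 99 other prisoners
static long simulate_once(void)
{
    int prisoner_lookup[100];
    int prisoner_counter = 0;
    int lamp = 0;
    long days = 0;

    memset(prisoner_lookup, 0, 100 * sizeof(int));

    while (prisoner_counter < 99)
    {
        int prisoner_room = rand() % 100;
        days++;

        if (prisoner_room == 0)
        {
            // The counter registers the lamp and turns it off
            prisoner_counter += lamp;
            lamp = 0;
        }
        else if (lamp == 0 && prisoner_lookup[prisoner_room] == 0)
        {
            // First chance for this prisoner to signal: turn the lamp on
            lamp = 1;
            prisoner_lookup[prisoner_room] = 1;
        }
    }

    return days;
}

int main()
{
    const int trials = 1000;
    long total = 0;

    srand(time(NULL));

    for (int i = 0; i < trials; i++)
        total += simulate_once();

    printf("Average days over %d trials: %ld\n", trials, total / trials);

    return 0;
}

Counting the days inside the same loop keeps the extension faithful to the original protocol; only the outer repetition and the seeding of the random generator are new.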

As I said, there are other solutions to this problem although it seems a bit difficult to come up with them once you already have a solution. I suggest you give it a try and see what you can get. If you still can’t find any other solutions, I’m sure a fast search will give you many results, but what would be the challenge in that?


One day, looking for cheap sensors on eBay, I found this interesting board, which contained everything I was looking for. It basically consists of a 3-axis accelerometer (ADXL345), a 3-axis magnetometer (HMC5883L), a 3-axis gyroscope (L3G4200D) and a barometric pressure sensor (BMP085). My plan is to build an Inertial Measurement Unit (IMU) (or maybe I should call it an Attitude and Heading Reference System (AHRS)) and, in the process, learn how to interact with and interpret the information all of these sensors provide. I do have some experience using IMUs, since I have used them on a couple of previous projects, but those came preprogrammed, and there is not much point in working with the raw sensor data unless you want to improve the measurement or give it another use.

For this project I am also using an Arduino Duemilanove; for that reason I wanted to call it ArduIMU, but that name is already taken, so I will have to find another one (suggestions would be appreciated). Connecting the sensor board to the Arduino is pretty straightforward: every sensor has an I²C interface, so all of them can be accessed over the same two-wire bus. The wiring drawing was done using Fritzing, for which I created a custom part for this board, although I did something wrong and it does not conform to the Fritzing graphic standards.

This will be the first of a series of posts I plan to write about this project, since there are several steps I need to take in order to fully understand each sensor, and several more to combine them in order to improve accuracy. In this post I want to talk about the accelerometer and how to obtain the roll and pitch angles from it, a process also known as tilt sensing.

Accelerometers are devices capable of measuring the acceleration they experience relative to free fall, the same acceleration living beings feel. As a consequence, an accelerometer cannot measure the acceleration of gravity itself, but it can measure the upward acceleration that counters gravity when the device is at rest. This acceleration is measured as 1g on the z-axis when both the pitch and roll angles are zero; when the sensor is tilted, the x-axis and y-axis pick up components of that upward acceleration, with magnitudes that depend on the tilt angles.

Pitch & Roll estimation

Obtaining the pitch and roll angles is then a matter of being able to read the accelerometer, convert these readings to the g unit (1g = 9.8 m/s²), and apply the corresponding equations. The process of obtaining and converting the accelerometer readings depends on the accelerometer you are using. In my case, the ADXL345 in its basic configuration provides 10-bit resolution for ±2g, but it supports several other ranges (±2g, ±4g, ±8g, ±16g) and resolutions (from 10 to 13 bits, depending on the range). In general, the formula used to calculate the acceleration from the raw accelerometer readings is:

G_{Accel} = Raw_{Accel} \cdot \dfrac{Range}{2^{Resolution - 1}}
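
As a quick sanity check of that formula, here is a small standalone sketch, not tied to any particular driver; the function name rawToG and the sample raw values are just illustrative. It converts raw two's-complement readings for the ADXL345's default ±2g, 10-bit configuration:

#include <stdio.h>
#include <stdint.h>

// Convert a raw two's-complement reading to g, given the configured
// range (in g) and resolution (in bits)
double rawToG(int16_t raw, double range, int resolution)
{
    return raw * range / (double)(1 << (resolution - 1));
}

int main()
{
    // ADXL345 default configuration: +/-2 g at 10-bit resolution
    printf("%.3f g\n", rawToG(256, 2.0, 10));  // 1.000 g
    printf("%.3f g\n", rawToG(511, 2.0, 10));  // ~1.996 g, positive full scale
    printf("%.3f g\n", rawToG(-512, 2.0, 10)); // -2.000 g, negative full scale
    return 0;
}

With these numbers, one raw count corresponds to roughly 3.9 mg at this setting, which follows directly from 2g / 2^9.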

Once we have the correct acceleration components, we can proceed to calculate the different angles using the following equations:

pitch = \arctan\left(\dfrac{G_y}{\sqrt{G_x^2 + G_z^2}}\right) \qquad roll = \arctan\left(\dfrac{-G_x}{G_z}\right)

For more information about where these equations come from, you can read the documentation I include at the end of this post. As you can see, the denominator of the pitch equation is always positive, so the equation itself only provides a [-90°, 90°] range, which is exactly what is expected for the pitch angle. In contrast, the roll equation provides a [-180°, 180°] range. It is important to take into account that when the pitch angle is 90°, the surge axis (roll axis) is directly aligned with the gravity vector, so we can no longer measure the roll angle; this is what is called gimbal lock.

Also, be aware that the roll equation is undefined when both G_x and G_z are equal to zero, and that for each value of the ratio inside the arctan function there are two valid angle solutions, not only in the roll equation but also in the pitch equation. These problems can easily be solved in code by using the atan2 function, which removes the ambiguity by taking the signs of both arguments into account to determine the quadrant.
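
To see the difference in practice, here is a small sketch that evaluates the two equations above with atan2; the acceleration components are made-up values corresponding to a roll well past 90°, which plain atan would fold back into the [-90°, 90°] range:

#include <stdio.h>
#include <math.h>

int main()
{
    // Illustrative acceleration components in g (roughly unit magnitude,
    // as if the sensor were rolled past 90 degrees)
    double Gx = -0.5, Gy = 0.1, Gz = -0.85;

    // The pitch denominator is always positive, so atan and atan2 agree here,
    // but atan2 also copes with the Gx = Gz = 0 corner case
    double pitch = atan2(Gy, sqrt(Gx * Gx + Gz * Gz)) * 180.0 / M_PI;

    // atan2 uses the signs of both arguments to pick the right quadrant,
    // so roll covers the full [-180, 180] range
    double roll = atan2(-Gx, Gz) * 180.0 / M_PI;

    printf("pitch = %.1f deg, roll = %.1f deg\n", pitch, roll);

    return 0;
}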

Removing short-term fluctuations using a Low-Pass filter

At this point we already have a fully functional pitch & roll estimation system, but if we experiment with it, we will discover that the readings fluctuate quite a bit, which can be very annoying for some applications. These short-term fluctuations can be removed by means of what is called a low-pass filter. This type of filter attenuates the higher frequencies of the signal, thus providing a smoother reading. The low-pass filter is easily implemented using the following equation:

y_{t} = \alpha \cdot x_{t} + (1 - \alpha) \cdot y_{t - 1}

Where y_t is our filtered signal, y_{t-1} the previous filtered signal, x_t the accelerometer reading and \alpha the smoothing factor. It may seem obvious, but the filtering should be applied to the accelerometer readings before calculating the angles, rather than to the angles themselves. Regarding the smoothing factor, the lower we set it, the longer it will take for the angle to stabilize, so we should not set it too low or we lose real-time behaviour: the reading will not correspond to the real angle until it stabilizes, and that can take some time.
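
To get a feel for that trade-off, here is a tiny sketch (the 90% threshold and the example values of \alpha are arbitrary choices of mine) that counts how many samples the filter needs to reach 90% of a sudden unit step in the input:

#include <stdio.h>

// Number of samples needed for the filtered value to reach 90% of a unit step
int settleSamples(double alpha)
{
    double y = 0.0;
    int n = 0;
    while (y < 0.9)
    {
        y = alpha * 1.0 + (1.0 - alpha) * y; // x_t is constant at 1
        n++;
    }
    return n;
}

int main()
{
    printf("alpha = 0.50 -> %d samples\n", settleSamples(0.50)); // 4
    printf("alpha = 0.10 -> %d samples\n", settleSamples(0.10)); // 22
    printf("alpha = 0.02 -> %d samples\n", settleSamples(0.02)); // 114
    return 0;
}

With the 10 ms loop delay used in the sketch below, and ignoring the time spent reading the sensor, those figures translate to roughly 0.04 s, 0.22 s and 1.14 s respectively.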

The source code & the ADXL345 library

I developed a small library to interface with the accelerometer. At the moment I have only implemented the basic functionality, but I plan on supporting all of the device's features. You can find it on my GitHub account, where you can also find the Processing code I used for the video example below. Thanks to the library, the code is pretty straightforward: it just reads the sensor accelerations, which the library has already converted into g, applies the low-pass filter and then uses the roll and pitch equations to calculate the angles.

#include <Wire.h>
#include <ADXL345.h>

const float alpha = 0.5;

double fXg = 0;
double fYg = 0;
double fZg = 0;

ADXL345 acc;

void setup()
{
        acc.begin();
        Serial.begin(9600);
        delay(100);
}

void loop()
{
        double pitch, roll, Xg, Yg, Zg;
        acc.read(&Xg, &Yg, &Zg);

        //Low Pass Filter
        fXg = Xg * alpha + (fXg * (1.0 - alpha));
        fYg = Yg * alpha + (fYg * (1.0 - alpha));
        fZg = Zg * alpha + (fZg * (1.0 - alpha));

        //Roll & Pitch Equations
        roll  = (atan2(-fYg, fZg)*180.0)/M_PI;
        pitch = (atan2(fXg, sqrt(fYg*fYg + fZg*fZg))*180.0)/M_PI;

        Serial.print(pitch);
        Serial.print(":");
        Serial.println(roll);

        delay(10);
}

The result

For a more interactive visualization of the data, I also developed an example using Processing, which consists of a rotating 3D cube. You can see the results in the following video.

In the next post about my Arduino IMU, I will talk about how gyroscopes work and how to interpret the information they provide.

I finally did it: I am now officially an M.Sc. in Computer Science. It took long years of very hard work and sleepless nights, but also years spent under the comfortable feeling of always being busy and of knowing what was to come. But that's it: this moment had to come, and my days as a university student are over. I believe I'm ready for what lies ahead.

My Master's Thesis consisted of the design and development of a software architecture for monitoring and controlling a remotely operated underwater vehicle (ROV). It is made up of two software blocks: the control system and the operation system. The control system is the main software architecture, designed to allow multiple modules to work in parallel and communicate with one another, each of them controlled by a supervisor which guarantees that the system is always working and deals with software and hardware errors. The operation system, on the other hand, allows the user to connect to the control system, visualize the sensory data and operate the vehicle.

The software itself is not very complex, but the design of the architecture focuses on efficiency, robustness, reliability and flexibility. One of the main goals of the design is to give the developer the ability to adapt the software architecture to different control models, and even to different types of vehicles or robotic systems, such as an autonomous underwater vehicle (AUV). You can read more about it in the documentation, although it is in Spanish.

I had to give a presentation in which I explained the different aspects of the project and demonstrated the results. It went quite well; I think it took a little longer than expected, but in the end I was awarded the highest grade. Overall, I am certainly going to miss being a university student.

AVORA and the SAUC-E’12 Challenge

As you may already know, I was involved in the construction of an autonomous underwater vehicle (AUV) to participate in the Students AUV Challenge – Europe, which was held in La Spezia (Italy), at the Centre for Maritime Research & Experimentation, from July 6 to 13. It was a great experience to be surrounded by top students from all over Europe and Canada, sharing ideas, conceptions and visions about underwater vehicles and robotics.

The sea basin was divided into two equal arenas, so that at most two teams could be working at the same time. The visibility conditions were quite rough and the water currents at the surface were noticeable. The organization provided us with two different workspaces, one outside, beside the competition arena, and the other inside a warehouse. The combination of heat and humidity made it quite complicated to work, even though we were provided with several fans.

The first 5 days were allocated for practice runs, but the truth is that some of the teams used this time to finish the construction of their vehicles, including us. During our first few days we made some recordings with an underwater camera, which we used to fine-tune our detection algorithms. We also finished the construction and did some preliminary tests in the pools. Unfortunately, when everything was ready, the vehicle suffered a leak because of an incorrectly sealed connector, which made us lose more than a day cleaning everything, but at least none of the electronic components were damaged.

After repairing the damage, we repeated the tests and verified that everything was working as expected. By then the qualification period had started, so we were running against the clock. When everything was ready again, we proceeded to adjust the navigation algorithms directly in the competition arena, which took longer than expected because one of the arenas was being used for the qualification rounds. On the last day of the qualification rounds we ran some simulations of the qualification mission and finished programming it, but in the end, since we had not tested the mission enough, we decided not to put the vehicle at risk and gave up our qualification slot.

We all felt a little demoralized for not being able to qualify, but not everything was lost: we still had our chance in the "Impress the judges" category, and we sure came prepared for that one. A while ago, while working on our AUV, a member of the team brought a pair of "virtual reality" glasses that he had used on a previous project. These glasses had an external inertial measurement unit attached, so that the computer could track the operator's head motion. Since our vehicle was equipped with a pan-tilt camera system, we developed software that combined the camera and the pan-tilt unit with the glasses and the gyroscope, so that the operator could look around and see the surroundings of the vehicle.

The judges were quite impressed with our telepresence system, and it was kind of fun to see them taking turns to try the glasses. They were also quite interested in some of our innovations, such as our pan-tilt camera system and the use of a bend sensor for water velocity measurement. The award ceremony was kind of a surprise: we won first prize in the "Impress the judges" category, which was much more than we expected after four months of work, competing against teams with years of experience and very mature vehicles. After the award ceremony we had a small good-bye party at Lerici, which was shorter than expected, at least for some of us, because of transportation issues.

During those days I had the opportunity to see some of the most incredible vehicles I have ever seen, not only because of their design, but because they were built by students. The vehicle I liked the most was the Canadian one, from Team SONIA, with a robust and flexible design. The team was very well prepared and it felt like they had every situation under control, a demonstration of their years of experience participating in the RoboSub competition. Suffice it to say they won this year's SAUC-E and got third place at RoboSub, quite a feat!

I was also impressed by the design of the SMART-E vehicle, from the University of Luebeck, even though I think it might present a painful challenge for autonomous navigation. This vehicle was shaped like a UFO and was equipped with 3 thrusters, each of which had an additional rotational axis so as to achieve vertical motion. The main hull was transparent, so they took advantage of this to build the strobe light required by the competition using LEDs all around it. This, combined with its shape, made it look like a real UFO, or should I say UCO? (Unidentified Cruising Object).

Overall, it was a worthwhile experience, not only competing but also building an autonomous underwater vehicle from scratch, and I can surely recommend it to any student. It is an opportunity to gain more knowledge and to test the knowledge you already have, but, more importantly, to gain experience in a real-life project.

You can read more about our vehicle in our journal paper.

The Venus Transit

So it's already over; we will never see a transit of Venus across the Sun again in our lifetime. As Wikipedia puts it, this rare event occurs in a pattern that repeats every 243 years, with pairs of transits eight years apart separated by long gaps of 121.5 years and 105.5 years. The last pair of transits took place on 8 June 2004 and last night (5–6 June 2012), and the next pair will take place in December 2117 and 2125.

But don't worry, thousands of eyes around the world recorded the experience in all sorts of wavelengths so that no one would miss a detail. I wanted to be one of those eyes, but since I live in the wrong time zone there was no way to do it. So I decided to try a little experiment after learning that the SDO team had set up a high-resolution feed updated every 15 minutes.

I wrote a very simple script that downloads an image every minute and checks whether it is different from the last one. As you may recall, the transit was scheduled to happen between 10 PM and 5 AM (UTC), so the script is set up to work only between 9 PM and 8 AM, in order to get enough images to record the full transit. After that, each image is resized using ImageMagick and the time-lapse video is generated with ffmpeg.

#!/bin/bash

next=1
url="http://sdo.gsfc.nasa.gov/assets/mov/depot/APOD/latest_APOD_HMIC_FR.jpg"

if [ ! -e "$next.jpg" ]; then
    wget --cache=off $url -O "$next.jpg"
fi

while true; do
    # Current time as HHMM, forced to base 10 so values like "0800"
    # are not interpreted as octal
    time=$(date +%H%M)
    time=$((10#$time))

    if [[ $time -ge 2100 ]] || [[ $time -le 800 ]]; then
        wget --cache=off $url -O aux.jpg

        if [ $? -eq 0 ] && ! diff aux.jpg "$next.jpg" > /dev/null ; then
            next=$((next + 1))
            mv aux.jpg "$next.jpg"
        else
            rm aux.jpg
        fi
    else
        break
    fi

    sleep 60
done

mogrify -resize 1024x1024 *.jpg
ffmpeg -r 5 -qscale 1 -i %d.jpg VenusTransit.mp4

The result was quite nice, although not as impressive as the official videos from the SDO team.

 

Overall, it looks like Bash and Astronomy are a good combination!

Avora AUV SAUC-E’12

Recently, I have been working on a very interesting project, consisting of the design and development of an Autonomous Underwater Vehicle (AUV) to participate in the Students AUV Challenge – Europe, held in La Spezia (Italy) at the NATO Undersea Research Centre (NURC). We are a team of 8 students from different areas (Computer Science, Telecommunications, Electronics, Naval Engineering), in which I have the great honor of being the team leader and lead developer. The name of the AUV (and the team) is AVORA, which stands for Autonomous Vehicle for Operation and Research in Aquatic environments; it was intentionally picked as a reference to an ancient deity from the Canary Islands.

The goal of the competition is to perform a series of tasks autonomously, without external information sources and within a fixed time frame, although this time frame is fairly generous. Taking into account the broad spectrum of missions an AUV can accomplish, we can see that each of the tasks tries to emulate, in a limited fashion, situations that arise in real life. The tasks are:

  1. Passing through a validation gate constructed of 2 orange buoys on a rope, 4 meters apart.
  2. Performing an underwater structure inspection. This underwater structure is basically a pipeline of cylinders.
  3. Searching and informing another autonomous vehicle about a mid-water target.
  4. Surveying a wall.
  5. Tracking and following a moving ASV.
  6. Surfacing in the surface zone.
  7. Impressing the judges! In this task, the teams are encouraged to be creative and demonstrate interesting features about their vehicles.

As one can see, completing all of these tasks requires certain types of sensors, such as sonar, cameras, depth and pressure sensors, inertial measurement units, etc., and also a great amount of hard work and time. Our AUV is on its way; since this is our first time, the vehicle has to be constructed from the ground up, and it is not an easy job preventing water from getting inside everything.

Another problem with not having the vehicle built from the beginning is that most of the work has to be done with each sensor in isolation, and artificial datasets have to be created in order to fine-tune the algorithms. Some of the algorithms require large amounts of data to validate them; in such cases we are trying to use datasets provided by others, unrelated to the competition. But as I said, being our first time, it's difficult to know what to expect.

From now on I will try to post regularly about our progress. Wish us luck!