As may be obvious by now given my posting frequency, writing is not something that comes easily to me, especially not in English, so this will definitely be a surprising post. In August, a couple of friends and I published a book about the Robot Operating System (ROS), a robotics framework which we’ve been using for a long time and which formed the basis of the AVORA AUV software stack.

This is the second edition of the original Learning ROS for Robotics Programming, which was written by two of the same authors and also reviewed by one of the authors of the second edition. Unfortunately, as life would have it, I couldn’t be involved with the first edition, so I couldn’t pass on the opportunity to participate in this iteration.

This second edition improves upon the first by providing updated content, as the latest versions of ROS are significantly different to those used in the first edition. Be aware that even though the book was originally written to support ROS Hydro, we also provide support for Indigo and Jade in our GitHub repository. We have also improved the content of the existing chapters in general, with up-to-date examples and better explanations. Finally, we have replaced the last chapter with two new chapters covering point clouds and robotic arms, which we consider to be a great addition to an already extensive book. The layout of this second edition is as follows:

  • Chapter 1 – Getting Started with ROS Hydro: as the name suggests, this first chapter goes through the installation process of ROS Hydro on Ubuntu, also covering how to do so in a VirtualBox virtual machine as well as on a BeagleBone Black.
  • Chapter 2 – ROS Architecture and Concepts: the second chapter covers most of the bits and pieces ROS is made of, including practical examples in which the user will learn how to create nodes and packages that interact with each other, as well as with the famous turtlesim.
  • Chapter 3 – Visualization and Debug Tools: there are many situations in software development where we have to deal with the unexpected. This chapter explains how to use common debugging tools, as well as other tools provided by ROS, to debug and visualize our nodes, the data and the interactions between the different elements in our system.
  • Chapter 4 – Using Sensors and Actuators with ROS: a very important part of robotics is dealing with hardware. This chapter covers the usage of common and cheap sensors and actuators supported by ROS, as well as more complex, and not as cheap, devices such as the Kinect or laser rangefinders. Finally, it also covers how to use an Arduino with ROS to expand our hardware possibilities even further.
  • Chapter 5 – Computer Vision: from connecting a USB or FireWire camera and publishing images, through the image pipeline used to correct and transform our images, to performing visual odometry with stereo cameras or RGBD sensors, this chapter provides an overview of the computer vision tools offered by ROS and OpenCV.
  • Chapter 6 – Point Clouds: this chapter explores a different approach to 3D sensor data communication and processing by using the Point Cloud Library (PCL), which is a library tailored to 3D data (or point clouds) processing and it’s well integrated with the latest versions of ROS, providing message abstractions and other facilities.
  • Chapter 7 – 3D Modelling and Simulation: working in robotics often means working without the robots themselves, and in other cases the amount of testing required to validate a system makes it impossible to use the real robot for that purpose; in those situations the roboticist’s best bet is a simulation with accurate 3D models. Since simulations are an indispensable tool for any serious robotics project, this chapter covers the process from creating an accurate 3D model of our robot to simulating it and its environment with Gazebo.
  • Chapter 8 – The Navigation Stack – Robot Setups: this chapter introduces the navigation stack, which is an incredibly powerful set of tools provided by ROS to combine sensor and actuator information to navigate a robot through the world. This introduction goes through the basics of the navigation stack, explains how to understand and create our own transformations and covers odometry with the use of a laser rangefinder or Gazebo.
  • Chapter 9 – The Navigation Stack – Beyond Setups: as a continuation to the previous chapter and using all the concepts explained throughout the book, this chapter finalises the configuration of our robot to make full use of the navigation stack.
  • Chapter 10 – Manipulation with MoveIt!: the final chapter of the book covers the integration between ROS and MoveIt!, which provides a set of tools to control robotic arms in order to perform manipulation tasks such as grasping, picking and placing, or simple motion planning with inverse kinematics.

The authors of the book, whom I count amongst my best friends and most trusted colleagues, are Enrique Fernández, Ph.D. in Computer Engineering from the University of Las Palmas de Gran Canaria and currently a Senior Autonomy Engineer at Clearpath Robotics; Aaron Martínez, M.Sc. in Computer Engineering and co-founder of Subsea Mechatronics; Luis Sánchez, M.Sc. in Electronics and Telecommunications and also co-founder of Subsea Mechatronics; and of course yours truly, M.Sc. in Computer Science and currently a Software Engineer at Dell SecureWorks (I know, unrelated to robotics).

Come on, stop talking and tell us where we can buy the book…

I know, I know, you’re an impatient bunch, right after this paragraph I’ve included a non-exhaustive list of places where the book is currently sold. If you’re not too sure yet, remember that Christmas is very close and books are always a great gift for friends and family, and who doesn’t want to have a grandma who programs robots* as a hobby instead of knitting?

Amazon UK | Amazon US | Packt Pub | Barnes&Noble | O’Reilly | Safari Books

We’d like to hear your opinions, so don’t forget to comment if you’ve already read the book or even if you haven’t, and spread the word!

* The authors do not claim this book can teach your grandma to program robots.

A few weeks ago I received an email from WordPress telling me that it was that time of the year when my domain has to be renewed. I didn’t really think too much about it, but a few hours later I realised I hadn’t written a single post during 2014. So this is me breaking my horrible record with a poem I’ve found inspiring at times, one which reminds me that it’s important to take things easy in life.

I met a traveller from an antique land
Who said: “Two vast and trunkless legs of stone
Stand in the desert. Near them, on the sand,
Half sunk, a shattered visage lies, whose frown,
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them and the heart that fed:
And on the pedestal these words appear:
‘My name is Ozymandias, king of kings:
Look on my works, ye Mighty, and despair!’
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare
The lone and level sands stretch far away.”

Ozymandias – Percy Bysshe Shelley

A few weeks ago a colleague of mine proposed an interesting problem to me, one which he had found very challenging and a lot of fun years ago. Of course, you can’t just tell this kind of thing to a computer scientist and expect no reaction whatsoever, so I went ahead and started thinking of a solution. The first thing to say about this problem is that it can have more than one solution, and that the only way to compare them is by calculating the average time it would take for the problem to be solved following each particular approach. The goal of this post is not to do a mathematical examination of all the possible solutions but only to talk about the solution I came up with, leaving the problem open to the reader’s imagination. If you come up with more solutions, please don’t hesitate to comment about them! The problem statement is the following:

There is a prison with 100 prisoners isolated from each other and a very bored prison warden who proposes a challenge: each day he will select a prisoner at random and put him in a special room fitted with nothing more than a lamp, and he will give that particular prisoner the opportunity to tell him whether all of the prisoners have been in that room at least once. If the prisoner makes that claim and is right, all of them will be pardoned; if he is wrong, all of them die. Since the prisoners are isolated, the warden will give them one hour to devise a plan together, after which they won’t be able to communicate again until the game is over.

At first glance, it’s obvious that the only link between each prisoner in the room and the next one is the lamp, so it has to be used as a method for communicating some message to the next prisoner. Defining the lamp as a method to convey a message seems trivial, but given that it can only hold one bit of information it seems that we need something more in order for this message to have a meaning; we will call this extra information the context of the message.

Now, the main difficulty of the problem is defining a context with which to assign a meaning to the message, or rather defining how the prisoner going into the room will interpret the lamp being on or off. One way to look at it is that we need something to accumulate information, so that whenever a prisoner goes into the room he has a bigger picture than just one bit. I propose this accumulator to be another prisoner, who will interpret the message left in the room and accumulate the information, thus acting as a 100-bit accumulator himself.

Given that one of our prisoners is a 100-bit accumulator, who takes this job has to be decided during the planning hour, as well as how and what the other prisoners should convey through the lamp. In this particular approach, each of the other prisoners informs the accumulator that they have been in the room by turning the lamp on, but they only do so if the lamp was off when they entered and they have never turned it on before; this way the lamp tells the accumulator that at least one more of the other prisoners has visited the room. Finally, the accumulator has to turn off the lamp every time he goes into the room and finds it on, so that a new cycle can start.

With this solution, the accumulator prisoner (or counter, as I called him in the following code) will have to visit the room at least 99 times before he can tell the warden with complete certainty that all of the prisoners have already been there. In the best case this would take at least 198 days, if he went in right after each prisoner and every one of those visits was that prisoner’s first. If we instead assume that in every 100-day cycle at least one new prisoner turns the lamp on and that the accumulator is always the last to enter, it would take him 9900 days, which is quite a bad prospect for the prisoners; most of them will probably be out by then anyway.
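
To spell out the arithmetic behind those numbers: in the best case each count costs two days (a new prisoner turns the lamp on one day and the counter walks in the very next day), while under the pessimistic 100-day-cycle assumption each of the 99 counts costs a full cycle:

days_{best} = 2 \cdot 99 = 198 \qquad days_{cycle} = 100 \cdot 99 = 9900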

The following code attempts to simulate the situation by generating random numbers as if it were the warden picking a random prisoner. It also implements the solution described above, with an accumulator (counter) prisoner and a lookup table that records, for each prisoner, whether they have turned the lamp on or not.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main()
{
    // ID of the prisoner in charge of counting
    const int prisoner_counter_id = 0;
    // Prisoner state (has turned the lamp on or not)
    int prisoner_lookup[100];
    // Number of counts by the prisoner in charge of counting
    int prisoner_counter = 0;
    // State of the lamp
    int lamp = 0;
    // Current prisoner in the room
    int prisoner_room;

    srand(time(NULL));
    memset(prisoner_lookup, 0, 100*sizeof(int));

    while (prisoner_counter < 99)
    {
        // Select a random prisoner
        prisoner_room = rand() % 100;

        if (prisoner_room == prisoner_counter_id)
        {
            // Random prisoner is the counter
            // If the lamp is on, count it
            prisoner_counter += lamp;
            // Turn off the lamp
            lamp = 0;
        }
        else if ((lamp == 0) && (prisoner_lookup[prisoner_room] == 0))
        {
            // If the lamp is off and it is this prisoner's first chance to signal
            // Turn on the lamp
            lamp = 1;
            // He has now completed his visit to the room
            prisoner_lookup[prisoner_room] = 1;
        }
    }

    // 99 counted signals plus the counter himself
    printf("Counter counts %d\n", prisoner_counter + 1);

    return 0;
}
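
Since, as mentioned at the beginning, the only way to compare strategies is the average number of days they need, a natural extension is to count the days of each run and average over many simulated runs. Below is a minimal sketch of that idea; the simulate_run helper and the number of trials are mine and not part of the original code.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

// Simulate one run of the single-counter strategy and return the number of
// days until the counter has seen 99 signals. (Hypothetical helper, not part
// of the original code.)
static long simulate_run(void)
{
    int prisoner_lookup[100] = {0};
    int prisoner_counter = 0;
    int lamp = 0;
    long days = 0;

    while (prisoner_counter < 99)
    {
        int prisoner_room = rand() % 100;
        days++;

        if (prisoner_room == 0)
        {
            // The counter: count a lit lamp and switch it off
            prisoner_counter += lamp;
            lamp = 0;
        }
        else if (lamp == 0 && prisoner_lookup[prisoner_room] == 0)
        {
            // A prisoner signalling for the first time
            lamp = 1;
            prisoner_lookup[prisoner_room] = 1;
        }
    }
    return days;
}

int main(void)
{
    const int trials = 10000;
    long total_days = 0;

    srand(time(NULL));
    for (int i = 0; i < trials; i++)
        total_days += simulate_run();

    printf("Average days over %d runs: %.1f\n", trials, (double)total_days / trials);
    return 0;
}

Averaging over many runs gives a figure in the same ballpark as the rough 9900-day estimate above, and that kind of number is exactly what you would use to compare this strategy against any alternative you come up with.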

As I said, there are other solutions to this problem although it seems a bit difficult to come up with them once you already have a solution. I suggest you give it a try and see what you can get. If you still can’t find any other solutions, I’m sure a fast Google search will give you many results, but what would be the challenge in that?

Lamp Image | Mischiru

It has been a while since my last post and I know that some of you are waiting for some very informative posts about gyroscopes and magnetometers, but today is not that day. I want to talk a wee bit about my life in the past few months, since I joined SeeByte and moved to Edinburgh (as you may recall, I was born and raised in Las Palmas de G.C.).

Even though I haven’t written much in this blog, if you go back about a year you will see that I was already working on some interesting UUV projects, so SeeByte seemed like the right place for me, since above all I’m a developer/programmer/software engineer/computer scientist. The work I’m doing is very interesting, but unfortunately I’m not allowed to talk about it; suffice it to say that it is related to ROVs, as was my thesis, although the level of complexity is much higher.

Three years ago, in 2009, I came to Edinburgh with my sister and thought it would be a great place to live, and now that I do live here I can certainly agree with my past self. If you come from a hot place like the Canary Islands, the Scottish weather may not agree with you, but I have to say that I really do like the cold; those of you who know me probably know that already. All right, to be fair, I sometimes miss the Sun and the heat.


Aside from my job and my personal life, I have also been dedicating some time to my projects, and in doing so I’ve learned quite a lot about electronics. The first of the projects I completed was a GPS datalogger, but I will dedicate a full post to that one, which is already half written. The rest of the projects are not that interesting, but I’m quite proud of two of them: one is a sound meter (also known as a VU meter) and the other is a variable power supply.

The variable power supply uses a few voltage regulators to provide a fixed 5 V rail and a variable output that depends on the input voltage, which can be anything between 7 V and 36 V if I’m not mistaken, and on the value of a potentiometer; the output can then be set anywhere between 1.25 V and the input voltage. I also added an LCD voltage meter I bought a while ago from eBay. The end result is a very useful device which I can use to power the rest of my projects from a few standard AA batteries.
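
I haven’t named the exact parts here, but the 1.25 V lower limit is typical of an LM317-style adjustable regulator; assuming that is what is on the board, the output is set by the ratio of the two feedback resistors, one of them being the potentiometer:

V_{out} = 1.25V \cdot \left(1 + \dfrac{R_2}{R_1}\right)

With R_1 fixed and the potentiometer acting as R_2, turning the pot sweeps the output from 1.25 V up towards the input voltage, which matches the behaviour described above.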


The sound meter was just an idea I had to learn about LED matrices and shift registers, but it ended up being a lot of fun. In this project I also included an ATtiny85, which is a very small microcontroller similar to the ones you can find on an Arduino. To program the ATtiny, I used the Arduino itself and the Arduino IDE.


The basic idea behind the sound meter is to sample the output of a standard microphone and extract some sort of volume level from it. I didn’t want to spend much time on the programming part of the project, so the algorithm I implemented is very simple and probably not as good as the ones you can find in other VU meters.

Once the volume level has been obtained, the shift register is used to activate the necessary rows of the LED matrix. In the following video you can see the sound meter working when it was just a prototype on a breadboard; the code I was using there is quite different from the latest version and the result is much nicer now, but I was too lazy to make another video.
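
I haven’t published the sound meter code, so the following is only a rough sketch of the idea described above rather than the actual firmware: sample the microphone for a short window, take the peak-to-peak amplitude as the volume level, and shift out one bit per lit row. The pin numbers, the 50 ms window and the 8-row bar are all arbitrary choices for the example.

// Rough sketch of the sound-meter idea (illustrative, not the real firmware).
// Assumes the microphone output is read on A1 and a single 74HC595-style
// shift register drives the LED rows; all pins are arbitrary.

const int micPin   = A1;  // analogue input from the microphone module
const int dataPin  = 0;   // shift register serial data
const int clockPin = 1;   // shift register clock
const int latchPin = 2;   // shift register latch

void setup()
{
  pinMode(dataPin, OUTPUT);
  pinMode(clockPin, OUTPUT);
  pinMode(latchPin, OUTPUT);
}

void loop()
{
  // Sample for ~50 ms and keep track of the minimum and maximum readings
  unsigned long start = millis();
  int minSample = 1023;
  int maxSample = 0;
  while (millis() - start < 50)
  {
    int sample = analogRead(micPin);
    if (sample < minSample) minSample = sample;
    if (sample > maxSample) maxSample = sample;
  }

  // Peak-to-peak amplitude as a crude volume level, mapped to 0..8 rows
  int level = map(maxSample - minSample, 0, 1023, 0, 8);

  // Build a bar with one bit per lit row, then shift it out to the LEDs
  byte bar = 0;
  for (int i = 0; i < level; i++)
    bar |= (1 << i);

  digitalWrite(latchPin, LOW);
  shiftOut(dataPin, clockPin, MSBFIRST, bar);
  digitalWrite(latchPin, HIGH);
}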

I think that’s all I wanted to say for now; in my next post I will talk about the GPS datalogger. Sorry to those who are waiting for the gyro stuff, but you will have to wait a bit more. Spoiler alert: gyros are not very useful on their own.

By the way, I’m very disappointed with Google’s decision to kill Google Reader. Please reconsider, Google; the options out there aren’t half as good or simple, and that goes for Google+ as well.

One day, looking for cheap sensors on eBay, I found this interesting board which contained everything I was looking for. It basically consists of a 3-axis accelerometer (ADXL345), a 3-axis magnetometer (HMC5883L), a 3-axis gyroscope (L3G4200D) and a barometric pressure sensor (BMP085). My plan is to build an Inertial Measurement Unit (IMU) (or maybe I should call it an Attitude and Heading Reference System (AHRS)) and, in the process, learn how to interact with all of these sensors and interpret the information they provide. I do have some experience using IMUs, since I used one in my master thesis and another on the Avora AUV, but they come preprogrammed and there is not much point in working with the raw sensor data unless you want to improve the measurements or give them another use.

For this project I am also using an Arduino Duemilanove; for that reason I wanted to call it ArduIMU, but there is already another project with that name, so I will have to find a different one (suggestions would be appreciated). Connecting the sensor board to the Arduino is pretty straightforward: every sensor has an I²C interface, so you can access each of them using the Arduino Wire library. The wiring drawing was done using Fritzing, for which I created the corresponding custom part for this board, although I did something wrong and it does not conform to the Fritzing graphic standards.
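
As a quick sanity check of that wiring, a sketch along these lines reads the ADXL345’s device-ID register over the Wire library. If I remember the datasheet correctly, the register is 0x00 and should return 0xE5, and 0x53 is the address when the ALT ADDRESS pin is pulled low, but double-check those values against your own board.

// Minimal I2C sanity check for the ADXL345 on the sensor board.
// The DEVID register (0x00) should read back 0xE5; the 0x53 address assumes
// the ALT ADDRESS pin is pulled low, as is common on these breakout boards.
#include <Wire.h>

const uint8_t ADXL345_ADDRESS   = 0x53;
const uint8_t ADXL345_DEVID_REG = 0x00;

void setup()
{
  Wire.begin();
  Serial.begin(9600);

  // Point to the DEVID register...
  Wire.beginTransmission(ADXL345_ADDRESS);
  Wire.write(ADXL345_DEVID_REG);
  Wire.endTransmission();

  // ...and read one byte back
  Wire.requestFrom(ADXL345_ADDRESS, (uint8_t)1);
  if (Wire.available())
  {
    byte id = Wire.read();
    Serial.print("ADXL345 DEVID: 0x");
    Serial.println(id, HEX);   // expect 0xE5 if the wiring is correct
  }
  else
  {
    Serial.println("No response from the accelerometer");
  }
}

void loop()
{
}

If that prints 0xE5, the bus is working, and the same pattern (point at a register, read it back) is essentially all the other sensors on the board need as well.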

This will be the first of a series of posts I plan to write about this project, since there are several steps I need to take in order to fully understand each sensor and several more to combine them in order to improve accuracy. In this post I want to talk about the accelerometer and how to obtain the roll and pitch angles from it, which is a process that can also be called tilt sensing.

Accelerometers are devices capable of measuring the acceleration they experience relative to free fall, the same acceleration living beings feel. As a consequence, accelerometers cannot measure the acceleration of gravity itself, but they can measure the upwards acceleration that counters gravity when at rest. This acceleration is measured as 1g on the z-axis when both the pitch and roll angles are zero, but when the sensor is tilted the x-axis and y-axis pick up a component of that upward acceleration, whose magnitude depends on the tilt angle.

Pitch & Roll estimation

Obtaining the pitch and roll angles is then a matter of being able to read the accelerometer, convert these readings to the g unit (1g = 9.8 m/s²), and apply the corresponding equations. The process of obtaining and converting the accelerometer readings depends on the accelerometer you are using; in my case, the ADXL345 in its basic configuration provides 10-bit resolution for a ±2g range, but it also offers other ranges (±2g, ±4g, ±8g, ±16g) and resolutions (from 10 to 13 bits depending on the range). Generalizing, the formula used to calculate the acceleration from the accelerometer readings is:

G_{Accel} = Raw_{Accel} \cdot \dfrac{Range}{2^{Resolution - 1}}
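
For example, with the default ±2g range and 10-bit resolution this works out to roughly 3.9 mg per count. The conversion is a one-liner; this is just a sketch of the arithmetic, since the ADXL345 library used below already does it for you.

// Convert a raw accelerometer reading to g, following the formula above.
// Assumes the ADXL345 default configuration: +/-2g range, 10-bit resolution,
// so each count is 2 / 2^9 = ~0.0039 g. (Illustrative only; the ADXL345
// library used later performs this conversion internally.)
float rawToG(int raw, float range = 2.0f, int resolution = 10)
{
    return raw * (range / (float)(1L << (resolution - 1)));
}

// Example: a raw reading of 256 at +/-2g and 10 bits -> 256 * 0.0039 = ~1g
// float gz = rawToG(256);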

Once we have the correct acceleration components, we can proceed to calculate the different angles using the following equations:

pitch = \arctan{\left(\dfrac{G_y}{\sqrt{G_{x}^2 + G_{z}^2}}\right)} \qquad roll = \arctan{\left(\dfrac{-G_x}{G_{z}}\right)}

For more information about where these equations come from, you can read the documentation I include at the end of this post. As you can see, the denominator of the pitch equation is defined to be always positive, so the equation itself only provides a [-90, 90] range, which is exactly what is expected for the pitch angle. In contrast, the roll equation provides a [-180, 180] range. It is important to take into account that when the pitch angle is 90°, the surge axis (roll) is directly aligned with the gravity vector, so we cannot measure the roll angle any more; this is what is called gimbal lock.

Also, be aware that the roll equation is undefined when both G_x and G_z are equal to zero, and that for each possible value of the expression inside the arctan function there are two valid solutions, not only in the roll but also in the pitch equation. These problems can easily be solved in code by using the atan2 function, which eliminates the ambiguity in the angle calculation by taking the quadrant into account.

Removing short-term fluctuations using a Low-Pass filter

At this point we already have a fully functional pitch & roll estimation system, but if we experiment with it we will discover that the readings fluctuate quite a bit and this may be very annoying for some applications. Removing these short-term fluctuations can be achieved by means of what is called a Low-Pass filter. This type of filter attenuates the higher frequencies of the signal, thus providing a smoother reading. The Low-Pass filter is easily implemented by using the following equation:

y_{t} = \alpha \cdot x_{t} + (1 - \alpha) \cdot y_{t - 1}

Where y_{t} is our filtered signal, y_{t-1} the previous filtered signal, x_{t} the accelerometer reading and \alpha the smoothing factor. It may seem obvious, but the filtering should be applied to the accelerometer readings before calculating the angles, not to the angles themselves. Regarding the smoothing factor, the lower we set it, the longer it will take for the angle to stabilize, so we should not set it too low because we could then lose real-time behaviour. By this I mean that the reading will not correspond to the real angle until it stabilizes, and this could take some time.

The source code & the ADXL345 library

I developed a small library to interface with the accelerometer; even though at the moment I have only implemented the basic functionality, I plan on supporting all of the device’s features. You can find it in my GitHub account, where you can also find the Processing code I used for the video example below. Thanks to the library, the code is pretty straightforward: it just reads the sensor accelerations, which the library already converts into g, applies the Low-Pass filter and then uses the roll and pitch equations to calculate the angles.

#include <Wire.h>
#include <ADXL345.h>

const float alpha = 0.5;

double fXg = 0;
double fYg = 0;
double fZg = 0;

ADXL345 acc;

void setup()
{
	acc.begin();
	Serial.begin(9600);
	delay(100);
}

void loop()
{
	double pitch, roll, Xg, Yg, Zg;
	acc.read(&Xg, &Yg, &Zg);

	//Low Pass Filter
	fXg = Xg * alpha + (fXg * (1.0 - alpha));
	fYg = Yg * alpha + (fYg * (1.0 - alpha));
	fZg = Zg * alpha + (fZg * (1.0 - alpha));

	//Roll & Pitch Equations
	roll  = (atan2(-fYg, fZg)*180.0)/M_PI;
	pitch = (atan2(fXg, sqrt(fYg*fYg + fZg*fZg))*180.0)/M_PI;

	Serial.print(pitch);
	Serial.print(":");
	Serial.println(roll);

	delay(10);
}

The result

For a more interactive visualization of the data, I also developed an example using Processing, which consists of a rotating 3D cube. You can see the results in the following video.

In the next post about my Arduino IMU, I will talk about how gyroscopes work and how to interpret the information they provide.

I finally did it: officially I am now an M.Sc. in Computer Science. It took long years of very hard work and sleepless nights, but also years lived under the comfortable feeling of always being busy and the certainty of what was to come. But that’s it, this moment had to arrive; my days as a university student are over, and I believe I’m ready for what lies ahead.

My master thesis consisted of the design and development of a software architecture for monitoring and controlling a remotely operated underwater vehicle (ROV). It comprised two software blocks: the control system and the operation system. The control system is the main software architecture, designed to allow multiple modules to work in parallel, connected with one another, each of them controlled by a supervisor which guarantees that the system keeps working and deals with software and hardware errors. On the other hand, the operation system allows the user to connect to the control system, visualize the sensory data and operate the vehicle.

The software itself is not very complex, but the design of the architecture is focused on offering efficiency, robustness, reliability and flexibility. One of the main goals of the design is to give the developer the ability to adapt the software architecture to different control models, and even to different types of vehicles or robotic systems, such as an autonomous underwater vehicle (AUV). You can read more about it in the documentation, although it is in Spanish.
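
Purely as an illustration of the supervisor idea described above (this is not the thesis code; every name and the restart policy are invented for the example), the pattern boils down to a loop that checks each module’s heartbeat and restarts the ones that have died:

// Illustrative sketch of a supervisor that restarts modules which stop
// responding. Invented names and policy; not the actual thesis architecture.
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <memory>
#include <thread>
#include <vector>

class Module {
public:
    virtual ~Module() = default;
    virtual void start() = 0;           // (re)launch the module's work
    virtual bool isAlive() const = 0;   // heartbeat checked by the supervisor
    virtual const char* name() const = 0;
};

// Dummy module that pretends to die every now and then.
class SensorModule : public Module {
public:
    void start() override { std::cout << name() << " started\n"; }
    bool isAlive() const override { return std::rand() % 4 != 0; }
    const char* name() const override { return "sensor_module"; }
};

int main()
{
    std::vector<std::unique_ptr<Module>> modules;
    modules.push_back(std::make_unique<SensorModule>());

    for (auto& m : modules) m->start();

    // Supervisor loop: poll each module and restart the ones that have died.
    for (int cycle = 0; cycle < 10; ++cycle) {
        for (auto& m : modules) {
            if (!m->isAlive()) {
                std::cerr << m->name() << " stopped responding, restarting\n";
                m->start();
            }
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    return 0;
}

The real architecture obviously involves actual threads or processes, inter-module communication and proper error reporting; the point here is only the shape of the supervision loop.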

I had to give a presentation in which I explained the different aspects of the project and demonstrated the results. It went quite well, although I think it took a little longer than expected, and in the end I was given the highest grade. Overall, I am certainly going to miss being a university student.

After returning from the SAUC-E competition, the Oceanic Platform of the Canary Islands offered some of us the opportunity to attend a two-week course on gliders. By that time, only two of us had not already made plans. In my particular case, I had a lot of work to do on my master thesis and was not planning on going anywhere.

The course was divided into two main areas: one concerning the technological aspects of the vehicles and the other their scientific applications. For the first part of the course, we were introduced to the physics behind the glider’s movement, which basically describes the change in density needed to alter the vehicle’s buoyancy, depending on parameters such as temperature and salinity, both of which affect the density of the surrounding seawater. We also dedicated some time to learning about the electronic components used in gliders for computation, communication and navigation, and were introduced to the software for remote operation.
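
As a back-of-the-envelope way of seeing why density is the control variable (my own summary, not course material): the net vertical force on the glider is simply the difference between buoyancy and weight, so the vehicle sinks or rises depending on whether its average density is above or below that of the surrounding seawater:

F_{net} = (\rho_{water} - \rho_{glider}) \cdot V \cdot g

Changing the displaced volume slightly (for example by pumping oil into or out of an external bladder) changes \rho_{glider} = m / V, which is enough to flip the sign of F_{net} and make the vehicle sink or rise.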

During this first part we also learnt about the main aspects of glider operation, such as how the different paths and parameters are set to define the glider’s flight and how to properly ballast the vehicle, and we even got the chance to deploy (and recover) a glider. The deployment of the glider was quite interesting: we went out on a boat to open sea and gently dropped the glider into the water. When the glider was ready, the operator commanded the vehicle to dive to a certain depth and it certainly tried, but the vehicle was able to detect that the desired depth was unreachable because the sea floor was too shallow, and it came up again. The vehicle is quite slow in its operation and watching it dive and come up again can be quite boring, but then again speed is not required for the applications of these vehicles. The recovery was definitely not easy; I did not directly participate, but I watched the entire scene as it happened.

The course also included opening and examining the interior of several gliders, such as the Slocum, the Spray and the SeaGlider. The first two are quite easy to open and the operator has full freedom to do so, but in the case of the SeaGlider, opening it voids the warranty. The Slocum Glider has a very polished design and seems very comfortable to work with. In contrast, the Spray Glider looks handmade, and there were a lot of things that reminded us of the design of the Avora AUV. Finally, for the first part of the course we also had presentations from Bluefin Robotics on the Spray Glider, from Liquid Robotics on the Wave Glider and from ACSA-Alcen on the SeaExplorer. The Wave Glider is a very interesting vehicle, although some people do not consider it a glider, and the SeaExplorer is the first European glider.

The second part of the course was dedicated to the scientific applications of gliders. We learned about the different types of sensors used to analyze the properties of seawater, such as salinity, temperature, pressure, turbidity, dissolved oxygen, etc. We also had quite a lot of presentations from several groups and universities explaining the different sensors, the applications of gliders, the results they had obtained, and more. This part of the course was certainly interesting, but for a technical person such as myself, a little bit difficult to follow.

I was very impressed by the presentation given by Dr. Oscar Schofield from Rutgers University on how the melting of the ice caps affects quite a number of parameters of the global seawater, and how this produces a chain reaction that ends with several polar species being unable to feed. Another thing I enjoyed about the course was the presentation on the Wave Glider; it is an incredible piece of technology and a demonstration of how to intelligently harvest energy from the environment for long-term operation.