After returning from the SAUC-E competition, the Oceanic Platform of the Canary Islands offered some of us the opportunity to attend a two-week course on gliders. By that time, only two of us had not already made other plans. In my case, I had a lot of work left on my master's thesis and was not planning on going anywhere.

The course was divided into two main areas: one concerning the technological aspects of the vehicles, the other their scientific applications. For the first part of the course, we were introduced to the physics of glider motion, which essentially describes how the vehicle's density must be modified to alter its buoyancy, and how this depends on parameters such as temperature and salinity, both of which affect the density of seawater. We also spent some time learning about the electronic components used in gliders for computation, communication and navigation, and were introduced to the software for remote operation.

During this first part we also learned about the main aspects of glider operation, such as how paths and parameters are set to define the glider's flight and how to properly ballast the vehicle, and we even got the chance to deploy (and recover) a glider. The deployment was quite interesting: we took a boat out to open sea and gently dropped the glider in the water. When the glider was ready, the operator commanded it to dive to a certain depth, and it certainly tried, but the vehicle detected that the desired depth was unreachable because the sea floor was shallower than the target, and came up again. The vehicle is quite slow in its operation, and watching it dive and resurface can get boring, but then again, speed is not required for the applications of these vehicles. The recovery was definitely not easy; I did not participate directly but watched the entire scene as it happened.

The course also included opening and examining the interior of several gliders, such as the Slocum, the Spray and the SeaGlider. The first two are quite easy to open and the operator has full freedom to do so, but in the case of the SeaGlider, opening it voids the warranty. The Slocum glider has a very polished design and seems very comfortable to work with. In contrast, the Spray glider looks handmade, and many things about it reminded us of the design of the Avora AUV. Finally, for the first part of the course we also had presentations from Bluefin Robotics on the Spray Glider, from Liquid Robotics on the Wave Glider and from ACSA-Alcen on the SeaExplorer. The Wave Glider is a very interesting vehicle, although some people do not consider it a glider, and the SeaExplorer is the first European glider.

The second part of the course was dedicated to the scientific applications of gliders. We learned about the different types of sensors used to analyze the properties of seawater, such as salinity, temperature, pressure, turbidity, dissolved oxygen, etc. We also had quite a lot of presentations from several groups and universities explaining the different sensors, the applications of gliders, the results they had obtained, and more. This part of the course was certainly interesting, but for a technical person such as myself, a little difficult to follow.

I was very impressed by the presentation given by Dr. Oscar Schofield from Rutgers University on how the melting of the ice caps affects quite a number of parameters of the global ocean, and how this produces a chain reaction that ends with several polar species being unable to feed. Another thing I enjoyed was the presentation on the Wave Glider: it is an incredible technology and a demonstration of how to intelligently harvest energy from the environment for long-term operation.

As you may already know, I was involved in the construction of an autonomous underwater vehicle (AUV) to participate in the Students AUV Challenge – Europe, which was held in La Spezia (Italy), at the Centre for Maritime Research & Experimentation, from July 6 to 13. It was a great experience to be surrounded by top students from all over Europe and Canada, sharing ideas, concepts and visions about underwater vehicles and robotics.

The sea basin was divided into two equal arenas, so at most two teams could be working at the same time. The visibility conditions were quite rough and the water currents at the surface were noticeable. The organization provided us with two different workspaces, one outside, beside the competition arena, and the other inside a warehouse. The combination of heat and humidity made it quite complicated to work, even though we were provided with several fans.

The first five days were allocated for practice runs, but the truth is that some of the teams, including us, used this time to finish the construction of their vehicles. During our first few days we made some recordings with an underwater camera, which we used to fine-tune our detection algorithms. We also finished the construction and did some preliminary tests in the pools. Unfortunately, when everything was ready, the vehicle suffered a leak because of an incorrectly sealed connector, which made us lose more than a day cleaning everything; at least none of the electronic components were damaged.

After repairing the damage, we repeated the tests and verified that everything was working as expected. By then the qualification period had started, so we were now running against the clock. When everything was ready again, we proceeded to adjust the navigation algorithms directly in the competition arena, which took longer than expected because one of the arenas was being used for the qualification rounds. On the last day of the qualification rounds we ran some simulations of the qualification mission and finished programming it, but in the end, since we had not tested the mission enough, we decided not to put the vehicle at risk and gave up our qualification slot.

We all felt a little demoralized for not qualifying, but not everything was lost: we still had our chance in the “Impress the judges” category, and we sure came prepared for this one. A while back, while working on our AUV, a member of the team brought a pair of “virtual reality” glasses that he had used in his master's thesis project. The glasses had an external inertial measurement unit attached, so that the computer could track the operator's head motion. Since our vehicle was equipped with a pan-tilt camera system, we developed software that combined the camera and the pan-tilt system with the glasses and the gyroscope, so that the user could look around and see the surroundings of the vehicle.

The judges were quite impressed with our telepresence system, and it was kind of fun to see them taking turns trying the glasses. They were also quite interested in some of our innovations, such as our pan-tilt camera system and the use of a bend sensor for measuring water velocity. The award ceremony was kind of a surprise: we won first prize in the “Impress the judges” category, which was much more than we expected after four months of work, competing against teams with years of experience and very mature vehicles. After the award ceremony we had a small good-bye party in Lerici, which was shorter than expected, for some of us at least, because of transportation issues.

During these days I had the opportunity to see some of the most incredible vehicles I have ever seen, not only because of their design, but because they were built by students. The vehicle I liked the most was the Canadian one, from Team SONIA, with a robust and flexible design and impressive software. The team was very well prepared and it felt like they had every situation under control, a demonstration of their years of experience participating in the RoboSub competition. Suffice it to say they won this year's SAUC-E and took third place at RoboSub, quite a feat!

I was also impressed by the design of the vehicle SMART-E, from the University of Luebeck, even though I think it might present a painful challenge for autonomous navigation. The vehicle was shaped like a UFO and was equipped with three thrusters, each with an additional rotational axis to achieve vertical motion. The main hull was transparent, and they took advantage of this to build the strobe light required by the competition out of LEDs placed all around it. Combined with its shape, this made it look like a real UFO, or should I say UCO (Unidentified Cruising Object)?

Overall, it was a worthwhile experience, not only competing but also building an autonomous underwater vehicle from scratch, and I surely recommend it to any student. It is an opportunity to gain new knowledge and to test the knowledge you already have, but more importantly, to gain experience in a real-life project.

You can read more about our vehicle in our journal paper, or visit my YouTube channel, the team's YouTube channel, or the team's Facebook page.

So it's already over: we will never see the transit of Venus across the Sun again in our lifetime. As Wikipedia puts it, this rare event occurs in a pattern that repeats every 243 years, with pairs of transits eight years apart separated by long gaps of 121.5 years and 105.5 years. The last pair of transits took place on 8 June 2004 and last night (5–6 June 2012), and the next pair will take place in December 2117 and December 2125.

But don't worry: thousands of eyes around the world recorded the event in multiple ways and wavelengths so that no one would miss a detail. I wanted to be one of those eyes, but since I live in the wrong time zone there was no way to do it. So I decided to try a little experiment after learning that the SDO team had set up a high-resolution feed updated every 15 minutes.

I wrote a very simple script that downloads an image every minute and checks whether it differs from the last one. As you may recall, the transit was scheduled to happen between 10 PM and 5 AM (UTC), so the script is set up to work only between 9 PM and 8 AM, in order to capture the full transit with some margin. After that, each image is resized with ImageMagick and the time-lapse video is generated with ffmpeg.

#!/bin/bash

next=1
url="http://sdo.gsfc.nasa.gov/assets/mov/depot/APOD/latest_APOD_HMIC_FR.jpg"

# Download the first frame if we do not have one yet.
if [ ! -e "$next.jpg" ]; then
    wget --no-cache "$url" -O "$next.jpg"
fi

while true; do
    # Current time as HHMM; the 10# prefix avoids the shell treating
    # values with leading zeros (e.g. "0800") as octal.
    time=$((10#$(date +%H%M)))

    # Only capture between 9 PM and 8 AM.
    if [[ $time -ge 2100 ]] || [[ $time -le 800 ]]; then
        wget --no-cache "$url" -O aux.jpg

        # Keep the download only if it succeeded and differs from the last frame.
        if [ $? -eq 0 ] && ! diff aux.jpg "$next.jpg" > /dev/null; then
            next=$((next + 1))
            mv aux.jpg "$next.jpg"
        else
            rm -f aux.jpg
        fi
    else
        break
    fi

    sleep 60
done

# Resize all frames and assemble the time-lapse at 5 frames per second.
mogrify -resize 1024x1024 *.jpg
ffmpeg -r 5 -qscale 1 -i %d.jpg VenusTransit.mp4

The result was quite nice, although not as impressive as the official videos from the SDO team.


Overall, it looks like Bash and Astronomy are a good combination!

Recently, I have been working on a very interesting project: the design and development of an autonomous underwater vehicle (AUV) to participate in the Students AUV Challenge – Europe, held in La Spezia (Italy) at the NATO Undersea Research Centre (NURC). We are a team of 8 students from different areas (Computer Science, Telecommunications, Electronics, Naval Engineering), in which I have the great honor of being the team leader and lead developer. The name of the AUV (and the team) is AVORA, which stands for Autonomous Vehicle for Operation and Research in Aquatic Environments; it was intentionally picked as a reference to an ancient deity from the Canary Islands.

The goal of the competition is to perform a series of tasks autonomously, without external information sources and within a fixed, though fairly generous, time frame. Considering the broad spectrum of missions an AUV can accomplish, each task emulates, in a limited fashion, situations that arise in real life. The tasks are:

  1. Passing through a validation gate constructed of two orange buoys on a rope, 4 meters apart.
  2. Performing an underwater structure inspection. This underwater structure is basically a pipeline of cylinders.
  3. Searching and informing another autonomous vehicle about a mid-water target.
  4. Surveying a wall.
  5. Tracking and following a moving autonomous surface vehicle (ASV).
  6. Surfacing in the surface zone.
  7. Impressing the judges! In this task, the teams are encouraged to be creative and demonstrate interesting features about their vehicles.

As one can see, completing all of these tasks requires certain types of sensors, such as sonar, cameras, depth and pressure sensors, inertial measurement units, etc., as well as a great amount of hard work and time. Our AUV is on its way; since this is our first time, the vehicle has to be constructed from the ground up, and preventing water from getting inside everything is not an easy job.

Another problem with not having the vehicle constructed from the beginning is that most of the work has to be done with each sensor in isolation, and artificial datasets have to be created in order to fine-tune the algorithms. Some of the algorithms require large amounts of data to validate them, so in those cases we are trying to use datasets provided by others, unrelated to the competition. But as I say, the first time around it's difficult to know what to expect.

From now on I will try to post regularly about our progress. Wish us luck!

Welcome to The C Continuum!

Even though this blog is called The C Continuum, it's not going to be only about programming in C; I will probably talk about other languages (Prolog, Haskell, C++, Python, …) and other topics (maths, physics, computer science, …).

But since this is The C Continuum, this introductory post will be dedicated to the first step in the process of learning a new programming language: the very simple, yet enlightening, Hello World! program:

#include <gtk/gtk.h>

int main (int argc, char *argv[])
{
    GtkWidget *window, *label;

    gtk_init(&argc, &argv);

    /* Create and configure the top-level window. */
    window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(window), "Hello World!");
    gtk_container_set_border_width(GTK_CONTAINER(window), 10);
    gtk_widget_set_size_request(window, 210, 100);

    /* Quit the main loop when the window is closed. */
    g_signal_connect(G_OBJECT(window), "destroy",
                     G_CALLBACK(gtk_main_quit), NULL);

    label = gtk_label_new("Hello World!");
    gtk_container_add(GTK_CONTAINER(window), label);

    gtk_widget_show_all(window);

    gtk_main();

    return 0;
}

As you may already know, the code above is written in C and uses the GTK+ library to generate a graphical interface. To compile it you can use the following command (the libraries come after the source file so that linking works even with stricter linkers):

    gcc -Wall hello.c -o hello `pkg-config --cflags --libs gtk+-2.0`

Enjoy!