Jekyll2022-10-10T20:08:12+00:00http://cvra.ch/atom.xmlCVRAThe CVRA is a club building robots for the Eurobot contest
Club Vaudois de Robotique AutonomeHow we reached our current architecture2021-10-20T00:00:00+00:002021-10-20T00:00:00+00:00http://cvra.ch/blog/2021/architecture-history<h2 id="background--history">Background & History</h2>
<p>In order to understand the current design, I think one must understand where we
are coming from. The club experimented with different platforms over the years,
and outgrew every one of them. Every time we switched to a new platform, we had
to throw out everything and start again from scratch. To avoid this waste, the
“obvious” solution is to design a system that we will never outgrow, but such a
design is unlikely to succeed, as technology and requirements always evolve.</p>
<p>When I joined the club (2008) it was using a board with a single
microcontroller, a fixed number of I/Os and channels for 3 motors with position
feedback and PID control. The architecture was quite rigid, but the motherboard
allowed a bit of customization of the input/output, and when we needed
functionality that could not be implemented on this platform, we built dedicated
boards with small microcontrollers on them, communicating via ad-hoc protocols,
generally over UART or GPIOs. This approach served us well for several years,
but it started to show its age (the PID controllers were 20 years old by the
time we moved away!).</p>
<p>In 2010, we decided to switch to another system. In particular, we wanted to do
polar control of the wheelbase (rather than per-wheel), which the existing
system could not do, as it relied on dedicated hardware for PID control. We also
wanted to experiment with putting a computer onboard the robot for computer
vision. The result was made of two parts: a custom board and a Linux computer.
The board had an ATmega at its core. It could control three motors like the old
board, but the PID was done in software, meaning we could do polar control. The
Linux computer communicated with the board over USB, sending it orders such
as “go to this point”. The system worked well enough, but was not very modular:
adding functionality for additional actuators required a lot of code changes. We
were also not too convinced by the platform the computer was using (URBI, a
now-dead programming language for robots).</p>
<p>In 2011, we built our first Debra, the name of our robots with SCARA arms. This
was a massive increase in the number of motors we had to control: we went from 2
PIDs to 12, and they needed coordination. It was clear that our current approach
did not scale to those requirements. The ATmega had to go, and was only used for
one year. Realizing this was wasteful, we committed to a more modular solution,
which we could adapt to each year’s requirements. We turned to FPGAs, as they
provide the ultimate modularity: you can change what the hardware is doing
simply by reflashing the FPGA! We still had a computer onboard for tasks like
computer vision, but it never really got used.</p>
<p>The FPGA setup served us well, but it was a nightmare to develop for. FPGAs are
programmed very differently from conventional platforms. To make things worse,
we had to use the tools provided by the FPGA vendors, which were clunky,
non-standard and buggy. We stuck with it for a few years, fixing bugs along the
way, and in 2014 this platform won us the Swiss championship! However, we needed
a change for several reasons:</p>
<ul>
<li>The FPGAs were too complicated to program<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>.</li>
<li>The platform was pretty expensive, meaning we only had three setups (two in
robots, one spare). Developing outside of the robots was impossible.</li>
<li>The platform’s bugs made reliability a challenge.</li>
<li>The boards were quite big, which made them mechanically challenging to
</ul>
<p>We started brainstorming a new solution in 2015, and this document presents
the resulting architecture, which is what we run today.</p>
<h2 id="objective">Objective</h2>
<p>Provide a platform that can be extended indefinitely to match the requirements
of the robot. Give developers flexibility in choosing the best platform for the
subsystem they are working on.</p>
<h2 id="requirements">Requirements</h2>
<ul>
<li>Can drive more than 12 DC motors, since this is what we are replacing</li>
<li>Compact</li>
<li>Meets the real-time requirements of motor control</li>
<li>Can be used for “10 years”, because rebuilding PCBs costs time and money.</li>
</ul>
<h2 id="overview">Overview</h2>
<p>Unlike previous systems, which were relatively centralized, the new architecture
is made of many systems collaborating to control the robots. While in the past
our robot typically had 2-3 microcontrollers, the new design has ~15!</p>
<p>Linking each microcontroller to every other one via dedicated UART links, as we
did in the past, would be infeasible given the sheer number of wires and UART
interfaces required. Instead, this design uses a
<a href="https://en.wikipedia.org/wiki/Fieldbus">field bus</a> shared by all the nodes: a
single physical interface is enough for each node to talk to every other node.</p>
<p>Each microcontroller exposes a very high-level interface to the rest of the
robot. This is very important for making the system easy to test and reason
about: compare telling a board “turn on pump #3” with “please set register #4 to 0xfe”.</p>
<h2 id="detailed-design">Detailed design</h2>
<p>The robot’s network is based on the CAN (Controller Area Network) protocol. CAN
was originally designed for the automotive industry, where it is used to
communicate between different parts of an engine and/or a car’s interior. It was
designed for robustness (safety critical systems depend on it), electrical
resilience (a car emits a lot of electrical noise) as well as wiring simplicity
(wires weigh a lot). CAN transmits data over a
<a href="https://en.wikipedia.org/wiki/Differential_signalling">differential pair</a>, with
the two signals named CANH and CANL.</p>
<p>By itself, CAN is a very simple protocol: it transmits messages of up to 8
bytes, tagged with a 29-bit identifier. Any node on the bus can send, and every
node receives every message. It is therefore common to layer higher-level
protocols on top of CAN to provide longer messages, addressing
and message serialization<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup>. We chose UAVCAN for this, an emerging
standard aimed at small drones and robots, an application close to ours. It
offers a nice set of features and has a good-quality reference
implementation. Note that UAVCAN has two versions: v0, which we use, is
deprecated in favor of v1, which is currently in development.</p>
<p>CAN, like most low-level protocols, does not guarantee delivery: messages can
be lost, for example when two different nodes transmit at the same time. This
led to the introduction of two message types in UAVCAN: broadcasts and service
calls. A broadcast is simply a node sending a message to everyone on the
network, with no response and no way to tell whether the message was dropped. It
is well suited to things like sensor readings or a motor’s current position,
where losing a single message does not matter. A service call is used when we
want a response, or want to know whether the message was dropped. We use it
mostly for setting parameters on boards (PID gains, board modes and so on). This
mode still does not guarantee delivery, but it raises an error if no response is
received within a given time.</p>
<p>To simplify development, UAVCAN can automatically generate code to switch
between human-readable formats and representation on the CAN bus. Messages are
described in a special language
(<a href="https://github.com/cvra/robot-software/blob/master/uavcan_data_types/cvra/motor/feedback/20030.MotorPosition.uavcan">example</a>),
and C++ or Python code is generated from that.</p>
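<p>A DSDL definition is essentially a list of typed fields with comments. The sketch below is only modeled on the linked MotorPosition message; the field names are from memory and may differ from the real file:</p>

```
# Hypothetical DSDL sketch in the spirit of 20030.MotorPosition.uavcan;
# see the linked file for the actual definition.
float32 position  # [rad]
float32 velocity  # [rad/s]
```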
<p>When working with a large number of devices, software updates become a
challenge. We used to update firmware by connecting a JTAG probe to the target
microcontroller, but this would be intractable with so many microcontrollers,
some of which cannot be reached without disassembling the robot. We therefore
developed an in-band programming method, which downloads updates over the same
CAN network used in normal operation. When the robot powers up, each board waits
for software update messages for 10 seconds before proceeding to normal
operation. The detailed design can be found in
<a href="https://github.com/cvra/can-bootloader">cvra/can-bootloader</a>.</p>
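<p>The boot-time behaviour can be sketched as a simple timeout loop. The 10-second window matches the text; the function and callback names are illustrative, not the actual cvra/can-bootloader API:</p>

```python
import time

BOOT_TIMEOUT = 10.0  # seconds spent listening for update messages

def boot(poll_update_message, now=time.monotonic):
    """Return 'update' if an update request arrives within the window,
    'application' otherwise."""
    deadline = now() + BOOT_TIMEOUT
    while now() < deadline:
        if poll_update_message():
            return "update"
    return "application"
```

<p>On the real boards the polling loop would read CAN frames; here a callback stands in for that.</p>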
<h3 id="available-modules">Available modules</h3>
<p>The first type of board we designed, and still the most commonly used one, is the
<a href="https://github.com/cvra/motor-control-board">motor control board</a> (2015). It
controls a single DC motor, with control loops for torque, speed or position.
It has inputs for two quadrature encoders for position sensing.
Originally we wanted to be able to use it as an
alternative means of controlling RC servos, but this turned out to be
unnecessary. It was also re-used with a different firmware for our opponent
detection beacon. The
<a href="https://github.com/cvra/robot-software/blob/master/uavcan_data_types/cvra/motor/control/20022.Position.uavcan">API</a>
of the board is simple: you send it a position (or speed, or torque), and it
will go there.</p>
<p>The <a href="https://github.com/cvra/sensor-board/">sensor board</a> (2016) contains three
optical sensors: a laser range finder (10 cm range), a color sensor and an
ambient light sensor. It can be used for object detection around the robot, for
example to check that a game object was correctly handled. It simply publishes
periodic readings on the CAN bus, for anyone interested.</p>
<p>The <a href="https://github.com/cvra/can-io-board">IO board</a> (2016) has no single
purpose: by default it simply provides 4 digital inputs/outputs and 4 PWM
channels. The original goal was to control a few industrial sensors or custom
electronics, but we used it for many different tasks over the years, reprogramming it to add features.
Two generations of this board exist, with the only difference being the size of
the module.</p>
<p>The <a href="https://github.com/cvra/uwb-beacon-board">UWB beacon</a> (2017) is still a
work in progress. The long term goal is to provide a system to find the position
of all robots on the playing field by measuring distances with radio (similar to
how GPS receivers work). Antoine is working on them at the moment.</p>
<p>The <a href="https://github.com/cvra/actuator-board">actuator board</a> (2020) is the
latest addition to the list. Its goal is to control a small
actuator made with RC servos, vacuum pumps and valves. It has vacuum sensors to
check if an object was picked, and has a digital input.</p>
<p>Our <a href="https://github.com/cvra/pi-shield">Pi shield</a> (2020) allows one to connect
a Raspberry Pi to the bus and to send and receive UAVCAN messages from Linux. It
also allows us to connect a touchscreen placed on the front of the robot.</p>
<p>We have a custom <a href="https://github.com/cvra/CAN-USB-dongle">USB to CAN adapter</a>
(2015), which has the correct connectors for our robots. It can also optionally
power the bus (enough for a few devices). It is automatically recognized, and
can be set up to be used as a native CAN interface on Linux. Two generations
exist: micro-USB and USB-C. If you are working on the club’s projects, you
should probably ask to have one.</p>
<h2 id="alternatives-considered">Alternatives considered</h2>
<p>When we started looking at what was available as a field bus, we identified the
main contenders based on bandwidth and general availability: I2C<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup>, CAN, and
Ethernet-based platforms (IP, Modbus, EtherCAT). We ruled out I2C because it
operates in a master-slave configuration, and we wanted any board to be able to
send messages on the bus. Ethernet-based solutions were the most capable, but
required a lot of circuitry, while CAN only needed compact single-chip
transceivers.</p>
<p>Originally we had a split-master design: the realtime parts of the master
firmware ran on a large STM32, while the non-realtime parts were written in
Python on a PC, with the two halves communicating over Ethernet.
This was extremely complicated and unreliable, so in 2016 we switched to an
architecture with a single master firmware running on the STM32. It served us
well, but we were spending a lot of time on low-level work, as well as on
optimizing RAM usage. This led us to move the code back to Linux, this time
including the realtime parts, with everything written in C++. You can read more
about this switch
<a href="https://cvra.ch/robot-software/design/linux-master/">here</a>.</p>
<p>We experimented with ROS for a year in 2016, using an architecture quite
similar to the one presented here. The biggest downside of that approach was
that the ROS build tools are not pleasant to use and do not support
cross-compilation, which makes building software really slow. The ROS navigation
stack was also very CPU-hungry, which did not help with our limited CPU
resources. It could certainly be interesting to revisit it now that ROS 2 is
available. You can read more about this approach
<a href="https://cvra.ch/blog/2016/goldorak-post-mortem">here</a>.</p>
<h2 id="future-work">Future work</h2>
<h3 id="communication-between-robots">Communication between robots</h3>
<p>The work presented in this article solves the issue of communicating inside the
robot pretty well. However, the rules are moving more and more in the direction
of requiring collaboration between the two robots. In order to do that in an
efficient and safe manner, the two robots need to be able to communicate with
each other.</p>
<p>Several technologies can be used here. Since the two master firmwares are
running on Linux, we can use the normal networking stack to communicate between
the two, either using Wi-Fi or by reprogramming the UWB boards. Theoretically,
we could even use both transports to provide a redundant link; however, further
study is needed.</p>
<p>The higher level protocol is also still an open question. Should we use UDP, in
order to have real time behavior, or TCP to have reliable transmission? Do we
handle errors at the application layer? Do we have something in-between for
reliable ordering of messages (à la Paxos)?</p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>The software landscape for FPGA has since changed, and it might be easier now thanks to projects like <a href="http://www.clifford.at/yosys/">Yosys</a> and <a href="https://www.chisel-lang.org/">Chisel</a>. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Serialization is the process of taking a high level structure and translating it to bits on the wire. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>I2C is typically not considered a fieldbus and was never designed for inter-board communication. However it is commonly used in Eurobot due to its relative simplicity. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>AntoineBackground & HistoryCVRA second at SwissEurobot 20182018-04-21T00:00:00+00:002018-04-21T00:00:00+00:00http://cvra.ch/blog/2018/cvra-second-swisseurobot<p>On April 13-14 in Yverdon-les-Bains we took part in the 21st Swiss Robotics Cup.
We were there with 20 other teams, including 9 foreign ones.</p>
<p>The first day was difficult for the CVRA.
The robots had reliability issues, and it was hard for us to pick up the cubes.
We finished the day at the 8th position, with a total of 322 points.</p>
<p>However, we spent the night working, and the next day went much better for us.
In our first game on Saturday we scored 266 points, almost doubling our total!
We climbed up the ladder after this and qualified for the finals.</p>
<p>The finals were quite stressful for us.
We lost the first game, which meant that losing any further match would mean the end of the competition.
Fortunately, everything went well, and we reached the final game, where we lost against Happy Social Robot.</p>
<p><img src="/images/2018/team.jpg" alt="CVRA team with our two robots, Order & Chaos" /></p>
<p>As we finished among the top three Swiss teams, we will represent Switzerland at the Eurobot finals.
It will take place from May 9th to May 13th in La Roche-sur-Yon, France.
The two other teams qualified with us are Happy Social Robot from Rapperswil (Swiss champion) and TeamAuto from Yverdon (4th).</p>
<p>The club would like to thank again our sponsors.
Without you, none of this adventure would be possible.</p>
<p><img src="/images/2018/sponsors.png" alt="" /></p>
<p><em>You can find more pictures of the event in <a href="/pictures.html">the pictures section</a>.</em></p>AntoineOn April 13-14 in Yverdon-les-Bains we took part in the 21st Swiss Robotics Cup. We were there with 20 other teams, including 9 foreign ones.Project Moon Status update2016-12-26T00:00:00+00:002016-12-26T00:00:00+00:00http://cvra.ch/blog/2016/status-update<blockquote>
<p>Captain’s log, Stardate 94589.68.
Project Moon is going as expected.
Engineers have decided to use the master board as the main computing unit of the robot.
They hope this simplification will lead to faster development and faster debug.</p>
</blockquote>
<p>After the issues<sup id="fnref:0" role="doc-noteref"><a href="#fn:0" class="footnote" rel="footnote">1</a></sup> we encountered when running a BeagleBone Black as the main computer in the robot, we decided to adopt a simpler approach:
We will keep the network of control boards, but all the high level tasks (AI, navigation, etc.) will be running on a microcontroller (STM32F407/F429).
The PC (Intel NUC) will stay and will be used for computation-heavy tasks, acting like a “smart sensor”.
This will make the system easier to program, debug and deploy.</p>
<p>We were especially interested in reducing complexity in order to allow a single person to deploy the robot’s software.
To ease that goal, we also switched to a monolithic repository containing all the code and configuration running on our robots.</p>
<p>Since we could not use ROS on microcontrollers (yet?), we needed some code to replace it.
The obvious starting point was the Aversive framework, which we had already used from 2012 to 2014.
This framework was originally designed by a French team (Microb Technology) for the Eurobot contest.
It was written for 8-bit AVRs, but we modified it to run on top of ChibiOS on STM32s.</p>
<p>Aversive provides us with a complete navigation stack, from dead reckoning to local navigation and path planning.
This allowed us to quickly have a running setup, which could avoid obstacles using our proximity beacon system.
Here is a video of it in action:</p>
<div class="ytvideo">
<iframe width="640" height="360" src="https://www.youtube.com/embed/ETCFCzpoWx0" frameborder="0" allowfullscreen=""></iframe>
</div>
<h2 id="footnotes">Footnotes</h2>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:0" role="doc-endnote">
<p><a href="/blog/2016/goldorak-post-mortem">Goldorak post mortem</a> <a href="#fnref:0" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>Club Vaudois de Robotique AutonomeCaptain’s log, Stardate 94589.68. Project Moon is going as expected. Engineers have decided to use the master board as the main computing unit of the robot. They hope this simplification will lead to faster development and faster debug.Goldorak 2016 post-mortem2016-06-26T00:00:00+00:002016-06-26T00:00:00+00:00http://cvra.ch/blog/2016/goldorak-post-mortem<p>Until two years ago, the robots we developed for Eurobot were centralized systems.
An FPGA board powered the robot: reading sensors, controlling actuators, and running all the control loops.
An auxiliary computer provided access to higher-level features such as computer vision, and was sometimes used to run the strategy.</p>
<div class="row">
<div class="large-5 columns">
<p>
However, due to the increasing number of actuators on our main robot, Debra (a robot we started developing in 2011 as a reusable platform with a differential base and two SCARA arms), we decided to shift to a distributed architecture.
</p>
<p>
Thus, the FPGA was replaced by several <em><strong>motor boards</strong></em>, a single <em><strong>master board</strong></em>, and the embedded computer remained.
</p>
<p>
<ul>
<li>
The <em><strong>motor boards</strong></em> are little boards we designed in 2014-2015 that use a STM32F3 microcontroller to control a single motor in torque, velocity, and position.
They were connected through a CAN bus to the <em><strong>master board</strong></em>.
</li>
<li>
The <em><strong>master board</strong></em> is an STM32F4 board from Olimex (Olimex E407) connected to CAN, and to the computer via Ethernet.
</li>
</ul>
</p>
</div>
<div class="large-7 columns">
<p><img src="/images/posts/goldorak-postmortem/Architecture.png" alt="Robot architecture" /></p>
</div>
</div>
<p>The <strong><em>motor boards</em></strong><sup id="fnref:0" role="doc-noteref"><a href="#fn:0" class="footnote" rel="footnote">1</a></sup> were controlled over the CAN bus; an IP link with a custom RPC protocol then bridged the CAN bus to the computer.</p>
<p>As with all projects, we had delays, so in 2015 we weren’t able to homologate at the SwissEurobot competition, although we managed to win a Jury award for best design.</p>
<p>Last October, after a trip to ROSCon 2015 in Hamburg<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">2</a></sup>, <a href="https://github.com/antoinealb/">@antoinealb</a> and <a href="https://github.com/syrianspock/">I</a> decided we wanted to experiment with CAN directly on a computer, and to leverage all the tools and libraries provided by ROS and SocketCAN to build the small robot for Eurobot.
It was dubbed <strong>project Goldorak</strong>, and the name stuck, so the robot was named <strong>Goldorak</strong><sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">3</a></sup>.</p>
<p>This post goes through our journey, the limitations we hit when using <a href="http://ros.org/">ROS</a> for Eurobot on a microcomputer (the <a href="https://beagleboard.org/black">BeagleBone Black</a>), and the lessons learned from it.</p>
<h2 id="ros-meets-uavcan"><strong>ROS meets UAVCAN</strong></h2>
<h3 id="can-on-linux"><strong>CAN on Linux</strong></h3>
<p>The choice of an embedded computer was pretty straightforward.
I had previously worked with the BeagleBone Black and grew acquainted with it.
It provides CAN and Ethernet off-the-shelf.
So we chose to use the BeagleBone Black as our embedded computer.</p>
<p>At first, we did the setup by hand, but later on we automated it using SaltStack<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">4</a></sup>.
Installing and setting up SocketCAN was quite easy<sup id="fnref:4" role="doc-noteref"><a href="#fn:4" class="footnote" rel="footnote">5</a></sup>.</p>
<h3 id="bridging-uavcan-with-ros"><strong>Bridging UAVCAN with ROS</strong></h3>
<p>In order to use the motor boards we designed the previous year, our BeagleBone Black had to speak <a href="http://uavcan.org/">UAVCAN</a>.
We needed to send PID parameters and setpoints, and to also receive feedback if needed.</p>
<p>The ROS-UAVCAN bridge was the first node to be written.
We wrote it in C++, since UAVCAN’s Python implementation was not as mature back in October.
The process was straightforward, but boring and repetitive: for each message structure supported by the motor board, there is a ROS publisher and a UAVCAN subscriber, or vice versa.</p>
<p>Each motor board has a name and a node ID to identify it.
The CAN messages were relayed onto ROS topics divided between setpoint and feedback, and organized by namespace: each node name defined an associated namespace.
Take the <code class="language-plaintext highlighter-rouge">right_wheel</code> node, for example: its list of topics looks like this.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/right_wheel/feedback/encoder
/right_wheel/feedback/index
/right_wheel/feedback/position
/right_wheel/feedback/torque
/right_wheel/feedback/velocity
/right_wheel/feedback/voltage
/right_wheel/feedback_pid/current
/right_wheel/feedback_pid/position
/right_wheel/feedback_pid/velocity
/right_wheel/setpoint
</code></pre></div></div>
<p>We can control the right wheel of the robot by sending a message over the <code class="language-plaintext highlighter-rouge">right_wheel/setpoint</code> topic and receive feedback from it through several topics under the namespace <code class="language-plaintext highlighter-rouge">right_wheel/feedback</code>.
One especially useful feedback topic is <code class="language-plaintext highlighter-rouge">/right_wheel/feedback/encoder</code>, which provides the external incremental encoder values used for positioning.
The <code class="language-plaintext highlighter-rouge">right_wheel/feedback_pid</code> namespace was dedicated to feedback for PID tuning which included setpoints along with measurements of the same quantity.</p>
<h3 id="pid-tuning-over-can"><strong>PID tuning over CAN</strong></h3>
<p>Using RQT for plotting and dynamic reconfigure to send PID parameters to the <strong><em>motor boards</em></strong>, we set up a nice interface to tune the PIDs of the motor boards:</p>
<ul>
<li>Setpoints and measured outputs are plotted in the top-left widget (PyQtGraph plot)</li>
<li>PID parameters are tuned through the top-right widget (dynamic reconfigure)</li>
<li>Topics are monitored and selected for plotting from the bottom widget (topic monitor)</li>
</ul>
<p><img src="/images/posts/goldorak-postmortem/PID_tuning.jpg" alt="PID tuning interface with Rviz and Dynamic reconfigure" /></p>
<h2 id="making-goldorak-a-ros-enabled-robot"><strong>Making Goldorak a ROS enabled robot</strong></h2>
<p>Once our differential base’s motors were controllable through ROS nodes, and our PIDs were tuned, we were ready to build an abstraction of the wheelbase to be used for robot motion.</p>
<p>In order to control a robot the ROS way, you need to expose a node that subscribes to the <code class="language-plaintext highlighter-rouge">/cmd_vel</code> topic of type <code class="language-plaintext highlighter-rouge">geometry_msgs/Twist</code>.
This message describes a desired velocity along the linear x, y, z and angular x, y, z axes (roll, pitch, yaw) to be executed by the robot.</p>
<pre><code class="language-[geometry_msgs/Twist]">geometry_msgs/Vector3 linear
float64 x
float64 y
float64 z
geometry_msgs/Vector3 angular
float64 x
float64 y
float64 z
</code></pre>
<p>For a robot with a differential base, this is quite easy, since you can only apply a combination of linear x and angular z (yaw) velocities.
The equations are also straightforward:</p>
<pre><code class="language-math">velocity_right = (velocity_forward + (track / 2.f) * velocity_yaw) / radius_right;
velocity_left = (velocity_forward - (track / 2.f) * velocity_yaw) / radius_left;
</code></pre>
<p>where <em>track</em> is the distance between the two driving wheels and <em>radius_left</em>, <em>radius_right</em> are the wheel radii.</p>
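<p>The equations above can be wrapped in a small function. The geometry values in the usage example are illustrative, not Goldorak’s actual dimensions:</p>

```python
def twist_to_wheel_speeds(velocity_forward, velocity_yaw,
                          track, radius_left, radius_right):
    """Convert a (linear, angular) base twist into left/right wheel
    angular velocities for a differential drive."""
    v_right = (velocity_forward + (track / 2.0) * velocity_yaw) / radius_right
    v_left = (velocity_forward - (track / 2.0) * velocity_yaw) / radius_left
    return v_left, v_right

# Driving straight at 0.5 m/s with 0.03 m wheels: both wheels spin equally.
left, right = twist_to_wheel_speeds(0.5, 0.0, track=0.2,
                                    radius_left=0.03, radius_right=0.03)
```

<p>A pure yaw command produces opposite wheel speeds, which is a quick sanity check when wiring up the base.</p>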
<p>Moving the robot is nice, but the robot also needs to know where it is.
For that, we use incremental encoders placed on additional external wheels, that we call odometers (i.e. not the motor driving wheels).
Odometry equations are simple for a differential base.</p>
<p>Our system, however, is a bit more complex due to its distributed architecture.
Indeed, the two odometers are wired to and handled by two separate microcontrollers, so there is no synchronisation between their samples.
A solution to this problem is to predict the encoder value of the wheel with the oldest sample at the time of odometry computation.
This asynchronous odometry was implemented last year, so it was only a matter of wrapping it in a node and subscribing to the right topics to get encoder values.
The estimated robot pose is published to the <code class="language-plaintext highlighter-rouge">/odom</code> topic which is a <code class="language-plaintext highlighter-rouge">nav_msgs/Odometry</code> type of message.</p>
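<p>A minimal sketch of this prediction step, assuming each encoder sample carries a timestamp and the last measured velocity (the data layout is illustrative, not the club’s actual code):</p>

```python
def align_samples(sample_a, sample_b):
    """Each sample is (timestamp, encoder_value, velocity).
    Return both encoder values predicted at the newer timestamp,
    extrapolating the older sample with its last known velocity."""
    t_a, x_a, v_a = sample_a
    t_b, x_b, v_b = sample_b
    t = max(t_a, t_b)
    # Linear prediction: x(t) = x + v * (t - t_sample)
    return x_a + v_a * (t - t_a), x_b + v_b * (t - t_b)
```

<p>The odometry update then runs on the two aligned values as if they had been sampled simultaneously.</p>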
<pre><code class="language-[nav_msgs/Odometry]">std_msgs/Header header
uint32 seq
time stamp
string frame_id
string child_frame_id
geometry_msgs/PoseWithCovariance pose
geometry_msgs/Pose pose
geometry_msgs/Point position
float64 x
float64 y
float64 z
geometry_msgs/Quaternion orientation
float64 x
float64 y
float64 z
float64 w
float64[36] covariance
geometry_msgs/TwistWithCovariance twist
geometry_msgs/Twist twist
geometry_msgs/Vector3 linear
float64 x
float64 y
float64 z
geometry_msgs/Vector3 angular
float64 x
float64 y
float64 z
float64[36] covariance
</code></pre>
<p>Now our robot is compatible with the ROS navigation stack. Yay!
But that was not enough for us: we also wanted to visualise our robot in Rviz… in 3D!
To achieve that, you need a joint-state publishing node to broadcast the state of the wheel joints; this will allow you to see the wheels rotating in Rviz.
Then, by running the <code class="language-plaintext highlighter-rouge">robot_state_publisher</code> node, we get the TF of all the links specified in our URDF robot description.</p>
<p>This was a simple model of the robot with two wheels, two caster links, the body and a beacon link<sup id="fnref:5" role="doc-noteref"><a href="#fn:5" class="footnote" rel="footnote">6</a></sup>.</p>
<p><img src="/images/posts/goldorak-postmortem/Goldorak_Rviz.png" alt="Goldorak in Rviz" /></p>
<h2 id="using-ros-navigation-stack-for-eurobot"><strong>Using ROS navigation stack for Eurobot</strong></h2>
<p>So now we can control our robot, know where it is and visualise it in real time.
Time to make it navigate!</p>
<p>The <a href="http://wiki.ros.org/navigation">ROS navigation stack</a> is a set of motion planners based on a discrete costmap approach.
It consists of a global planner and a local planner.</p>
<ul>
<li>The <a href="http://wiki.ros.org/global_planner">global planner</a> solves the problem of going from point A to point B in spatial coordinates (x, y, z) given a discrete map with obstacles (occupancy grid). This problem is typically solved using A*, Dijkstra and/or potential-field algorithms.</li>
<li>The local planner locally computes a feasible trajectory along the global plan (the result of the global planner), taking into account the desired arrival orientation and kinematic constraints (holonomic or not, fully actuated or underactuated).
Popular local planners include the <strong>simple</strong> <a href="http://wiki.ros.org/base_local_planner">trajectory rollout</a>, the <a href="http://wiki.ros.org/dwa_local_planner">Dynamic Window Approach</a> and the <a href="http://wiki.ros.org/eband_local_planner">Elastic Band Approach</a>, to name a few.</li>
</ul>
<p>The navigation stack also supports input from a laser scanner, but we did not use one on Goldorak, since our only unit was already mounted on Debra.</p>
<p><img src="http://wiki.ros.org/navigation/Tutorials/RobotSetup?action=AttachFile&do=get&target=overview_tf.png" alt="ROS navigation stack" /></p>
<p>Setting up the navigation stack is not black magic: a <a href="http://wiki.ros.org/navigation/Tutorials/RobotSetup">nice guide is provided in the tutorials</a>, and if you follow it you will be OK.
Tuning the navigation stack, however, was another story.
I was able to tune the parameters when running the navigation nodes on my laptop, using DWA as the local planner.
The result was pretty cool.</p>
<div class="ytvideo">
<iframe width="640" height="360" src="https://www.youtube.com/embed/8rnjWCc1nB8" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>But as soon as it had to run on the BeagleBone Black, we hit the computational limits of the platform, so we had to settle for sub-optimal parameters.
We switched to trajectory rollout for local planning, increased the goal tolerances, and decreased the update rates of the costmaps and the control loop.</p>
<p>The result was OK, although not sufficient for the tasks at hand.
We only realized these limitations a few weeks before the contest, when we started testing the fishing module.
By then, it was too late to change much in our approach to the problem.
Retrospectively, a possible solution would have been to use the spare Intel NUC of Debra and connect it to the BeagleBone Black, then have it run all nodes except the UAVCAN bridge which would run on the BeagleBone Black.</p>
<h2 id="computers-are-all-fun-and-games-until-they-are-not"><strong>Computers are all fun and games until they are not</strong></h2>
<h3 id="saving-cpu-resources-on-the-beaglebone-black"><strong>Saving CPU resources on the BeagleBone Black</strong></h3>
<p>Now that our navigation was consuming up to 70% of CPU time on the BeagleBone Black, we needed to think carefully about how we used our resources.
We converted time-critical nodes into nodelets to save some CPU time.
Nodelets are a way of writing a node so that it benefits from zero-copy publish/subscribe and sheds some per-process overhead while keeping the introspection capabilities of ROS.
One major setback was that we weren’t able to wrap the UAVCAN bridge into a single thread or a nodelet.</p>
<p>We also played with <strong>nice</strong>, a Unix tool that allows you to set the priority of a process.
This way we were able to give priority to control nodes.</p>
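<p>The idea can be sketched in a few lines of Python; the command below is only a stand-in for an actual node launch:</p>

```python
import os
import subprocess

# Start a non-critical process with a higher nice value (lower scheduling
# priority), leaving more CPU headroom for the control nodes.
# "sleep 0" stands in for a real node such as a vision pipeline.
proc = subprocess.Popen(["sleep", "0"], preexec_fn=lambda: os.nice(10))
proc.wait()
```

<p>This mirrors what the <code>nice</code> command does when launching a process from the shell.</p>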
<h3 id="beaglebone-blacks-limitations-as-ros-platform"><strong>BeagleBone Black’s limitations as ROS platform</strong></h3>
<p>Along the journey, as we had more and more nodes to build for our robot, the build time on our machines increased from a few seconds to ~20 seconds.
The effect was far more dramatic on the BeagleBone Black, where we went from a few tens of seconds to a record 20 minutes of painful compilation time.
It felt like writing software in the 1960s, when you had to wait until the next day for the code you wrote to be punched onto cards and run by a computer operator.</p>
<p>A logical reaction of ours was to try cross-compiling for the BeagleBone Black on our own machines.
We cross-compile every day for the ARM Cortex-M microcontrollers that run our motor boards and such.
So cross-compiling must be easy, right?
Well… it turns out it’s not.
After a few minutes of searching, you realize that nobody ever cross-compiles ROS nodes, and you wonder why.
So you try to make it work on your own.
You sacrifice a few goats<sup id="fnref:10" role="doc-noteref"><a href="#fn:10" class="footnote" rel="footnote">7</a></sup>, you learn more than you ever wanted to know about CMake, and you manage to compile, but then cross-compilation crushes you: you will NEVER link against the ROS libraries.
You heretic, how dare you think about cross-compilation<sup id="fnref:11" role="doc-noteref"><a href="#fn:11" class="footnote" rel="footnote">8</a></sup>.</p>
<p>So if you were thinking about cross-compiling your ROS nodes, think about something else to do with your life instead; maybe become a farmer and grow some corn: agriculture makes more sense than computer science, at least it’s governed by the laws of physics.</p>
<h2 id="2016-competition-summary"><strong>2016 Competition summary</strong></h2>
<p>The Swiss contest took place in May 2016.
With Goldorak, we intended to perform several actions:</p>
<ul>
<li>Shell collection using grippers (2 points per shell)</li>
<li>Door closing (10 points per door), this task was left to the bigger robot, Debra</li>
<li>Fish collection in the water (10 points per fish in net and 5 if in the robot) using a dedicated module (2-axis cartesian with impeller and magnets)</li>
<li>Beach umbrella deployment using a pneumatic tank (20 points) on which Mathieu wrote a <a href="http://www.cvra.ch/blog/2016/airtank">great article</a></li>
</ul>
<p>With all the software, mechanical and integration hassles encountered, we only managed to perform the last two actions.</p>
<p>The umbrella deployment worked flawlessly.
The electrovalve commanding the pneumatic release was controlled by a motor board acting as on/off switch and DC-DC regulator.
The umbrella was successfully deployed in all our 7 matches.</p>
<p>The fishing sequence had high requirements on the position of the robot relative to the border.
The left side of the robot had to be less than 2cm away from the border to allow the impeller to deploy correctly in water and attract the fish to be caught using magnets.</p>
<p>In the end, the fishing sequence had to be hacked around due to the poor performance of the navigation on our CPU-bound platform.
We managed to collect between 0 and 2 fish per match for a total of 9 fish in 7 matches, and we successfully deposited 7 fish in the net (1.14 fish per match on average, counting fish left in the robot as 0.5 fish).</p>
<p>An unexpected problem was getting stuck on a shell and thus getting lost.
Another common problem was the navigation oscillating around its goal on the 2nd or 3rd fishing sequence, after which the robot would freeze until the end of the match.</p>
<p>On average, the small robot scored 31.4 points per match, well below the maximum of 90 points envisioned at the design phase (70 if we don’t count the door-closing action).</p>
<h2 id="closing-remarks"><strong>Closing remarks</strong></h2>
<p>To finish this long article, here is a list of things to remember:</p>
<ul>
<li>The ROS communication stack is stable and very nice to have: it makes node debugging easier and feature implementation faster.</li>
<li>The ROS navigation stack is fun and very useful for large-scale robots navigating among humans, but for Eurobot it’s overkill and requires a very powerful onboard computer to run properly.
The navigation stack is well suited to probabilistic navigation and avoiding unpredictable obstacles, whereas Eurobot requires more precise and repeatable positioning.</li>
<li>The ROS visualisation tools are very useful but tend to crash from time to time, which is kind of frustrating (though it doesn’t matter much since processes are decoupled).</li>
<li>Smach is a cool library for writing a state machine for your strategy but its viewer crashes all the time.</li>
<li>There is a nice framework for writing tests with ROS nodes, but the feedback loop is too long to use it effectively for Test-Driven Development.</li>
<li>We didn’t manage to cross-compile ROS packages to ARM, which slowed down our development as we approached the contest, since changes were only testable on the real hardware.</li>
<li>We lacked thorough testing due to delays in manufacturing and assembly of the robot and very long feedback loops between two tests on the robot.</li>
<li>Having a computer on board your robot for Eurobot is nice for CPU-hungry computation, but don’t use a microcomputer such as the BeagleBone Black or the Raspberry Pi. These boards are nice for tinkering, but they tend to have long boot times and slow CPUs. They can’t be used as a replacement for a more conventional computer in all applications. Also note that using IOs is not as simple as it is on a microcontroller<sup id="fnref:12" role="doc-noteref"><a href="#fn:12" class="footnote" rel="footnote">9</a></sup>.</li>
<li>Keep your stack as simple as your application requires.</li>
<li>We are going to use a microcontroller board as the master on our robots. That doesn’t mean we won’t use an onboard computer, just that it will be a slave and won’t be a critical component of the robot.</li>
</ul>
<h2 id="footnotes-and-links"><strong>Footnotes and links</strong></h2>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:0" role="doc-endnote">
<p><a href="https://github.com/cvra/motor-control-board">DC Motor Controller boards with CAN Interface</a>. <a href="#fnref:0" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:1" role="doc-endnote">
<p><a href="http://www.commitstrip.com/en/2016/04/26/the-just-got-back-from-a-conference-effect/">The just got back from conference effect</a>. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p><a href="https://github.com/cvra/goldorak">Project Goldorak, CVRA’s small robot for Eurobot 2016</a>. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p><a href="https://github.com/cvra/goldorak-operations">Setup of the BeagleBone Black used on Goldorak, our small robot</a>. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:4" role="doc-endnote">
<p><a href="http://syrianspock.github.io/embedded-linux/2015/09/13/my-beaglebone-black-setup-for-embedded-and-robotics-development.html">My BeagleBone Black setup for embedded and robotics development</a>. <a href="#fnref:4" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:5" role="doc-endnote">
<p>The beacon system we used this year consisted of an optical obstacle detection based on an emitter and receiver on our robots and a reflector on the opponent’s robots. <a href="#fnref:5" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:10" role="doc-endnote">
<p>No goats were harmed in the making of this article. <a href="#fnref:10" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:11" role="doc-endnote">
<p><a href="https://github.com/cvra/goldorak/pull/6">Where we stopped trying to crosscompile</a>. <a href="#fnref:11" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:12" role="doc-endnote">
<p>In order to use IOs on Linux, you need to set up the device tree overlays and use sudo. You can use sysfs to run without sudo, but that doesn’t work for all peripherals. <a href="#fnref:12" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>SalahUntil two years ago, the robots we developed for Eurobot, were centralized systems. An FPGA board powered the robot: reading sensors, controlling actuators, and running all the control loops. An auxiliary computer provided access to higher level features such as computer vision, and it sometimes was used to run the strategy.CVRA at the MassChallenge Switzerland Opening2016-06-16T00:00:00+00:002016-06-16T00:00:00+00:00http://cvra.ch/blog/2016/masschallenge-opening<p>On Friday 10th of June, 2016 a few team members and Debra were invited to present the robot at the inauguration of the MassChallenge startup accelerator.
This accelerator aims to be the most startup-friendly on the planet, by providing entrepreneurs with the resources needed to take their companies to the next level.
MassChallenge will last 4 months and it will be installed just a few meters away from the CVRA.</p>
<p>During the opening, Debra was tasked with an important job: handing the inaugural ribbon-cutting scissors to Anne-Catherine Lyon, member of the Council of State of Vaud, and to Marianne Huguenin, Mayor of Renens.
The ceremony was started using the typical Eurobot starting cord, pulled by Philippe Leuba, member of the Council of State of Vaud.</p>
<p>Such an event was an excellent opportunity for us to present our activities to local politicians as well as other members of the tech industry.
We would like to thank MassChallenge Switzerland, Inartis and UniverCité for inviting us.</p>
<p>See the pictures of the event in <a href="https://goo.gl/photos/TeTntYhAEUNJJysb7">our album</a>.</p>
<div class="ytvideo">
<iframe width="640" height="360" src="https://www.youtube.com/embed/zTbEyH0-Y8A" frameborder="0" allowfullscreen=""></iframe>
</div>
<p><img src="/images/album_thumbnails/2016_masschallenge.jpg" alt="Debra revealing the ceremony ribbon-cutting scissors" /></p>
<p><small>Picture © Inartis Foundation 2016</small></p>Club Vaudois de Robotique AutonomeOn Friday 10th of June, 2016 a few team members and Debra were invited to present the robot at the inauguration of the MassChallenge startup accelerator. This accelerator aims to be the most startup-friendly on the planet, by providing entrepreneurs with the ressources needed to take their companies to the next level. MassChallenge will last 4 months and it will be installed just a few meters away from the CVRA.3D printed compressed air tank2016-05-23T00:00:00+00:002016-05-23T00:00:00+00:00http://cvra.ch/blog/2016/airtank<div><i>June 3rd edit:
<p>To readers redirected from <a href="http://hackaday.com/2016/06/02/3d-printing-compressed-air-tanks">hackaday.com</a></p>
<p>There was a mix-up of information in the article you came from.
For the sake of clarity, let us provide some context and discuss a few details.</p>
<p>We are experimenters building robots in a club; we do not build commercial air tanks.
We take part in the Eurobot contest every year, and one action to be performed this year could be nicely executed with the help of pneumatics and compressed air stored in a tank placed in one of our robots.
Our requirement was to have as big a tank as possible fitting in a rectangular prismatic space. A cylinder would not have done the job properly, nor would it have been innovative. A 3D-printed air tank would do both, so we tried a few ideas.</p>
<p>Then, to address criticism raised in the comment section: 4.0 bar = 58 psi and 6.5 bar = 94 psi; these are low pressures.
6.5 bar was the pressure we subjected our prototypes to, whereas 4.0 bar was the maximum pressure the tank would have to endure during contest use. Our safety factor is therefore at the very least 1.6 on the prototype used in our robot, which would most certainly not be tolerated on a commercial product, but it is enough for our specific application.
The tank is also enclosed in our robot behind 2 mm thick (0.078 in) aluminium panels, so shrapnel would not have been able to escape the robot and cause injuries had any rupture occurred.</p>
<p>Finally, we use PLA filament to 3D-print our parts, not PVC as some readers understood.
We do not understand why our blog post ended up embedded in a post on hackaday.com relating to the safety of using PVC pipes for compressed air.</p>
</i></div>
<hr />
<div style="height: 5vh"></div>
<p>In order to perform the funny action for Eurobot 2016, we considered a pneumatic umbrella and needed to have a reservoir of compressed air in our robot.</p>
<p>The tank would need to sustain prolonged operation at 4 bar and the pneumatic system as a whole would have to be airtight enough so that a funny action triggered 60 minutes after refilling the tank would still be successful.</p>
<h1 id="idea-doubled-by-opportunity">Idea doubled by opportunity</h1>
<p>3D printing allows for fast prototyping at low costs and super low iteration cycle times. Once you get used to printing parts and you have to test a new idea, you just end up doing it without a second thought.
At CVRA we 3D print main mechanical internal parts as well as cosmetic external bodyshells. We can make stiff parts as well as bendable ones. Pretty much every idea we have is a candidate for 3D printing at some point.</p>
<p>So… I hear you have to use a compressed air tank for your pneumatic system, right? What if you just… 3D print it?</p>
<p>After all, 4 bar is not such a high pressure. We routinely use PLA filament, and this material has a rather high tensile yield strength. A quick back-of-the-envelope calculation indicates the wall thickness can be pretty low.</p>
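<p>For the curious, the envelope calculation goes roughly like this. It crudely treats the tank as a thin-walled cylinder under hoop stress; the ~60 MPa yield strength for PLA and the safety factor are assumed values for illustration:</p>

```python
def min_wall_thickness_mm(pressure_bar, radius_mm, yield_mpa, safety_factor):
    """Thin-walled pressure vessel: hoop stress sigma = p * r / t,
    so the minimum wall thickness is t = safety_factor * p * r / sigma."""
    pressure_mpa = pressure_bar * 0.1  # 1 bar = 0.1 MPa
    return safety_factor * pressure_mpa * radius_mm / yield_mpa

# 4 bar working pressure, ~20 mm characteristic radius, PLA yield ~60 MPa
t = min_wall_thickness_mm(4.0, 20.0, 60.0, safety_factor=4.0)
```

<p>This comes out to around half a millimetre, i.e. a couple of printed perimeters. Flat box walls load in bending and are much weaker than a cylinder, so treat this strictly as a lower bound (prototype #2 below made that point rather vividly).</p>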
<h1 id="prototype-1-excitement">Prototype #1: excitement</h1>
<div class="row">
<div class="large-6 columns"><p>
Our first attempt was a 40x40x40mm air tank exported in STL as a solid and printed with a wall thickness of 4 layers and 20% infill. A 3mm hole was drilled on the side, and deep drilling the tank opened a clear venting channel within the infill structure.
The test was a direct success. The tank was able to sustain pressures up to 6.5 bar without a hitch and be airtight enough to successfully deploy the pneumatic umbrella after a 1-hour delay.</p>
</div>
<div class="large-6 columns"><p>
<div class="ytvideo">
<iframe width="560" height="315" src="https://www.youtube.com/embed/73REhTiLfEU" frameborder="0" allowfullscreen=""></iframe>
</div>
The blue cube is the tank.</p>
</div>
</div>
<h1 id="prototype-2-adrenaline">Prototype #2: adrenaline</h1>
<div class="row">
<div class="large-6 columns">
<p>Then came the time for a larger tank that our robot would accommodate, ramping up in volume from 40x40x40mm to 65x40x80mm, roughly a +225% increase. This time we left the infill aside, opted for a 6-layer wall thickness, and added an internal support to help level the innermost layer of the top wall during 3D printing.</p>
<p>The tank exploded into pieces just centimeters from my face when subjected to 5.5 bar of pressure.</p>
<p>I was wearing thick clothing and had taken the precaution of hiding my face behind my arm, so nothing happened to me.
But this is a clear reminder that things can go wrong with only 4 bar of pressure.
(Actually, 2 bar is more than enough to do nasty damage.
As an experiment we made a pointy steel javelin whose ejection velocity from a 6mm diameter pneumatic cylinder would have been more than nasty enough had it met human flesh.)</p>
<ul>
<li>Rule #1: wear protective glasses whenever projections are even remotely possible.</li>
<li>Rule #2: hide behind a protective wall when things can potentially explode.</li>
</ul>
</div>
<div class="large-6 columns">
<p><img src="/images/posts/2016-05-23-airtank/explosion.jpg" alt="Explosion results" /></p>
</div>
</div>
<div class="row">
<div class="large-6 columns">
<p>This is a cross-section view of the faulty design. The internal structure is not bound to the walls.</p>
<p>We suppose the pressure inside the tank inflated the walls enough to concentrate stress at the inner corners, causing one of them to yield. The rest was a chain reaction resulting in a handful of fragments flying across the shop.</p>
</div>
<div class="large-6 columns">
<p><img src="/images/posts/2016-05-23-airtank/v5.png" alt="Second prototype internals" /></p>
</div>
</div>
<h1 id="prototype-3-production-model">Prototype #3: production model</h1>
<div class="row">
<div class="large-6 columns"><p>
External features remain the same as for prototype #2 since the function and bulk volume have to be maintained.
Changes are found inside. End-filleted ribs fill the inside of the tank, effectively preventing the walls from inflating. Those ribs essentially serve the same purpose as the infill structure found in prototype #1.</p>
</div>
<div class="large-6 columns">
<p><img src="/images/posts/2016-05-23-airtank/v6.png" alt="Third prototype internals" /></p>
</div>
</div>
<div class="row">
<div class="large-6 columns"><p>
The tank survived repeated 6.5 bar stress tests but leaked at the corners.
Acrylic spray paint was applied to the outer surfaces, and the tank now performs well. As a benchmark, well over an hour after an initial 3.6 bar fill, the funny action still runs perfectly.
</p>
<p>
Although this tank is the one used on our 2nd robot, this air-tightening solution is still not fully satisfactory, mostly because of the visible paint job.
</p>
</div>
<div class="large-6 columns">
<div class="ytvideo">
<iframe width="560" height="315" src="https://www.youtube.com/embed/1HTuNI_y9nI" frameborder="0" allowfullscreen=""></iframe>
</div>
</div>
</div>
<h1 id="prototype-4-beyond">Prototype #4: beyond</h1>
<div class="row">
<div class="large-6 columns"><p>
An additional test was performed to assess the ability of tire sealing compounds to make bleeding 3D-printed tanks air-tight.
Back to basics: a 40x40x40mm tank with a 3-layer wall thickness was printed.</p>
<p>As expected, it did exhibit strong—and mesmerizing—bubbling from the corners when pressurized at 6.5 bar.</p>
</div>
<div class="large-6 columns">
<div class="ytvideo">
<iframe width="560" height="315" src="https://www.youtube.com/embed/A7uHKiDCwCg" frameborder="0" allowfullscreen=""></iframe>
</div>
</div>
</div>
<p>We used the following tire sealing compound:</p>
<p><img src="/images/posts/2016-05-23-airtank/seal.jpg" alt="Tire sealing compound" /></p>
<p>After the internal surfaces of the tank were coated with liquid sealant and the excess removed, we got a perfectly air-tight cube, apart from the inlet, which leaked initially but was soon sealed too!</p>
<div class="ytvideo">
<iframe width="560" height="315" src="https://www.youtube.com/embed/fMEjYq-rfVI" frameborder="0" allowfullscreen=""></iframe>
</div>
<h1 id="other-use-cases">Other use cases</h1>
<p>The air-tightening procedure described above will be applied to the internal suction channels of the hands of Debra 2016.</p>Mathieu, RomainJune 3rd edit:D-23: Wiring done, testing can begin2016-05-05T00:00:00+00:002016-05-05T00:00:00+00:00http://cvra.ch/blog/2016/d-minus-23<p><img src="/images/posts/2016-05-05-d-minus-23/0.jpeg" alt="" /></p>
<p>The assembly of our two robots, Debra and Goldorak, is almost over.
Our electrical engineers are working hard to wire all the different motors and circuit boards (more than 30 in Debra!).
Software testing on the real thing can start, and the first integration bug reports are coming…</p>
<p><img src="/images/posts/2016-05-05-d-minus-23/1.jpeg" alt="" /></p>
<p><img src="/images/posts/2016-05-05-d-minus-23/2.jpeg" alt="" /></p>
<p><img src="/images/posts/2016-05-05-d-minus-23/3.jpeg" alt="" /></p>
<p><img src="/images/posts/2016-05-05-d-minus-23/4.jpeg" alt="" /></p>AntoineNew version of our 4-axis robotic arm2016-03-22T00:00:00+00:002016-03-22T00:00:00+00:00http://cvra.ch/blog/2016/new-robot-arm<p>This is a concise presentation of the improvements and main corrections made to the new version of the arms on Debra, our main robot.</p>
<p><img src="/images/2016/Bras1.png" alt="Debra Arm, 2016 edition" /></p>
<p>Compared to last year, we corrected problems limiting the motion of the shoulder axis, as well as the loosening of the trapezoidal lead screw gear mounted at the end of the Z-axis.</p>
<p><img src="/images/2016/Bras2.png" alt="Debra Arm, 2016 edition" /></p>
<p>Four pumps were directly integrated onto the upper arm structure.
The 4th axis now allows for continuous rotation of the end effector while transmitting four air passages, the electrical power and the CAN bus.</p>
<p><img src="/images/2016/Bras3.png" alt="Debra Arm, 2016 edition" /></p>
<p>You can visualize our CAD files and download them from our <a href="https://grabcad.com/library/4-axis-robotic-arm-v2-1">GrabCAD account</a>.</p>RomainThis is a concise presentation of the improvements and main corrections made to the new version of the arms on Debra, our main robot.Our robot avoids obstacles!2016-02-21T00:00:00+00:002016-02-21T00:00:00+00:00http://cvra.ch/blog/2016/robot-is-navigating<p>Today <a href="https://github.com/syrianspock/">@SyrianSpock</a> tweaked the navigation stack used in our small robot (codenamed “Goldorak” for now).
Our code is based on the <a href="http://www.ros.org/">Robot Operating System</a>, which allowed us to quickly develop our pathfinding system.
We will try to say more about it in another post, but for now… demo!</p>
<div class="ytvideo">
<iframe width="640" height="360" src="https://www.youtube.com/embed/8rnjWCc1nB8" frameborder="0" allowfullscreen=""></iframe>
</div>AntoineToday @SyrianSpock tweaked the navigation stack used in our small robot (codenamed “Goldorak” for now). Our code is based on the Robot Operating System, which allowed us to quickly develop our pathfinding system. We will try to say more about it in another post, but for now… demo!My Beaglebone black setup for embedded and robotics development2015-09-13T00:00:00+00:002015-09-13T00:00:00+00:00http://cvra.ch/blog/2015/my-beaglebone-black-setup-for-embedded-and-robotics-development<p>The Beaglebone black is easily my favourite Embedded linux platform.
It’s cheap, open-source and offers a great amount of functionality.
Through this article, I would like to document my default setup for the board so I can start developing on it.</p>
<p>What’s even better about this board is that it’s surprisingly easy to use, thanks to some great engineering and a great community of developers.
All you need to use this board is the mini-USB cable that comes with it: it powers the board and emulates an Ethernet connection over USB.</p>
<h2 id="choosing-a-linux-distribution-the-eternal-argument">Choosing a Linux distribution, the eternal argument</h2>
<p>The choice for a Linux distribution is always subject to discussion.
The most popular distros on embedded computer boards are Yocto, Angstrom and Debian.
The last two are supported by the <a href="http://beagleboard.org/">Beagleboard organisation</a><sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>.
But I like Ubuntu better: it’s what I run on my laptop, its setup time is short, and you will have fewer problems with drivers.</p>
<p>Fortunately, an official Ubuntu image is supported and maintained by the Beagleboard organisation.
So, start by downloading the image from them, uncompress it and then copy the image into your SD card (in my case under <code class="language-plaintext highlighter-rouge">/dev/mmcblk0</code>):</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">wget https://rcn-ee.com/rootfs/2015-05-08/flasher/BBB-eMMC-flasher-ubuntu-14.04.2-console-armhf-2015-05-08-2gb.img.xz
unxz BBB-eMMC-flasher-ubuntu-14.04.2-console-armhf-2015-05-08-2gb.img.xz
<span class="nb">sudo dd </span><span class="k">if</span><span class="o">=</span>./BBB-eMMC-flasher-ubuntu-14.04.2-console-armhf-2015-05-08-2gb.img <span class="nv">of</span><span class="o">=</span>/dev/mmcblk0</code></pre></figure>
<p>Now remove the SD card from your computer and plug it into the Beaglebone.
Press the S2 button while powering the board up so it boots from the SD card and starts flashing the image to the eMMC.
After a few seconds, the LEDs should start sweeping in a K2000/Cylon pattern.
Once the LEDs are off, it’s done; you can unplug/replug the board and SSH into it.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">ssh ubuntu@192.168.7.2</code></pre></figure>
<p>If Network Manager is giving you a hard time, switch the LAN interface to manual by modifying your <code class="language-plaintext highlighter-rouge">/etc/network/interfaces</code> file; mine looks like this:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># interfaces(5) file used by ifup(8) and ifdown(8)</span>
auto lo
iface lo inet loopback
<span class="c"># For Beaglebone black</span>
iface eth2 inet manual
iface eth1 inet manual
iface usb0 inet manual</code></pre></figure>
<p>Now, if you did this, you will need to configure your IP on the shared network interface (let’s say <code class="language-plaintext highlighter-rouge">eth2</code>) before you can SSH into the board:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">sudo </span>ifconfig eth2 192.168.7.1
<span class="c"># then</span>
ssh ubuntu@192.168.7.2</code></pre></figure>
<h2 id="linux-rt-preempt-or-xenomai-the-roboticists-dilemma">Linux, RT-PREEMPT or Xenomai, the roboticist’s dilemma</h2>
<p>If you are starting to play with embedded Linux platforms and just want to build some cool little thing, skip this part.
If you are trying to build more complex systems that impose constraints on the execution time of your tasks, read on.</p>
<p>A good comparison between these three solutions is made in the article <a href="https://www.osadl.org/fileadmin/dam/rtlws/12/Brown.pdf">How fast is fast enough? Choosing between Xenomai and Linux for
real-time applications</a><sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup> by Dr. Jeremy H. Brown and Brad Martin.
The TL;DR is that Linux is OK for soft real-time applications, but for hard real-time requirements you’ll need either RT-PREEMPT or Xenomai.
The best solution is Xenomai, although it requires more effort to install.
On most boards you would need to maintain your own kernel with the Xenomai patch, but luckily for us there is a prepackaged kernel available for the Beaglebone in the official repositories.
I told you the community was great.</p>
<p>If you want to install Xenomai on your Beaglebone, I suggest you check the section “Xenomai installation: the easy way (3 steps)” of my article <a href="http://syrianspock.github.io/embedded-linux/2015/08/03/xenomai-installation-on-a-beaglebone-black.html">Xenomai installation on a Beaglebone black</a><sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup></p>
<h2 id="slaying-cerberus-io-setup-made-easy">Slaying Cerberus: IO setup made easy</h2>
<p>In the early years of embedded Linux, the boards were dark and full of terrors.
IO configuration was a hell of a task that required kernel recompilation.
Luckily, we now have the <a href="http://elinux.org/Device_Tree">device tree</a><sup id="fnref:4" role="doc-noteref"><a href="#fn:4" class="footnote" rel="footnote">4</a></sup>.
Device trees were introduced as a way of decoupling hardware description from the Linux kernel, and they made IO configuration at runtime possible.</p>
<p>Writing a device tree overlay may seem hard, but if you’re familiar with IO configuration on microcontrollers it’s quite similar.
We are not going to write any device tree overlay, though.
Thanks to <a href="https://github.com/cdsteinkuehler"><strong>cdsteinkuehler</strong></a>, we have <a href="https://github.com/cdsteinkuehler/beaglebone-universal-io"><strong>beaglebone-universal-io</strong></a><sup id="fnref:5" role="doc-noteref"><a href="#fn:5" class="footnote" rel="footnote">5</a></sup>, an amazing tool that reduces pin configuration from some 30 rather complicated lines of overlay to a single command line.
So we are going to install this on our board:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">sudo</span> <span class="nt">-s</span>
apt-get <span class="nb">install </span>device-tree-compiler <span class="nt">-y</span>
<span class="nb">cd</span> /opt/source
git clone https://github.com/cdsteinkuehler/beaglebone-universal-io
<span class="nb">cd </span>beaglebone-universal-io
make <span class="nb">install
exit</span></code></pre></figure>
<p>Now we can start messing with the IOs.
For example, if I want a GPIO on pin 20 of header P9, I can just type:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">sudo </span>config-pin P9.20 gpio</code></pre></figure>
<p>And that’s it.
Told you it was going to be easy.
To understand more of what you can do with this tool, go read the <a href="https://github.com/cdsteinkuehler/beaglebone-universal-io/blob/master/README.md">documentation</a>.</p>
<h2 id="finding-a-low-level-library-to-use-the-hardware-peripherals">Finding a low-level library to use the hardware peripherals</h2>
<p>There are some neat libraries out there that allow you to use the Beaglebone’s peripherals with a nice Python API for instance.
I can think of the <a href="https://github.com/graycatlabs/PyBBIO">PyBBIO library</a><sup id="fnref:6" role="doc-noteref"><a href="#fn:6" class="footnote" rel="footnote">6</a></sup> by graycatlabs or the <a href="https://github.com/adafruit/adafruit-beaglebone-io-python">Adafruit library</a><sup id="fnref:7" role="doc-noteref"><a href="#fn:7" class="footnote" rel="footnote">7</a></sup>.
But my favourite is the <a href="https://github.com/intel-iot-devkit/mraa">MRAA library</a><sup id="fnref:8" role="doc-noteref"><a href="#fn:8" class="footnote" rel="footnote">8</a></sup> developed by the IoT team at Intel.</p>
<p>It was first meant to be a library for the Intel Edison, but it soon became compatible with the Raspberry Pi, the Beaglebone black and other boards.
It provides a nice low-level API that abstracts the hardware to some extent.
The library is written entirely in C and provides several bindings: C, C++, Python and NodeJS.
So you can do anything from real-time robotics applications to web-based/IoT applications.</p>
<p>To install it, do the following:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">sudo </span>apt-get <span class="nb">install </span>libpcre3-dev git cmake python-dev swig <span class="nt">-y</span>
<span class="nb">cd</span> ~
git clone https://github.com/intel-iot-devkit/mraa.git
<span class="nb">mkdir </span>mraa/build <span class="o">&&</span> <span class="nb">cd</span> <span class="nv">$_</span>
cmake .. <span class="nt">-DCMAKE_BUILD_TYPE</span><span class="o">=</span>DEBUG <span class="nt">-DBUILDARCH</span><span class="o">=</span>arm <span class="nt">-DBUILDSWIGNODE</span><span class="o">=</span>OFF
make
<span class="nb">sudo </span>make <span class="nb">install
cd</span> ~</code></pre></figure>
<p>We are almost done:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">sudo</span> <span class="nt">-s</span>
<span class="nb">echo</span> <span class="s2">"/usr/local/lib/arm-linux-gnueabihf/"</span> <span class="o">>></span> /etc/ld.so.conf
<span class="nb">exit
sudo </span>ldconfig
<span class="nb">echo</span> <span class="s2">"export PYTHONPATH=</span><span class="nv">$PYTHONPATH</span><span class="s2">:</span><span class="si">$(</span><span class="nb">dirname</span> <span class="si">$(</span>find /usr/local <span class="nt">-name</span> mraa.py<span class="si">))</span><span class="s2">"</span> <span class="o">>></span> ~/.bashrc
<span class="nb">sudo cp </span>mraa/build/examples/mraa-gpio /usr/bin/
<span class="nb">sudo chmod</span> +x /usr/bin/mraa-gpio
<span class="nb">sudo </span>mraa-gpio list</code></pre></figure>
<p>The last command should output the list of all IOs on the board, with the possible configurations for each pin.
The workflow is then the following: you set up the pin’s function using the <code class="language-plaintext highlighter-rouge">config-pin</code> command from <strong>beaglebone-universal-io</strong>, and then you use the <strong>mraa</strong> library to drive the peripheral on that pin.</p>
<p>Now you can go write some cool application.
Unless that’s not enough for you.</p>
<h2 id="we-need-to-go-higher-installing-ros">We need to go higher: installing ROS</h2>
<p>Let’s say you want to build some little robot with computer vision for navigation.
You can write your PWM driver to control your motors with the <strong>mraa</strong> library, but how can you do the vision part?
A few years ago, I would have told you to install <strong>opencv</strong> and work from there.
But you would most certainly get stuck communication-wise: how do you interface your vision code with the motor control part?
It turns out there is a very popular framework out there, in the wild forest of open-source projects, that is awesome at interfacing different chunks of code that perform different tasks: <a href="http://ros.org/">ROS</a><sup id="fnref:9" role="doc-noteref"><a href="#fn:9" class="footnote" rel="footnote">9</a></sup>.</p>
<p>ROS stands for Robot Operating System.
It’s not an OS per se; it’s more of a middleware layer that runs on top of Linux.
In some ways it’s similar to <strong>dbus</strong>, as it enables inter-process communication, but it’s safer to use and, I would argue, easier.</p>
<p>Using ROS to build your robotics application also opens the door to a wide range of nodes written by other people, which can do anything from reading a camera to performing SLAM with stereo vision.
So you may even be able to reuse and tweak some existing code to complete your little robot with vision-aided navigation.</p>
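<p>To give a feel for the programming model, here is a hedged sketch of a minimal publisher node using the <strong>rospy</strong> API. The topic name <code class="language-plaintext highlighter-rouge">/chatter</code> and the messages are arbitrary choices for illustration; on a machine without a ROS install, a tiny stub stands in for rospy so the sketch can still run.</p>

```python
try:
    import rospy
    from std_msgs.msg import String
except ImportError:
    # Off-robot fallback: minimal stand-ins mimicking the rospy API,
    # so the sketch stays runnable without a ROS install.
    class String:
        def __init__(self, data=""):
            self.data = data
    class _Publisher:
        def __init__(self):
            self.sent = []
        def publish(self, msg):
            self.sent.append(msg.data)
    class rospy:
        @staticmethod
        def init_node(name, anonymous=False):
            pass
        @staticmethod
        def Publisher(topic, msg_type, queue_size=10):
            return _Publisher()

rospy.init_node("talker")  # register this process as a ROS node
pub = rospy.Publisher("/chatter", String, queue_size=10)

for i in range(3):         # publish a few messages on the topic
    pub.publish(String("hello %d" % i))
```

<p>Another node, written completely independently, could subscribe to <code class="language-plaintext highlighter-rouge">/chatter</code> and react to these messages; that decoupling is what makes it easy to glue a vision node to a motor control node.</p>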
<p>Enough with the talking; here are the installation guidelines, as documented on the <a href="http://wiki.ros.org/indigo/Installation/UbuntuARM">ROS wiki</a><sup id="fnref:10" role="doc-noteref"><a href="#fn:10" class="footnote" rel="footnote">10</a></sup>:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s1">'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list'</span>
wget https://raw.githubusercontent.com/ros/rosdistro/master/ros.key <span class="nt">-O</span> - | <span class="nb">sudo </span>apt-key add -
<span class="nb">sudo </span>apt-get update
<span class="nb">sudo </span>apt-get <span class="nb">install </span>ros-indigo-ros-base
<span class="nb">sudo </span>apt-get <span class="nb">install </span>python-rosdep
<span class="nb">sudo </span>rosdep init
rosdep update
<span class="nb">echo</span> <span class="s2">"source /opt/ros/indigo/setup.bash"</span> <span class="o">>></span> ~/.bashrc
<span class="nb">source</span> ~/.bashrc
<span class="nb">echo</span> <span class="s2">"export DISTRIB_ID=Ubuntu"</span> <span class="o">>></span> ~/.bashrc
<span class="nb">echo</span> <span class="s2">"export DISTRIB_RELEASE=14.04"</span> <span class="o">>></span> ~/.bashrc
<span class="nb">echo</span> <span class="s2">"export DISTRIB_CODENAME=trusty"</span> <span class="o">>></span> ~/.bashrc
<span class="nb">echo</span> <span class="s1">'export DISTRIB_DESCRIPTION="Ubuntu 14.04"'</span> <span class="o">>></span> ~/.bashrc
<span class="nb">sudo </span>apt-get <span class="nb">install </span>python-rosinstall</code></pre></figure>
<p>Now we need to set up the ROS workspace, as documented yet again in the <a href="http://wiki.ros.org/catkin/Tutorials/create_a_workspace">ROS wiki</a><sup id="fnref:11" role="doc-noteref"><a href="#fn:11" class="footnote" rel="footnote">11</a></sup>:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">mkdir</span> <span class="nt">-p</span> ~/catkin_ws/src
<span class="nb">cd</span> ~/catkin_ws/src
catkin_init_workspace
<span class="nb">cd</span> ~/catkin_ws
catkin_make
<span class="nb">source </span>devel/setup.bash</code></pre></figure>
<p>This gives you the base install which is lightweight enough to fit on the Beaglebone black’s 4GB eMMC.
You can go on and search for ROS packages that may be of interest to your application and look up how to install them and use them.</p>
<p>That’s it for this guide.
You should now have all the tools you need to start developing on your Beaglebone black.
Go make some awesome stuff.</p>
<h2 id="references">References</h2>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>The beagleboard organisation website <a href="http://beagleboard.org/">http://beagleboard.org/</a> <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>How fast is fast enough? Choosing between Xenomai and Linux for real-time applications by Dr. Jeremy H.Brown and Brad Martin <a href="https://www.osadl.org/fileadmin/dam/rtlws/12/Brown.pdf">https://www.osadl.org/fileadmin/dam/rtlws/12/Brown.pdf</a> <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>“Xenomai installation on a Beaglebone black” an article I wrote a month ago <a href="http://syrianspock.github.io/embedded-linux/2015/08/03/xenomai-installation-on-a-beaglebone-black.html">http://syrianspock.github.io/embedded-linux/2015/08/03/xenomai-installation-on-a-beaglebone-black.html</a> <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:4" role="doc-endnote">
<p>Linux device tree documentation on eLinux website <a href="http://elinux.org/Device_Tree">http://elinux.org/Device_Tree</a> <a href="#fnref:4" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:5" role="doc-endnote">
<p>beaglebone-universal-io repository on Github <a href="https://github.com/cdsteinkuehler/beaglebone-universal-io">https://github.com/cdsteinkuehler/beaglebone-universal-io</a> <a href="#fnref:5" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:6" role="doc-endnote">
<p>PyBBIO library for Beaglebone black repository on Github <a href="https://github.com/graycatlabs/PyBBIO">https://github.com/graycatlabs/PyBBIO</a> <a href="#fnref:6" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:7" role="doc-endnote">
<p>Adafruit library for Beaglebone black repository on Github <a href="https://github.com/adafruit/adafruit-beaglebone-io-python">https://github.com/adafruit/adafruit-beaglebone-io-python</a> <a href="#fnref:7" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:8" role="doc-endnote">
<p>mraa library repository on Github <a href="https://github.com/intel-iot-devkit/mraa">https://github.com/intel-iot-devkit/mraa</a> <a href="#fnref:8" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:9" role="doc-endnote">
<p>ROS (Robot Operating System) website <a href="http://ros.org/">http://ros.org/</a> <a href="#fnref:9" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:10" role="doc-endnote">
<p>Installation of ROS on Ubuntu ARM platforms from the ROS official wiki <a href="http://wiki.ros.org/indigo/Installation/UbuntuARM">http://wiki.ros.org/indigo/Installation/UbuntuARM</a> <a href="#fnref:10" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:11" role="doc-endnote">
<p>Creating a workspace for catkin from the ROS official wiki <a href="http://wiki.ros.org/catkin/Tutorials/create_a_workspace">http://wiki.ros.org/catkin/Tutorials/create_a_workspace</a> <a href="#fnref:11" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>