Docker part 2 – our use

For us, using Docker means:

  • We can build the code on our fast laptops and only deploy the built code to the robot’s Pi.
  • The deployed container works just the same on my Pi and on Shaun’s Pi.
  • We can package our build toolchain so that it, too, “just works” on my laptop and Shaun’s laptop.
  • The robot code and build toolchain can be pushed to the cloud for easy sharing between us.
  • If we have to rebuild an SD card on the day, it should be easy.
  • We don’t have to install OpenCV ourselves (someone else has already done the hard bit for us)!

So how do we actually get these benefits?  You define a Docker container with a Dockerfile: a text file containing a few commands that set up the contents of the container.  Our build container (more on that in a moment) has this Dockerfile:

# Start with a container that's already set up with OpenCV
# and do the builds in there.

FROM sgtwilko/rpi-raspbian-opencv:stretch-latest

RUN apt update
RUN apt install -y make gcc
RUN apt install -y wget
RUN wget https://dl.google.com/go/go1.10.linux-armv6l.tar.gz
RUN tar -C /usr/local -xzf go*.tar.gz

ENV PATH=$PATH:/usr/local/go/bin
ENV GOROOT=/usr/local/go/
ENV GOPATH=/go/
RUN apt install -y git

RUN mkdir -p $GOPATH/src/gocv.io/x/ && \
    cd $GOPATH/src/gocv.io/x/ && \
    git clone https://github.com/fasaxc/gocv.git

# Pre-build gocv to cache the package in this layer. That
# stops expensive gocv builds when we're compiling the controller.
RUN bash -c "cd $GOPATH/src/gocv.io/x/gocv && \
             source ./env.sh && \
             go build -v gocv.io/x/gocv"

RUN bash -c "cd $GOPATH/src/gocv.io/x/gocv && \
             source ./env.sh && \
             go build -v ./cmd/saveimage/main.go"

# Add the propeller IDE tools so we can extract the propman tool.
RUN wget https://github.com/parallaxinc/PropellerIDE/releases/download/0.38.5/propelleride-0.38.5-armhf.deb
RUN sh -c "dpkg -i propelleride-0.38.5-armhf.deb || true" && \
    apt-get install -y -f && \
    apt-get clean -y

RUN apt-get install -y libasound2-dev libasound2 libasound2-plugins

# Pre-build the ToF libraries

COPY VL53L0X_1.0.2 $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_1.0.2
COPY VL53L0X_rasp $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp
WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp
RUN API_DIR=../VL53L0X_1.0.2 make all examples

RUN mkdir -p $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller
WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller

This breaks down as:

  • Start with the Docker image called rpi-raspbian-opencv from the sgtwilko organisation, tag stretch-latest (this gets us the latest version of Raspbian with OpenCV pre-installed).
  • Run apt to install compilation tools.
  • Set some environment variables.
  • git clone our fork of the gocv repo.
  • Pre-build gocv.
  • Install the Propeller IDE to get the propman tool (for flashing the Propeller).
  • Pre-build the VL53L0X libraries.
  • Create the directory for the go-controller code to be mounted into.
  • Set the working directory to be where the go-controller code is mounted in.

A note about layers and caching: Docker images build in layers – Docker caches the image after each command in the build.  If you rebuild, it restarts from the last cached layer whose inputs haven’t changed.  So it pays to put the stuff that you won’t change early in the Dockerfile (like our build of OpenCV).
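
For example (an illustrative sketch – the app name and paths are made up), a cache-friendly Dockerfile copies the source in as late as possible:

# Layers near the top rarely change, so rebuilds pull them straight
# from the cache.
FROM sgtwilko/rpi-raspbian-opencv:stretch-latest
RUN apt update && apt install -y make gcc

# Source changes often, so COPY it as late as possible - only the
# layers from here down get rebuilt after a code change.
COPY my-app /src/my-app
RUN make -C /src/my-app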

We use 2 different containers in our robot – a build container (above) and a deploy container.  The deploy container Dockerfile looks like this:

# Start with a container that's already set up with OpenCV
# and do the builds in there.

FROM tigerbot/go-controller-phase-1:latest as build

COPY go-controller/controller /go/src/github.com/tigerbot-team/tigerbot/go-controller/controller
COPY go-controller/copy-libs /go/src/github.com/tigerbot-team/tigerbot/go-controller/copy-libs

WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller

# Copy the shared libraries that the controller uses to a designated
# directory so that they're easy to find in the next phase.
RUN bash -c "source /go/src/gocv.io/x/gocv/env.sh && \
./copy-libs"

# Now build the container image that we actually ship by copying
# across only the relevant files. We start with alpine since it's
# nice and small, but we'll be throwing in a lot of glibc-linked
# binaries so the resulting system will be a bit of a hybrid.

FROM resin/raspberry-pi-alpine:latest

RUN apk --no-cache add util-linux strace

RUN mkdir -p /usr/local/lib
COPY --from=build /usr/bin/propman /usr/bin/propman
COPY --from=build /lib/ld-linux-armhf.so* /lib
COPY --from=build /controller-libs/* /usr/local/lib/
COPY --from=build /usr/share/alsa /usr/share/alsa
COPY --from=build /go/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp/bin/* /usr/local/bin/
COPY go-controller/sounds /sounds
COPY --from=build /go/src/github.com/tigerbot-team/tigerbot/go-controller/controller /controller
COPY metabotspin/mb3.binary /mb3.binary
ENV LD_LIBRARY_PATH=/usr/local/lib

ENTRYPOINT []
CMD /controller

Which breaks down like this:

  • Grab the build container contents, and copy in the freshly built controller binary and the copy-libs script.
  • Run copy-libs to gather the shared libraries the controller needs into a known directory.
  • Start a second stage from the raspberry-pi-alpine container with tag latest from the resin organisation (a very stripped-down Linux distribution – the whole OS is 18MB).
  • Install the util-linux and strace binaries.
  • Copy the built artifacts from the build container into this container.
  • Wipe the ENTRYPOINT (the command run when the container starts).
  • Set the command to run when the container starts to /controller.

Our build Makefile has these cryptic lines in it:

ifeq ($(shell uname -m),x86_64)
	ARCH_DEPS:=/proc/sys/fs/binfmt_misc/arm
endif

/proc/sys/fs/binfmt_misc/arm:
	echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:' | sudo tee /proc/sys/fs/binfmt_misc/register

This says: if we’re building on an x86_64 machine (i.e. our 64-bit Intel laptops), write that magic string into /proc/sys/fs/binfmt_misc/register, which registers the qemu-arm-static binary as an ARM interpreter in the kernel (via the binfmt_misc kernel module).  The magic and mask match the header bytes of 32-bit ARM ELF binaries, so whenever the kernel is asked to execute one, it runs it under the qemu emulator instead.  In other words, the machine pretends to be ARM architecture while building.
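
Once the registration is in place, you can check the kernel picked it up (a sketch – the exact output can vary with kernel version):

$ cat /proc/sys/fs/binfmt_misc/arm
enabled
interpreter /usr/bin/qemu-arm-static
flags:
offset 0
magic 7f454c4601010100000000000000000002002800
mask ffffffffffffff00fffffffffffffffffeffffff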

We can now do all our development on Intel Linux laptops, build on the fast laptop, put the binaries into a deploy container and copy the container over to the Pi for execution.  We can do the copy in a couple of ways.  We can use docker save to output a tar file, copy that over to the Pi, and docker load it into Docker there.  Our Makefile has:

install-to-pi: controller-image.tar
	rsync -zv --progress controller-image.tar pi@$(BOT_HOST):controller-image.tar
	ssh pi@$(BOT_HOST) docker load -i controller-image.tar
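
The controller-image.tar file itself comes from docker save; that side is just (the image name here is illustrative):

docker save -o controller-image.tar tigerbot/controller:latest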

The other way is to docker push <imagename> the image to Dockerhub – cloud storage for Docker images.  We can then docker pull <imagename> on the Pi, allowing us to grab and run the image on ANY Pi (connected to a network and running the Docker daemon) – so I can easily grab and try out code that Shaun has built and pushed to Dockerhub, on my own Pi at home.
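
The full round trip looks something like this (the image name is illustrative, and the push needs a Dockerhub account):

# On the laptop:
docker tag controller tigerbot/controller:latest
docker push tigerbot/controller:latest

# On any Pi with Docker and a network connection:
docker pull tigerbot/controller:latest
docker run tigerbot/controller:latest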

This setup is a reasonably advanced use of Docker and pretty similar to what we have in our day jobs (building an open source software project for deployment on different architectures).

Docker part 1 – what is it?

Once again this year, we’ll be making use of Docker on the robot.  Docker is a Linux container system.  I’ll explain how we use Docker in the next post; this post concentrates on introducing Docker itself.

Containers are a way of segregating processes.  A Docker container has its own filesystem, separate from the host’s (though you can mount part of the host’s filesystem into the container if you want to).  Processes running inside the container cannot see processes running outside it (either on the host or in other containers).  If you use an appropriate network plugin, you can set up networking for the container too (e.g. so that you can run a web application on port 80 in multiple containers on the same host).  The only things shared between the host and the containers are the host hardware and the Linux kernel.
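
A couple of quick illustrations (the image name is a placeholder – any image you have locally will do):

# A container has its own filesystem; -v mounts part of the host's in:
docker run --rm -v /home/pi/data:/data some-image ls /data

# A container sees only its own processes - ps here lists just itself:
docker run --rm some-image ps aux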

One way of looking at Docker (if you’re used to Python) is that it’s virtualenv on steroids.  Virtualenv gives you a local Python environment that you can install libraries into without affecting the system-wide Python environment (and potentially breaking other packages).  A Docker container gives you a whole filesystem to do stuff in, independent of all the other containers and the host.  An example: if you have code built to run on CentOS, you can install it (and all its dependencies) in a CentOS container and it’ll just work on a Raspbian host.  Or code which worked on an old version of Raspbian but doesn’t work on the latest.  It makes your code totally independent of what’s on the host, so long as the host is running the Docker daemon.  So I no longer need to be careful about how I set up my SD cards for my robot – so long as they have Docker, the container will run just the same – which makes collaborating across different Pis much easier.  You never run into “well, it worked on my machine” problems.

Thrust Bearings

We discovered in testing that the main gear of the robot was being pushed away from the motor bevel gear far enough that it was causing the gears to slip under high load (e.g. turning the robot quickly on carpet).  This was causing our heading-hold code trouble too, which would affect our accuracy in the autonomous events.  Here’s a video demonstrating the problem – listen for the clicking noise at start and stop:

The usual solution to this problem is a thrust bearing – normal bearings are good at taking load in a radial direction (like supporting the weight of the robot through an axle), while a thrust bearing is designed for loads along the axle.

A quick google showed that suitable bearings would cost £9 each, and we’d have to modify the track frame and drive cog to use them.  At that point Shaun asked, “Why don’t you just print one?”.  Why not?  A quick browse of Thingiverse turned up a couple of examples – suggesting airsoft BBs as the ball bearings.  A quick play in Onshape later, I had modified the track frame and drive cog to have a channel for the BBs and a cover to keep too much dirt from getting in.  A couple of 4-hour prints later, we had a pair of these:

They go together like this:

And spin beautifully freely:

Onshape

This year’s robot is a milestone for us in terms of the amount of 3D printing that has gone into it.

It started last year with the discovery that you could design really complicated things (that worked!) when we made custom mecanum wheels.

So this year we designed tracks, brackets, nerf dart shooter and even thrust bearings!  And all of this has been done in Onshape.

Onshape is a web based parametric CAD package.  It seems that the hobbyist community mostly uses Fusion 360, but for us that was a non-starter because it is Windows only, and I no longer have *any* Windows machines.

Onshape works completely in your web browser and stores its data in the cloud, so I can drop into it for a quick design session from work or at home without having to install anything.  And with their android app, also while commuting on the train.

They have a plugin system too for people to write custom tools when the existing ones are insufficient.  This year we used Nut Pocket (which generates pockets for nuts, so that you can use standard fasteners in your 3D prints) and Bevel Gear Generator (which generated our main drive gears).  

Assembly!

It’s always amazing how much you learn when you actually attempt to assemble the robot.  There’s always something you forgot, no matter how careful you were in design…  A few examples from this build:

  • Fixed missing pull-up resistors on one of the I2C lines.
  • We got lucky and this cable was exactly the right length!  All the others needed work, though.
  • Not enough clearance for the screen connector.
  • Voltage/current monitors too close together.
  • Lower-profile capacitors, so that it actually fits in the case!

The Frame

The last couple of weekends, I’ve been working on the least sexy part of the robot – the mounting frame.  As has been mentioned, the space inside the robot is VERY tight this year, so making everything fit is a real challenge.

We need to fit in:

  • The Pi and its (not quite) HAT
  • 2 x Motor controllers
  • Servo controller board
  • IMU
  • Screen
  • 2 x Battery monitors
  • 2 x PSUs
  • Amplifier
  • Speaker

All in a space of 94 × 83 × 89 mm.  And we need to think about thermal management.  It looks like we’ll have to mount the batteries externally!

Our solution is this 3D-printed frame.  It holds the circuit boards vertically (for good convection cooling) and puts the Pi and its connections at the back (the back is removable for easy access).

All the other little boards are mounted on the reverse of the Pi mounting plate (hidden in the photo).  The whole thing lifts out of the robot if we need access to one of the boards buried near the bottom.

Tracks!

Having seen various tracked robots on Thingiverse, and especially this amazing one, I thought we should try to implement Wall-E’s tracks ourselves.

We could have gone with a simple rubber band or timing belt (and in retrospect that would have been MUCH easier), but I really fancied seeing how far I could push 3D printed parts.

So I had a long browse through Thingiverse looking at lots of track designs and started to draw up my own.  The FPV rover design had an interesting idea for fine adjustment – it used two different sizes of 3D-printed pin to join the track links together, making the whole track slightly tighter or looser as needed.

In the end I settled on a design which had sprocket wheels mounted on either side of a supporting frame (to avoid nasty torques on the frame).  Obviously the layout of the sprocket wheels on the frame had to match the ‘real’ Wall-E, but I decided to make the sprocket teeth larger (and therefore stronger).

Then the track elements needed designing.  I went with a design where the links between the track sections form the raised treads, and the sprocket teeth sit in a deep well that doesn’t protrude through the other side.  Like this:

A matching pin is shown too.  After a few trial-and-error prints to fine-tune the pin diameter and well depth, we got something that worked.  And then we needed to print about 36 of them per set of tracks (3 × 4-hour printing sessions).

The final problem was how to connect these to the motor.  We wanted a fair bit of speed, so I’d ended up buying Pololu motors with a 4:1 gearbox.  Having seen these run, I was a bit worried about the high speed, so I wanted to gear them down a touch.  I found a bevel gear generator plugin in Onshape and ended up with this:

And that worked!

In fact, running these is slightly terrifying – I’m fairly sure that if you got your finger in there it’d get a nasty nip…

PiWars 2019!

It was awesome to hear about the launch of PiWars 2019, and we loved the space theme.  That led to a week of lunchtime brainstorming – what famous space exploration robots are there?  We ended up discounting most of those as being a bit too spindly to 3D print.  So what else could we do with the theme?  What robots are there in space films?  And Wall-E was the obvious winner:

  • tracked – so uneven ground would not be a problem
  • boxy – so shape should be printable

But: the PiWars rules limit the width of robots, and Wall-E has a fairly square footprint – and a big chunk of that width is taken up by the tracks.  So were we going to be able to fit all the electronics into the body?

That has turned into an interesting challenge.  We were fairly sure we were going to use similar electronics to last year’s, but we were also sure that we would have to reduce their size.  Which means designing a custom PCB…

So the challenges we’re going to face:

  • Making it look like Wall-E – arguably the most important thing!
  • Mechanical design – 3D printing tracks to look like Wall-E’s is going to be hard.
  • Making the electronics fit into the tiny body – ideally with extra servos to animate Wall-E  🙂
  • Finding places to mount sensors at the right height for the challenges.
  • Figuring out how to mount the attachment hardware.
  • Plus all the unexpected stuff we haven’t spotted yet!

More Peripherals

Following the posts on servos and distance sensors, I thought I’d talk about the other peripherals we’re adding to Tigerbot.

A screen is an under-rated part of a PiWars robot.  It’s really handy not to have to cart a laptop around with you between events, and to have a way to check that the robot is in the mode you think it is (ask me how we know!).  We found this little 128×64 pixel screen on eBay, based on the SSD1306, and Adafruit has a lovely tutorial on how to use it.

It can be controlled over either I2C or SPI (just set the pattern of resistors on the back).  With this, you can write your code to have a menu of “modes” (one for each event), switch between them using buttons on your controller, and display the mode the robot thinks it’s in on the screen.  No more laptop on the day!
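
As a sketch of that idea (not our actual controller code – displayLine stands in for whatever SSD1306 driver call your library provides, and the button wiring is elided):

package main

import "fmt"

// Mode pairs a menu label with the routine for one event.
type Mode struct {
	Name string
	Run  func()
}

var modes = []Mode{
	{"Straight line", func() { /* event code */ }},
	{"Minimal maze", func() { /* event code */ }},
}

var current int

// displayLine stands in for the real SSD1306 driver call that draws
// a line of text on the screen.
func displayLine(s string) { fmt.Println("OLED:", s) }

// nextMode is wired to a controller button; it cycles through the
// menu and shows the selection on the screen.
func nextMode() {
	current = (current + 1) % len(modes)
	displayLine(modes[current].Name)
}

// selectMode is wired to another button; it starts the chosen event.
func selectMode() { modes[current].Run() }

func main() {
	displayLine(modes[current].Name)
	nextMode() // simulate a button press
	selectMode()
}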

Another handy peripheral is an IMU.  This is a combination gyro, accelerometer, barometer and thermometer, all in one.  Of most interest to us is the gyro.  It’s a rate gyro – it tells you how fast you are rotating (and NOT your absolute orientation) – and it’s a 3-axis device, reporting rotation around the X, Y and Z axes.  To use it, you generally have to calibrate it first: with the robot still and stationary, take readings from each axis for a while and record the output.  These are your zero readings, and all future readings from the gyro need the zero readings subtracted.  The zero readings can vary with battery voltage and temperature, so be sure to re-calibrate just before you use it!  You can then turn the rate into absolute rotation by taking lots of readings and integrating them over time (see the sketch after this list).

What use is a gyro?  There are a couple of obvious events that could use one:

  • Straight line speed test – here you’re trying to keep the robot pointed in the same direction all the way to the end.
  • Minimal maze – for checking that your turns are exactly 90 degrees (if you’re using wheel rotations for this, how do you know the wheel hasn’t slipped on the surface?).
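
Here’s a minimal sketch of the calibrate-then-integrate idea (readGyroZ is a stand-in for the real IMU driver call; assume it returns the Z-axis rotation rate in degrees per second):

package main

import (
	"fmt"
	"time"
)

// readGyroZ is a stand-in for the real IMU driver call.
func readGyroZ() float64 { return 0 }

// calibrate averages readings while the robot is stationary to find
// the gyro's zero offset.  Re-run this just before each event!
func calibrate(n int) float64 {
	var sum float64
	for i := 0; i < n; i++ {
		sum += readGyroZ()
		time.Sleep(10 * time.Millisecond)
	}
	return sum / float64(n)
}

func main() {
	zero := calibrate(200) // ~2 seconds of readings, robot held still

	// Integrate (rate - zero offset) over time to get a heading.
	heading := 0.0
	last := time.Now()
	for i := 0; i < 100; i++ {
		time.Sleep(10 * time.Millisecond)
		now := time.Now()
		heading += (readGyroZ() - zero) * now.Sub(last).Seconds()
		last = now
	}
	fmt.Printf("heading: %.1f degrees\n", heading)
}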

Chassis V2

After some testing (read: repeatedly trying to make it do the minimal maze!) we’ve realised that the chassis is very stiff.  That means that usually only three wheels are touching the ground, which makes our turns (and, ahem, straight lines – Shaun) more variable than they should be.

The solution is to saw the chassis in half 🙂  We’ve separated the front and back into separate sections and joined them with a hinge, so that each end can twist slightly relative to the other.  We’ve added limiters so that it can only twist up to 10 degrees, to stop the obstacle course breaking the robot!

Here’s the print in progress:

And with the rest of the robot installed into it:

Initial testing indicates that the new twisty chassis works better, so that makes me feel much better about totally rebuilding the robot just 2 weeks before the event!