Docker part 2 – our use

For us, using Docker means:

  • We can build the code on our fast laptops and only deploy the built code to the robot’s Pi.
  • The deployed container works just the same on my Pi and on Shaun’s Pi.
  • We can package our build toolchain so that it, too, “just works” on my laptop and on Shaun’s laptop.
  • The robot code and build toolchain can be pushed to the cloud for easy sharing between us.
  • If we have to rebuild an SD card on the day, it should be easy.
  • We don’t have to install OpenCV ourselves (someone else has already done the hard bit for us)!

So how do we actually get these benefits?  You define a Docker container with a Dockerfile: a text file containing the commands used to set up the contents of the container.  Our build container (more on that in a moment) has this Dockerfile:

# Start with a container that's already set up with OpenCV
# and do the builds in there.

FROM sgtwilko/rpi-raspbian-opencv:stretch-latest

RUN apt-get update
RUN apt-get install -y make gcc
RUN apt-get install -y wget
RUN wget https://dl.google.com/go/go1.10.linux-armv6l.tar.gz
RUN tar -C /usr/local -xzf go*.tar.gz

ENV PATH=$PATH:/usr/local/go/bin
ENV GOROOT=/usr/local/go/
ENV GOPATH=/go/
RUN apt-get install -y git

RUN mkdir -p $GOPATH/src/gocv.io/x/ && \
    cd $GOPATH/src/gocv.io/x/ && \
    git clone https://github.com/fasaxc/gocv.git

# Pre-build gocv to cache the package in this layer. That
# stops expensive gocv builds when we're compiling the controller.
RUN bash -c "cd $GOPATH/src/gocv.io/x/gocv && \
             source ./env.sh && \
             go build -v gocv.io/x/gocv"

RUN bash -c "cd $GOPATH/src/gocv.io/x/gocv && \
             source ./env.sh && \
             go build -v ./cmd/saveimage/main.go"

# Add the propeller IDE tools so we can extract the propman tool.
RUN wget https://github.com/parallaxinc/PropellerIDE/releases/download/0.38.5/propelleride-0.38.5-armhf.deb
RUN sh -c "dpkg -i propelleride-0.38.5-armhf.deb || true" && \
    apt-get install -y -f && \
    apt-get clean -y

RUN apt-get install -y libasound2-dev libasound2 libasound2-plugins

# Pre-build the ToF libraries

COPY VL53L0X_1.0.2 $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_1.0.2
COPY VL53L0X_rasp $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp
WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp
RUN API_DIR=../VL53L0X_1.0.2 make all examples

RUN mkdir -p $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller
WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller

This breaks down as:

  • Start with the Docker container by the sgtwilko organisation called rpi-raspbian-opencv, with the tag stretch-latest (this gets us the latest version of Raspbian with OpenCV pre-installed).
  • Run apt-get to install compilation tools.
  • Set some environment variables.
  • git clone our fork of the gocv repo.
  • Pre-build gocv.
  • Install the Propeller IDE to get the propman tool (which we use to flash the Propeller).
  • Pre-build the VL53L0X libraries.
  • Create the directory for the go-controller code to be mounted into.
  • Set the working directory to be where the go-controller code is mounted in.

A note about layers and caching: Docker containers build in layers – Docker caches the container image after each command in the build.  If you rebuild a container, it reuses those cached layers up to the first command whose inputs have changed and only rebuilds from there.  So it pays to put the stuff that you won’t change early in the Dockerfile (like our build of OpenCV).
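As a purely illustrative sketch (not one of our real Dockerfiles), the idea is to keep the slow, stable steps at the top and the frequently-changing steps at the bottom:

# Illustrative only.  The slow, stable steps come first so their layers stay
# cached; the source copy comes last so that editing code only rebuilds the
# final layers.

FROM sgtwilko/rpi-raspbian-opencv:stretch-latest

# Rarely changes: cached after the first build.
RUN apt-get update && apt-get install -y make gcc git

# Changes on every edit: everything from here down is rebuilt each time.
COPY . /src
RUN make -C /src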

We use two different containers in our robot – a build container (above) and a deploy container.  The deploy container Dockerfile looks like this:

# Start with a container that's already set up with OpenCV
# and do the builds in there.

FROM tigerbot/go-controller-phase-1:latest as build

COPY go-controller/controller /go/src/github.com/tigerbot-team/tigerbot/go-controller/controller
COPY go-controller/copy-libs /go/src/github.com/tigerbot-team/tigerbot/go-controller/copy-libs

WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller

# Copy the shared libraries that the controller uses to a designated
# directory so that they're easy to find in the next phase.
RUN bash -c "source /go/src/gocv.io/x/gocv/env.sh && \
             ./copy-libs"

# Now build the container image that we actually ship by copying
# across only the relevant files. We start with alpine since it's
# nice and small to start with but we'll be throwing in a lot
# of glibc-linked binaries so the resulting system will be a bit
# of a hybrid.

FROM resin/raspberry-pi-alpine:latest

RUN apk --no-cache add util-linux strace

RUN mkdir -p /usr/local/lib
COPY --from=build /usr/bin/propman /usr/bin/propman
COPY --from=build /lib/ld-linux-armhf.so* /lib
COPY --from=build /controller-libs/* /usr/local/lib/
COPY --from=build /usr/share/alsa /usr/share/alsa
COPY --from=build /go/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp/bin/* /usr/local/bin/
COPY go-controller/sounds /sounds
COPY --from=build /go/src/github.com/tigerbot-team/tigerbot/go-controller/controller /controller
COPY metabotspin/mb3.binary /mb3.binary
ENV LD_LIBRARY_PATH=/usr/local/lib

ENTRYPOINT []
CMD /controller

Which breaks down like this:

  • Start from our build container (as a build stage) and use it to gather the controller binary and the shared libraries it needs (via the copy-libs script).
  • Start the shipped image from the raspberry-pi-alpine container with tag latest from the resin organisation (a very stripped-down Linux distribution – the whole OS is 18MB).
  • Install the util-linux and strace binaries.
  • Copy the built artifacts from the build container into this container.
  • Wipe the ENTRYPOINT (the command run when the container starts).
  • Set the command to run when the container starts to /controller.

Our build Makefile has these cryptic lines in it:

ifeq ($(shell uname -m),x86_64)
	ARCH_DEPS:=/proc/sys/fs/binfmt_misc/arm
endif

/proc/sys/fs/binfmt_misc/arm:
	echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:' | sudo tee /proc/sys/fs/binfmt_misc/register

This says: if we’re building on an x86_64 machine (i.e. our 64-bit Intel laptops), then write that magic string into /proc/sys/fs/binfmt_misc/register, which registers the qemu-arm-static binary as an ARM interpreter in the kernel (using the binfmt_misc kernel module).  In other words, use the QEMU emulator to make this machine pretend to be an ARM machine while building.

We can now do all our development on Intel Linux laptops, build on the fast laptop, put the binaries into a deploy container and copy the container over to the Pi for execution.  We can do the copy in a couple of ways.  The first is to use docker save to output a tar file, which we copy over to the Pi and docker load into Docker there.  Our Makefile has:

install-to-pi: controller-image.tar
	rsync -zv --progress controller-image.tar pi@$(BOT_HOST):controller-image.tar
	ssh pi@$(BOT_HOST) docker load -i controller-image.tar
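The controller-image.tar file itself comes out of docker save; a rule along these lines would produce it (this is just a sketch with an illustrative image name, not our exact Makefile):

controller-image.tar:
	docker save -o controller-image.tar tigerbot/go-controller:latest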

The other way is to docker push <imagename> the image to Docker Hub – cloud storage for Docker images.  We can then grab it from the cloud on the Pi with docker pull <imagename>, which lets us fetch and run the image on ANY Pi (connected to a network and running the Docker daemon) – so I can easily grab and try out code that Shaun has built and pushed to Docker Hub on my Pi at my home.
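In rough outline, that flow looks like this (the image name and run flags are illustrative rather than our exact commands):

# On the laptop: tag the freshly built image and push it to Docker Hub.
docker tag controller-image tigerbot/go-controller:latest
docker push tigerbot/go-controller:latest

# On any networked Pi running the Docker daemon: pull it and run it.
docker pull tigerbot/go-controller:latest
docker run --rm --privileged tigerbot/go-controller:latest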

This setup is a reasonably advanced use of Docker and pretty similar to what we have in our day jobs (building an open source software project for deployment on different architectures).

Docker part 1 – what is it?

Once again this year, we’ll be making use of Docker on the robot.  Docker is a Linux container system.  I’ll explain how we use Docker in the next post; this post concentrates on introducing Docker.

Containers are a way of segregating processes.  A Docker container has its own filesystem, separate from the host’s (though you can mount part of the host’s filesystem into the container if you want to).  Processes running inside the container cannot see processes running outside it (either on the host or in other containers).  With an appropriate network plugin, it is possible to set up separate networking for each container too (e.g. so that you can run a web application on port 80 in multiple containers on the same host).  The only things shared between the host and the containers are the host hardware and the Linux kernel.

One way of looking at Docker (if you’re used to Python) is that it’s virtualenv on steroids.  Virtualenv gives you a local Python environment that you can install libraries into without affecting the system-wide Python environment (and potentially breaking other packages).  A Docker container gives you a whole filesystem to do stuff in that’s independent of all the other containers and the host.  An example: if you have code built to run on CentOS, you can install it (and all its dependencies) in a CentOS container and it’ll just work on a Raspbian host.  Or code which worked on an old version of Raspbian but doesn’t work on the latest.  It makes your code totally independent of what’s on the host, so long as the host is running the Docker daemon.  So I no longer need to be careful about how I set up my SD cards for my robot – as long as they have Docker, the container will run just the same – which makes collaborating on different Pis much easier.  You never run into “well, it worked on my machine” problems.
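To make that isolation concrete (the image and paths here are just for illustration), you can start a shell in a container and share a single host directory with it:

# The container gets its own filesystem; only /data comes from the host
# (mounted read-only), and processes inside can't see the host's processes.
docker run --rm -it -v /home/pi/shared:/data:ro resin/rpi-raspbian:stretch /bin/bash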

Power monitoring part 2

Wall-e wouldn’t be Wall-e without his screen; it’s a critical part of his character!

We’ve had the screen working with test programs over SPI for a while but one of our voltage sensors was a dud. That meant we hadn’t been able to get the iconic “CHARGE LEVEL” screen working in full.

Today I swapped out the voltage sensor and added a screen update loop to the code.

Then all of a sudden there’s Wall-e! He feels real now!

Thrust Bearings

We discovered in testing that the main gear of the robot was being pushed away from the motor bevel gear far enough that it was causing the gears to slip under high load (e.g. turning the robot quickly on carpet).  It was giving our heading-hold code trouble too, which would affect our accuracy in the autonomous events.  Here’s a video demonstrating the problem – listen for the clicking noise at start and stop:

The usual solution to this problem is a thrust bearing – normal bearings are good for taking load in a radial direction (like supporting the weight of the robot through an axle), whereas a thrust bearing is good for loads along the axle.

A quick Google showed that suitable bearings would cost 9 pounds each, and we’d have to modify the track frame and drive cog to use them.  At that point Shaun asked, “Why don’t you just print one?”  Why not?  A quick browse of Thingiverse turned up a couple of examples, suggesting airsoft BBs as the ball bearings.  A quick play in Onshape later and I had modified the track frame and drive cog to have a channel for the BBs and a cover to prevent too much dirt getting in.  And a couple of 4-hour prints later, we had a pair of these:

They go together like this:

And spin beautifully freely:

Onshape

This year’s robot is a milestone for us in terms of the amount of 3D printing that has gone into it.

It started last year with the discovery that you could design really complicated things (that worked!) when we made custom mecanum wheels.

So this year we designed tracks, brackets, a nerf dart shooter and even thrust bearings!  And all of this has been done in Onshape.

Onshape is a web-based parametric CAD package.  It seems that the hobbyist community mostly uses Fusion 360, but for us that was a non-starter because it is Windows-only, and I no longer have *any* Windows machines.

Onshape works completely in your web browser and stores its data in the cloud, so I can drop into it for a quick design session from work or at home without having to install anything.  And, with their Android app, even while commuting on the train.

They have a plugin system too, for people to write custom tools when the existing ones are insufficient.  This year we used Nut Pocket (which generates pockets for nuts, so that you can use standard fasteners in your 3D prints) and Bevel Gear Generator (which generated our main drive gears).

Power monitoring

Since we’re using unprotected LiPo batteries, which would be seriously (even explosively) damaged by over-discharge, we’ve worked some I2C voltage and power monitors into our bot this year.

We’re using these INA219 boards in-line with the battery cables to measure voltage, current and power.

Out of the box, the sensors can read the bus voltage (i.e. the potential difference between ground and the IN- connection).  To get them to read current and power, you need to program the calibration register.

One gotcha we hit was that the bus voltage register is not “right aligned”: some of the low-order bits are used for status flags, so you have to take the register value, shift it 3 bits to the right to extract the voltage, and then multiply by 4mV to scale it.
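Here’s a sketch of that conversion in Go; readReg16 is a stand-in for whatever 16-bit I2C register read your library provides, not a function from our actual code:

package power

// busVoltageLSB is the scale factor once the flag bits are shifted out:
// each count of the INA219 bus voltage register is 4mV.
const busVoltageLSB = 0.004 // volts per count

// busVolts converts the raw bus voltage register (0x02) into volts.
// readReg16 stands in for an I2C helper that reads a 16-bit register.
func busVolts(readReg16 func(reg byte) (uint16, error)) (float64, error) {
	raw, err := readReg16(0x02)
	if err != nil {
		return 0, err
	}
	// The low 3 bits are status flags, not voltage: shift them away,
	// then scale by 4mV per count.
	return float64(raw>>3) * busVoltageLSB, nil
}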

With that out of the way and the calibration register programmed, we now have sensible-looking readings from the battery pack that is powering the Pi:

A: 7.95V <nil> A: 0.459A <nil> A: 3.615W <nil>
A: 7.94V <nil> A: 0.431A <nil> A: 3.420W <nil>
A: 7.97V <nil> A: 0.424A <nil> A: 3.339W <nil>

And we should be able to alarm if the voltage of the pack drops too low.  (We have a two-cell pack for the Pi, so anything less than 6V would mean that our pack was being damaged.)
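A minimal sketch of what that check could look like (assuming a readVolts function that returns the pack voltage; the one-second polling interval is arbitrary):

package power

import (
	"log"
	"time"
)

// minPiPackVolts assumes the Pi pack is 2S LiPo: 2 cells x 3.0V is our floor.
const minPiPackVolts = 6.0

// watchPiPack polls the pack voltage and shouts if it gets dangerously low.
func watchPiPack(readVolts func() (float64, error)) {
	for range time.Tick(time.Second) {
		v, err := readVolts()
		if err != nil {
			log.Println("Failed to read pack voltage:", err)
			continue
		}
		if v < minPiPackVolts {
			log.Printf("LOW BATTERY: %.2fV, shut down soon!", v)
		}
	}
}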

Since we had some strange power-related gremlins last year, we split the motor and Pi power so that the motors are powered by a completely separate battery pack.  That means that we have two INA219s; one for each pack.