Docker part 2 – our use

For us, using Docker means:

  • We can build the code on our fast laptops and only deploy the built code to the robot’s Pi.
  • The deployed container works just the same on my Pi and on Shaun’s Pi.
  • We can package our build toolchain so that it too “just works” on my laptop and Shaun’s laptop.
  • The robot code and build toolchain can be pushed to the cloud for easy sharing between us.
  • If we have to rebuild an SD card on the day, it should be easy.
  • We don’t have to install OpenCV ourselves (someone else has already done the hard bit for us)!

So how do we actually get these benefits?  You define a Docker image with a Dockerfile.  This is a text file containing a short series of commands that set up the contents of the image.  Our build container (more on that in a moment) is built from this Dockerfile:

# Start with a container that's already set up with OpenCV
# and do the builds in there.

FROM sgtwilko/rpi-raspbian-opencv:stretch-latest

RUN apt update
RUN apt install -y make gcc
RUN apt install -y wget
RUN wget https://dl.google.com/go/go1.10.linux-armv6l.tar.gz
RUN tar -C /usr/local -xzf go*.tar.gz

ENV PATH=$PATH:/usr/local/go/bin
ENV GOROOT=/usr/local/go/
ENV GOPATH=/go/
RUN apt install -y git

RUN mkdir -p $GOPATH/src/gocv.io/x/ && \
    cd $GOPATH/src/gocv.io/x/ && \
    git clone https://github.com/fasaxc/gocv.git

# Pre-build gocv to cache the package in this layer. That
# stops expensive gocv builds when we're compiling the controller.
RUN bash -c "cd $GOPATH/src/gocv.io/x/gocv && \
             source ./env.sh && \
             go build -v gocv.io/x/gocv"

RUN bash -c "cd $GOPATH/src/gocv.io/x/gocv && \
             source ./env.sh && \
             go build -v ./cmd/saveimage/main.go"

# Add the propeller IDE tools so we can extract the propman tool.
RUN wget https://github.com/parallaxinc/PropellerIDE/releases/download/0.38.5/propelleride-0.38.5-armhf.deb
RUN sh -c "dpkg -i propelleride-0.38.5-armhf.deb || true" && \
    apt-get install -y -f && \
    apt-get clean -y

RUN apt-get install -y libasound2-dev libasound2 libasound2-plugins

# Pre-build the ToF libraries

COPY VL53L0X_1.0.2 $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_1.0.2
COPY VL53L0X_rasp $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp
WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp
RUN API_DIR=../VL53L0X_1.0.2 make all examples

RUN mkdir -p $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller
WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller

This breaks down as:

  • Start with the Docker image from the sgtwilko organisation called rpi-raspbian-opencv, tag stretch-latest (this gets us the latest version of Raspbian with OpenCV pre-installed).
  • Run apt to install compilation tools.
  • Set some environment variables.
  • git clone our fork of the gocv repo.
  • Pre-build gocv.
  • Install the Propeller IDE to get the propman tool (used to flash the Propeller).
  • Pre-build the VL53L0X libraries.
  • Create the directory for the go-controller code to be mounted into.
  • Set the working directory to be where the go-controller code is mounted in.
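
How do we use it?  Roughly like this (a sketch: the image tag matches the FROM line of the deploy Dockerfile below, but the Dockerfile filename and the exact go build invocation are illustrative):

# Build the build-container image from the Dockerfile above
# (the -f filename here is illustrative).
docker build -t tigerbot/go-controller-phase-1 -f Dockerfile.build .

# Compile the controller inside that container, mounting the go-controller
# source from the host so the resulting binary lands back in go-controller/.
docker run --rm \
    -v "$PWD/go-controller:/go/src/github.com/tigerbot-team/tigerbot/go-controller" \
    tigerbot/go-controller-phase-1 \
    bash -c "source /go/src/gocv.io/x/gocv/env.sh && go build -o controller ."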

A note about layers and caching: Docker images build up in layers – Docker caches an intermediate image after each command in the build.  If you rebuild, it reuses those cached layers up to the first command whose inputs have changed and only re-runs the commands after that.  So it pays to put the stuff that you won’t change early in the Dockerfile (like our build of OpenCV).
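
You can see those layers (and the size each one adds) for yourself – for example, for the build image above:

# List the layers of the build image, one per Dockerfile command.
docker history tigerbot/go-controller-phase-1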

We use 2 different containers in our robot – a build container (above) and a deploy container.  The deploy container Dockerfile looks like this:

# Start with a container that's already set up with OpenCV
# and do the builds in there.

FROM tigerbot/go-controller-phase-1:latest as build

COPY go-controller/controller /go/src/github.com/tigerbot-team/tigerbot/go-controller/controller
COPY go-controller/copy-libs /go/src/github.com/tigerbot-team/tigerbot/go-controller/copy-libs

WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller

# Copy the shared libraries that the controller uses to a designated
# directory so that they're easy to find in the next phase.
RUN bash -c "source /go/src/gocv.io/x/gocv/env.sh && \
             ./copy-libs"

# Now build the container image that we actually ship by copying
# across only the relevant files. We start with alpine since it's
# nice and small to start with but we'll be throwing in a lot
# of glibc-linked binaries so the resulting system will be a bit
# of a hybrid.

FROM resin/raspberry-pi-alpine:latest

RUN apk --no-cache add util-linux strace

RUN mkdir -p /usr/local/lib
COPY --from=build /usr/bin/propman /usr/bin/propman
COPY --from=build /lib/ld-linux-armhf.so* /lib
COPY --from=build /controller-libs/* /usr/local/lib/
COPY --from=build /usr/share/alsa /usr/share/alsa
COPY --from=build /go/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp/bin/* /usr/local/bin/
COPY go-controller/sounds /sounds
COPY --from=build /go/src/github.com/tigerbot-team/tigerbot/go-controller/controller /controller
COPY metabotspin/mb3.binary /mb3.binary
ENV LD_LIBRARY_PATH=/usr/local/lib

ENTRYPOINT []
CMD /controller

Which breaks down like this:

  • This is a multi-stage build: the first FROM pulls in the build container above (naming it build) so that artifacts can be copied out of it; the COPY and RUN steps that follow copy in the controller binary we built and run copy-libs to gather the shared libraries it needs into one directory.
  • The second FROM starts the image we actually ship from the raspberry-pi-alpine container with tag latest from the resin organisation (a very stripped-down Linux distribution – the whole OS is 18MB).
  • Install the util-linux and strace binaries.
  • Copy the built artifacts from the build container into this container.
  • Wipe the ENTRYPOINT (the command run when the container starts).
  • Set the command to run when the container starts to /controller.
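
Building and running the deploy image then looks something like this (the image tag is our choice and the run flags are a sketch – the exact device access the controller needs depends on the hardware it drives):

# Build the shipping image from the multi-stage Dockerfile above
# (the tag is illustrative).
docker build -t tigerbot/go-controller .

# On the Pi: give the container broad access to the robot's hardware
# (I2C, GPIO, sound) and start the controller.
docker run --rm --privileged tigerbot/go-controller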

Our build Makefile has these cryptic lines in:

ifeq ($(shell uname -m),x86_64)
	ARCH_DEPS:=/proc/sys/fs/binfmt_misc/arm
endif

/proc/sys/fs/binfmt_misc/arm:
	echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:' | sudo tee /proc/sys/fs/binfmt_misc/register

This says – if we’re building on an x86_64 machine (i.e. our 64-bit Intel laptops), then put that magic string into /proc/sys/fs/binfmt_misc/register, which registers the qemu-arm-static binary as an ARM interpreter in the kernel (using the binfmt_misc kernel module).  In other words, use the qemu emulator to make this machine pretend to be an ARM machine while building.
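
A couple of things worth checking if you try this yourself (package and path names as on Debian/Ubuntu, where qemu-arm-static ships in the qemu-user-static package):

# Install the user-mode ARM emulator on the x86_64 build machine.
sudo apt-get install -y qemu-user-static

# Once the Makefile rule above has run, the registration shows up here.
cat /proc/sys/fs/binfmt_misc/arm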

We can now do all our development on Intel Linux laptops, build on the fast laptop, put the binaries into a deploy container and copy the container over to the Pi for execution.  We can do the copy in a couple of ways.  We can use docker save to output a tar file, copy that over to the Pi and docker load it into Docker there.  Our Makefile has:

install-to-pi: controller-image.tar
	rsync -zv --progress controller-image.tar pi@$(BOT_HOST):controller-image.tar
	ssh pi@$(BOT_HOST) docker load -i controller-image.tar
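
The controller-image.tar itself comes from docker save (a sketch – the real Makefile rule and image name may differ):

# Write the deploy image out as a tar file ready to rsync to the Pi
# (image tag illustrative).
docker save -o controller-image.tar tigerbot/go-controller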

The other way is to docker push <imagename> the image to Docker Hub – cloud storage for Docker images.  We can grab it from the cloud on the Pi with docker pull <imagename>, allowing us to grab and run the Docker image on ANY Pi (connected to a network and running the Docker daemon) – so I can easily grab and try out code that Shaun has built and pushed to Docker Hub on my Pi at my home.
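
That workflow is just a handful of commands (image name illustrative; docker push needs a docker login for an account that owns the repository):

# On the build machine: push the deploy image to Docker Hub.
docker push tigerbot/go-controller

# On any networked Pi running Docker: pull it down and run it.
docker pull tigerbot/go-controller
docker run --rm --privileged tigerbot/go-controller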

This setup is a reasonably advanced use of Docker and pretty similar to what we have in our day jobs (building an open source software project for deployment on different architectures).
