A better PIO program for PWM decode

One of the things that the BLDC controller needs to do is to decode a PWM signal from an AS5048A Hall effect sensor. I had been using two RP2040 PIO programs to measure both the high time and the interval, but I found it a little awkward to interleave reads from two FIFOs (and, with 4 motors, it was using all the PIO resources).

After a bit of head scratching, I came up with this program that reads both the high time of the PWM and the interval and sends them in one 32-bit “struct” on the FIFO. The main reason that I wanted to use two programs before was to avoid the possibility of getting out of sync if I alternated high/interval on the same FIFO.

What if the CPU was reading the interval but thought it was the high time, or vice versa? Packing them into one 32-bit value neatly solves that problem. The trick is the “IN” instruction: it’s normally used to read from pins, but it can read from another register instead. Using IN to fetch 16 bits from the counter register (X) shifts 16 bits of the counter into the ISR register, ready to be sent. With the “autopush” feature configured at 32 bits, the PIO automatically pushes the packed value after the second “IN” instruction. The only gotcha with this approach is that the counters are limited to 16 bits, which is enough to handle roughly 1.5ms intervals. The AS5048A uses a 1ms interval, so that’s just about right for my application.
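
(To put a number on that: each counting loop below takes three PIO instructions per decrement so, assuming the state machine runs at the full 125MHz system clock, a 16-bit count covers 65536 × 3 / 125MHz ≈ 1.57ms before it wraps.)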

.wrap_target
  mov x, !NULL             ; x = 0xffffffff

loop_high:                 ; do {
  jmp x-- cont_loop_high   ;   x--
cont_loop_high:            ;
  nop                      ;   nop to match number of cycles below
  jmp pin loop_high        ; } while pin is high

  in x, 16                 ; Copy 16 bits of counter into ISR

loop_low:                  ; do {
  jmp x-- cont_loop_low   ;   x--
cont_loop_low:             ;
  jmp pin exit_loop_low    ;   if pin is high: break
  jmp loop_low             ; } while 1

exit_loop_low:
  in x, 16                 ; Copy 16 bits of counter into ISR
.wrap

The high time and the interval can then be decoded as follows:

uint32_t high = 0xffff - (raw_pio_output>>16);       // high-time count (pushed first, so upper 16 bits)
uint32_t invl = 0xffff - (raw_pio_output & 0xffff);  // full-interval count (high + low time)
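
For that packing to work, the state machine’s input shift register needs to shift left (so the first value ends up in the top 16 bits) with autopush set to fire at 32 bits. I haven’t shown the setup code here, but with the C SDK it would look roughly like this — a minimal sketch, with the program name (pwm_decode), generated header and init function as placeholders rather than the real names from my repo; GPIO 18 is the sensor pin used elsewhere in this series:

#include "hardware/pio.h"
#include "pwm_decode.pio.h"  // header generated by pioasm from the program above (name assumed)

static const uint PWM_PIN = 18;  // AS5048A PWM output

void pwm_decode_init(PIO pio, uint sm) {
    uint offset = pio_add_program(pio, &pwm_decode_program);
    pio_sm_config c = pwm_decode_program_get_default_config(offset);

    sm_config_set_jmp_pin(&c, PWM_PIN);           // pin tested by the "jmp pin" instructions
    sm_config_set_in_shift(&c, false, true, 32);  // shift left, autopush after 32 bits
    sm_config_set_clkdiv(&c, 1.0f);               // run at the full system clock

    pio_sm_init(pio, sm, offset, &c);
    pio_sm_set_enabled(pio, sm, true);
}

// Then, in the control loop, one blocking read gives the packed value:
//   uint32_t raw_pio_output = pio_sm_get_blocking(pio, sm);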

The PIO assembler file is here on Github.

BLDC controller phase 4: moving to C

While I had got a motor moving in MicroPython, I found I hit a wall with the garbage collector causing multi-millisecond pauses.

Green signal high showing a multi-millisecond pause. Multiple PWM pulses go by on the yellow signal while Python is snoozing!

A BLDC controller needs to update its model at at least 1kHz (and preferably more), so a multi-ms pause at random times is out of the question. I tried a few approaches to avoid GC but, even using the “compile-to-machine-code” options in MicroPython, I couldn’t seem to control it. Beyond that, you’re only one small mistake away from reintroducing GC by accidentally allocating something on the heap… If I’m going to be one small mistake away from blowing my foot off, I’d rather work in C, where that’s perfectly normal 🙂

Porting the code to C went very smoothly:

  • The Pico C SDK is wonderful, with excellent documentation. Probably the smoothest C SDK bring-up that I’ve ever experienced. Along with copious examples, there’s even a GUI tool to generate a skeleton project that has a hello-world for each peripheral that you intend to use.
  • I translated the PIO programs from Python syntax to native PIO assembler and the C SDK made it easy to build that into the project.
  • Translating the driver code itself, I did a fairly straight port for the initial version, but I opted to roll my own fixed-point arithmetic instead of defaulting to floating point. For example, rather than converting the raw PWM angle value (which ranges from 0 to 2¹²−1) into another unit, I stuck with 2¹² as my base for angles. That means that wrapping to 0-360 degrees can be handled by a bitwise AND with 2¹²−1 rather than an expensive modulo operation (sketched just below).
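
As a rough illustration of the sort of thing I mean (just a sketch — these names aren’t from the actual repo):

#include <stdint.h>

// Angles stay in the sensor's native 0..4095 range (2^12 steps per revolution).
#define ANGLE_MASK 0x0fffu  // 2^12 - 1

// Wrap an angle back into 0..4095. For a power-of-two range a bitwise AND
// does the job, even for negative inputs, so no modulo is needed.
static inline uint16_t wrap_angle(int32_t a) {
    return (uint16_t)(a & ANGLE_MASK);
}

// Example: advance a pole angle by 90 degrees (1024 of our 4096 "degrees").
uint16_t quarter_turn_ahead(uint16_t pole_angle) {
    return wrap_angle((int32_t)pole_angle + 1024);
}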

And what was the end result? The C code was significantly faster. So fast that my trick of toggling a pin while the code was actively running didn’t seem to be working at first. I had to zoom in on the oscilloscope to catch it!

Equivalent C code, blink and you’ll miss it!

Once I started work on the C code, I created a Github repo; the initial commit of the C code is here. The initial commit was still a prototype; it:

  • Measured the angle of the motor.
  • Multiplied up to get the angle relative to the magnetic poles in the motor.
  • Added an offset, controlled by a pair of pushbuttons.
  • Drove the motor PWMs with a sine wave at the offset phase.

Still, it was enough to prove that C was fast and that the motor could be driven efficiently by correctly controlling that phase angle.

BLDC motor phase 3: finding the limits of MicroPython

With a motor position sensor and motor drivers in hand, I set to work actually implementing the “real” control algorithm. I was expecting MicroPython to be “too slow” for this kind of work but, initially, it seemed to be working well, so I thought I’d push it as far as I could…

The key control loop of the field oriented control (FOC) algorithm that I had in mind is as follows:

  • Measure the angle of the wheel.
  • Convert that to an angle relative to the magnetic poles of the motor.
  • To rotate in one direction, add ~90 degrees to that pole angle.
  • To rotate in the other direction, subtract ~90 degrees from that pole angle.
  • Drive the coils to create a magnetic field at the adjusted angle.
  • Vary the strength of the drive to vary the torque (and hence the speed).

What do those steps actually look like?

Measuring the angle of the wheel starts with the PIO programs in phase 2. Once the PIO programs are configured:

sm0 = rp2.StateMachine(0, measure_high_time, freq=125_000_000,  in_base=pin18, jmp_pin=pin18)
sm1 = rp2.StateMachine(4, measure_interval, freq=125_000_000, in_base=pin18, jmp_pin=pin18)

Reading the PIO values looks like this in MicroPython:

    # Read the interval from the measure_interval PIO
    # sm1.get() blocks until the result is ready.
    #
    # PIO program sends 0xffffffff minus the count so 
    # we need to undo that...
    ivl = 0xffffffff - sm1.get()

    while True:
        # Get the "on" time from the measure_high_time PIO.
        high = 0xffffffff - sm0.get()

        # Extract out the angle measurement; which should be 
        # between 0 and 4095.  The 4119 and 15 come from the
        # datasheet of the sensor.
        duty = high*4119//ivl - 15

        # angle in degrees would be (duty * 360) / 4096
        ...
        

Once we have the wheel angle, we need to convert it to an angle relative to the coils/magnets in the motor. This accounts for the fact that a BLDC motor has three wires wound through it (let’s call them A, B, C) but they are not just wound into 3 separate electromagnets at 120 degrees. Instead, there are many small electromagnets around the circumference of the motor: first an A coil, then a B coil, then a C, and so on, ABCABC… The rotor has a similar (but different) number of permanent magnets attached. The overall effect is that, to spin the motor through 360 degrees, we need to cycle power to the 3 coils N times instead of just once. It’s as if the motor is “geared down”.

With that waffly explanation out of the way(!), in code it’s very simple: we just multiply the angle by half the number of “poles” in the motor, 11 in my case. I chose to keep representing angles as 0-4095 because powers of 2 are generally easier to work with. We also need to add a calibration offset to account for the alignment of the sensor on the motor:

pole_angle = (duty * 11 + calibration_offset) % 4096
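
(So, for example, with a zero calibration offset, a quarter-turn of the wheel — duty = 1024 — gives pole_angle = 1024 × 11 mod 4096 = 3072: the electrical angle has already swept through two and three-quarter full cycles.)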

OK, now for an easy step; depending on which direction we want to rotate, add or subtract 90 degrees. In my prototype I just hard coded it. We add 1024 because the “degrees” I’m using run from 0-4095:

pole_angle = (pole_angle + 1024) % 4096

Why 90 degrees? It has been worked out that, when two magnets (in our case the permanent magnets in the rotor and the electromagnets we’re about to drive) are at 90 degrees to each other, the torque between them is at its maximum. And, with energy being conserved, maximum torque means doing maximum work to drive the motor round and minimal work making heat in the coils. We need to keep moving the magnetic field so that it is always 90 degrees ahead of the permanent magnets in order to maximise torque and minimise heat build-up.
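
(For the physics-minded: the torque on a magnetic dipole in a field is τ = m × B, so its magnitude goes as the sine of the angle between the two fields — maximal at 90 degrees, zero when they line up.)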

OK, we’ve got an angle that we want to drive the magnetic field to, how do we do that? The literature makes it sound very difficult and I suspect that there are much more complex algorithms that take more factors into account, but the basic idea is to drive the three inputs of the motor with 3 sine wave voltages, each 120 degrees out of phase with each other. That energises the three coils in a way that the overall voltage sums to zero but, by varying the phase of the three inputs together, we can rotate the magnetic field.

In microcontroller land, we can’t really alter the voltage, but we can use PWM, which is good enough. At the top of the program we set up 3 PWM pins. It’s important that the PWMs run in phase with each other and, unfortunately, MicroPython doesn’t expose that feature, so we need to poke directly at the PWM registers…

import machine
from machine import Pin, PWM

# Define our PWM pins
pwms = [PWM(Pin(n)) for n in [14, 15, 16]]

# Set the PWM frequency.
f = 40000
for pwm in pwms:
    pwm.freq(f*2)  # *2 because phase-correct PWM halves the frequency

# Define constants for the PWM registers that we need to poke.
# These come from the RP2040 datasheet.
PWM_BASE = 0x40050000
PWM_EN = PWM_BASE + 0xa0
PWM_INTR = PWM_BASE + 0xa4

# Note: CH0 and CH7 are the right banks for pins 14-16.
CH0_CSR = PWM_BASE + 0x00
CH0_CTR = PWM_BASE + 0x08
CH7_CSR = PWM_BASE + 0x8c
CH7_CTR = PWM_BASE + 0x94

# Disable all PWMs.
machine.mem32[PWM_EN] = 0
# Set phase correct mode on the two PWM modules that
# relate to pins 14-16.
machine.mem32[CH0_CSR] = machine.mem32[CH0_CSR] | 0x2
machine.mem32[CH7_CSR] = machine.mem32[CH7_CSR] | 0x2
# Reset the counters so that the PWMs start in-phase.
machine.mem32[CH0_CTR] = 0
machine.mem32[CH7_CTR] = 0
# Enable the PWMs together so they stay in sync.
machine.mem32[PWM_EN] = (1<<0) | (1 <<7)

Then, since the RP2040 has plenty of RAM but floating point is slow, I created a lookup table holding the value of sine(angle) for each fraction of the circle, with the output scaled to be a PWM duty value (MicroPython always uses 0-65535 for PWM values).

import math

lut_len = 4096
# Add a slight offset to compensate for the on delay of the 
# motor driver.
offset = 0.016
lut = [
    min(
        int(
            (
                (math.sin(i * 2 * math.pi / lut_len) + 1)/2 +
                 offset
            ) / 
            (1+offset) * 
            65535
        ),
        65535
    ) for i in range(lut_len)]

With the look up table pre-calculated, setting the PWM in the main loop is as simple as:

pwms[0].duty_u16(lut[angle%lut_len])
pwms[1].duty_u16(lut[(angle+(4096//3))%lut_len])
pwms[2].duty_u16(lut[(angle+(4096*2//3))%lut_len])

With all that glued together, and after much debugging (missing the divide-by-2 in the look-up table and having all the values wrap being my favourite!), it actually worked… but the motor juddered every few revs. (Sorry, I didn’t get a video of this stage.)

After trying a few things to debug it, I added a debug pin and toggled it high at the start of the loop and low at the end. That found the problem:

Most of the time, the debug pin (green) toggled on for a few microseconds each time the position sensor sent a pulse (yellow) but sometimes it was multiple milliseconds!

While debugging code by oscilloscope was new to me, it immediately made me think of garbage collection. Turns out that MicroPython’s GC takes a few ms to run, which is fine for a lot of tasks but not for BLDC control. I did try a few approaches to tame the GC (using the machine code decorators in MicroPython, for example) but I think the PWM library was doing allocations under the hood. To avoid that, I’d have needed to directly poke memory and wouldn’t be able to use any libraries without fear of introducing GC again. Not much fun! It was time to switch to the C SDK…

BLDC controller phase 2: adding feedback through the magic of PIOs

After my hello world of BLDC control I waited eagerly for some breakout boards to arrive. To get past the hello world stage I needed:

  • Some sort of position sensor. I chose the AS5048A Hall effect sensor because it was known to work well with SimpleFOC and the motor that I had bought came with the right sort of magnet to work perfectly with it.
  • A decent motor driver. Up to now I’d been using two L298Ns (to get 3 half bridges), which I had on hand, but that’s all there is to recommend them! At the time of ordering, TI seemed to have the best available stocks of motor drivers, so I picked up a DRV8313 breakout board. The DRV8313 is a dedicated three-phase motor driver capable of 2.5A and 60V, more than enough for my needs.

The position sensor arrived first and I set about connecting it to my Pico. There are two options for that: SPI, or PWM. I had planned to use SPI because it’s faster and the Pico has an SPI port. However, the pads on the breakout board for SPI were super-small and I didn’t have any wire on hand that was small enough(!) Maybe I should try the PWM option for now? But how to interface PWM to the Pico…?

The PWM output from the sensor sends an (approximately) 1kHz signal to the Pico and varies the “on” time of the signal depending on the angle of the wheel. To decode that, you need to measure the “on” time and the “off” time precisely.

Yellow trace is the PWM signal.

Now, the “straightforward” way to do that would be to set an interrupt on the pin and to use the microsecond clock to measure the time between changes in the signal in the interrupt handler. But the RP2040’s datasheet basically says you’re a clown if you don’t use the PIO peripheral for all your bit-banging needs. I don’t want Eben to think I’m a clown! So, I dove into PIO assembler…

Each PIO peripheral has several state machine cores, which are like mini CPUs that execute their very specialised instructions with very precise timing. The specialised instructions can do things like read the state of a pin, push some data to the main processor over a FIFO, or pull data from the main processor and push it out on one or more pins. In fact, one instruction can often do several of those things, all in one clock cycle. The trade off for this precision and specialisation is that each PIO block has only 32 words of instruction memory, shared between 4 cores!

So, what did I come up with? First I wrote a PIO program to measure the “on” time. How hard can it be? Just write a program that:

  • Waits for the pin to go high.
  • Increments a counter while high.
  • When pin goes low, push the counter to the CPU.

Waiting for a pin to change is very easy in the PIO, but incrementing a counter turned out to be a problem! The PIO has no “add” or “increment” instruction! 🤔

You can write the PIO code in MicroPython using a special syntax:

@rp2.asm_pio()
def measure_high_time():
    wrap_target()

    # Set x register to 0
    mov(x, null)

    # Wait for a HIGH.
    wait(1, pin, 0)

    label("high_loop")
    # ???? what goes here ????

    jmp(pin, "high_loop")

    mov(isr, x)
    push(noblock)

    # Not a real instruction, tells the PIO that, after
    # the push(), it should continue after the wrap_target().
    # Saving a jump is cool when you've only got 32 
    # instructions!
    wrap()

Scouring the PIO section of the RP2040 datasheet, it took me a while to see the answer, but in the end there was only one option. The only “arithmetic” operation that the PIO has is “decrement X and jump if X is non-zero”. If all we have is “decrement”, can we count down instead?

  • Set X to something large
  • Wait for pin to be high
  • While pin is high, decrement X (and somehow turn the unwanted jump into a no-op)
  • When pin goes low, send x to CPU

It took me a while to puzzle out how to set X to something large but I came up with this:

wrap_target()
# x = 0
mov(x, null)
# Decrement x, which wraps around to 0xffffffff
# x was 0 so the jump falls through.
jmp(x_dec, "wait")

# Wait for a HIGH.
label("wait")
wait(1, pin, 0)

label("high_loop")
# Decrement x; jump always fires because x is non-0.
jmp(x_dec, "cont_high_loop")
# The jump lands on the next instruction anyway, so we
# stay inside the loop.
label("cont_high_loop")
jmp(pin, "high_loop")

# Send x to the CPU.
mov(isr, x)
push(noblock)

wrap()

And it worked! The value sent to the CPU is 0xffffffff minus the count but that’s easily corrected.

I was able to adapt the approach to make a second PIO program that measures the full cycle time of the PWM (i.e. “on” time + “off” time). That was a little trickier because there’s no equivalent to “jmp(pin)” that loops while a pin is low. The code is here in case it’s useful.

Of course, as soon as I showed Lance my code, he Googled the problem and found someone else had an even neater solution. Turns out you can save a whole instruction(!) by using

mov(x, invert(null))

to set x to 0xffffffff directly. You live and learn!

Enabling a second I2C bus on the Pi 5

One of the problems that we had with our previous PiWars entry was unreliable I2C communication from the Pi to all our peripherals. One reason for that was that we had a lot of devices on the bus.

We’re using a Pi 5 this year and one of its advantages is that the GPIO header has been massively upgraded. It defaults to the same functions as the Pi 4, but it can be reconfigured to trade GPIO pins for extra I2C buses, PWMs and several other functions. So, this evening I had a go at enabling a second I2C bus and, after a bit of digging, it turned out to be very easy.

After reading the device tree documentation in /boot/overlays/README, right on the Pi, it turns out that there’s a pre-made configuration “overlay” for each peripheral that you might want to enable.

The overlays are all listed in that file along with their configuration flags. All I needed to do to enable the I2C2 bus was to add this line to /boot/config.txt and then reboot:

dtoverlay=i2c2-pi5,pins_12_13

Then, to check it was working, I put my oscilloscope on those pins and ran

sudo i2cdetect 2

That gave the output shown at the top of the article. The rise time looks slow, which I think is because it’s using the internal pull-up. I probably need to add an external pull-up resistor of the right value.

Along the way I found the pinctrl command helpful. It can be used to show the function and current state of the pins.

pinctrl funcs 0-27 # show the alternate functions of GPIO pins
pinctrl get 12-13 # get current state of pins 12 and 13

Driving a BLDC motor from zero to slow-and-janky in Python

Motor and Pico, ready for battle…

One of my personal goals for PiWars this year was to try to build a BLDC motor controller. Mainly because I find that kind of thing a fun challenge…

Why is controlling a BrushLess DC motor a challenge? It has no brushes(!) The brushes in a brushed motor physically implement the control algorithm for the coils, energising each coil in the motor in the correct sequence, in lockstep with the rotation of the motor. Controlling a brushless motor requires the controller to correctly energise the coils in lockstep with the rotation of the motor. This gives great flexibility (but also makes it easy to burn out a motor!).

Of course, we could buy a controller off the shelf, but the Pico is out and I wanted to use one in anger. It has been done before by the SimpleFOC project and my plan B was to “just use SimpleFOC” (but there’s less fun in using a library)!

Hello world

At the most basic level, I had the idea that field-oriented control (FOC) was about driving sine wave signals into the three terminals of the motor, 120 degrees out of phase with each other, and then rotating those sine waves through 360 degrees in lock-step. In turn, that rotates the magnetic field smoothly through 360 degrees and the motor’s rotor turns with it. There’s a lot more to it than that, but I didn’t have a rotary encoder when my motor arrived, so I thought “let’s just give that a try in MicroPython”:

  • Assign 3 GPIO pins to the 3 phases of the motor.
  • Make them all PWM outputs at, say, 20kHz and connect them to the motor through a (half-bridge) motor driver per pin.
  • Loop, incrementing x each PWM cycle…
    • Set PWM 1 to sin(x)
    • Set PWM 2 to sin(x+120 degrees)
    • Set PWM 3 to sin(x+240 degrees)

And… after some fiddling (and lots of debug prints to the console) something happened!

Writing this now that I’m a bit further on in the project, I know that controlling a motor like this is really bad(TM) because the rotor in the motor “catches up” to the magnetic field, which means that the field stops doing as much work on the rotor and it dumps all that electrical energy we’re feeding it into heat instead :-/ Still, as a hello world, it felt great!

If you’re going to try anything like this and you want to keep the magic smoke inside the motor, you’ll want a benchtop power supply with current limiter and an oscilloscope!

Soldering iron: hot; kettle: on; OpenCV… should finish compiling by PiWars 2024

After a few years where life, a non-silicon baby, and a global pandemic got in the way, the Tigerbot team has dusted off our soldering irons and put the kettle on while we compile OpenCV for PiWars 2024.

We all have our own reasons for entering PiWars:

  • Lance likes to have a pretext to build barely-legal nerf cannons.
  • Nell likes to be the only one whose event code actually works.
  • I find myself being drawn to the low-level stuff: building boards and programming microcontrollers… Didn’t the Pi Foundation release one of those of their very own? Well, that discounts reusing any of our old hardware, doesn’t it? We’ll have to rebuild all the fun bits around a Pico (or two) 😉

How are we getting on?

  • We’ve made some choices on theme and overall shape of the bot; we want to go back to 4 mecanum-style wheels this year since they were a lot of fun to drive on the Orange Tigerbot.
  • Lance has done a first pass at a CAD model for the bot and he’s started 3D printing. He’s planning to pull together a basic version of the bot with basic motors ASAP so Nell has something to work with.
  • Nell’s started on coding; repurposing our Golang controller code from last time and sketching some code for one of the events.
  • I’ve fallen down the motor control rabbit hole; I really like the idea of using brushless motors this year because it’s a challenge to try writing a motor controller for the Pico…

Docker part 2 – our use

For us, using Docker means:

  • We can build the code on our fast laptops and only deploy the built code to the robot’s Pi.
  • The deployed container works just the same on my Pi and on Shaun’s Pi.
  • We can package our build toolchain so that that too “just works” on my laptop and Shaun’s laptop.
  • The robot code and build toolchain can be pushed to the cloud for easy sharing between us.
  • If we have to rebuild an SD card on the day, it should be easy.
  • We don’t have to install OpenCV ourselves (someone else has already done the hard bit for us)!

So how do we actually get these benefits? You define a Docker container with a Dockerfile. This is a text file with a few commands used to set up the contents of the container. Our build container (more on that in a moment) has this Dockerfile:

# Start with a container that's already set up with OpenCV
# and do the builds in there.

FROM sgtwilko/rpi-raspbian-opencv:stretch-latest

RUN apt update
RUN apt install -y make gcc
RUN apt install -y wget
RUN wget https://dl.google.com/go/go1.10.linux-armv6l.tar.gz
RUN tar -C /usr/local -xzf go*.tar.gz

ENV PATH=$PATH:/usr/local/go/bin
ENV GOROOT=/usr/local/go/
ENV GOPATH=/go/
RUN apt install -y git

RUN mkdir -p $GOPATH/src/gocv.io/x/ && \
    cd $GOPATH/src/gocv.io/x/ && \
    git clone https://github.com/fasaxc/gocv.git

# Pre-build gocv to cache the package in this layer. That
# stops expensive gocv builds when we're compiling the controller.
RUN bash -c "cd $GOPATH/src/gocv.io/x/gocv && \
             source ./env.sh && \
             go build -v gocv.io/x/gocv"

RUN bash -c "cd $GOPATH/src/gocv.io/x/gocv && \
             source ./env.sh && \
             go build -v ./cmd/saveimage/main.go"

# Add the propeller IDE tools so we can extract the propman tool.
RUN wget https://github.com/parallaxinc/PropellerIDE/releases/download/0.38.5/propelleride-0.38.5-armhf.deb
RUN sh -c "dpkg -i propelleride-0.38.5-armhf.deb || true" && \
    apt-get install -y -f && \
    apt-get clean -y

RUN apt-get install -y libasound2-dev libasound2 libasound2-plugins

# Pre-build the ToF libraries

COPY VL53L0X_1.0.2 $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_1.0.2
COPY VL53L0X_rasp $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp
WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp
RUN API_DIR=../VL53L0X_1.0.2 make all examples

RUN mkdir -p $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller
WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller

This breaks down as:

  • Start with the Docker container by the sgtwilko organisation called rpi-raspbian-opencv, with the version stretch-latest (this gets us the latest version of Raspbian with OpenCV pre-installed).
  • Run apt to install compilation tools.
  • Set some environment variables.
  • git clone our fork of the gocv repo.
  • Pre-build gocv.
  • Install the Propeller IDE to get the propman tool (to flash the propeller with).
  • Pre-build the VL53L0X libraries.
  • Create the directory for the go-controller code to be mounted into.
  • Set the working directory to be where the go-controller code is mounted in.

A note about layers and caching: Docker containers build in layers – Docker caches the container image at each command in the build. If you rebuild a container, it will start from the latest cached image that hasn’t changed. So it pays to put the stuff that you won’t change early in the Dockerfile (like our build of OpenCV).

We use 2 different containers in our robot – a build container (above) and a deploy container.  The deploy container Dockerfile looks like this:

# Start with a container that's already set up with OpenCV
# and do the builds in there.

FROM tigerbot/go-controller-phase-1:latest as build

COPY go-controller/controller /go/src/github.com/tigerbot-team/tigerbot/go-controller/controller
COPY go-controller/copy-libs /go/src/github.com/tigerbot-team/tigerbot/go-controller/copy-libs

WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller

# Copy the shared libraries that the controller uses to a designated
# directory so that they're easy to find in the next phase.
RUN bash -c "source /go/src/gocv.io/x/gocv/env.sh && \
./copy-libs"

# Now build the container image that we actually ship by copying
# across only the relevant files. We start with Alpine since it's
# nice and small to start with but we'll be throwing in a lot
# of glibc-linked binaries so the resulting system will be a bit
# of a hybrid.

FROM resin/raspberry-pi-alpine:latest

RUN apk --no-cache add util-linux strace

RUN mkdir -p /usr/local/lib
COPY --from=build /usr/bin/propman /usr/bin/propman
COPY --from=build /lib/ld-linux-armhf.so* /lib
COPY --from=build /controller-libs/* /usr/local/lib/
COPY --from=build /usr/share/alsa /usr/share/alsa
COPY --from=build /go/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp/bin/* /usr/local/bin/
COPY go-controller/sounds /sounds
COPY --from=build /go/src/github.com/tigerbot-team/tigerbot/go-controller/controller /controller
COPY metabotspin/mb3.binary /mb3.binary
ENV LD_LIBRARY_PATH=/usr/local/lib

ENTRYPOINT []
CMD /controller

Which breaks down like this:

  • Grab the build container contents.
  • Start with the raspberry-pi-alpine container with tag latest from the resin organisation (a very stripped-down Linux distribution – the whole OS is 18MB).
  • Install the util-linux and strace binaries.
  • Copy built artifacts from the build container into this container.
  • Wipe the ENTRYPOINT (the command run when the container starts).
  • Set the command to run when the container starts to /controller.

Our build Makefile has these cryptic lines in it:

ifeq ($(shell uname -m),x86_64)
	ARCH_DEPS:=/proc/sys/fs/binfmt_misc/arm
endif

/proc/sys/fs/binfmt_misc/arm:
	echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:' | sudo tee /proc/sys/fs/binfmt_misc/register

This says: if we’re building on an x86_64 machine (i.e. our 64-bit Intel laptops), then put that magic string into /proc/sys/fs/binfmt_misc/register, which registers the qemu-arm-static binary as an ARM interpreter in the kernel (using the binfmt_misc kernel module). In other words, use the QEMU emulator to make this machine pretend to be ARM architecture while building.

We can now do all our development on Intel Linux laptops, build on the fast laptop, put the binaries into a deploy container and copy the container over to the Pi for execution. We can do the copy in a couple of ways. We can use docker save to output a tar file, which we copy over to the Pi and docker load into Docker there. The Makefile has:

install-to-pi: controller-image.tar
	rsync -zv --progress controller-image.tar pi@$(BOT_HOST):controller-image.tar
	ssh pi@$(BOT_HOST) docker load -i controller-image.tar

The other way is to docker push <imagename> the image to Docker Hub – this is cloud storage for Docker images. We can grab that from the cloud on the Pi with docker pull <imagename>, allowing us to grab and run the Docker image on ANY Pi (connected to a network and running the Docker daemon) – so I can easily grab and try out code that Shaun has built and pushed to Docker Hub, on my Pi at my home.

This setup is a reasonably advanced use of Docker and pretty similar to what we have in our day jobs (building an open source software project for deployment on different architectures).