More last-minute redesigns

It's always funny that however much you plan and design digitally, you still find issues when putting it all together.

In this case I’d been working to assemble and test the nerf gun on a fake “bot front” – a duplicate of the front parts of the robot where the accessories connect, bolted to a wooden board. It turned out the hexagonal axle I’d designed to mount the gun to the robot was not long enough. OK, simple enough fix in CAD, and reprint. The axle halves were then epoxied together and fitted up to the robot. And then we found we weren’t getting enough friction between the press-fit “cup” at the end of the axle and the servo output shaft.

This time the fix was to create a separate cup that screws down onto the servo shaft for a tight and reliable fit, with a hexagonal cutout to take the end of the axle. That fitted perfectly onto the original, too-short axles.

Here’s a shot of the front of the nerf gun on the test stand. The cup is behind the motor cable right at the centre of the shot.

And while we’re here, here’s a side view of the nerf gun on the test stand:

Here you can see the Pi, the gun-mounted camera and the PCA9685 servo driver, which we talk to over I2C to set servo positions.
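
For anyone curious what “talking over I2C” involves: the PCA9685 just needs its prescaler set for a 50Hz servo frame, and then a 12-bit on/off count written per channel to set the pulse width. Here’s a minimal sketch of that in Go (not our actual controller code – it assumes the golang.org/x/exp/io/i2c package, the chip’s default address of 0x40, and the register layout from the PCA9685 datasheet):

package main

import (
	"time"

	"golang.org/x/exp/io/i2c"
)

const (
	pca9685Addr = 0x40 // default I2C address of the PCA9685
	regMode1    = 0x00
	regPrescale = 0xFE
	regLed0OnL  = 0x06 // each channel uses 4 registers from here
)

// setServoPulse sets channel ch to a pulse of the given width in
// microseconds, assuming the chip is running a 50Hz (20ms) frame.
func setServoPulse(d *i2c.Device, ch, pulseUS int) error {
	off := pulseUS * 4096 / 20000 // 4096 counts per 20ms frame
	reg := byte(regLed0OnL + 4*ch)
	return d.WriteReg(reg, []byte{
		0x00, 0x00, // pulse starts at count 0
		byte(off & 0xff), byte(off >> 8), // and ends at count "off"
	})
}

func main() {
	d, err := i2c.Open(&i2c.Devfs{Dev: "/dev/i2c-1"}, pca9685Addr)
	if err != nil {
		panic(err)
	}
	defer d.Close()

	// Prescale = 25MHz / (4096 * 50Hz) - 1 ≈ 121. The prescale register
	// can only be written while the chip is asleep.
	// (Error handling elided for brevity.)
	d.WriteReg(regMode1, []byte{0x10})   // sleep
	d.WriteReg(regPrescale, []byte{121}) // 50Hz frame
	d.WriteReg(regMode1, []byte{0x20})   // wake, enable auto-increment
	time.Sleep(time.Millisecond)

	// Centre the elevation servo on channel 0 with a 1.5ms pulse.
	if err := setServoPulse(d, 0, 1500); err != nil {
		panic(err)
	}
}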

Motor failure

Hardware failure is a fact of life with an event like PiWars. This time one of the gun flywheel motors suddenly stopped working. Swapping the ESC from the other motor, I found that the fault stayed with the motor. Putting a multimeter on the windings showed that one of the phases had gone open circuit (which suggests that a wire has snapped somewhere inside the motor).

Fortunately, I have a policy of always buying one more of each unique part than we actually need for the robot. That came about after we had a gearbox failure just before a previous PiWars.

In this case, that policy was especially handy since these motors (EMax MT2206) have been in use since 2018 (we used them on Tigerbot and Wall-E), and it turns out you can no longer get them in the UK – we’d originally got them from TME in Poland, and since Brexit, TME will not sell to UK customers.

Anyway, after swapping in the spare, the nerf gun is operational again. Phew!

Remote control

Shaun’s got remote control mode working. There are two remote control modes – direct and drive-by-wire.

In direct mode, the joystick inputs are translated directly into motor speed settings.
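
For a mecanum-wheeled robot, “translated directly” boils down to the standard mecanum mixing sums. Here’s a rough sketch of the idea in Go (the textbook formula, not a copy of our controller code – the signs depend on which way round your rollers are fitted):

package drive

import "math"

// MixMecanum converts stick demands into four wheel speeds for a
// mecanum drive. forward, strafe (to the right) and rotate (clockwise)
// are each in the range -1..1; the results are scaled back into -1..1.
func MixMecanum(forward, strafe, rotate float64) (fl, fr, bl, br float64) {
	fl = forward + strafe + rotate
	fr = forward - strafe - rotate
	bl = forward - strafe + rotate
	br = forward + strafe - rotate

	// If any wheel demand exceeds full speed, scale them all down
	// together so the ratios (and hence the direction) are preserved.
	max := 1.0
	for _, v := range []float64{fl, fr, bl, br} {
		if a := math.Abs(v); a > max {
			max = a
		}
	}
	return fl / max, fr / max, bl / max, br / max
}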

In drive-by-wire mode, the robot maintains a model of what the user wants from it – e.g. the heading, speed, strafing, etc. – and then applies those “set-points” through a PID controller to make the robot’s actual heading, speed and strafing match the desired values.

This makes the robot more resilient to disturbances – you’ve probably seen those videos where someone kicks Boston Dynamics’ Atlas robot and it recovers? It’s a bit like that. In our case, the disturbances are likely to come from uneven floors, loose gravel or turntables on the obstacle course, roughness in the mecanum wheel rollers, etc.
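
For a flavour of how the set-points get applied (a simplified sketch, not our actual code): each control cycle we run something like the PID below with the desired heading as the set-point and the measured heading as the input, and feed its output back in as the rotation demand.

package drive

// PID is a bare-bones PID controller. A real heading controller would
// also wrap the error into the range -180..180 degrees.
type PID struct {
	Kp, Ki, Kd float64
	integral   float64
	lastErr    float64
}

// Update returns the correction to apply, given the set-point (what the
// driver asked for), the measurement (what the robot is actually doing)
// and the time step dt in seconds.
func (c *PID) Update(setPoint, measured, dt float64) float64 {
	e := setPoint - measured
	c.integral += e * dt
	deriv := (e - c.lastErr) / dt
	c.lastErr = e
	return c.Kp*e + c.Ki*c.integral + c.Kd*deriv
}

A disturbance (a kick from the turntable, a snagged roller) shows up as an error between the set-point and the measurement, and the controller drives it back towards zero.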

Anyway, here’s Shaun driving round the kitchen in direct mode:

Our favourite use of drive-by-wire came from Tigerbot (PiWars 2018), where the “disturbance” was the turntable. The robot maintained the desired heading, no matter what was going on underneath it. Jump to about 20s in to see the good bit.

Ladders

We’ve now got the ladders printed, attached the time-of-flight sensor and camera, and added the whole thing to the robot:

The time-of-flight sensor is this one. It gives an 8×8 “image” of distances. Hopefully, by mounting it alongside the camera, we can combine the image information with depth information.
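
To show the sort of combining we have in mind, here’s a sketch of mapping each of the 64 depth zones to a patch of camera pixels. The numbers are assumptions, not measurements: the sensor and camera are treated as co-located and aligned, the ToF field of view is taken as 45°×45°, and the camera as a Pi Camera v2 at 640×480 – real code would need a proper calibration.

package vision

import "math"

// Assumed fields of view, in degrees.
const (
	tofFOV  = 45.0 // the 8x8 ToF sensor covers roughly 45 x 45 degrees
	camHFOV = 62.2 // Pi Camera v2 horizontal field of view
	camVFOV = 48.8 // Pi Camera v2 vertical field of view
	imgW    = 640
	imgH    = 480
)

// ZoneToPixelBox returns the approximate pixel rectangle covered by ToF
// zone (zx, zy), each 0..7, so a depth reading can be attached to the
// part of the image it overlaps.
func ZoneToPixelBox(zx, zy int) (x0, y0, x1, y1 int) {
	zone := tofFOV / 8.0 // angular width of one zone
	ax0 := -tofFOV/2 + float64(zx)*zone
	ay0 := -tofFOV/2 + float64(zy)*zone

	x0 = angleToPixel(ax0, camHFOV, imgW)
	x1 = angleToPixel(ax0+zone, camHFOV, imgW)
	y0 = angleToPixel(ay0, camVFOV, imgH)
	y1 = angleToPixel(ay0+zone, camVFOV, imgH)
	return
}

// angleToPixel maps an angle from the optical axis to a pixel coordinate
// using a simple pinhole camera model, clamped to the image.
func angleToPixel(angleDeg, fovDeg float64, size int) int {
	f := float64(size) / 2 / math.Tan(fovDeg/2*math.Pi/180)
	p := float64(size)/2 + f*math.Tan(angleDeg*math.Pi/180)
	return int(math.Max(0, math.Min(p, float64(size)-1)))
}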

The sensor mount attaches to the ladders like a pantograph – the mechanical arrangement ensures that the sensor head continues to point at the same angle even when we raise the ladder. We’re hoping that putting the ladder up high means we can see more of the floor for the Minesweeper event.

Another mechanical bug was found here – we need the camera cable to be splittable into two because the cable runs through the bodywork. Parts are on order.

Firebot mechanical parts (4)

Integrating it all!

A robot isn’t really a robot until you bring all the parts together. And that’s also the point at which you realise that you’ve designed the parts so that it’s a real pain to assemble the thing…

For example, we found that the Pi covered the holes which attach the mounting rail to the chassis. While there is an order in which you can assemble the robot, once the battery, Pi and mounting rail are all in, it’s a massive pain if you need to get the Pi out again (for example to put a PCB HAT on it with connectors for all the things that need to plug into it). Shaun had to redesign how the mounting rail fits on the robot to be able to get it in and out easily:

The mounting rail now screws into semi-circular blobs mounted in the old mounting holes, and we can get the Pi in and out to work on the connector PCB:

And once you get it all together, you get to find out if it works as a whole. This is the robot performing a pre-programmed test pattern:

The next step is to get the cosmetic parts printed and fitted, and to get to work seriously on the software. And print the attachments, debug the code, etc. So much to do with just two weeks to go!

Firebot mechanical parts (3)

The nerf gun!

This is probably the best view to understand the operation of the gun – with the top shell removed.

The large area in the middle is shaped to allow a standard 6-shot nerf magazine to be inserted. Note the magazine release clip on the rear.

The servo with the arc-shaped actuator pushes the dart out of the magazine and into the flywheels at the front. Those are driven by standard RC brushless motors at 10K+ RPM. Once the pusher gets the dart between them, it is grabbed and accelerated out of the gun at silly speeds. We need to be careful to limit the speed so that the dart stays below the half-joule energy limit required to remain classed as a toy (see EN71).
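
To put a number on that (a back-of-envelope sum – the ~1g dart mass is an assumption, we haven’t weighed ours), rearranging E = ½mv² gives the maximum muzzle velocity:

package main

import (
	"fmt"
	"math"
)

func main() {
	const (
		maxEnergyJ = 0.5   // energy limit to stay classed as a toy
		dartMassKg = 0.001 // assumed ~1g foam dart
	)
	// E = 1/2 * m * v^2  =>  v = sqrt(2E/m)
	vMax := math.Sqrt(2 * maxEnergyJ / dartMassKg)
	fmt.Printf("max muzzle velocity ≈ %.0f m/s\n", vMax) // about 32 m/s
}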

The studs on the front of the gun unit have holes for mounting the Pi Camera, to allow autonomous aiming.

The hexagonal axle connects to a servo mounted on the robot’s accessory point to control elevation of the gun. The robot itself will rotate to traverse the gun.

Here’s a video demonstrating the nerf gun in action.

I’m operating the pusher servo by hand (I don’t have it hooked up to the servo driver yet). You can hear the flywheels grinding down the dart tip(!) because I’m too slow pushing it in – hopefully the servo will do it faster.

Firebot mechanical parts (2)

In the last post I talked about the design requirements. In this post I’m going to talk about what we’ve ended up with.

So – here are the (more or less) final mechanical parts:

The ladders hold a “sensor block” which mounts the forward-facing camera.

The front “bumpers” are removable with dovetails to lock them in place. This allows for easy changing of the accessories for different events.

The central section of the cab is removable to reveal the channel for storing barrels and mounting the Nerf gun. Here is the robot configured for Eco Disaster:

Those strange keyhole cutouts at the corners are for holding the magnetic rotary encoder PCBs in just the right spot to read the magnets mounted on the brushless motors that Shaun’s worked so hard on.

The chassis is split into front and rear parts, with a pivot between them, allowing the front and back wheels to twist relative to each other. This ensures that all four mecanum wheels touch the ground (for small amounts of unevenness). The pivot is restricted to about 10 degrees of movement. Mecanum wheels (when moving in some directions) rely on the forces produced by opposite wheels cancelling out – this can only happen if they’re all touching the ground. If one wheel isn’t touching the ground, you can get unintended rotations of the robot.

Here’s a view of just the chassis parts, with the pivot point highlighted in yellow. An M5 bolt, washer and nyloc nut go through that pivot to connect the chassis parts.

Firebot mechanical parts

So, while Shaun’s been working hard on the brushless motor controller, I’ve been playing with CAD, specifically Onshape.

An aside – Onshape is awesome: parametric CAD in the browser, so there’s no need to install anything and it works on any platform that can browse the web. There are even Android and iPhone apps so you can view models on the move (though with a small screen it’s a bit of a pain to make edits). I have a “free” account with Onshape, so all my models are freely available for anyone to browse, copy, etc. – I’ll post the link in a later post.

We had a lot of fun last time building Wall-E, so we wanted this year’s concept to be “cute”. The overall concept chosen was a low-poly Fire Engine. And in keeping with “cute”, we’re going to base it on the 1910-1930 Leyland Cub, like this:

Acabashi, CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0, via Wikimedia Commons https://commons.wikimedia.org/wiki/File:1931_Leyland_Cub_fire_engine_at_Hatfield_Heath_Festival_2017_2.jpg

Other choices which impacted the design:

  • We wanted the robot to have a central channel to store barrels in. Ideally we’d be able to move three barrels at once in the Eco Disaster event.
  • We wanted to use mecanum wheels – translating in any direction is fun, and particularly handy for the Hindenburg Disaster event.
  • Most of the events seem to be aimed at camera/image recognition, so we need a camera.
  • We’ve seen people have problems in past PiWars with robots built to the maximum dimensions, so we’re aiming a little smaller than that.

Docker part 2 – our use

For us, using Docker means:

  • We can build the code on our fast laptops and only deploy the built code to the robot’s Pi.
  • The deployed container works just the same on my Pi and on Shaun’s Pi.
  • We can package our build toolchain so that that too “just works” on my laptop and Shaun’s laptop.
  • The robot code and build toolchain can be pushed to the cloud for easy sharing between us.
  • If we have to rebuild an SD card on the day, it should be easy.
  • We don’t have to install OpenCV ourselves (someone else has already done the hard bit for us)!

So how do we actually get these benefits?  You define a Docker container with a Dockerfile.  This is a text file with a few commands used to set up the contents of the container.  Our build container (more on that in a moment) has this Dockerfile:

# Start with a container that's already set up with OpenCV
# and do the builds in there.

FROM sgtwilko/rpi-raspbian-opencv:stretch-latest

RUN apt update
RUN apt install -y make gcc
RUN apt install -y wget
RUN wget https://dl.google.com/go/go1.10.linux-armv6l.tar.gz
RUN tar -C /usr/local -xzf go*.tar.gz

ENV PATH=$PATH:/usr/local/go/bin
ENV GOROOT=/usr/local/go/
ENV GOPATH=/go/
RUN apt install -y git

RUN mkdir -p $GOPATH/src/gocv.io/x/ && \
    cd $GOPATH/src/gocv.io/x/ && \
    git clone https://github.com/fasaxc/gocv.git

# Pre-build gocv to cache the package in this layer. That
# stops expensive gocv builds when we're compiling the controller.
RUN bash -c "cd $GOPATH/src/gocv.io/x/gocv && \
             source ./env.sh && \
             go build -v gocv.io/x/gocv"

RUN bash -c "cd $GOPATH/src/gocv.io/x/gocv && \
             source ./env.sh && \
             go build -v ./cmd/saveimage/main.go"

# Add the propeller IDE tools so we can extract the propman tool.
RUN wget https://github.com/parallaxinc/PropellerIDE/releases/download/0.38.5/propelleride-0.38.5-armhf.deb
RUN sh -c "dpkg -i propelleride-0.38.5-armhf.deb || true" && \
    apt-get install -y -f && \
    apt-get clean -y

RUN apt-get install -y libasound2-dev libasound2 libasound2-plugins

# Pre-build the ToF libraries

COPY VL53L0X_1.0.2 $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_1.0.2
COPY VL53L0X_rasp $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp
WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp
RUN API_DIR=../VL53L0X_1.0.2 make all examples

RUN mkdir -p $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller
WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller

This breaks down as:

  • Start with the Docker container called rpi-raspbian-opencv from the sgtwilko organisation, at version stretch-latest (this gets us the latest version of Raspbian with OpenCV pre-installed).
  • Run apt to install compilation tools.
  • Set some environment variables.
  • git clone our fork of the gocv repo.
  • Pre-build gocv.
  • Install the Propeller IDE to get the propman tool (which we use to flash the Propeller).
  • Pre-build the VL53L0X libraries.
  • Create the directory for the go-controller code to be mounted into.
  • Set the working directory to be where the go-controller code is mounted in.

A note about layers and caching: Docker containers build in layers – Docker caches the container image after each command in the build.  If you rebuild a container, it will start from the last cached layer that hasn’t changed.  So it pays to put the stuff that you won’t change early in the Dockerfile (like our build of OpenCV).

We use two different containers on our robot – a build container (above) and a deploy container.  The deploy container’s Dockerfile looks like this:

# Start with a container that's already set up with OpenCV
# and do the builds in there.

FROM tigerbot/go-controller-phase-1:latest as build

COPY go-controller/controller /go/src/github.com/tigerbot-team/tigerbot/go-controller/controller
COPY go-controller/copy-libs /go/src/github.com/tigerbot-team/tigerbot/go-controller/copy-libs

WORKDIR $GOPATH/src/github.com/tigerbot-team/tigerbot/go-controller

# Copy the shared libraries that the controller uses to a designated
# directory so that they're easy to find in the next phase.
RUN bash -c "source /go/src/gocv.io/x/gocv/env.sh && \
./copy-libs"

# Now build the container image that we actually ship by copying
# across only the relevant files. We start with Alpine since it's
# nice and small to start with but we'll be throwing in a lot
# of glibc-linked binaries so the resulting system will be a bit
# of a hybrid.

FROM resin/raspberry-pi-alpine:latest

RUN apk --no-cache add util-linux strace

RUN mkdir -p /usr/local/lib
COPY --from=build /usr/bin/propman /usr/bin/propman
COPY --from=build /lib/ld-linux-armhf.so* /lib
COPY --from=build /controller-libs/* /usr/local/lib/
COPY --from=build /usr/share/alsa /usr/share/alsa
COPY --from=build /go/src/github.com/tigerbot-team/tigerbot/VL53L0X_rasp/bin/* /usr/local/bin/
COPY go-controller/sounds /sounds
COPY --from=build /go/src/github.com/tigerbot-team/tigerbot/go-controller/controller /controller
COPY metabotspin/mb3.binary /mb3.binary
ENV LD_LIBRARY_PATH=/usr/local/lib

ENTRYPOINT []
CMD /controller

Which breaks down like this:

  • Start from the build container above, copy in the built controller and the copy-libs script, and run copy-libs to gather the shared libraries the controller needs.
  • Then build the image we actually ship, starting from the raspberry-pi-alpine container with tag latest from the resin organisation (a very stripped-down Linux distribution – the whole OS is 18MB).
  • Install the util-linux and strace binaries.
  • Copy the built artifacts from the build container into this container.
  • Wipe the ENTRYPOINT (the command run when the container starts).
  • Set the command to run when the container starts to /controller.

Our build Makefile has these cryptic lines in it:

ifeq ($(shell uname -m),x86_64)
	ARCH_DEPS:=/proc/sys/fs/binfmt_misc/arm
endif

/proc/sys/fs/binfmt_misc/arm:
	echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:' | sudo tee /proc/sys/fs/binfmt_misc/register

This says: if we’re building on an x86_64 machine (i.e. our 64-bit Intel laptops), then put that magic string into /proc/sys/fs/binfmt_misc/register, which registers the qemu-arm-static binary as an ARM interpreter in the kernel (using the binfmt_misc kernel module).  In other words, use the QEMU emulator to make this machine pretend to be an ARM machine while building.

We can now do all our development on Intel Linux laptops, build on the fast laptop, put the binaries into a deploy container and copy the container over to the Pi for execution.  We can do the copy in a couple of ways.  One is to use docker save to output a tar file, copy that over to the Pi, and docker load it into Docker there.  The Makefile has:

install-to-pi: controller-image.tar
	rsync -zv --progress controller-image.tar pi@$(BOT_HOST):controller-image.tar
	ssh pi@$(BOT_HOST) docker load -i controller-image.tar

The other way is to docker push <imagename> the image to Docker Hub – cloud storage for Docker images.  We can then grab it from the cloud on the Pi with docker pull <imagename>, which lets us grab and run the Docker image on ANY Pi (connected to a network and running the Docker daemon) – so I can easily try out code that Shaun has built and pushed to Docker Hub, on my Pi at home.

This setup is a reasonably advanced use of Docker and pretty similar to what we have in our day jobs (building an open source software project for deployment on different architectures).

Docker part 1 – what is it?

Once again this year, we’ll be making use of Docker on the robot.  Docker is a Linux container system.  I’ll explain how we use Docker in the next post; this post will concentrate on introducing Docker.

Containers are a way of segregating processes.  A Docker container has its own filesystem, separate from the host’s (though you can mount part of the host’s filesystem into the container if you want to).  Processes running inside the container cannot see processes running outside the container (either on the host or in other containers).  If you use an appropriate network plugin, it is possible to set up networking for the container too (e.g. so that you can run a web application on port 80 in multiple containers on the same host).  The only things that get shared between the host and the containers are the host hardware and the Linux kernel.

One way of looking at Docker (if you’re used to Python) is that it’s virtualenv on steroids.  Virtualenv gives you a local Python environment that you can install libraries into without affecting the system-wide Python environment (and potentially breaking other packages).  A Docker container gives you a whole filesystem to do stuff in that’s independent of all the other containers and the host.  An example of this: if you have code built to run on CentOS, you can install it (and all its dependencies) in a container with CentOS and it’ll just work on a Raspbian host.  Or code which worked on an old version of Raspbian, but doesn’t work on the latest.  It makes your code totally independent of what’s on the host, so long as the host is running the Docker daemon.  So I no longer need to be careful about how I set up my SD cards for my robot – so long as they have Docker, the container will run just the same – which makes collaborating on different Pis much easier.  You never run into problems where “well, it worked on my machine”.