Giving Wall-e a screen

Wall-e wouldn’t be complete without a screen.  We’re using this 128×128 colour OLED screen.

After enabling SPI using raspi-config and wiring the screen up to the SPI bus (plus a couple of GPIOs for its reset and data/command pins), we were able to get it working with the fbtft kernel driver, which exposes the screen as a standard framebuffer device.

Figuring out the colour map

I hadn’t worked with the framebuffer before but it turned out to be fairly simple to use.  Basically, it exposes the screen as a special type of file; if you open that file and write a couple of bytes to it, it updates a pixel on the screen and then moves the cursor to the next pixel.  Once you’ve written 128 pixels, it moves to the next line.  You can use the seek operation to move the cursor to a different place in the file, which is the same as moving the cursor to a different place on screen.
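
For example, putting a single pixel on screen boils down to a seek and a two-byte write.  Here’s a minimal sketch (the packing of the two colour bytes, lo and hi, is described next):

// setPixel writes one 16-bit pixel to the framebuffer at (x, y).
// A minimal sketch: lo and hi are the two pre-packed colour bytes
// described below.
func setPixel(f *os.File, x, y int, lo, hi byte) error {
	const width = 128
	offset := int64((y*width + x) * 2) // 2 bytes per pixel
	if _, err := f.Seek(offset, 0); err != nil { // 0 = seek from file start
		return err
	}
	_, err := f.Write([]byte{lo, hi})
	return err
}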

This particular screen supports 16-bit colour, with 5 bits for red, 6 bits for green and 5 for blue, so the process for writing a colour to the screen is something like this:

  • Calculate your red, green and blue intensity.
  • Scale red and blue to the range 0-31 (i.e. 5 bits of precision).
  • Scale green to 0-63 (i.e. 6 bits).
  • Pack the bits into 16 bits – rrrrrggggggbbbbb – then break those 16 bits up into two bytes: rrrrrggg and gggbbbbb.
  • Write those two bytes to the address of the pixel: first the gggbbbbb byte and then the rrrrrggg byte (see the sketch below).
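
In Go, that packing recipe looks like this (a sketch; the full drawing code below does the same thing inline):

// rgb565 packs 8-bit red, green and blue values into the two bytes
// the screen expects, low (gggbbbbb) byte first.
func rgb565(r, g, b uint8) (lo, hi byte) {
	r5 := r >> 3 // scale 0-255 down to 0-31 (5 bits)
	g6 := g >> 2 // scale 0-255 down to 0-63 (6 bits)
	b5 := b >> 3 // scale 0-255 down to 0-31 (5 bits)
	hi = (r5 << 3) | (g6 >> 3)    // rrrrrggg
	lo = ((g6 & 0x07) << 5) | b5  // gggbbbbb
	return lo, hi
}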

Since we’re writing our code in golang, I searched around for a golang drawing library and found the gg library.
As a prototype, I used that to draw a mock-up of Wall-e’s screen and then scanned the resulting gg Image, extracting the pixels and writing them to the frame buffer in the 16-bit format:

The code for the above looks like this:

import (
	"fmt"
	"os"
	"time"

	"github.com/fogleman/gg"
)

// headingLock, headingEstimate, lock and DrawWarning are defined
// elsewhere in the package.

func drawOnScreen() {
	// Open the frame buffer.
	f, err := os.OpenFile("/dev/fb1", os.O_RDWR, 0666)
	if err != nil {
		panic(err)
	}

	// Loop, simulating a change to battery charge every half second.
	charge := 0.0
	for range time.NewTicker(500 * time.Millisecond).C {
		// Create a drawing context of the right size
		const S = 128
		dc := gg.NewContext(S, S) 
		dc.SetRGBA(1, 0.9, 0, 1) // Yellow

		// Get the current heading
		headingLock.Lock()
		j := headingEstimate
		headingLock.Unlock()

		// Move the current origin over to the right.
		dc.Push()
		dc.Translate(60, 5)
		dc.DrawString("CHARGE LVL", 0, 10)

		// Draw the larger power bar at the bottom. Colour depends on charge level.
		if charge < 0.1 {
			dc.SetRGBA(1, 0.2, 0, 1)
			dc.Push()
			dc.Translate(14, 80)
			DrawWarning(dc)
			dc.Pop()
		}

		dc.DrawRectangle(36, 70, 30, 10)

		for n := 2; n < 13; n++ {
			if charge >= (float64(n) / 13) {
				dc.DrawRectangle(38, 75-float64(n)*5, 26, 3)
			}
		}

		dc.Fill()

		dc.DrawString(fmt.Sprintf("%.1fv", 11.4+charge), 33, 93)

		dc.SetRGBA(1, 0.9, 0, 1)

		// Draw the compass
		dc.Translate(14, 30)
		dc.Rotate(gg.Radians(j))
		dc.Scale(0.5, 1.0)
		dc.DrawRegularPolygon(3, 0, 0, 14, 0)
		dc.Fill()

		dc.Pop()

		charge += 0.1
		if charge > 1 {
			charge = 0
		}

		// Copy the colours over to the frame buffer.
		var buf [128 * 128 * 2]byte
		for y := 0; y < S; y++ {
			for x := 0; x < S; x++ {
				c := dc.Image().At(x, y)
				r, g, b, _ := c.RGBA()    // 16-bit pre-multiplied values
				rb := byte(r >> (16 - 5)) // Red has 5 bits
				gb := byte(g >> (16 - 6)) // Green has 6 bits
				bb := byte(b >> (16 - 5)) // Blue has 5 bits

				// The index arithmetic swaps x and y and flips y,
				// rotating the image to suit the screen's mounting.
				buf[(127-y)*2+(x)*128*2+1] = (rb << 3) | (gb >> 3) // rrrrrggg byte
				buf[(127-y)*2+(x)*128*2] = bb | (gb << 5)          // gggbbbbb byte
			}
		}
		_, err = f.Seek(0, 0)
		if err != nil {
			panic(err)
		}

		lock.Lock()
		_, err = f.Write(buf[:])
		lock.Unlock()
		if err != nil {
			panic(err)
		}
	}
}


The Frame

The last couple of weekends, I’ve been working on the least sexy part of the robot – the mounting frame.  As has been mentioned, the space inside the robot is VERY tight this year, so making everything fit is a real challenge.

We need to fit in:

  • The Pi and its (not quite) Hat
  • 2 x Motor controllers
  • Servo controller board
  • IMU
  • Screen
  • 2 x Battery monitors
  • 2 x PSUs
  • Amplifier
  • Speaker

All in a space of 94 × 83 × 89mm.  And we need to think about thermal management.  Looks like we’ll have to mount the batteries externally!

Our solution is this 3D printed frame.  It holds the circuit boards vertically (for good convection cooling) and puts the Pi and its connections at the back, which is removable for easy access.


All the other little boards are mounted on the reverse of the Pi mounting plate (hidden in the photo).  The whole thing lifts out of the robot if we need access to one of the boards buried near the bottom.

Tour of the main PCB

As mentioned in my previous post, this year we needed (an excuse) to learn KiCad and build a custom PCB.  Thankfully, we did succeed in soldering it up, despite the tiny pitch on some of the components.

Picture of the board

The PCB divides into a few parts.  I expect you’ll all recognise the Pi header in the top left.  Above that, in yellow on the annotated image, we have the SPI peripherals: the screen and the IMU (which we use mainly for its gyroscope).

Annotated board
Yellow: peripheral connectors; Pink: Parallax Propeller; Green: Time-of-flight sensor connectors; Red: Isolation chips

Below the header, in pink, we have the Parallax Propeller chip, a fast microcontroller that we use to decode the signals from the motor encoders.  Each motor can put out 200k pulses per second, which isn’t practical to handle from the Pi’s GPIO pins because Linux can’t service that many interrupts per second.
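
For flavour, the decoding itself is only a tiny state update per pulse – the hard part is keeping up with the pulse rate.  Here’s a conceptual sketch in Go of what the Propeller firmware does for each encoder (illustrative only):

// quadratureStep updates a position count given the previous and
// current levels of the two encoder channels.  Counting only the
// edges on channel A still gives both direction and distance.
func quadratureStep(pos int64, prevA, a, b bool) int64 {
	if a != prevA { // edge on channel A
		if a == b {
			pos-- // B leads A: rotating one way
		} else {
			pos++ // A leads B: rotating the other way
		}
	}
	return pos
}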

To the right, in yellow, we have connectors for the “noisy” off-board components.  These sit over their own ground plane, so that, if we want to, we can drive them from a completely isolated power supply. From top to bottom:

  • “noisy” 5v power
  • motor driver control 1
  • motor encoder 1
  • motor driver control 2
  • motor encoder 2
  • servo controller
  • 2 x power monitors

To bridge the gap between the microcontroller and the noisy world of the motors, we have a pair of ISO7742 chips (in red).  These provide two input and two output signals, which are level shifted between 3.3v and 5v and isolated through an internal capacitive barrier.  Unlike an optoisolator, they were super-simple to use, requiring only 3.3v and 5v power and grounds, a couple of decoupling capacitors and some pull-ups on their enable pins.

Similarly, below that, we have an isolated i2c line for driving the servo board (which runs from the “noisy” 5v power supply).

In the bottom left (in green) we have 6 connectors for optical time-of-flight sensors.

The time of flight sensors, Propeller, servo controller and voltage monitors are all i2c controlled, which poses a couple of problems:

  • i2c busses tend to become unstable with more than a handful of devices (because each device adds capacitance to the bus, making it harder for any device to drive the bus)
  • we have no control over the addresses of many of the devices; for example, all the time-of-flight sensors use the same address.

To address those problems, we included an i2c multiplexer in the design (to the left of the Propeller), allowing us to switch any combination of devices on and off the bus.
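
Driving a mux like that is just a one-byte register write.  Here’s a sketch using the golang.org/x/exp/io/i2c package, assuming a TCA9548A-style mux at the common default address 0x70 (our actual part and address may differ):

import "golang.org/x/exp/io/i2c"

// selectMuxChannels connects the mux channels whose bits are set in
// mask to the bus and disconnects the rest.
func selectMuxChannels(mask byte) error {
	mux, err := i2c.Open(&i2c.Devfs{Dev: "/dev/i2c-1"}, 0x70)
	if err != nil {
		return err
	}
	defer mux.Close()
	// The mux has a single control register; each bit switches one
	// downstream channel onto the shared bus.
	return mux.Write([]byte{mask})
}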

Multiplexer schematic
Multiplexer

Despite having very little space to play with, we were able to squeeze in a bit of prototyping area, which we’ve used to address errata.  For example, I found that I’d missed a couple of pull-ups on the i2c port that the Propeller is attached to.  A bit of thin Kynar wire to the rescue:

Tracks!

Having seen various tracked robots on Thingiverse and especially this amazing one, I thought we should try and implement Wall-E’s tracks ourselves.

We could have gone with a simple rubber band or timing belt (and in retrospect that would have been MUCH easier), but I really fancied seeing how far I could push 3D printed parts.

So I had a long browse through Thingiverse, looking at lots of track designs, and started to draw up my own.  The FPV rover design had an interesting idea for fine adjustment – it used two different sizes of 3D printed pins to join the track links, making the whole loop slightly tighter or looser as needed.

In the end I settled on a design which had sprocket wheels mounted on either side of a supporting frame (to avoid nasty torques on the frame).  Obviously the layout of the sprocket wheels on the frame had to match the ‘real’ Wall-E, but I decided to make the sprocket teeth larger (and therefore stronger).

Then the track elements needed designing.  I went with a design that uses the links between the track sections as the raised treads, with each sprocket tooth sitting in a deep well so that it doesn’t protrude from the other side.  Like this:

A matching pin is shown too.  After a few trial-and-error prints to fine-tune the pin diameter and well depth, we got something that worked.  And then we needed to print about 36 of them per set of tracks (3 × 4-hour sessions of printing).

The final problem was how to connect these to the motor.  We wanted a fair bit of speed, so I’d ended up buying Pololu motors with a 4:1 gearbox.  Having seen these run, I was a bit worried about the high speed, so I wanted to gear them down a touch.  I found a bevel gear generator plugin in Onshape and ended up with this:

And that worked!

In fact running these is slightly terrifying – fairly sure if you got your finger in there it’d get a nasty nip…

More Peripherals

Following the posts on servos and distance sensors, I thought I’d talk about the other peripherals we’re adding to Tigerbot.

A screen is an under-rated part of a PiWars robot.  It’s really handy not to have to cart a laptop around with you between events, and to have a way to check that the robot is in the mode you think it is (ask me how we know!).  We found this little 128×64 pixel screen on eBay, based on the SSD1306, and Adafruit has a lovely tutorial on how to use it.

It can be controlled over either I2C or SPI (just set the pattern of resistors on the back).  With this, you can write your code to have a menu of “modes” (one for each event), switch between them using buttons on your controller, and display the mode the robot thinks it’s in on the screen.  No more laptop on the day!
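
In code, the menu can be as simple as a slice of modes and an index.  A minimal sketch (all names here, including the showOnScreen helper, are illustrative):

// Mode pairs an event name with its control loop.
type Mode struct {
	Name string
	Run  func()
}

var (
	modes   = []Mode{{Name: "Speed"}, {Name: "Maze"}, {Name: "Golf"}}
	current int
)

// showOnScreen stands in for drawing text on the OLED.
func showOnScreen(s string) { fmt.Println(s) }

// onNextButton advances the menu when a controller button is
// pressed and displays the new selection.
func onNextButton() {
	current = (current + 1) % len(modes)
	showOnScreen(modes[current].Name)
}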

Another handy peripheral is an IMU.  This is a combination gyro, accelerometer, barometer and thermometer, all in one.  Of most interest to us is the gyro.  This is a rate gyro: it tells you the rate of rotation (and NOT the absolute heading).  It’s a 3-axis device, reporting rotation around the X, Y and Z axes.  To use it, you generally have to calibrate it first: with the robot still and stationary, take readings from each axis for a while and record the average output.  These are your zero readings; all future readings from the gyro need the zero readings subtracted.  The zero readings can vary with battery voltage and temperature, so be sure to re-calibrate just before you use it!  Then you can turn the rate into absolute rotation by taking lots of readings and integrating them (see the sketch after this list).

What use is a gyro?  There are a couple of obvious events that could use one:

  • Straight line speed test
    • here you’re trying to keep the robot pointed in the same direction all the way to the end
  • Minimal maze
    • for checking that your turns are exactly 90 degrees (if you’re using wheel rotations for this, how do you know if the wheel has slipped on the surface?)
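
Here’s what that calibrate-then-integrate loop might look like in Go (a sketch; readGyroZ stands in for reading the Z-axis rate, in degrees per second, from the IMU):

// readGyroZ is a stand-in for reading the Z-axis rotation rate.
func readGyroZ() float64 { return 0 }

// calibrate averages the gyro output while the robot is stationary
// to find the zero reading.
func calibrate(samples int) float64 {
	sum := 0.0
	for i := 0; i < samples; i++ {
		sum += readGyroZ()
		time.Sleep(10 * time.Millisecond)
	}
	return sum / float64(samples)
}

// integrateHeading accumulates (rate - zero) * dt to turn rate
// readings into an absolute heading, in degrees.
func integrateHeading(zero float64) {
	heading, last := 0.0, time.Now()
	for {
		time.Sleep(10 * time.Millisecond)
		now := time.Now()
		heading += (readGyroZ() - zero) * now.Sub(last).Seconds()
		last = now
		_ = heading // feed this into the motor control loop
	}
}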

A Gopher in a Tiger?

Golang mascot, Renee French

In the past, we’ve used Python and C++ for our robots but this year we switched to Go.  Why the change?  It seemed like a good idea at the time…

To be honest, the main reason was that I signed up to lead the coding effort this year.  I haven’t had much C++/Qt experience (so it wasn’t easy for me to pick up last year’s code) but I’ve been working in Go in my day job for a couple of years;  I enjoy working with Go and the language has some features that are appealing for building robots:

  • “Naturally” written Go is just plain faster than “naturally” written Python (by some margin).
  • Go can take advantage of more than one core by running multiple goroutines at once (and the computer scientist in me can’t resist a bit of CSP). The normal Python interpreter is limited to one core.
  • It felt like a good choice because it sits at the right level, giving access to low-level primitives (like pointers and structs) for interfacing with C and hardware while also offering garbage collection and other modern features for rapid development.

I have found Go to be a good language to program a bot.  The biggest downside was that the library ecosystem is a bit less mature than Python or C(++).  That meant that getting the hardware driver layer of the bot together required quite some work:

  • We found that the Go wrapper for OpenCV (gocv) required a patch to work on the Pi.  (I found the patch in a forum post but I can’t dig it out to link to.)
  • We didn’t find a working Go driver for the VL53L0X time-of-flight sensors, so (after some false starts) we took the existing C wrapper that GitHub user cassou had already ported for the Pi and wrapped it in a Go library using CGo, Go’s C function call interface (a flavour of this is sketched after this list).
  • We ported a Python sample joystick driver for the DS4 to Go.  The Linux joystick interface turned out to be easy to access.
  • There were a few i2c libraries without a clear winner.  We ended up using golang.org/x/exp/io/i2c.
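
For anyone who hasn’t met CGo, a wrapper boils down to a C preamble plus calls through the generated C package.  This sketch is illustrative only – the stand-in C function is not the real VL53L0X driver:

package tof

/*
// The real wrapper #includes the ported driver's headers; this
// stand-in keeps the sketch self-contained.
static int c_read_range_mm(void) { return 1234; }
*/
import "C"

// ReadRangeMM calls across the Go/C boundary for a distance
// reading in millimetres.
func ReadRangeMM() int {
	return int(C.c_read_range_mm())
}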

While it made for some extra work, I find low-level bit banging quite fun, so it wasn’t much of a downside 🙂

Chassis V2

After some testing (read: repeatedly trying to make it do the minimal maze!) we’ve realised that the chassis is very stiff.  This means that usually only three wheels are touching the ground, which makes our turns (and, ahem, straight lines – Shaun) more variable than they should be.

The solution is to saw the chassis in half 🙂  We’ve separated the front and back into separate sections and joined them with a hinge, so that each end can twist slightly relative to the other.  We’ve added limiters to ensure it only twists up to 10 degrees, so that the obstacle course doesn’t break the robot!

Here’s the print in progress:

And with the rest of the robot installed into it:

Initial testing indicates that the new twisty chassis works better, so that makes me feel much better about totally rebuilding the robot just 2 weeks before the event!


3D printing

This year I’m lucky enough to have access to a 3D printer.  These things are amazing.  It is incredible to be able to design something in CAD and then have it in your hand the next day.

Our process has been to design the whole robot in a web-based CAD package (Onshape, in our case).  As an aside: OMG – web-based CAD!  I can’t believe it exists and is free!  Hat tip to Tom Oinn (@approxeng) for introducing me to the idea.

CAD allows you to see how the whole thing is going to fit together before you’ve spent a single penny on anything ‘real’.  Once you’re happy with the design, you can download an STL file (the 3D model of the part) and load it into your slicer software.  The slicer’s job is to turn the 3D model of the part into a list of movements of the print head (aka g-code).  It is here that you decide what the infill of the part will be and if you need any support material, etc, etc.

You then send the g-code file to the printer – in our case by copying it onto an SD card, though I’ve recently set up Octoprint on a spare Pi, which gives me a web server to control the printer (i.e. upload g-code files, start prints, etc.) and a webcam so I can keep an eye on prints while I’m at work.  Prints take HOURS.  Our V2 chassis took 12 hours to print – which is why being able to monitor prints from work is awesome.

Nothing beats being able to discuss and modify the design of a robot part at lunchtime with your team-mates, then kick off a print and be able to bring in the finished part the next morning to hand over and have them try it out on the robot that evening.

3D printer prices have dropped massively recently – my machine is a slightly more expensive one (a genuine Prusa i3 Mk2S kit, for those who care) but clones of this machine can be bought for £100 now!  Note that the cheaper kits often take more time to get “dialled in” than the more expensive ones – you need to decide if you are time-poor or cash-poor…

As for running costs, printer filament (usually PLA) costs about 25GBP per kilogram reel.  My slicer (slic3r) tells me how much filament will be used to print a part, and our biggest part (the chassis) used about 7GBP worth of filament.  I think we’ll end up using most of a reel for Tigerbot and 25GBP is cheap compared with all the electronic parts, and is MUCH cheaper than if you buy ready made parts (wheels, etc).  Speciality filaments like the rubbery TPU can be more expensive (we’re using TPU for the tyres).

Chasing motor gremlins

Not our motors

We spent a big chunk of last weekend trying to track down an issue with our motor driving logic.  The problem was that sometimes a fast change of direction would cause the i2c bus to die; writes from the Pi would fail and the bot would go crazy as a result.

We knew it was likely to be one of a couple of factors:

  • High current draw from the quick change in direction causing a brownout.
  • Motor switching causing interference/voltage spikes.

Unfortunately, as we don’t own an oscilloscope, it was hard to pinpoint the exact cause, so we resorted to magic capacitive pixie dust and software changes:

  • We added large reservoir capacitors to the power inputs of the various boards to provide a store of charge in case the power rail momentarily dropped.
  • We added small decoupling capacitors too to help filter any noise.

Those changes did seem to help but they didn’t eliminate the problem completely.  An extra layer of software changes seems to have done the trick:

  • We changed the i2c driver code to close and reopen the device file after a failure. The hope is that that resets the bus more thoroughly than simply retrying the write.
  • After John mentioned that he’d seen issues with it in the past, we took control of the GPIO pin attached to the Propeller’s reset pin and started actively driving it, rather than letting it be weakly pulled up by a resistor.
  • We beefed up our retry loop, with escalation (sketched after this list).  If it fails enough times, it resets the Propeller and reflashes it.  That takes about a second but it might just save us on the day!
  • We implemented a maximum ramp rate for the motors so that we change their speed a little slower.
  • We put the motor PWMs out-of-phase so that they don’t all start and stop on the same clock cycle.
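
The retry-and-escalate logic looks roughly like this – a sketch only, with stand-in functions in place of our real i2c and Propeller-flashing code:

func i2cWrite(buf []byte) error { return nil } // stand-in
func reopenI2CDevice()          {}             // stand-in
func resetAndReflashPropeller() {}             // stand-in

// writeWithRetry keeps trying an i2c write, reopening the device
// file after each failure and escalating to a Propeller reset and
// reflash if the bus stays dead.
func writeWithRetry(buf []byte) error {
	for attempt := 0; attempt < 10; attempt++ {
		if err := i2cWrite(buf); err == nil {
			return nil
		}
		// Close and reopen the device file; this resets the bus more
		// thoroughly than simply retrying the write.
		reopenI2CDevice()
		if attempt == 4 {
			// Still failing: reset and reflash the Propeller
			// (takes about a second).
			resetAndReflashPropeller()
		}
	}
	return errors.New("i2c write failed even after propeller reset")
}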

With all those changes in place, we’ve seen a few retries but it hasn’t escalated to a reset yet so, fingers crossed, the problem is fixed enough.

Peripherals – Servos

So two of the PiWars 2018 events suggest using servos to operate something: duck-shoot and golf.

Servos have been around for a long time and have a very simple interface.  About every 20ms, you need to send them a pulse, and that pulse needs to be between 1ms and 2ms long.  A pulse length of 1.5ms will cause the servo to move to the centre position; 1ms and 2ms correspond to the two ends of travel.  Note that some servos can move beyond these limits, and some can be damaged if you drive them beyond these limits!  If you fail to send a pulse every 20ms, the servo will power down (stop actively driving the motor to a particular position).
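
The position-to-pulse-width arithmetic is simple.  A sketch in Go:

// servoPulse converts a position in [-1, 1] to the pulse width
// described above: 1.5ms at the centre, 1ms and 2ms at the two
// ends of travel.
func servoPulse(pos float64) time.Duration {
	if pos < -1 {
		pos = -1
	}
	if pos > 1 {
		pos = 1
	}
	us := 1500 + 500*pos // pulse width in microseconds
	return time.Duration(us) * time.Microsecond
}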

See https://www.raspberrypi.org/forums/viewtopic.php?t=46771 for more details and Pi driven solutions.

In Tigerbot, our servos are driven by the Propeller Hat.  This is a microcontroller with 8 cores.  It takes some of the load off the Pi and, because it isn’t running an operating system, it can *guarantee* pulse timings.  The Pi sends desired servo positions to the Propeller over I2C, and the Propeller generates the pulses.  Here’s a demo: