Assembly!


It's always amazing how much you learn when you actually attempt to assemble the robot.  There's always something you forgot, no matter how careful you were in design…

  • Fixed missing pull-up resistors on one of the I2C lines.
  • We got lucky and this cable was exactly the right length! All the others needed work though.
  • Not enough clearance for the screen connector.
  • Voltage/current monitors too close together.
  • Lower-profile capacitors so that it actually fits in the case!

Giving Wall-E a screen

Wall-E wouldn't be complete without a screen.  We're using this 128×128 colour OLED screen that works with the PiOLED kernel driver.

After enabling SPI using raspi-config and wiring the screen up to the SPI bus (plus a couple of GPIOs for its reset and data/command pins), we were able to get it working with the fbtft driver, which exposes the screen as a standard framebuffer device.

Figuring out the colour map

I hadn’t worked with the framebuffer before but it turned out to be fairly simple to use.  Basically, it exposes the screen as a special type of file; if you open that file and write a couple of bytes to it, it updates a pixel on the screen and then moves the cursor to the next pixel.  Once you’ve written 128 pixels, it moves to the next line.  You can use the seek operation to move the cursor to a different place in the file, which is the same as moving the cursor to a different place on screen.
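
In code, that boils down to a seek followed by a two-byte write.  Here's a minimal sketch (setPixel is just an illustrative helper; it assumes the screen shows up as /dev/fb1 and needs the os and io imports):

// setPixel writes one 16-bit pixel at (x, y): seek to the pixel's
// offset in the framebuffer file, then write its two bytes.
func setPixel(f *os.File, x, y int, pixel [2]byte) error {
	const width, bytesPerPixel = 128, 2
	offset := int64((y*width + x) * bytesPerPixel)
	if _, err := f.Seek(offset, io.SeekStart); err != nil {
		return err
	}
	_, err := f.Write(pixel[:])
	return err
}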

This particular screen supports 16-bit colour, with 5 bits for red, 6 bits for green and 5 for blue, so the process for writing a colour to the screen is something like this:

  • Calculate your red, green and blue intensity.
  • Scale red and blue to the range 0-31 (i.e. 5 bits of precision).
  • Scale green to 0-63 (i.e. 6 bits).
  • Pack the bits into 16 bits: rrrrrggggggbbbbb, then break those 16 bits up into two bytes: rrrrrggg and gggbbbbb.
  • Write those two bytes to the address of the pixel: first the gggbbbbb byte and then the rrrrrggg byte.
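
In Go, the packing step comes out something like this (a sketch; rgbTo565 is just our name for it, taking ordinary 8-bit channel values):

// rgbTo565 packs 8-bit red, green and blue values into the screen's
// 16-bit rrrrrggggggbbbbb format, returned low byte first.
func rgbTo565(r, g, b uint8) (lo, hi byte) {
	r5 := r >> 3 // scale 0-255 down to 0-31 (5 bits)
	g6 := g >> 2 // scale 0-255 down to 0-63 (6 bits)
	b5 := b >> 3 // scale 0-255 down to 0-31 (5 bits)
	hi = (r5 << 3) | (g6 >> 3) // rrrrrggg
	lo = (g6 << 5) | b5        // gggbbbbb
	return lo, hi
}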

Since we’re writing our code in golang, I searched around for a golang drawing library and found the gg library.
As a prototype, I used that to draw a mock-up of Wall-E's screen and then scanned the resulting gg Image, extracting the pixels and writing them to the frame buffer in the 16-bit format described above.

The code for the above looks like this:

import (
	"fmt"
	"os"
	"sync"
	"time"

	"github.com/fogleman/gg"
)

// Shared state: the rest of our code updates headingEstimate under
// headingLock; lock serialises access to the framebuffer.
var (
	headingLock     sync.Mutex
	headingEstimate float64
	lock            sync.Mutex
)

func drawOnScreen() {
	// Open the frame buffer.
	f, err := os.OpenFile("/dev/fb1", os.O_RDWR, 0666)
	if err != nil {
		panic(err)
	}

	// Loop, simulating a change to battery charge every half second.
	charge := 0.0
	for range time.NewTicker(500 * time.Millisecond).C {
		// Create a drawing context of the right size
		const S = 128
		dc := gg.NewContext(S, S) 
		dc.SetRGBA(1, 0.9, 0, 1) // Yellow

		// Get the current heading
		headingLock.Lock()
		j := headingEstimate
		headingLock.Unlock()

		// Move the current origin over to the right.
		dc.Push()
		dc.Translate(60, 5)
		dc.DrawString("CHARGE LVL", 0, 10)

		// Draw the larger power bar at the bottom. Colour depends on charge level.
		if charge < 0.1 {
			dc.SetRGBA(1, 0.2, 0, 1)
			dc.Push()
			dc.Translate(14, 80)
			DrawWarning(dc) // draw the warning icon (helper defined elsewhere)
			dc.Pop()
		}

		dc.DrawRectangle(36, 70, 30, 10)

		// Draw the stack of charge-level segments above the bar.
		for n := 2; n < 13; n++ {
			if charge >= (float64(n) / 13) {
				dc.DrawRectangle(38, 75-float64(n)*5, 26, 3)
			}
		}

		dc.Fill()

		dc.DrawString(fmt.Sprintf("%.1fv", 11.4+charge), 33, 93)

		dc.SetRGBA(1, 0.9, 0, 1)

		// Draw the compass
		dc.Translate(14, 30)
		dc.Rotate(gg.Radians(j))
		dc.Scale(0.5, 1.0)
		dc.DrawRegularPolygon(3, 0, 0, 14, 0)
		dc.Fill()

		dc.Pop()

		charge += 0.1
		if charge > 1 {
			charge = 0
		}

		// Copy the colours over to the frame buffer.
		var buf [128 * 128 * 2]byte
		for y := 0; y < S; y++ {
			for x := 0; x < S; x++ {
				c := dc.Image().At(x, y)
				r, g, b, _ := c.RGBA()    // 16-bit pre-multiplied
				rb := byte(r >> (16 - 5)) // Red has 5 bits
				gb := byte(g >> (16 - 6)) // Green has 6 bits
				bb := byte(b >> (16 - 5)) // Blue has 5 bits

				// Low (gggbbbbb) byte first, then high (rrrrrggg) byte; note
				// that x and y are swapped in the index arithmetic, rotating
				// the image to match the screen's orientation.
				buf[(127-y)*2+(x)*128*2+1] = (rb << 3) | (gb >> 3)
				buf[(127-y)*2+(x)*128*2] = bb | (gb << 5)
			}
		}
		// Rewind to the start of the framebuffer (the top-left pixel).
		_, err = f.Seek(0, 0)
		if err != nil {
			panic(err)
		}

		// Serialise framebuffer writes with the rest of our code.
		lock.Lock()
		_, err = f.Write(buf[:])
		lock.Unlock()
		if err != nil {
			panic(err)
		}
	}
}


The Frame

The last couple of weekends, I’ve been working on the least sexy part of the robot – the mounting frame.  As has been mentioned, the space inside the robot is VERY tight this year, so making everything fit is a real challenge.

We need to fit in:

  • The Pi and its (not quite) Hat
  • 2 x Motor controllers
  • Servo controller board
  • IMU
  • Screen
  • 2 x Battery monitors
  • 2 x PSUs
  • Amplifier
  • Speaker

All in a space of 94 × 83 × 89mm.  And we need to think about thermal management.  Looks like we'll have to mount the batteries externally!

Our solution is this 3D printed frame.  It holds the circuit boards vertically (for good convection cooling) and puts the Pi and its connections at the back, which is removable for easy access.


All the other little boards are mounted on the reverse of the Pi mounting plate (hidden in the photo).  The whole thing lifts out of the robot if we need access to one of the boards buried near the bottom.

Tour of the main PCB

As mentioned in my previous post, this year we needed (an excuse) to learn KiCad and build a custom PCB.  Thankfully, we did succeed in soldering it up, despite the tiny pitch on some of the components.

Picture of the board

The PCB divides into a few parts.  I expect you'll all recognise the Pi header in the top left.  Above that, in yellow on the annotated image, we have the SPI peripherals: the screen and the IMU (which we use mainly for the gyroscope).

Annotated board
Yellow: peripheral connectors; Pink: Parallax Propeller; Green: Time-of-flight sensor connectors; Red: Isolation chips

Below the header, in pink, we have the Parallax Propeller chip, a fast microcontroller that we use to decode the signals from the motors.  Each motor can put out 200k pulses per second, which isn't feasible to handle from the Pi's GPIO pins because Linux can't service that many interrupts per second.
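
The decoding itself is standard quadrature decoding: each encoder outputs two signals, A and B, 90° out of phase, and each valid transition of that pair means one step in one direction.  The firmware itself isn't Go, of course, but for illustration the core state machine is small enough to sketch here (the names are ours):

// quadTable maps a transition (previous A/B pair then current A/B
// pair, packed into 4 bits) to a step of -1, 0 or +1.  Transitions
// where both signals appear to change at once are invalid and map to 0.
var quadTable = [16]int{
	0b0001: +1, 0b0111: +1, 0b1110: +1, 0b1000: +1,
	0b0010: -1, 0b1011: -1, 0b1101: -1, 0b0100: -1,
}

// quadStep returns the movement (in encoder counts) for one sample
// of the A and B signals, given the previous sample.
func quadStep(prevA, prevB, currA, currB int) int {
	return quadTable[prevA<<3|prevB<<2|currA<<1|currB]
}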

To the right, in yellow, we have connectors for the “noisy” off-board components.  These sit over their own ground plane, so that, if we want to, we can drive them from a completely isolated power supply. From top to bottom:

  • “noisy” 5v power
  • motor driver control 1
  • motor encoder 1
  • motor driver control 2
  • motor encoder 2
  • servo controller
  • 2 x power monitors

To bridge the gap between the microcontroller and the noisy world of the motors, we have a pair of ISO7742 chips (in red).  These provide two input and two output signals, which are level-shifted from 3.3v to 5v and isolated through an internal capacitor.  Unlike an optoisolator, they were super-simple to use, requiring just 3.3v and 5v power and grounds, a couple of decoupling capacitors and some pull-ups on their enable pins.

Similarly, below that, we have an isolated i2c line for driving the servo board (which runs from the “noisy” 5v power supply).

In the bottom left (in green) we have 6 connectors for optical time-of-flight sensors.

The time-of-flight sensors, Propeller, servo controller and voltage monitors are all i2c-controlled, which poses a couple of problems:

  • i2c busses tend to become unstable with more than a handful of devices (because each device adds capacitance to the bus, making it harder for any device to drive the bus)
  • we have no control over the addresses of many of the devices; for example, all the time-of-flight sensors use the same address.

To address those problems, we included an i2c multiplexer in the design (to the left of the Propeller), allowing us to switch any combination of devices on and off the bus.
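
Handily, driving this kind of mux is simple: you write a single byte to the mux's own i2c address, and each bit of that byte connects or disconnects one downstream channel.  Here's a rough sketch in Go using the Linux i2c-dev interface (assuming a TCA9548A-style mux; the 0x70 address and /dev/i2c-1 bus are illustrative, and it needs the os and golang.org/x/sys/unix imports):

// selectMuxChannels connects the mux channels whose bits are set in
// mask (bit 0 = channel 0, and so on), disconnecting all the others.
func selectMuxChannels(mask byte) error {
	f, err := os.OpenFile("/dev/i2c-1", os.O_RDWR, 0600)
	if err != nil {
		return err
	}
	defer f.Close()

	// Point the i2c-dev file descriptor at the mux's own address.
	const i2cSlave = 0x0703 // I2C_SLAVE from <linux/i2c-dev.h>
	if err := unix.IoctlSetInt(int(f.Fd()), i2cSlave, 0x70); err != nil {
		return err
	}
	// The mux's single control byte is the channel bitmask.
	_, err = f.Write([]byte{mask})
	return err
}

For example, selectMuxChannels(1<<3) would connect only channel 3, sidestepping both the bus-capacitance and address-clash problems while we talk to that one sensor.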

Multiplexer schematic
Picture of the multiplexer

Despite having very little space to play with, we were able to squeeze in a bit of prototyping area, which we've used to address errata.  For example, I found that I'd missed a couple of pull-ups on the i2c port that the Propeller was attached to.  A bit of thin Kynar wire to the rescue.

Tracks!

Having seen various tracked robots on Thingiverse and especially this amazing one, I thought we should try and implement Wall-E’s tracks ourselves.

We could have gone with a simple rubber band or timing belt (and in retrospect that would have been MUCH easier), but I really fancied seeing how far I could push 3D printed parts.

So I had a long browse through Thingiverse looking at lots of track designs and started to draw up my own.  The FPV rover design had an interesting idea for fine adjustment – they used 2 different sizes of 3D printed pins to join the tracks together, to make the whole thing slightly tighter or looser as needed.

In the end I settled on a design which had sprocket wheels mounted on either side of a supporting frame (to avoid nasty torques on the frame).  Obviously the layout of the sprocket wheels on the frame had to match the ‘real’ Wall-E, but I decided to make the sprocket teeth larger (and therefore stronger).

Then the track elements needed designing.  I went with a design where the links between the tracks formed the raised sections, and the sprocket teeth sat in a deep well but did not protrude from the other side.  Like this:

A matching pin is shown too.  After a few trial-and-error prints to fine-tune the pin diameter and well depth, we got something that worked.  And then we needed to print about 36 of them per set of tracks (3 x 4-hour sessions of printing).

The final problem was how to connect these to the motor.  We wanted a fair bit of speed, so I'd ended up buying Pololu motors with a 4:1 gearbox.  Having seen these run, I was a bit worried about the high speed, so wanted to gear them down a touch.  I found a bevel gear generator plugin in Onshape and ended up with this:

And that worked!

In fact running these is slightly terrifying – fairly sure if you got your finger in there it’d get a nasty nip…