Power monitoring part 2

Wall-e wouldn’t be Wall-e without his screen; it’s a critical part of his character!

We’ve had the screen working with test programs over SPI for a while but one of our voltage sensors was a dud. That meant we hadn’t been able to get the iconic “CHARGE LEVEL” screen working in full.

Today I swapped out the voltage sensor and added a screen update loop to the code.

Then all of a sudden there’s Wall-e! He feels real now!

Power monitoring

Since we’re using unprotected LiPo batteries, which would be seriously (possibly explosively) damaged by over-discharge, we’ve worked some I2C voltage and power monitors into our bot this year.

We’re using these INA219 boards in-line with the battery cables to measure voltage, current and power.

Out of the box, the sensors can read the bus voltage (i.e. potential difference between ground and the IN- connection).  To get them to read current and power, you need to set a configuration value in one of the registers.

One gotcha we hit was that the bus voltage register is not “right aligned”: some of the low-order bits are used for status flags.  To extract the voltage, you have to shift the register value 3 bits to the right and then multiply by 4mV to scale it.
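Put together, the conversion is tiny.  A minimal sketch (assuming you’ve already read the raw 16-bit bus-voltage register over i2c; the register value here is made up for illustration):

```go
package main

import "fmt"

// busVolts converts a raw INA219 bus-voltage register value to volts.
// The bottom 3 bits are status flags, so shift them out first; each
// remaining count is worth 4mV.
func busVolts(raw uint16) float64 {
	return float64(raw>>3) * 0.004
}

func main() {
	raw := uint16(0x3E88) // example register value, not a real reading
	fmt.Printf("A: %.2fV\n", busVolts(raw))
}
```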

With that out of the way and the calibration register programmed, we now have sensible-looking readings from the battery pack that is powering the Pi:

A: 7.95V <nil> A: 0.459A <nil> A: 3.615W <nil>
A: 7.94V <nil> A: 0.431A <nil> A: 3.420W <nil>
A: 7.97V <nil> A: 0.424A <nil> A: 3.339W <nil>

and we should be able to alarm if the voltage of the pack drops too low.  (We have a two-cell pack for the Pi, so anything less than 6V would mean that our pack would be damaged.)

Since we had some strange power-related gremlins last year, we split the motor and Pi power so that the motors are powered by a completely separate battery pack.  That means that we have two INA219s; one for each pack.

Giving Wall-e a screen

Wall-e wouldn’t be complete without a screen.  We’re using this 128×128 colour OLED screen that works with the PiOLED kernel driver.

After enabling SPI using raspi-config and wiring it up to the SPI bus and a couple of GPIOs needed to access its reset and data/command pins:

we were able to get it working with the fbtft driver, which exposes the screen as a standard framebuffer device.

Figuring out the colour map

I hadn’t worked with the framebuffer before but it turned out to be fairly simple to use.  Basically, it exposes the screen as a special type of file; if you open that file and write a couple of bytes to it, it updates a pixel on the screen and then moves the cursor to the next pixel.  Once you’ve written 128 pixels, it moves to the next line.  You can use the seek operation to move the cursor to a different place in the file, which is the same as moving the cursor to a different place on screen.
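Stripped down to a single pixel (and ignoring the screen rotation that our real drawing code deals with), the seek-then-write dance looks like this.  The device path and 2-bytes-per-pixel row-major layout match what fbtft gave us; treat them as assumptions for your own setup:

```go
package main

import (
	"fmt"
	"os"
)

const width, height = 128, 128

// pixelOffset returns the byte offset of pixel (x, y) in the
// framebuffer file: 2 bytes per pixel, row-major order.
func pixelOffset(x, y int) int64 {
	return int64((y*width + x) * 2)
}

// setPixel seeks to a pixel's position and writes a 16-bit colour
// value, low byte first.
func setPixel(f *os.File, x, y int, colour uint16) error {
	if _, err := f.Seek(pixelOffset(x, y), 0); err != nil {
		return err
	}
	_, err := f.Write([]byte{byte(colour), byte(colour >> 8)})
	return err
}

func main() {
	f, err := os.OpenFile("/dev/fb1", os.O_RDWR, 0666)
	if err != nil {
		fmt.Println("no framebuffer on this machine:", err)
		return
	}
	defer f.Close()
	// White pixel in the middle of the screen.
	setPixel(f, 64, 64, 0xFFFF)
}
```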

This particular screen supports 16-bit colour, with 5 bits for red, 6 bits for green and 5 for blue, so the process for writing a colour to the screen is something like this:

  • Calculate your red, green and blue intensity.
  • Scale red and blue to the range 0-31 (i.e. 5 bits of precision)
  • Scale green to 0-63 (i.e. 6 bits).
  • Pack the bits into 16 bits: rrrrrggggggbbbbb, then break those 16 bits into two bytes: rrrrrggg and gggbbbbb.
  • Write those two bytes to the address of the pixel; first the gggbbbbb byte and then the rrrrrggg byte.
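The packing steps above boil down to a few shifts and masks.  A small sketch (starting from 8-bit colour channels rather than the 16-bit values gg hands back, just to keep the shifts simple):

```go
package main

import "fmt"

// rgb565 packs 8-bit red, green and blue channels into the screen's
// 16-bit rrrrrggggggbbbbb format, returned as the two bytes we write
// to the framebuffer (lo = gggbbbbb goes first, then hi = rrrrrggg).
func rgb565(r, g, b uint8) (hi, lo byte) {
	r5 := r >> 3 // scale red to 5 bits (0-31)
	g6 := g >> 2 // scale green to 6 bits (0-63)
	b5 := b >> 3 // scale blue to 5 bits (0-31)
	hi = (r5 << 3) | (g6 >> 3) // rrrrrggg
	lo = (g6 << 5) | b5        // gggbbbbb (low 3 bits of green)
	return hi, lo
}

func main() {
	hi, lo := rgb565(255, 230, 0) // Wall-e yellow
	fmt.Printf("write %08b then %08b\n", lo, hi)
}
```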

Since we’re writing our code in golang, I searched around for a golang drawing library and found the gg library.
As a prototype, I used that to draw a mock-up of Wall-e’s screen and then scanned the resulting gg Image, extracting the pixels and writing them to the frame buffer in the 16-bit format:

The code for the above looks like this:

func drawOnScreen() {
	// Open the frame buffer.
	f, err := os.OpenFile("/dev/fb1", os.O_RDWR, 0666)
	if err != nil {
		panic(err)
	}

	// Loop, simulating a change to battery charge every half second.
	charge := 0.0
	for range time.NewTicker(500 * time.Millisecond).C {
		// Create a drawing context of the right size
		const S = 128
		dc := gg.NewContext(S, S) 
		dc.SetRGBA(1, 0.9, 0, 1) // Yellow

		// Get the current heading
		headingLock.Lock()
		j := headingEstimate
		headingLock.Unlock()

		// Move the current origin over to the right.
		dc.Push()
		dc.Translate(60, 5)
		dc.DrawString("CHARGE LVL", 0, 10)

		// Draw the larger power bar at the bottom. Colour depends on charge level.
		if charge < 0.1 {
			dc.SetRGBA(1, 0.2, 0, 1)
			dc.Push()
			dc.Translate(14, 80)
			DrawWarning(dc)
			dc.Pop()
		}

		dc.DrawRectangle(36, 70, 30, 10)

		for n := 2; n < 13; n++ {
			if charge >= (float64(n) / 13) {
				dc.DrawRectangle(38, 75-float64(n)*5, 26, 3)
			}
		}

		dc.Fill()

		dc.DrawString(fmt.Sprintf("%.1fv", 11.4+charge), 33, 93)

		dc.SetRGBA(1, 0.9, 0, 1)

		// Draw the compass
		dc.Translate(14, 30)
		dc.Rotate(gg.Radians(j))
		dc.Scale(0.5, 1.0)
		dc.DrawRegularPolygon(3, 0, 0, 14, 0)
		dc.Fill()

		dc.Pop()

		charge += 0.1
		if charge > 1 {
			charge = 0
		}

		// Copy the colours over to the frame buffer.
		var buf [128 * 128 * 2]byte
		for y := 0; y < S; y++ {
			for x := 0; x < S; x++ {
				c := dc.Image().At(x, y)
				r, g, b, _ := c.RGBA() // 16-bit pre-multiplied
				rb := byte(r >> (16 - 5))
				gb := byte(g >> (16 - 6)) // Green has 6 bits
				bb := byte(b >> (16 - 5))

				buf[(127-y)*2+(x)*128*2+1] = (rb << 3) | (gb >> 3)
				buf[(127-y)*2+(x)*128*2] = bb | (gb << 5)
			}
		}
		_, err = f.Seek(0, 0)
		if err != nil {
			panic(err)
		}

		lock.Lock()
		_, err = f.Write(buf[:])
		lock.Unlock()
		if err != nil {
			panic(err)
		}
	}
}

Tour of the main PCB

As mentioned in my previous post, this year we needed (an excuse) to learn KiCad and build a custom PCB.  Thankfully, we did succeed in soldering it up, despite the tiny pitch on some of the components.

Picture of the board

The PCB divides into a few parts.  I expect you’ll all recognise the Pi header in the top left.  Above that, in yellow on the annotated image, we have the SPI peripherals: the screen and the IMU (which we use mainly for the gyroscope).

Annotated board
Yellow: peripheral connectors; Pink: Parallax Propeller; Green: Time-of-flight sensor connectors; Red: Isolation chips

Below the header, in pink, we have the Parallax Propeller chip, a fast microcontroller that we use to decode the signals from the motors.  Each motor can put out 200k pulses per second, which isn’t really possible to handle from the GPIO pins because Linux can’t service that many interrupts per second.

To the right, in yellow, we have connectors for the “noisy” off-board components.  These sit over their own ground plane so that, if we want to, we can drive them from a completely isolated power supply. From top to bottom:

  • “noisy” 5v power
  • motor driver control 1
  • motor encoder 1
  • motor driver control 2
  • motor encoder 2
  • servo controller
  • 2 x power monitors

To bridge the gap between the microcontroller and the noisy world of the motors, (in red) we have a pair of ISO7742 chips.  These provide two input and two output signals, which are level shifted from 3.3v to 5v and are isolated through an internal capacitor.  Unlike an optoisolator, they were super-simple to use, requiring only 3.3v and 5v power and grounds, a couple of decoupling capacitors and some pull-ups on their enable pins.

Similarly, below that, we have an isolated i2c line for driving the servo board (which runs from the “noisy” 5v power supply).

In the bottom left (in green) we have 6 connectors for optical time-of-flight sensors.

The time of flight sensors, Propeller, servo controller and voltage monitors are all i2c controlled, which poses a couple of problems:

  • i2c busses tend to become unstable with more than a handful of devices (because each device adds capacitance to the bus, making it harder for any device to drive the bus)
  • we have no control over the addresses of many of the devices; for example, all the time-of-flight sensors use the same address.

To address those problems, we included an i2c multiplexer in the design (to the left of the Propeller), allowing us to switch any combination of devices on and off the bus.
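In software, switching devices on and off the bus amounts to writing a one-byte channel bitmask to the multiplexer.  A sketch of that, assuming a TCA9548A-style control register (the exact mux part isn’t named above, so treat the register format as an assumption); the fake device stands in for a real i2c handle such as golang.org/x/exp/io/i2c’s, which has a compatible Write method:

```go
package main

import "fmt"

// muxWriter is the minimal slice of an i2c device handle we need.
type muxWriter interface {
	Write(buf []byte) error
}

// selectChannels switches the given mux channels (0-7) onto the bus
// by writing a one-byte bitmask to the multiplexer's control register.
func selectChannels(mux muxWriter, channels ...uint) error {
	var mask byte
	for _, ch := range channels {
		mask |= 1 << ch
	}
	return mux.Write([]byte{mask})
}

// fakeMux records the last control byte so the sketch runs anywhere.
type fakeMux struct{ last byte }

func (f *fakeMux) Write(buf []byte) error {
	f.last = buf[0]
	return nil
}

func main() {
	mux := &fakeMux{}
	selectChannels(mux, 0, 3) // e.g. the Propeller plus one ToF sensor
	fmt.Printf("control byte: %08b\n", mux.last)
}
```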

Multiplexer schematic
Multiplexer schematic
Picture of the multiplexer
Multiplexer

Despite having very little space to play with, we were able to squeeze in a bit of prototyping area, which we’ve used to address errata.  For example, I found that I’d missed a couple of pull-ups on the i2c port that the Propeller was attached to.  A bit of thin kynar wire to the rescue:

Mad not-quite-HATting

With space at a premium in this year’s bot (it amounts to a squared-off Pi):

Pi overlaid with a square, showing the size of the enclosure

and because I fancied having a go at it (and Lance had his hands full with the CAD work), we decided to design a custom PCB. It’s now on its way to us from Shenzhen 🙂

Picture of our boards in bubble wrap
The board house sent us a photo when they finished production, which was a nice touch

The custom PCB needed to combine several functions that were separate boards and breakouts last year:

  • I2C multiplexer
  • propeller microcontroller for interfacing with the motor drivers/encoders
  • connectors for all the peripherals (IMU, motors and distance sensors)

along with some new functions that we wanted to squeeze in (screen and voltage sensors to monitor the batteries, for example).  I also thought it’d be a good idea to add some isolation and level shifting chips to the design so that we could better isolate the motors and support having noisy and clean power supplies.  Previously, we’d had gremlins that we thought might be down to noise and brownouts caused by the motors.

Since it’s pretty much the only game in town for open source PCB design (and Lance and I had used it before), we used KiCad to draw the schematic and then design the board itself.

We’ll cover a bit more detail on the design in the next few posts but, if you are interested in making your own board, my number one tip is to punch “KiCad” into Youtube’s search box.  There are great how-to videos on there that guide you through the whole process.  My second tip is to search for “KiCad push and shove”, which will show you how to use the automatic layout tools.

Note about the title: while it fits on the Pi, our board isn’t a HAT because there are rules about what makes a HAT 😛  Also, we’re mad to try soldering some of the chips we plan to put on there; they’re tiny :-O

A Gopher in a Tiger?

Golang mascot, Renee French

In the past, we’ve used Python and C++ for our robots but this year we switched to Go.  Why the change?  It seemed like a good idea at the time…

To be honest, the main reason was that I signed up to lead the coding effort this year.  I haven’t had much C++/Qt experience (so it wasn’t easy for me to pick up last year’s code) but I’ve been working in Go in my day job for a couple of years; I enjoy working with Go and the language has some features that are appealing for building robots:

  • “Naturally” written Go is just plain faster than “naturally” written Python (by some margin).
  • Go can take advantage of more than one core by running multiple goroutines at once (and the computer scientist in me can’t resist a bit of CSP). The normal Python interpreter is limited to one core.
  • It felt like a good choice because it sits at the right level, giving access to low-level primitives (like pointers and structs) for interfacing with C and hardware while also offering garbage collection and other modern features for rapid development.

I have found Go to be a good language for programming a bot.  The biggest downside is that the library ecosystem is a bit less mature than Python’s or C(++)’s.  That meant that getting the hardware driver layer of the bot together required quite some work:

  • We found that the Go wrapper for OpenCV (gocv) required a patch to work on the Pi.  (I found the patch in a forum post but I can’t dig it out to link to.)
  • We didn’t find a working Go driver for the VL53L0X time-of-flight sensors, so (after some false starts) we took the existing C wrapper that GitHub user cassou had already ported for the Pi and wrapped it in a Go library using CGo (Go’s C function call interface).
  • We ported a Python sample joystick driver for the DS4 to Go.  The Linux joystick interface turned out to be easy to access.
  • There were a few i2c libraries without a clear winner.  We ended up using golang.org/x/exp/io/i2c.

While it made for some extra work, I find low-level bit banging quite fun, so it wasn’t much of a downside 🙂

Chasing motor gremlins

Not our motors

We spent a big chunk of last weekend trying to track down an issue with our motor driving logic.  The problem was that sometimes a fast change of direction would cause the i2c bus to die; writes from the Pi would fail and the bot would go crazy as a result.

We knew it was likely to be one of a couple of factors:

  • High current draw from the quick change in direction causing a brownout.
  • Motor switching causing interference/voltage spikes.

Unfortunately, since we don’t own an oscilloscope, it was hard to pinpoint the exact cause, so we resorted to magic capacitive pixie dust and software changes:

  • We added large reservoir capacitors to the power inputs of the various boards to provide a store of charge in case the power rail momentarily dropped.
  • We added small decoupling capacitors too to help filter any noise.

Those changes did seem to help but they didn’t eliminate the problem completely.  An extra layer of software changes seems to have done the trick:

  • We changed the i2c driver code to close and reopen the device file after a failure. The hope is that that resets the bus more thoroughly than simply retrying the write.
  • After John mentioned that he’d seen issues with it in the past, we took control of the GPIO pin that is attached to the Propeller’s reset pin and started actively driving it rather than letting it be weakly pulled up with a resistor.
  • We beefed up our retry loop, with escalation.  If it fails enough times, it resets the Propeller and reflashes it.  That takes about a second but it might just save us on the day!
  • We implemented a maximum ramp rate for the motors so that we change their speed a little slower.
  • We put the motor PWMs out-of-phase so that they don’t all start and stop on the same clock cycle.

With all those changes in place, we’ve seen a few retries but it hasn’t escalated to a reset yet so, fingers crossed, the problem is fixed enough.
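For the curious, the motor ramp rate limit mentioned above is a one-liner per control tick.  A sketch (the step size is illustrative, not the value we actually tuned):

```go
package main

import "fmt"

// rampLimit moves the current motor speed towards the requested
// target by at most maxStep per control-loop tick, so a full-speed
// reversal is spread over several ticks rather than one big spike.
func rampLimit(current, target, maxStep float64) float64 {
	switch {
	case target > current+maxStep:
		return current + maxStep
	case target < current-maxStep:
		return current - maxStep
	default:
		return target
	}
}

func main() {
	speed := 1.0 // full ahead
	for tick := 0; tick < 5; tick++ {
		speed = rampLimit(speed, -1.0, 0.5) // full reverse requested
		fmt.Printf("tick %d: speed %.1f\n", tick, speed)
	}
}
```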

Introducing Tigerbot

Panic panic panic, must pull my head up out of the code and start blogging!

It’s been an eventful 18 months for the Metabot team.  The balance of the team moved from our old home at Metaswitch to a new company, Tigera.  We couldn’t resist a name change so I’d like to introduce Tigerbot….

(It’ll be orange and black striped by the big day, we promise!)

The picture above shows the bot’s bare bones before we added the motor drivers and power circuitry. Lance’s new 3D printer has been getting a good workout, churning out the new chassis, wheels and attachments.

Just in case there are any PiWars organisers reading, the bot’s a lot further along by now, of course!  Not long after the above photo, Lance had the bot to “remote control car” stage:

This is the traditional point for him to lose interest and move on to building whizzy attachments while the code monkeys on the team get to work.

Joking aside, “remote control car” is a huge milestone:

  • chassis printed
  • power electronics in place to drive the motors, Pi and sensors
  • Propeller HAT mounted on the Pi; this little board gives us a fast little microcontroller to do hard real-time motor speed and servo position control; a big hat tip to John, who wrote all the Propeller Spin code for Metabot II, which we’re using largely unaltered
  • interconnect board soldered up, exposing the I2C bus, which we use to talk to the Propeller and sensors
  • remote control connected; we’re using a DualShock 4 controller again paired over Bluetooth
  • It’s a bot!  IT WORKS!!

Now, the rest is “just code”…