Design for the autonomous challenges

Three years ago we only attempted one challenge autonomously: Nebula, recognising and approaching four coloured balls. This year we’re going for four or possibly five autonomous challenges, and IMO they’re all harder than that one! Eco Disaster in particular is fiendish.

All of this needs a ton of code, so how do we make it manageable? By putting some structure on the problem. I’ve no idea yet whether it will work out, but I’ve gone for the following overall design.

  • We have some model of what we think the arena is. E.g., for Escape Route, what we think the ordering of the coloured blocks is. We need to allow for some level of uncertainty or probability here, rather than dealing in certain knowledge.
  • We have an estimate of where the bot is within the arena – again, uncertain rather than exact.
  • On each iteration of performing a challenge, we:
    • Use available sensors to update our beliefs about what the arena is, and where the bot is within it.
    • Decide the next position for the bot to move to.
    • Initiate movement towards that position.

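The loop above can be sketched in code. This is just an illustration of the shape of the design, not our actual code – all the class and method names here (`Belief`, `WorldModel`, `run_challenge` and the sensor/planner/drive interfaces) are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class Belief:
    """A value we're not certain about, with a rough confidence in [0, 1]."""
    value: object
    confidence: float = 0.5


@dataclass
class WorldModel:
    # e.g. {"block_order": Belief(["red", "blue", "green"])}
    arena: dict = field(default_factory=dict)
    # Where we think the bot is: (x, y, heading), held as a Belief.
    pose: Belief = field(default_factory=lambda: Belief((0.0, 0.0, 0.0)))


def run_challenge(model, sensors, planner, drive):
    """One challenge = repeat sense / decide / act until the planner says stop."""
    while not planner.finished(model):
        # 1. Use available sensors to update beliefs about the arena and pose.
        for reading in sensors.read_all():
            reading.update(model)
        # 2. Decide the next position for the bot to move to.
        target = planner.next_target(model)
        # 3. Initiate movement towards that position.
        drive.move_towards(target)
```

The nice thing about this split is that each challenge only has to supply its own planner; the belief-updating and movement code can be shared.
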
That still sounds pretty general, but I’ve found it helpful so far. The camera is our main sensor, but we also have an accurate IMU and measurements of wheel rotations, which hopefully add up to reasonably accurate dead reckoning.
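
The simplest version of that dead reckoning is: take the distance from the wheel rotations, and trust the IMU for heading. A sketch, with an illustrative wheel circumference (not our real one) and the assumption that the IMU gives an absolute heading in radians:

```python
import math

# Illustrative constant: our real wheel circumference will differ.
WHEEL_CIRCUMFERENCE_MM = 215.0


def update_pose(x, y, left_rotations, right_rotations, imu_heading):
    """Dead-reckon a new (x, y) from wheel rotations since the last update,
    trusting the IMU for heading (radians, 0 = along the x axis)."""
    distance = (left_rotations + right_rotations) / 2.0 * WHEEL_CIRCUMFERENCE_MM
    return (x + distance * math.cos(imu_heading),
            y + distance * math.sin(imu_heading),
            imu_heading)
```

In practice the result would feed the pose *belief* rather than being taken as exact – wheel slip means the confidence should drop the further we go between camera fixes.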

Thus far I’ve coded the next level of detail for Escape Route, which boils down to:

  • Working out the colour, and hence the size, of the first block that we need to move past.
  • Generating target positions, in sequence, to move the bot around it.
  • Repeating for the second and third blocks.

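The middle step – generating target positions from a block’s colour – might look something like this. All the dimensions here are hypothetical stand-ins, not the official course sizes, and the geometry (sidestep, drive past, return to the centre line) is deliberately naive:

```python
# Hypothetical dimensions in mm -- not the official course sizes.
BLOCK_WIDTH_MM = {"red": 300, "blue": 600, "green": 900}
BLOCK_DEPTH_MM = 200
CLEARANCE_MM = 100


def targets_around_block(x, y, colour, pass_on_left):
    """Waypoints to get past one block, with y as the direction of travel."""
    side = -1.0 if pass_on_left else 1.0
    lateral = side * (BLOCK_WIDTH_MM[colour] / 2.0 + CLEARANCE_MM)
    ahead = y + BLOCK_DEPTH_MM + 2 * CLEARANCE_MM
    return [
        (x + lateral, y),      # sidestep clear of the block
        (x + lateral, ahead),  # drive past it
        (x, ahead),            # back onto the centre line
    ]
```

Chaining three of these, with `pass_on_left` alternating, gives the weaving route through the course.
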
We use the camera to work out the colour, and also to check and adjust the bot’s positioning when we think we should be facing the edge of each block. If we manage to get the ToF (time-of-flight) sensor hooked up, we might be able to use that as well, as an extra check against bumping into things.
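
The colour check itself can be very crude. Our real code works on camera frames (and a proper version would threshold in HSV space), but the idea is just “which colour dominates this region of the image?” – here as a toy numpy version:

```python
import numpy as np


def dominant_colour(region):
    """Classify an RGB image region as 'red', 'green' or 'blue' by whichever
    channel has the largest total -- a crude stand-in for a proper HSV check."""
    totals = region.reshape(-1, 3).sum(axis=0)
    return ("red", "green", "blue")[int(np.argmax(totals))]
```

Comparing channel totals like this is fragile under arena lighting, which is exactly why the belief model allows the answer to be uncertain until we get a closer look.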