Having failed at line following in previous years due to various problems with IR sensors (spacing, distance from the point of rotation, etc.), we’ve decided to mount a Pi camera on the bottom of the robot and run an OpenCV algorithm to track the line.
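The first step is reducing each camera frame to a mask of “line” pixels. A minimal NumPy-only sketch of the idea, assuming a dark line on a light floor (the function name and threshold are illustrative – on the real robot the frame comes from the Pi camera and cv2.threshold would do this job):

```python
import numpy as np

def line_mask(frame_rgb, black_thresh=60):
    # crude grayscale: average the colour channels
    gray = frame_rgb.mean(axis=2)
    # anything darker than the threshold counts as "line"
    return (gray < black_thresh).astype(np.uint8) * 255

# synthetic frame: light floor with a dark vertical stripe
frame = np.full((100, 100, 3), 200, dtype=np.uint8)
frame[:, 48:53] = 0
mask = line_mask(frame)
```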
OpenCV is amazing – you can do so much with only a few lines of code – e.g. face detection: http://www.knight-of-pi.org/opencv-primer-face-detection-with-the-raspberry-pi/
Next job is to extract a direction vector from the image so that we know which way to go. We take the point where the line crosses the middle row of the frame, then do a polar transform around that point and plot the amount of black in each direction. The two directions with the most black lie along the line – one each way – and we assume the correct one is the one pointing towards the front of the robot.
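That middle-row-plus-polar-scan idea might look something like this. It’s a sketch rather than our actual code: the names are illustrative, the polar transform is approximated by casting rays at each angle, and it assumes the camera is mounted so that “forward” is towards the top of the frame:

```python
import numpy as np

def line_direction(mask, ray_len=40, n_angles=72):
    """Find where the line crosses the middle row, then score each
    direction by how many line pixels a ray from that point crosses."""
    h, w = mask.shape
    mid = h // 2
    xs = np.flatnonzero(mask[mid] > 0)
    if xs.size == 0:
        return None, None                 # no line in this frame
    cx = int(xs.mean())                   # line centre on the middle row

    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    scores = np.empty(n_angles)
    t = np.arange(1, ray_len)
    for i, a in enumerate(angles):
        px = np.clip((cx + t * np.cos(a)).astype(int), 0, w - 1)
        py = np.clip((mid + t * np.sin(a)).astype(int), 0, h - 1)
        scores[i] = (mask[py, px] > 0).sum()

    # the two best-scoring directions run along the line, one each way;
    # assume "forward" is up the image (y decreasing, i.e. sin < 0)
    best = np.argsort(scores)[::-1][:2]
    forward = [i for i in best if np.sin(angles[i]) < 0]
    chosen = forward[0] if forward else best[0]
    return angles[chosen], (cx, mid)

# synthetic mask: a vertical line straight up the frame
mask = np.zeros((100, 100), dtype=np.uint8)
mask[:, 48:53] = 255
angle, crossing = line_direction(mask)
```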
You also then need to add behaviour for when the line is near the edge of the picture and in danger of not being in the next frame (i.e. steer to move it back towards the centre)! And code to deal with the case where there is no line in the shot at all (hunt around until you find it…)
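Both behaviours can be sketched in one small function. Everything here is a made-up placeholder (function name, gain, hunt timings), just to show the shape of it:

```python
def steering_command(crossing_x, frame_width, lost_frames=0, k_p=0.01):
    """Map the line's middle-row position to a steer value in [-1, 1].
    crossing_x is None when no line was found in the frame."""
    if crossing_x is None:
        # no line in shot: hunt by spinning, swapping direction
        # every 20 lost frames so we don't just circle forever
        return 0.5 if (lost_frames // 20) % 2 == 0 else -0.5
    # proportional steering on the offset from frame centre, so a
    # line drifting towards an edge pulls the robot back under it
    error = crossing_x - frame_width / 2
    return max(-1.0, min(1.0, k_p * error))
```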
And finally, tuning. Lots of factors affect how well the line following works – not least the speed of the robot and the update rate of your sensor/control loop. That tuning can really only be done on a real course with the real robot, and you can get scuppered if the course designer has put sharp curves or hairpins on the course…
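For reference, the knobs in question live in a loop shaped roughly like this (again a sketch, not our real code – the parameter names and the slow-down-on-sharp-angles heuristic are illustrative):

```python
import time

def follow_loop(get_angle, set_motors, base_speed=0.4, k_turn=0.8,
                hz=20, n_frames=None):
    """get_angle() -> line angle in radians relative to straight ahead
    (or None if lost); set_motors(left, right) takes values in [-1, 1].
    base_speed, k_turn and hz are the things you end up tuning on the
    real course."""
    period = 1.0 / hz
    done = 0
    while n_frames is None or done < n_frames:
        t0 = time.monotonic()
        angle = get_angle()
        if angle is not None:
            turn = k_turn * angle
            # back off the speed on sharp angles so hairpins
            # don't fling the line out of the frame
            speed = base_speed * max(0.3, 1.0 - abs(angle))
            set_motors(speed + turn, speed - turn)
        done += 1
        time.sleep(max(0.0, period - (time.monotonic() - t0)))
```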