Thanks for the feedback! I haven't tested this idea with a red laser, but I assume it would work fine. This is really a physics question, and I'm not sure whether a different wavelength of light would produce better results. My green laser pointer has a wavelength of around 532nm, and a red one would be around 650nm, closer to the infrared end of the spectrum (so better?).
The time constant can be thought of as shutter speed: a longer time means a longer exposure, so 'in theory' you could see the laser light from farther away. The problem arises when you over-expose and all detail is lost. Because of this, the robot will head toward any brighter spot on the floor if it is a point source; ambient light should be okay.
My code had no documentation, but the routines are quite simple:
get_fulcrum() --> A center-of-gravity calculation. Think of the five sensors as equally spaced weights on a beam with a pivot. Each sensor gets 'heavier' as more light is detected, so get_fulcrum() simply determines where the pivot, or fulcrum, is located.
Things get messy in the main loop, but it boils down to a PI controller driven by the error from the center sensor.
Anyway, I'd welcome any updates, clean-ups, or tweaks to this code. The big issue with this first implementation is that the robot moves forward continuously. It would be cool to make it smarter. Ideas?
Again, thanks to you Paul.