The tracks are actually all geared together into one long, distributed drive train, connected to a single large drive motor that fills the entire center segment of the robot. Think small rotating universal joints INSIDE slightly larger structural universal joints; it’s really cool! The Omnitreads all have odd numbers of segments because each non-motor segment contains the manifolds, solenoid valves, and electronics that control the adjacent joint toward the center of the robot. It would be mechanically simpler to put a motor in each segment, but each motor would have to be smaller (and lighter) to still fit alongside all the other necessary hardware. On flat ground that would probably be fine, but having the weight concentrated in the center of the robot really helps for climbing high curbs and spanning wide gaps. To do really extreme things, like climbing inside a pipe or over a hanging ledge, most or all of the robot’s weight has to be pulled up by just one set of tracks at a time. The single-motor drive system lets us deliver all the power from that huge motor to any track, or combination of tracks, at any time.
So all the tracks are driven together at the same speed, which is usually what you want. If for some reason you don’t want that (something’s jammed with a twig, or maybe just to save power), the tracks on each side of each segment can be individually disengaged from the drive train by an actuated clutch driven by one of those little Solarbotics GM15 planetary motors. Since you can’t always tell right away when a track is really badly jammed, each track drive sprocket also has a brass shear pin (a 0-80 brass screw with the tip machined down to 0.032") that breaks and disconnects that side’s tracks from the drive train if the torque needed to drive them exceeds twice the worst-case edge-climbing torque. That design was a fun month for me!
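For a feel of the numbers involved, here is a back-of-envelope check of what a 0.032" brass pin can carry before it shears. Only the tip diameter comes from the description above; the brass shear strength and the sprocket radius are assumed illustrative values, not the OT4’s actual figures.

```python
import math

# Back-of-envelope shear pin sizing (a sketch, not the real design calc).
TIP_DIAMETER_IN = 0.032          # machined-down tip of the 0-80 brass screw
BRASS_SHEAR_STRENGTH_PSI = 35e3  # assumed ultimate shear strength for brass
SPROCKET_RADIUS_IN = 0.5         # placeholder radius, not the OT4 dimension

# Cross-sectional area of the machined tip.
area_in2 = math.pi * (TIP_DIAMETER_IN / 2) ** 2

# Force needed to shear the pin, then the torque at the sprocket
# that produces that force at the assumed radius.
shear_force_lbf = BRASS_SHEAR_STRENGTH_PSI * area_in2
breakaway_torque_inlbf = shear_force_lbf * SPROCKET_RADIUS_IN

print(f"pin cross-section:  {area_in2:.5f} in^2")
print(f"breakaway force:    {shear_force_lbf:.1f} lbf")
print(f"breakaway torque:   {breakaway_torque_inlbf:.1f} in-lbf")
```

With these assumed numbers the pin lets go at roughly 28 lbf of tooth force; the real design would work backwards from twice the measured worst-case edge-climbing torque to pick the tip diameter.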
As for sensors, the only built-in ones are accelerometers in each segment, which we use to decide which way is currently ‘up’ to make control of the robot easier on us humans. We’ve thrown cameras on the front, but it’s really more of a delivery platform for whatever sensor is useful at the time. Power and a CAN bus run the length of the robot, and there is a payload space about the size of a deck of cards in each non-motor segment. When the robot goes tetherless, two of these spaces are used for batteries and two for compressors, but in some situations (e.g. search and rescue) you would want a tether anyway. The drive shaft even pokes out the front if you want a power take-off for something.
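The ‘which way is up’ trick is simple when a segment is more or less at rest: the accelerometer measures the reaction to gravity, so the normalized reading points up in the segment’s own frame. A minimal sketch of that idea follows; the axis conventions and function names are assumptions for illustration, not the OT4’s actual firmware.

```python
import math

def up_vector(ax, ay, az):
    """Estimate the 'up' direction in a segment's body frame from a
    static 3-axis accelerometer reading (sketch; axes/signs assumed).
    At rest the accelerometer reads the reaction to gravity, so the
    normalized reading points up."""
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    if norm == 0:
        raise ValueError("zero reading: free fall or a dead sensor")
    return (ax / norm, ay / norm, az / norm)

def roll_pitch(ax, ay, az):
    """Roll and pitch (radians) of the segment relative to gravity,
    using the standard accelerometer tilt-sensing formulas."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch
```

A segment lying flat (reading +1 g on z) gives an up vector of (0, 0, 1) and zero roll and pitch; as the segment tilts, the operators’ controls can be remapped so ‘up’ on the stick still means up in the world.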
Nothing about it is particularly autonomous at the moment; it usually takes three people with a good view of the robot, working together on three game controllers (one thumb-joystick per joint), to move this beast. A slider on the top-right also controls either the overall speed of the robot or the position of the flippers, depending on the mode, and the various buttons control individual joint stiffness and turn the internal compressors on and off if we’re using them.
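The per-joint stick mapping is the kind of thing you could sketch in a few lines. The following is a hypothetical illustration of mapping one thumb-stick to a joint setpoint and the slider to overall speed; the angle range, deadband, and function names are my assumptions, not the actual control code.

```python
# Sketch of one operator's controls (illustrative values throughout).
MAX_JOINT_ANGLE_DEG = 30.0   # assumed per-axis joint travel

def stick_to_joint(stick_x, stick_y, deadband=0.05):
    """Map thumb-stick deflection in [-1, 1] per axis to yaw/pitch
    setpoints in degrees, with a small deadband so the joint holds
    position when the stick is released."""
    def shape(v):
        if abs(v) < deadband:
            return 0.0
        sign = 1.0 if v > 0 else -1.0
        # Rescale so the output still spans the full range past the deadband.
        return sign * (abs(v) - deadband) / (1.0 - deadband)
    yaw = shape(stick_x) * MAX_JOINT_ANGLE_DEG
    pitch = shape(stick_y) * MAX_JOINT_ANGLE_DEG
    return yaw, pitch

def slider_to_speed(slider, max_speed=1.0):
    """Top-right slider in [0, 1] scales overall track speed."""
    return max(0.0, min(1.0, slider)) * max_speed
```

Full stick deflection commands the full assumed 30 degrees, and anything inside the deadband commands zero, which is what makes it feel forgiving to a new operator.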
It takes some practice, but mostly in terms of the three operators communicating with each other and having the same idea about a strategy for getting around/over an obstacle. Actually controlling the robot is pretty intuitive for most people who have tried it (especially if they have ANY experience with video games).
My friend John did his master’s research on a haptic OT4 controller, the “Joysnake”:
It worked quite well and let a single operator do complex things like climb stairs. The next step was to make it truly haptic, with force feedback, but then John switched to aerospace engineering to pursue a PhD. Side note: this is the same John who is now building that USB-to-serial-adapter-to-fake-USB-keyboard module I wanted to hack an Orangutan programmer for. He ended up going with a (Pololu) USB-to-serial adapter and an AT90USBKey. Slightly less soldering (which makes his boss happier), and four times the board footprint. Oh well.
There was also a group from the “Intelligence Technology Innovation Center” which did some good work using a genetic algorithm to control a simulated OT4 autonomously in a simulated environment. They got really good results with virtual sensor data from IR range sensors and force sensors on the track trays, all things we could conceivably incorporate into future versions of the real robot. Some of our best strategies for difficult things like climbing stairs actually come from watching videos of the genetic algorithm doing similar things in simulation!
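For readers unfamiliar with the approach, a genetic algorithm just breeds a population of candidate controllers, keeping the ones that score best. Here is a toy sketch of the idea; the real project evolved controllers in a physics simulator, whereas here the “fitness” is a stand-in function and the genome length, population size, and rates are arbitrary illustrative choices.

```python
import random

GENOME_LEN = 8        # e.g. per-joint controller gains (illustrative)
POP_SIZE = 30
GENERATIONS = 40
MUTATION_RATE = 0.1

def fitness(genome):
    # Stand-in for "distance traveled over the obstacle in simulation":
    # reward genomes close to an arbitrary target vector.
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome):
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve():
    random.seed(0)  # deterministic run for the sketch
    pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]        # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", round(fitness(best), 3))
```

Swap the stand-in fitness for “how far did the simulated OT4 get past the obstacle” and you have the essential structure of that kind of experiment, with all the interesting work hiding inside the simulator.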
So the brains are sitting on a computer somewhere and the brawn is on a shelf here; watch out if we ever put them together!