Hi all.
I’ve got six Zumo 32U4 robots I want to use for a class. I am just starting to adapt the code I have already written for my home-made Arduino-based robots (obstacle avoidance, wall following, complex behaviours…) to this robot.
I am having trouble with obstacle detection. I want the robot to see if there is an object at the front-left, front, or front-right position (and make decisions about subsequent robot moves: turn, go back…). The jumpers are in the out-of-the-box configuration, so the left/front/right proximity sensors are all connected, but in this test I am only using the front one with
proxSensors.initFrontSensor()
proxSensors.read()
proxSensors.countsFrontWithLeftLeds() and countsFrontWithRightLeds()
Here is my (not optimized) code:
#include <Wire.h>
#include <Zumo32U4.h>
Zumo32U4ButtonA buttonA;
Zumo32U4ProximitySensors proxSensors;
Zumo32U4LCD lcd;
void setup()
{
  // Initialize the front-sensor mode.
  proxSensors.initFrontSensor();

  // Wait for the user to press button A.
  lcd.clear();
  lcd.print(F("Press A"));
  buttonA.waitForButton();
  lcd.clear();
}

void loop()
{
  int obs = obstacle();
  delay(200);
}
int obstacle()
// Check if an obstacle is present.
// Return:
//    0 if no obstacle
//   -1 if obstacle is on the left
//   +1 if obstacle is on the right
//   +2 if obstacle is in front
{
  // A sensor reading must be greater than or equal to this threshold
  // for the program to consider that sensor as seeing an object.
  const uint8_t sensorThreshold = 4;

  // Minimum difference between the left and right readings for the
  // two readings to be considered different.
  const uint8_t sensorDelta = 1;

  // Read the front proximity sensor and get its left and right values
  // (the amount of reflectance detected while using the left or right
  // LEDs).
  proxSensors.read();
  uint8_t leftValue = proxSensors.countsFrontWithLeftLeds();
  uint8_t rightValue = proxSensors.countsFrontWithRightLeds();

  // Show the raw readings on the LCD.
  lcd.clear();
  lcd.gotoXY(0, 0);
  lcd.print(leftValue);
  lcd.print(' ');
  lcd.print(rightValue);

  int retValue = 0;
  if (leftValue >= sensorThreshold || rightValue >= sensorThreshold)
  {
    // An object is visible.
    if (rightValue - leftValue >= sensorDelta)
    {
      // The right value is larger by at least sensorDelta: the object
      // is probably nearer the right LED.
      retValue = +1;
    }
    else if (leftValue - rightValue >= sensorDelta)
    {
      // The left value is larger by at least sensorDelta: the object
      // is probably nearer the left LED.
      retValue = -1;
    }
    else
    {
      // The two readings are equal: the obstacle is in front.
      retValue = +2;
    }
  }

  lcd.gotoXY(0, 1);
  lcd.print(retValue);
  return retValue;
}
The important part is just the code that writes the sensor readings to the LCD. I have noticed that:
- the sensors are very sensitive (they see a wall 1 m away), OK
- the sensing angle is quite large, OK
- the values read depend on the obstacle material/colour, which is also usual
- the signal saturates for nearby obstacles: when a large object (a wall) is anywhere between 0 cm and about 25-30 cm, the reading is always 6!
Saturation is a problem for accurate obstacle avoidance.
Is there something wrong in my approach?
Should I use initThreeSensors instead and work with all three sensors?
Or should I use setBrightnessLevels, which could perhaps be employed to increase the number of levels returned by the read calls, so that what is now just “6” could be expanded into a range of values usable to discriminate between near and far objects?
Thanks
Giorgio