Zumo 32U4: trouble with obstacle detection using IR sensors

Hi all.
I’ve got six Zumo 32U4 robots I want to use for a class. I am just beginning to work out how to adapt the code I’ve already written for my home-made Arduino-based robots (obstacle avoidance, wall following, complex behaviours…) to this robot.

I am having trouble with obstacle detection. I want the robot to detect whether there is an object at the front-left, front, or front-right position (and make decisions about subsequent moves: turn, go back…). The jumpers are as they come out of the box, so the left/front/right proximity sensors are connected, but in this test I am only using the front one with
proxSensors.countsFrontWithLeftLeds() and countsFrontWithRightLeds()

Here is my (not optimized) code:

#include <Wire.h>
#include <Zumo32U4.h>

Zumo32U4ButtonA buttonA;
Zumo32U4ProximitySensors proxSensors;
Zumo32U4LCD lcd;

void setup()
{
  // Initialize the front-sensor mode.
  proxSensors.initFrontSensor();

  // Wait for the user to press button A.
  lcd.print(F("Press A"));
  buttonA.waitForButton();
  lcd.clear();
}

void loop()
{
  int obs = obstacle();
}

// Check if an obstacle is present.
// Return:
//    0 if no obstacle
//   -1 if obstacle is on the left
//   +1 if obstacle is on the right
//   +2 if obstacle is in front
int obstacle()
{
  // A sensor reading must be greater than or equal to this
  // threshold in order for the program to consider that sensor as
  // seeing an object.
  const uint8_t sensorThreshold = 4;
  // Minimum difference between the left and right readings for the
  // two readings to be considered different.
  const uint8_t sensorDelta = 1;

  // Emit IR pulses and take a reading, then get the front sensor's
  // left and right values (the amount of reflectance detected while
  // using the left or right LEDs).
  proxSensors.read();
  uint8_t leftValue  = proxSensors.countsFrontWithLeftLeds();
  uint8_t rightValue = proxSensors.countsFrontWithRightLeds();

  // Show the two readings on the LCD.
  lcd.gotoXY(0, 0);
  lcd.print(leftValue);
  lcd.print(' ');
  lcd.print(rightValue);

  int retValue = 0;
  if (leftValue >= sensorThreshold || rightValue >= sensorThreshold)
  {
    // An object is visible.
    if (rightValue - leftValue >= sensorDelta)
    {
      // The right value is larger by at least sensorDelta: the
      // object is probably nearer the right LED.
      retValue = +1;
    }
    else if (leftValue - rightValue >= sensorDelta)
    {
      // The left value is larger by at least sensorDelta: the
      // object is probably nearer the left LED.
      retValue = -1;
    }
    else
    {
      // The two readings are equal: the obstacle is in front.
      retValue = +2;
    }
  }

  lcd.gotoXY(0, 1);
  lcd.print(retValue);

  return retValue;
}

The LCD code simply displays the values read from the sensors.
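For what it’s worth, the decision logic in obstacle() can be pulled out into a pure function and exercised off-robot; this is just the same threshold/delta comparison as above with the Zumo-specific I/O stripped out (the function name and default values here are mine, purely for illustration):

```cpp
#include <cstdint>

// Same return convention as obstacle():
//   0 = no obstacle, -1 = left, +1 = right, +2 = front.
int classifyObstacle(uint8_t leftValue, uint8_t rightValue,
                     uint8_t sensorThreshold = 4, uint8_t sensorDelta = 1)
{
  if (leftValue < sensorThreshold && rightValue < sensorThreshold)
    return 0;  // nothing visible on either channel

  // Note: the uint8_t operands are promoted to int before the
  // subtraction, so these differences cannot wrap around even when
  // the value being subtracted is the larger one.
  if (rightValue - leftValue >= sensorDelta)
    return +1;  // object probably nearer the right LED
  if (leftValue - rightValue >= sensorDelta)
    return -1;  // object probably nearer the left LED
  return +2;    // readings are equal: obstacle straight ahead
}
```

This makes it easy to sanity-check the branch logic on a PC before loading it onto the robot.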

Some observations:

  1. the sensors are very sensitive (they can see a wall 1 m away), which is fine
  2. the sensitivity angle is quite wide, also fine
  3. the values read depend on the obstacle’s material/colour, which is also usual
  4. the signal saturates for nearby obstacles: when a large object (a wall) is anywhere between 0 cm and about 25-30 cm away, the reading is always 6!

Saturation is a problem for accurate obstacle avoidance.
Is there something wrong in my approach?
Should I use initThreeSensors instead and work with all three sensors?
Should I use setBrightnessLevels, which could perhaps be employed to increase the number of levels returned by the read instructions, so that what is now just “6” would be expanded into a range of values that can discriminate between near and far objects?

I tried initThreeSensors and the side IR sensors, but the principle is of course the same, so there was no improvement.
The idea of using setBrightnessLevels(), on the other hand, was a good one: I expanded the brightness vector like this:

  uint16_t myBrightnessLevels[] = {1, 2, 4, 9, 15, 23, 32, 42, 55, 70, 85, 100, 120, 135, 150, 170};   // default is { 4, 15, 32, 55, 85, 120 }
  proxSensors.setBrightnessLevels(myBrightnessLevels, 16);

making it finer and extending it on both ends, roughly preserving its log-like spacing.
Now I have a finer scale: a reading of 15 from the front sensor now corresponds to about 10-15 cm from the obstacle, which is far better than before. My “avoid and explore” code is beginning to work almost satisfactorily.
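With the 16-level table, the counts can be bucketed into coarse range bands for the avoidance logic. A minimal sketch, assuming the rough correspondence above (a count of 15 at about 10-15 cm); the other cutoff is a placeholder I made up, and in practice the boundaries would have to be calibrated per surface/material:

```cpp
#include <cstdint>

// Coarse distance bands derived from a proximity count (0..16 with a
// 16-entry brightness table).  The cutoffs here are illustrative only
// and need calibration against real obstacles.
enum Range { FAR_AWAY, APPROACHING, CLOSE };

Range rangeBand(uint8_t counts)
{
  if (counts >= 15) return CLOSE;        // roughly within ~15 cm in my tests
  if (counts >= 10) return APPROACHING;  // placeholder boundary: calibrate!
  return FAR_AWAY;
}
```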


A few issues remain:

  1. when the robot is very close to an obstacle, the reading suddenly drops to low values, so the robot cannot start too close to e.g. a wall, otherwise it does not see the obstacle;

  2. sometimes the value 15 is never reached while approaching the obstacle, and the maximum reading stays at e.g. 14. In that case, of course, the robot bumps into the obstacle. I cannot lower the recoil threshold, though, because in most cases a reading of 14 means the obstacle is still far away… no solution, I’m afraid…

  3. of course it was not strictly necessary to expand the scale in both directions (towards lower and higher brightness values) as I did.
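For issue 1 above, one workaround I can imagine (purely a sketch, not yet tested on the robot, with made-up threshold values) is to remember the previous count and treat a sudden drop from a high value as “still very close” rather than “path clear”:

```cpp
#include <cstdint>

// Sketch of a workaround for the close-range drop: when the robot gets
// very close, the count suddenly falls, which looks like "no obstacle".
// Remembering the previous count lets a sharp drop from a high value be
// treated as "still very close".  Thresholds are illustrative only.
struct DropDetector {
  uint8_t prev = 0;

  // Returns true when the obstacle should still be treated as very
  // close: either the count is high, or it just fell off a cliff.
  bool stillClose(uint8_t counts,
                  uint8_t highThreshold = 13, uint8_t dropDelta = 6)
  {
    bool suddenDrop = (prev >= highThreshold) && (prev - counts >= dropDelta);
    bool close = (counts >= highThreshold) || suddenDrop;
    prev = counts;
    return close;
  }
};
```

This only latches for one cycle after the drop; a real implementation would probably keep the “close” state until the robot has backed away.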

That’s all for now. I’ll continue my tests.
Any observations from other experimenters?


I am glad you found a way to get it working better. Could you post a video of the issues you are currently having? It is difficult to give meaningful advice without seeing how the robot is performing and what kind of environment it is in.