
Pololu Forum

3pi Walled Maze Solver

Hello,

My name is Brian and I’m working with a team of students to create a walled maze solver robot using the 3pi. So far, our robot is able to traverse a maze using a left-hand-on-the-wall strategy. This code is based on the wall follower sample posted on the site. Our robot has one analog distance sensor on the left for scaling the wall (AD7). It also has 3 digital distance sensors on the front, left, and right (PC5, PD0, PD1). Our goal is to integrate the wall follower code with the line maze solver code. In other words, we want to solve a non-looped walled maze by learning on the first run and cutting out dead ends on the second run.

Right now, we’re having two distinct problems. First, the robot is able to stop at a finish line, but it will stop at any dead end, because it stops whenever all three digital sensors are tripped, and we aren’t quite sure how to identify the finish line specifically. Secondly, the robot will turn to the right whenever it detects something directly in front, because our follow_segment PID can’t correct its position fast enough, and that has made it hard to record turns. Here is what our code looks like so far after modifying the sample code. I apologize for it being all in one source file, and for some lines of code that we tested being left commented out. If anyone has any suggestions for determining a dead end and/or turn memorization, it would be much appreciated!

#include <pololu/orangutan.h> 
// The 3pi include file must be at the beginning of any program that 
// uses the Pololu AVR library and 3pi. 
#include <pololu/3pi.h> 
  
// This include file allows data to be stored in program space.  The 
// ATmegaxx8 has 16x more flash than RAM, so large 
// pieces of static data should be stored in program space. 
#include <avr/pgmspace.h> 
  
// Introductory messages.  The "PROGMEM" identifier causes the data to 
// go into program space. 
const char welcome_line1[] PROGMEM = " Pololu"; 
const char welcome_line2[] PROGMEM = "3\xf7 Robot"; 
const char name_line1[] PROGMEM = "Wall"; 
const char name_line2[] PROGMEM = "Follower"; 
  
// A couple of simple tunes, stored in program space. 
const char welcome[] PROGMEM = ">g32>>c32"; 
const char go[]      PROGMEM = "L16 cdegreg4"; 
  
// Refresh the LCD display every tenth of a second. 
const int display_interval_ms = 100; 
  
#define MS_ELAPSED_IS(n) (get_ms() % n == 0) 
#define TIME_TO_DISPLAY (MS_ELAPSED_IS(display_interval_ms)) 

char path[100] = "";
unsigned char path_length = 0;
  
void initialize() 
{ 
        // Set the three digital distance sensor pins (front PC5,
        // left PD0, right PD1) as inputs with internal pull-ups enabled.
        set_digital_input(IO_C5, PULL_UP_ENABLED);
        set_digital_input(IO_D0, PULL_UP_ENABLED);
        set_digital_input(IO_D1, PULL_UP_ENABLED);
  
        // Play welcome music and display a message 
        print_from_program_space(welcome_line1); 
        lcd_goto_xy(0,1); 
        print_from_program_space(welcome_line2); 
        play_from_program_space(welcome); 
        delay_ms(1000); 
  
        clear(); 
        print_from_program_space(name_line1); 
        lcd_goto_xy(0,1); 
        print_from_program_space(name_line2); 
        delay_ms(1000); 
  
        // Display battery voltage and wait for button press 
        while(!button_is_pressed(BUTTON_B)) 
        { 
                clear(); 
                print_long(read_battery_millivolts()); 
                print("mV"); 
                lcd_goto_xy(0,1); 
                print("Press B"); 
                delay_ms(100); 
        } 
  
        // Always wait for the button to be released so that 3pi doesn't 
        // start moving until your hand is away from it. 
        wait_for_button_release(BUTTON_B); 
        clear(); 
        print("Go!"); 
  
        // Play music and wait for it to finish before we start driving. 
        play_from_program_space(go); 
        while(is_playing()); 
} 
  

  
void turn_in_place() { 
        /*if (TIME_TO_DISPLAY) { 
                clear(); 
                lcd_goto_xy(0,0); 
                print("Front"); 
                lcd_goto_xy(0,1); 
                print("Obstacle");				
        }*/ 
  
        // Turn to the right in place (left wheel clockwise,
        // right wheel counterclockwise).
        set_motors(80, -80);
        delay_ms(100);
} 

void turn_around() {
		/*if (TIME_TO_DISPLAY) { 
                clear(); 
                lcd_goto_xy(0,0); 
                print("Dead"); 
                lcd_goto_xy(0,1); 
                print("End"); 
        }*/

        // Turn 180 degrees around
        set_motors(80, -80);
        delay_ms(200);
}

// Turns according to the parameter dir, which should be 'L', 'R', 'S'
// (straight), or 'B' (back).
void turn(char dir)
{
	switch(dir)
	{
	case 'L':
		// Turn left.
		set_motors(-80,80);
		delay_ms(200);
		break;
	case 'R':
		// Turn right.
		set_motors(80,-80);
		delay_ms(200);
		break;
	case 'B':
		// Turn around.
		set_motors(80,-80);
		delay_ms(400);
		break;
	case 'S':
		// Don't do anything!
		break;
	}
}
  
// Displays the current path on the LCD, using two rows if necessary.
void display_path()
{
	// Set the last character of the path to a 0 so that the print()
	// function can find the end of the string.  This is how strings
	// are normally terminated in C.
	path[path_length] = 0;

	clear();
	print(path);

	if(path_length > 8)
	{
		lcd_goto_xy(0,1);
		print(path+8);
	}
}

// This function decides which way to turn during the learning phase of
// maze solving.  It uses the variables found_left, found_straight, and
// found_right, which indicate whether there is an exit in each of the
// three directions, applying the "left hand on the wall" strategy.
char select_turn(unsigned char found_left, unsigned char found_straight, unsigned char found_right)
{
	// Make a decision about how to turn.  The following code
	// implements a left-hand-on-the-wall strategy, where we always
	// turn as far to the left as possible.
	if(found_left)
		return 'L';
	else if(found_straight)
		return 'S';
	else if(found_right)
		return 'R';
	else
		return 'B';
}

// Path simplification.  The strategy is that whenever we encounter a
// sequence xBx, we can simplify it by cutting out the dead end.  For
// example, LBL -> S, because a single S bypasses the dead end
// represented by LBL.
void simplify_path()
{
	// only simplify the path if the second-to-last turn was a 'B'
	if(path_length < 3 || path[path_length-2] != 'B')
		return;

	int total_angle = 0;
	int i;
	for(i=1;i<=3;i++)
	{
		switch(path[path_length-i])
		{
		case 'R':
			total_angle += 90;
			break;
		case 'L':
			total_angle += 270;
			break;
		case 'B':
			total_angle += 180;
			break;
		}
	}

	// Get the angle as a number between 0 and 360 degrees.
	total_angle = total_angle % 360;

	// Replace all of those turns with a single one.
	switch(total_angle)
	{
	case 0:
		path[path_length - 3] = 'S';
		break;
	case 90:
		path[path_length - 3] = 'R';
		break;
	case 180:
		path[path_length - 3] = 'B';
		break;
	case 270:
		path[path_length - 3] = 'L';
		break;
	}

	// The path is now two steps shorter.
	path_length -= 2;
}

void follow_segment() {

        int last_proximity    = 0;
        const int base_speed  = 50;
        const int set_point   = 300;
        long integral = 0;

        // Follow the left wall until we reach a dead end, turning
        // right in place whenever a wall appears directly ahead.
        while(1)
        {
                // The digital sensors read 0 when they see a wall.
                int front = is_digital_input_high(IO_C5);
                int right = is_digital_input_high(IO_D1);
                int left = is_digital_input_high(IO_D0);

                int left_proximity = analog_read(7); // 0 (far away) - 650 (close)

                //unsigned int sensors[5];
                //read_line(sensors,IR_EMITTERS_OFF);

                int proportional = left_proximity - set_point;
                int derivative = left_proximity - last_proximity;
                integral += derivative;

                // Proportional-Integral-Derivative control signal
                int pd = proportional / 6 + integral / 200 + derivative * 2/5;

                int left_set  = base_speed + pd;
                int right_set = base_speed - pd;

                last_proximity = left_proximity; // remember last proximity for derivative

                // If the correction swings too far, briefly fall back
                // to a gentle pre-set curve instead.
                if (left_set < (base_speed - 100)) {
                        set_motors(base_speed + 10, base_speed - 10);
                        delay_ms(80);
                }

                set_motors(left_set, right_set);

                // DEADEND: walls on all three sides.
                if (front == 0 && right == 0 && left == 0) {
                        //turn_around();
                        return;
                }
                // Wall directly ahead: turn right in place.
                else if (front == 0) {
                        turn_in_place();
                        continue;
                }
        }
}

void maze_solve() {
		while(1)
		{
			follow_segment();

			// These variables record whether the robot has seen an opening
			// to the left, straight ahead, and right, while examining the
			// current intersection.
			unsigned char found_left=0;
			unsigned char found_straight=0;
			unsigned char found_right=0;

			int front = is_digital_input_high(IO_C5); 
			int right = is_digital_input_high(IO_D1); 
			int left = is_digital_input_high(IO_D0);

			//unsigned int sensors[5];
			//read_line(sensors,IR_EMITTERS_ON);
			//unsigned int position = read_line(sensors, IR_EMITTERS_ON);

			//if (front == 0)
				//turn_in_place();

			if (front == 0 && right == 0 && left == 0)
				//turn_around();
				break;

			// Intersection identification is complete.
			// If the maze has been solved, we can follow the existing
			// path.  Otherwise, we need to learn the solution.
			unsigned char dir = select_turn(found_left, found_straight, found_right);

			// Make the turn indicated by the path.
			turn(dir);

			// Store the intersection in the path variable.
			path[path_length] = dir;
			path_length ++;

			// You should check to make sure that the path_length does not
			// exceed the bounds of the array.  We'll ignore that in this
			// example.

			// Simplify the learned path.
			simplify_path();

			// Display the path on the LCD.
			display_path();
		}

		while(1)
		{
			// Beep to show that we finished the maze.
			set_motors(0,0);
			play(">>a32");

			// Wait for the user to press a button, while displaying
			// the solution.
			while(!button_is_pressed(BUTTON_B))
			{
				if(get_ms() % 2000 < 1000)
				{
					clear();
					print("Solved!");
					lcd_goto_xy(0,1);
					print("Press B");
				}
				else
					display_path();
				delay_ms(30);
			}
			while(button_is_pressed(BUTTON_B));

			delay_ms(1000);

			// Re-run the maze.  It's not necessary to identify the
			// intersections, so this loop is really simple.
			int i;
			for(i=0;i<path_length;i++)
			{
				// SECOND MAIN LOOP BODY  
				follow_segment();

				// Drive straight while slowing down, as before.
				//set_motors(50,50);
				//delay_ms(50);
				//set_motors(40,40);
				//delay_ms(200);

				// Make a turn according to the instruction stored in
				// path[i].
				turn(path[i]);
			}
		
			// Follow the last segment up to the finish.
			follow_segment();

			// Now we should be at the finish!  Restart the loop.
		}
}			

int main() 
{ 
        // set up the 3pi 
        initialize(); 
  
        maze_solve();

        // This part of the code is never reached.  A robot should 
        // never reach the end of its program, or unpredictable behavior 
        // will result as random code starts getting executed.  If you 
        // really want to stop all actions at some point, set your motors 
        // to 0,0 and run the following command to loop forever: 
        // 
        while(1); 
}

Hi, Brian.

What distinguishes the finish from a dead end?

When it detects something in front can you turn off the wall follower code, turn 90 degrees to the right and then turn it back on?

- Ryan

Hi Ryan,

Thanks for the quick reply. That is one of the problems we are having. We can program the robot to stop at a dead end, but it does just that and stops at ANY dead end. I tried utilizing the reflectance sensors and put a black tape square at the finish to make it different from the other dead ends. Then we realized that we can’t use the reflectance sensors along with ports PC5, PD0, and PD1: using the read_line and pololu_3pi_init functions locks at least some of those ports up. In other words, we need a better way to distinguish the finish from a regular dead end. We might end up making the finish an open area, and tell the robot to stop if all front and side digital sensors are NOT tripped.

If you look at my code, you’ll see that there is a turn_in_place function that does just that. The follow_segment function follows the wall until the front sensor detects a wall, then it calls turn_in_place and makes a 90 degree turn to the right.

//DEADEND
            if (front == 0 && right == 0 && left == 0) {
                        //turn_around(); 
                          return; 
                } 
            

            else if (front == 0){
                  turn_in_place();
                  continue;
            }

Like I said before, we are trying to model this after the line maze solver. Our code is slightly different in the follow_segment function because it turns to the right immediately after detecting something in front, so it doesn’t get a chance to break out of follow_segment, check which way it will go, and THEN make the turn. If we make the turn first and then try to determine which turn it was, the logic gets a lot harder. Basically, we are having trouble finding a solution for recording the turns without affecting the regular PID control. Here is the intersection identification code, which is still pretty much like the line maze solver code.

while(1)
      {
         follow_segment();

         // These variables record whether the robot has seen an opening
         // to the left, straight ahead, and right, while examining the
         // current intersection.
         unsigned char found_left=0;
         unsigned char found_straight=0;
         unsigned char found_right=0;

         int front = is_digital_input_high(IO_C5); 
         int right = is_digital_input_high(IO_D1); 
         int left = is_digital_input_high(IO_D0);

         //unsigned int sensors[5];
         //read_line(sensors,IR_EMITTERS_ON);
         //unsigned int position = read_line(sensors, IR_EMITTERS_ON);

         //if (front == 0)
            //turn_in_place();

         if (front == 0 && right == 0 && left == 0)
            //turn_around();
            break;

         // Intersection identification is complete.
         // If the maze has been solved, we can follow the existing
         // path.  Otherwise, we need to learn the solution.
         unsigned char dir = select_turn(found_left, found_straight, found_right);

         // Make the turn indicated by the path.
         turn(dir);

         // Store the intersection in the path variable.
         path[path_length] = dir;
         path_length ++;

         // You should check to make sure that the path_length does not
         // exceed the bounds of the array.  We'll ignore that in this
         // example.

         // Simplify the learned path.
         simplify_path();

         // Display the path on the LCD.
         display_path();
      }

Again, we are having trouble identifying turns and recording them using only the 3 front and side digital sensors. In theory, it should be simple: if the front sensor is tripped, make a right turn and record a right; if the front is clear and the left/right sensors are tripped, record a straight. However, we have found it is not as simple as that.
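In code, the simple version of that logic might look something like this. This is just a sketch with our own function name, not part of the posted program; it takes the raw digital readings (which read 0 when a wall is present) and applies the same left-first priority as select_turn():

```c
#include <assert.h>

/* Hypothetical sketch: turn the three digital sensor readings into the
 * character to record.  In the posted code the sensors read 0 when they
 * see a wall, so a reading of 1 means that side is open.  The priority
 * matches the left-hand-on-the-wall rule in select_turn(). */
char classify_intersection(int front, int left, int right)
{
    if (left)
        return 'L';        /* opening on the left: always take it */
    else if (front)
        return 'S';        /* left walled, but the way ahead is open */
    else if (right)
        return 'R';        /* only the right side is open */
    else
        return 'B';        /* walls on all three sides: dead end */
}
```

For example, a wall ahead and on the left with the right side open would classify as 'R', and walls on all sides as 'B'.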

Brian

You can use PC5, PD0 and PD1 and still use the reflectance sensors. You just need to read and follow the advice of the warning on the wall follower sample project.

I guess a lot of this turn detection depends on the dimensions of the maze and the sensing region of your sensors. I can see how the logic might be hard, but I don’t have much better advice besides trying to break the problem down into smaller tasks and work those out first.

This sounds like a cool project; we don’t know of anyone else who has attempted something like this, and we would love to hear how it turns out. If you have any more questions, please ask!

- Ryan

Well, we made some more progress, but it isn’t working perfectly. I have the 1 analog sensor (5-10 cm) pointing at a 45 degree angle to the left, and this is for scaling the wall using the PID control. There are 3 digital sensors (5 cm) on the front, left, and right. As the robot approaches the first right-hand turn, the right digital sensor turns off because there is no longer a wall there. This triggers the robot to slow down on a delay, move up a bit to line up with the intersection, and then check whether a wall is in front or not. If there is a wall in front, it obviously has to turn right. However, if there is no front wall it will keep going straight.

Left turns are a bit harder, but set up similarly. As it approaches a left-hand turn, the left digital sensor turns off, it moves up a bit at a slower speed, and then checks for a front wall. However, it will make the left turn regardless of whether a front wall is there, because we are using a left-hand-on-the-wall method.

The software will record turns (R for right, L for left), but not straight paths or dead ends (B). So it can make it through the maze recording only turns. This is a problem for the second run because it will go down to the first intersection and then perform all the recorded turns at once. We also still haven’t found a solution for distinguishing dead ends from the finish line. We are getting very close, but those few problems keep us from getting it 100% right.

EDIT: I forgot to mention that as soon as one of the side digital sensors turns off, the PID control turns off and the robot slowly goes straight. In other words, while the robot is in an intersection there is no PID control for scaling the left wall. We found it best to use simple commands, such as set_motors(40,40) and the turn function in the line maze solver example program. Basically, whenever both side digital sensors are tripped, the PID control is turned on and is only used for scaling straight paths. Turns are determined and set by the digital sensors. For example, when the robot is in the middle of the intersection and checking for other walls, it can then call the turn function and make the necessary turn. When making a right-hand corner turn, the left and front digital sensors should be on. This set of circumstances tells the robot to make a 90 degree right hand turn and proceed forward slowly until both side digital sensors are on again.
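As a rough sketch of that gating (the names here are made up, not from our actual code), the decision of when to trust the PID comes down to the two side sensors:

```c
#include <assert.h>

/* Sketch of the mode gating described above: PID wall following is
 * trusted only while both side sensors still see a wall; the moment
 * either side opens up, the robot falls back to a slow pre-programmed
 * straight drive (e.g. set_motors(40, 40)) through the intersection.
 * The digital sensors read 0 when a wall is present. */
enum drive_mode { MODE_WALL_PID, MODE_STRAIGHT };

enum drive_mode pick_mode(int left, int right)
{
    if (left == 0 && right == 0)
        return MODE_WALL_PID;   /* walls on both sides: normal corridor */
    return MODE_STRAIGHT;       /* a side is open: coast straight */
}
```

The idea is that the PID only ever runs in a corridor, so it never has to fight the sudden jump in the analog reading when the left wall disappears.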

Wow, it sounds like you are very close to having it working! During the second run can you add a delay after a turn so it does not do all of the turns at once?

- Ryan

Hello.

I think I might not be understanding exactly what you’re doing, because I can imagine cases where the above strategy won’t work. For example, let’s say you’re traveling down a long, straight path with the option to make several left turns along the way. How do you record that the solution involves driving (straight) past the first two lefts and then taking the third?

- Ben

Ryan, I think it would be possible to add delays in the second-run loop. However, it would be hard to tell how far straight the robot needs to go, especially if we were competing in a maze competition and didn’t know the lengths of the straight paths beforehand. That said, it just gave me the idea of coding the robot to scale the wall after a turn, until it hits the next turn, during the second run. We can try something like that, but I’m not 100% sure it would work.

Ben, as I stated in previous posts, we are trying to implement the same strategy that the line maze solver example utilizes. Assuming those first two lefts led to dead ends (non-looped maze), we would want the maze simplifier to cut out those paths. At least, that’s how we understand it from the line maze solver code sample. If we ran the line maze solver logic on your example, it would drive straight (without recording an initial straight), take the first left (recording an ‘L’), record a dead end (‘B’), record another ‘L’ for the left coming out of the dead end, and lastly record a straight (‘S’) between the first and second left turns. It would record the second dead-end left the same way. As we understand it, the simplifier cuts out the paths that lead to dead ends as it makes the first run, so the robot would display the simplified path as an S (for the straight between the first and second left turns), another S (for the straight between the second and third lefts), and then an L for the last left turn. Therefore, the two LBL paths that are the dead ends have been cut out. This is just how we understand it after analyzing the line maze solver, so correct me if we’re missing something.
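To check that the simplifier really behaves this way, the xBx logic from the listing above can be run off the robot. This is a buffer-passing copy of simplify_path() with our own function names, fed one recorded move at a time, the way maze_solve() calls it:

```c
#include <assert.h>
#include <string.h>

/* Standalone copy of the simplify_path() logic from the listing above,
 * reworked to operate on a caller-supplied buffer so it can be tested
 * off the robot.  The function and parameter names here are ours. */
static void simplify(char *path, unsigned char *len)
{
    /* Only simplify when the second-to-last move was a 'B'. */
    if (*len < 3 || path[*len - 2] != 'B')
        return;

    /* Total heading change of the last three moves, in degrees. */
    int total = 0;
    for (int i = 1; i <= 3; i++) {
        switch (path[*len - i]) {
        case 'R': total += 90;  break;
        case 'L': total += 270; break;
        case 'B': total += 180; break;
        }
    }
    total %= 360;

    /* Replace the three moves with the single equivalent one. */
    switch (total) {
    case 0:   path[*len - 3] = 'S'; break;
    case 90:  path[*len - 3] = 'R'; break;
    case 180: path[*len - 3] = 'B'; break;
    case 270: path[*len - 3] = 'L'; break;
    }
    *len -= 2;
}

/* Feed a raw first-run move sequence through the simplifier one move
 * at a time, as maze_solve() does, and write the simplified path. */
void first_run(const char *recorded, char *out)
{
    unsigned char len = 0;
    for (const char *p = recorded; *p; p++) {
        out[len++] = *p;
        simplify(out, &len);
    }
    out[len] = 0;
}
```

Replaying the scenario above, a raw first run of "LBLLBLL" (two dead-end lefts, then the real left) simplifies to "SSL", matching the S, S, L path.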

We know what we want to accomplish, but we aren’t so sure about how to code this and that is why I came to the forums. The main references we’ve been using are the line maze solver sample and the wall follower sample. Integrating all this is obviously not as easy as just combining the two code samples. Right now, we’re mostly relying on trial and error methods by slightly modifying our code to correct problems, and testing it out in the walled mazes we have built to see if it has been fixed. We also looked at the 3pi wall maze solver video, posted by lufamseed, and that is where we got the idea for the 3 digital sensors.

We definitely don’t have it all figured out. So by all means, give us some suggestions if you think something doesn’t make sense or doesn’t look like it would logically work. Like I said, the maze recording of directions is something that has been giving us the most problems and there is a very good chance that our coding method, so far, won’t ever logically work. We are trying to use the same strategy as the line maze solver and it’s not working as intended. Our other problem is still distinguishing dead ends from the finish line. Our ears are open to any other strategies or suggestions. Thanks for showing an interest.

Cutting out paths that lead to dead ends doesn’t mean you don’t need to store “straights”. Your path array should specify what your robot should do at each intersection, where an “intersection” is any place the robot has the ability to turn left or right. At any given intersection, the solution path might require a “left”, “right”, or “straight”. You never need to store a “back” because the backs all get optimized away when you simplify your solution path, but there very well could be straights. In the case I described in my previous post, the corresponding path segment would look like: S, S, L.

Just imagine how you would tell a person to navigate the maze. You might say, “Take your third left, then take your second right, then immediately go left and walk until you see the dead end.” You could encode these instructions for your robot as:

S, S, L, S, R, L

If you restrict yourself to only left and right instructions, you are losing crucial information about which lefts and rights need to be skipped. Your solution path becomes:

L, R, L

And this is just not detailed enough to get you through the maze. Does this make sense?

- Ben

Hey Ben, thanks for the quick response. The way I understand it now, an ‘S’ (straight) tells the robot to drive past openings that lead to dead ends. Before, I was thinking that an ‘S’ would be for any straight path in between intersections. So I did get the S, S, L path right in my previous post, but I misunderstood how the straight instructions were read by the robot logic.

We do understand that we can’t just use left and right turn instructions, and that is part of the problem because we aren’t successfully recording any straight paths with the code I posted. With some slight code modifications we can record 'S’s, but it records way too many. It would show something like, SSSSRSSSSSSSSSRSSSSSSSSSSSL. That’s the only way we have been able to record straights. The way we have it now, it just records the turns without any straights and we don’t want just turns being recorded because it won’t logically work as you said.

We can’t quite figure out how to integrate the straight path recording into our code so far. We fully understand that, as it is now, the robot instructions won’t be detailed enough to get through the maze on the second run. If anything, we need coding help, because we have a decent grasp on how it should work. It’s been difficult to program the logic with only 3 sensor HIGH/LOW input parameters. If you could suggest better ways to code the intersection identification IF statements, that would be very helpful. Perhaps some of our IF statements are located in the wrong areas or are not taking important parameters into account?
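One idea we could try for the runaway SSSS… recordings is to only record on the transition from corridor to intersection, instead of on every pass through the main loop. A rough sketch (the names are made up, not our current code):

```c
#include <assert.h>

/* Sketch: record an intersection only on the transition out of a
 * corridor, not on every loop iteration.  Without this edge detection
 * the same opening gets logged hundreds of times, producing runs like
 * SSSSRSSSS...  The digital sensors read 0 when they see a wall, so
 * left/right == 1 means that side has opened up. */
static int was_in_corridor = 1;

/* Returns 1 exactly once per intersection: on the first loop
 * iteration in which a side opening or a front wall appears. */
int intersection_edge(int front, int left, int right)
{
    int at_intersection = left || right || front == 0;
    int fire = at_intersection && was_in_corridor;
    was_in_corridor = !at_intersection;
    return fire;
}
```

The main loop would then call classify-and-record logic only when this returns 1, and keep wall following the rest of the time.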

It sounds as if you are recording the same intersection multiple times. Very generally speaking, I think your algorithm should be:

Exploring:

  1. Drive straight (i.e. follow the wall) until one of your side-looking sensors stops seeing the wall or your front sensor sees a wall. This in itself seems like a very tricky task, since you need to follow the wall on your left at the appropriate distance without being pulled off course by an upcoming left turn. If you notice a very sudden change in the distance measured by your wall-following sensor, you should immediately disregard its measurements and switch over to a pre-programmed “drive straight” mode. I suggest you use trial and error to determine an appropriate left motor speed and right motor speed that make your 3pi drive in a reasonably straight line without any feedback from your sensors. While following the wall, I think you can get away with somewhat weak PID constants; you know you will mostly be going straight, so your motors should never need large, sudden corrections from the PID algorithm.

  2. Turn off your intersection-detecting code and wall-following code. If you see a wall in front of you and on your left and right, you are at a dead end and you should execute a pre-programmed 180° turn and record a “back”, which will then let you prune your path when you record your next intersection. If you see an opening on your left and right, execute a pre-programmed sequence that uses timing to get you to the center of the intersection. At this point, characterize the intersection to determine what your options are, choose a way to go, and record the choice. Execute a pre-programmed sequence to accomplish this, such as turning 90° left or turning 90° right. Start driving straight, and after a suitable delay, re-engage your wall-following code and turn your intersection-detecting code back on.

  3. Repeat until you find the place you have marked as your finish square.

Solution Run:

This is the same as exploring, except you no longer need to characterize intersections. As soon as you detect an intersection, you know what action is required (left, straight, or right), and you can immediately take that action.
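Roughly, the loop above could be structured as a small state machine. Here is a sketch with hypothetical names; the real timed centering drives and pre-programmed turns would use the trial-and-error timings from the steps:

```c
#include <assert.h>

/* Sketch of the exploration loop as a state machine.  All names are
 * hypothetical; events (intersection_seen, finish_seen) would come
 * from the sensors. */
enum explore_state { FOLLOW_WALL, CENTER_AND_CLASSIFY, TURN_AND_RECORD, FINISHED };

enum explore_state explore_step(enum explore_state s,
                                int intersection_seen, int finish_seen)
{
    switch (s) {
    case FOLLOW_WALL:
        /* Step 1: PID wall following until a side opens up or a
         * wall appears directly ahead. */
        return intersection_seen ? CENTER_AND_CLASSIFY : FOLLOW_WALL;
    case CENTER_AND_CLASSIFY:
        /* Step 2: timed straight drive to the intersection center,
         * then characterize the exits and choose a direction. */
        return finish_seen ? FINISHED : TURN_AND_RECORD;
    case TURN_AND_RECORD:
        /* Pre-programmed 90 or 180 degree turn, record the choice,
         * then re-engage wall following after a short delay. */
        return FOLLOW_WALL;
    case FINISHED:
    default:
        return FINISHED;   /* step 3: stop at the finish square */
    }
}
```

The solution run would reuse the same states, skipping the characterization and instead reading the next action out of the stored path.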

- Ben

Ok, that sounds reasonable and I think it would be possible even given our limited sensor inputs. So far, we have the pre-programmed turns, and delayed timing to get into the middle of the intersection. Those things are working great. We just need to keep refining the intersection identification. It’s been a really fun project so far and we’ll let you know how it turns out. Thanks again!

Please do let us know how it goes. I’m happy to continue to give you advice and high-level feedback on your code, but I think the hard part will be the hard-coded timings necessary to get everything to work out right, and that isn’t something I can help much with (I think it will just take trial and error on your part using an actual robot on an actual course).

- Ben

Here is our latest progress on our project. Still not quite able to get the simplifier working correctly, but it seems to record turns and dead ends. Straight paths are still not being recorded correctly either.

http://www.youtube.com/watch?v=Q6zAv68-qaI&feature=player_embedded

I apologize for the delayed reply. It looks like it is handling the wall following quite well.

One thing I did when writing my line maze solver was to use the buzzer for debugging purposes. Whenever the robot characterized an intersection and decided which way to go, I played a note corresponding to the direction. If I heard the wrong notes or a note at the wrong time, it would give me a better idea where my code might have problems and how to fix them.
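For example, a sketch of such a mapping (the note choices here are arbitrary; on the 3pi each string would be handed to play() from the Pololu AVR library right after the intersection is characterized):

```c
#include <assert.h>
#include <string.h>

/* Sketch of the buzzer-debugging idea: a distinct note per recorded
 * direction.  The note strings are our own choice, in the library's
 * melody notation. */
const char *debug_note(char dir)
{
    switch (dir) {
    case 'L': return "c32";    /* left: low beep */
    case 'S': return "e32";    /* straight: middle beep */
    case 'R': return "g32";    /* right: high beep */
    case 'B': return ">c32";   /* dead end: an octave up */
    default:  return "";       /* unknown direction: silence */
    }
}
```

On the robot this could be called as play(debug_note(dir)) right where the turn is recorded, so a wrong note at a wrong place points straight at the misbehaving branch.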

- Ben

Well, we ended up getting a 95% on the final presentation and a 99% in the overall sequence of courses (4 sequential courses). Below is the link to our final video presentation. I know the maze could have been a lot more complex, but I have limited space because I live in a studio apartment. I smoothed out the left and right turns as best as I could. Thanks.


Congratulations on your grade and well-executed solve! I added your video to our gallery of 3pi videos.

- Ryan

Thank you Ryan! It’s an honor to have my video in the gallery. I hope to continue playing with the 3pi, and I have definitely found a new hobby!