Has anyone tried to make a robot move around a room, detecting obstacles and storing that information along with its coordinates so that it doesn't crash into anything? I mean, have you tried it yourself, not just seen it on YouTube?
I would imagine that to do it you would probably need to mount a laptop on the robot, which makes the whole thing very big and instantly very complicated, but it would be cool all the same.
I’ve done it with a $300,000 robot platform. It had an onboard computer to do all of the processing and control.
The biggest hurdle for most would be the sensors. I used a 180° laser range finder combined with 32 ultrasonic sensors placed all around the robot, plus differential steering with encoders, which was enough to monitor location very accurately. I used the laser ranger to do most of the mapping, and the ultrasonics were mostly for unexpected-obstacle avoidance. The laser ranger was only good for detection at a single height, but the ultrasonics could pick up the higher objects.
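To give a feel for the encoder side of this, here's a rough sketch of dead-reckoning for a differential-drive robot. The function name and parameters are my own (not from any particular platform); it just turns the per-wheel distances you'd derive from encoder ticks into an updated pose:

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckoning pose update for a differential-drive robot.

    d_left / d_right are the distances (m) each wheel travelled since the
    last update, derived from encoder ticks; wheel_base is the distance
    between the wheels (m).
    """
    d_center = (d_left + d_right) / 2.0          # distance of robot centre
    d_theta = (d_right - d_left) / wheel_base    # change in heading (rad)
    # Advance along the average heading over the step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta
```

Call it every time you read the encoders; the error is small per step but accumulates, which is exactly the drift problem mentioned below.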
The laser range finder was very accurate, so the data collected from it was also used to relocalise the robot whenever it matched enough prerecorded objects; this reduced the effects of encoder and wheel inaccuracy and location drift.
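The relocalisation idea can be sketched very crudely like this (my own simplified version, not the actual algorithm used): take landmarks the laser has seen, transformed into world coordinates using the current pose estimate, pair them with their known map positions, and shift the pose by the average mismatch:

```python
def correct_drift(estimated_pose, observed, mapped):
    """Crude drift correction by landmark matching.

    observed: landmark positions as seen by the laser, already transformed
              into world coordinates using the (drifting) pose estimate.
    mapped:   the same landmarks' known positions in the prerecorded map,
              in matching order.
    Returns the pose shifted by the mean offset (heading left unchanged
    in this simplified sketch).
    """
    dx = sum(m[0] - o[0] for o, m in zip(observed, mapped)) / len(observed)
    dy = sum(m[1] - o[1] for o, m in zip(observed, mapped)) / len(observed)
    x, y, theta = estimated_pose
    return x + dx, y + dy, theta
```

A real implementation would also estimate the rotation error (e.g. with a least-squares scan match), but the principle of pulling the pose back towards the map is the same.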
Before I get too much credit, the robot was prebuilt and came with drivers for the sensors and motor controllers, so my job was the higher-level programming: mapping, localisation and path planning.
The robot, having a computer onboard, was about 500x700mm, about 500mm high, and weighed more than I wanted to try and lift. I’m sure you could do this with a smaller robot, but you really need the good sensors (the laser ranger gave me a 0–30m range with a reading every 1°).
On the other hand, if you just want to build a robot that wanders around and avoids things, that would be very simple, but nothing new really.
Hey there. I’ve actually done that, twice: once with my Bluetooth Wall-E, and again just last night with my hacked Tomy Omnibot. That is how I found this conversation.
I used the EZ-B (which has direct support for some Pololu components), specifically the motor controller, which is great!
For obtaining distance data, the Wall-E used a Sharp distance sensor. The Sharp sensor is IR and not very accurate, so the resolution was out of whack. I will post pics for you soon. It wasn’t fantastic.
Now, the Omnibot worked much better because he has a ping distance sensor, the HC-SR04 specifically. I am working on the next video, and the room mapping will be included. I’ll be sure to post it here for ya!
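For anyone following along with an HC-SR04: the sensor reports distance as the width of an echo pulse, and you convert that round-trip time to centimetres yourself. A small sketch (function name is mine; the temperature compensation is optional but cheap):

```python
def echo_to_distance_cm(echo_time_s, temp_c=20.0):
    """Convert an HC-SR04 echo pulse width (seconds) to distance in cm.

    The pulse covers the round trip out and back, so divide by two.
    Speed of sound in air is roughly 331.3 + 0.606 * T m/s at T degrees C.
    """
    speed_m_s = 331.3 + 0.606 * temp_c
    return (echo_time_s * speed_m_s / 2.0) * 100.0
```

So a 10ms echo at room temperature works out to about 1.7m, which is comfortably inside the sensor's usable range.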
Love to see what you come up with also dude.
The thing that gets me is the program for mapping… why hasn’t anyone coded an interface to use a Doom WAD (2D) to navigate as a player, or something like UnrealBots (there was an old project) that ran in Java but used the UT engine to navigate a map in the real world?
Hardware is easy for me, but without simple mapping (not a complicated SLAM model) it’s just a bump-n-go bot or a fancy R/C car.
I just need a simple starting point.
The problem is that the “complicated SLAM” is what’s needed to get even a “simple mapping.” Navigating a map, once created, is pretty simple – as you say, games have done it for twenty years or more. Creating the map without human intervention is something totally different…
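That said, the map-building half of the problem can at least be started simply: feed each range reading into an occupancy grid, marking the beam's path as free and its endpoint as occupied. A minimal sketch (my own naming, assuming a known robot pose, which is of course the part SLAM actually solves):

```python
import math

def mark_ray(grid, x, y, theta, distance, cell_size):
    """Mark one range reading into a 2-D occupancy grid.

    grid is a list of rows (grid[row][col]); cells start at -1 (unknown).
    The robot sits at (x, y) metres, pointing at angle theta (rad), and
    measured an obstacle at `distance` metres. Cells along the beam are
    set free (0), the hit cell occupied (1).
    """
    steps = int(distance / cell_size)
    for i in range(steps):
        cx = int((x + i * cell_size * math.cos(theta)) / cell_size)
        cy = int((y + i * cell_size * math.sin(theta)) / cell_size)
        grid[cy][cx] = 0          # free space along the beam
    hx = int((x + distance * math.cos(theta)) / cell_size)
    hy = int((y + distance * math.sin(theta)) / cell_size)
    grid[hy][hx] = 1              # obstacle at the endpoint
    return grid
```

Sweep a sensor across its field of view, call this for every reading, and you get a crude floor plan; the hard part SLAM adds is keeping the (x, y, theta) estimate honest while you do it.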
I’ve seen projects that walk through a subway station with a camera, then run the footage through number crunching, and out comes an Unreal Engine model of the station with textures and geometry. However, this used a LOT of processing and some proprietary reconstruction algorithms. (This was at a military trade show.)
I’ve also seen research projects create maps using Kinect sensors. There’s a sample for the OpenCV platform (or was it the ROS platform?) that does something like this. If you want to get this going on your own, that might be the simplest way to go.