Multi-robot system with wireless and cameras using Zumos


I’m trying to build a multi-robot system with some basic properties. I want each robot to avoid collisions and move randomly, and I would like each of them to have wireless for communication and a camera on board.

I bought the Zumo robot at this link:, and I got the basic programs working, and now I want to extend it. I’m fairly new to Arduinos, and I’m wondering: can I just plug a wireless module and a camera into this board? If so, what parts would be appropriate?

On a side note, I’d like to do some heavy image processing on this board based on the camera feed. Am I too ambitious in trying to do that on this board? I know I can offload it to a separate computer, but it would be great to be able to do it on board.

Thanks in advance.


It sounds like you have not done much research into this yet. In general, this is something that would be rather hard and, if you do not have experience optimizing code in assembly language to fit in a tiny amount of memory, I do not think you would have much success running any sort of image processing on the ATmega32U4 microcontroller on the board.

There are a few projects around that do image recognition. I have seen a demonstration of the CMUcam5 Pixy doing simple image recognition of a colored object against a contrasting background. There is also the OpenCV software library, which can run on the Raspberry Pi. If you are patient and willing to do the work to put a lot of little pieces together, it might be possible to do a project like that, but, like many things worth doing, it probably will not be easy.



Thanks for your reply. I should have been clearer on this. My research area is computer vision. I’m developing some computer vision algorithms that I’d like to run on a set of 4 robots working together. I’ve done some work with image processing on the RPi, and my problem would have been solved if there were an assembled bot for the RPi.

To get some hands-on robotics experience, I got the Zumo bot and played around with it. I like the speed and controllability of the robot, so I wanted to extend it. I now understand that the ATmega32U4 is not suitable for the image processing I’m interested in. I had a look at the CMUcam5 Pixy, and it seems to be focused on a specific application; I don’t think I can get raw RGB images out of it for offline processing.

Right now, my hack is taping an RPi with a camera on top of the Zumo bot. This is proving workable for now, but unfortunately I can’t make use of the image input to drive the robot.

Let me know if you have any suggestions. I did come across the DiddyBorg project -, which may suit my needs better. I’ll have a look at that.

If you are familiar with the Raspberry Pi, interfacing it with your Zumo might be the most straightforward option. The Raspberry Pi has a TTL serial port exposed on its header, and, with one of our logic level shifters to convert the 5V signals from the Zumo to 3.3V for the RPi, you might be able to write code for the Zumo that listens for input from the RPi and sends data from the built-in sensors on the Zumo back to the RPi. The “Expansion areas” and “Pin assignments” sections of the Zumo 32U4 User’s Guide document where the pins for the serial UART on the Zumo are located. Resources like these are linked to on the “Resources” tab of the Zumo’s product page.
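As a concrete starting point, here is a minimal sketch of a line-based message format the RPi side could use over that serial link. The message formats and function names here are assumptions for illustration, not an official Zumo or Pololu API; you would implement the matching side in the Zumo’s Arduino sketch.

```python
# Hypothetical line-based protocol for an RPi <-> Zumo serial link.
# The RPi sends motor commands like "M,<left>,<right>\n" and the Zumo
# replies with sensor lines like "S,<left_enc>,<right_enc>\n".
# Both the message formats and function names are assumptions.

def encode_motor_command(left, right):
    """Build a motor-speed command line (speeds clamped to -400..400,
    the range used by the Zumo 32U4 motor functions)."""
    left = max(-400, min(400, int(left)))
    right = max(-400, min(400, int(right)))
    return "M,%d,%d\n" % (left, right)

def decode_sensor_line(line):
    """Parse a sensor reply line into (left_count, right_count),
    or None if the line is malformed."""
    parts = line.strip().split(",")
    if len(parts) != 3 or parts[0] != "S":
        return None
    try:
        return int(parts[1]), int(parts[2])
    except ValueError:
        return None
```

On the RPi, these strings would typically be written to and read from the UART with a library such as pySerial (e.g. opening the port exposed on the header and using `write()`/`readline()`); keeping the protocol text-based makes it easy to debug from a terminal.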

If you just want a basic moving platform with power regulation like the DiddyBorg you mentioned, another option would be to do away with the Zumo control board and its sensors; control the motors on the Zumo chassis directly with something like our Pololu DRV8835 Dual Motor Driver Kit for Raspberry Pi; and use a voltage regulator in the space we provide on that kit to power the RPi from the batteries. You will probably want to use the Zumo Chassis Kit (No Motors) for this because it is less expensive than the full Zumo 32U4 robot and comes with an acrylic panel that holds the motors in place (where the Zumo 32U4 relies on the control board to do this).


Thanks Nathan, this is really helpful. I will definitely try the interfacing of the two boards together with this information.

One more thing I want to understand is how I can create some sort of ID for the bots. Basically, I want a very lightweight ID mechanism that I can use to make the robots move toward or away from each other based on their IDs. For example, if I have two clusters of robots, I want the ones with the same cluster ID to move together and the ones with different cluster IDs to move apart. The clustering itself could be implemented with a basic k-means algorithm, but I’m not sure what to use (maybe something like RFID or Bluetooth) to transmit the IDs between robots based on how close they are to each other.

It sounds like you want a wireless communication system for the robots to be able to share their positions. There are a number of wireless communication methods that you could use to establish communication between robots, but I do not have any experience with any that also provide position information. WiFi and Bluetooth are the wireless communication methods most commonly used with the Raspberry Pi. With WiFi, every node should have its own unique MAC address; and if you use TCP/IP on top of that, every node should have its own unique IP address.
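Once each robot can share its ID, cluster ID, and position, the move-together/move-apart behavior you describe can be sketched as a simple attraction/repulsion rule. This is only a toy illustration with made-up field names and an assumed unit-vector behavior rule, not a standard algorithm:

```python
import math

# Toy sketch: each robot broadcasts a state dict with 'cluster', 'x', 'y'.
# A robot steers toward neighbors that share its cluster ID and away
# from neighbors with a different cluster ID. All names here are
# assumptions for illustration.

def step_direction(me, neighbors):
    """Return a (dx, dy) unit vector for one robot given neighbor states."""
    dx = dy = 0.0
    for n in neighbors:
        vx, vy = n["x"] - me["x"], n["y"] - me["y"]
        dist = math.hypot(vx, vy)
        if dist == 0:
            continue  # ignore a neighbor at the exact same position
        sign = 1.0 if n["cluster"] == me["cluster"] else -1.0
        dx += sign * vx / dist  # attract same cluster, repel others
        dy += sign * vy / dist
    norm = math.hypot(dx, dy)
    if norm == 0:
        return (0.0, 0.0)
    return (dx / norm, dy / norm)
```

The resulting heading vector could then be turned into left/right motor speeds on each robot; distances here would come from whatever positioning you end up with (machine vision or dead reckoning).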

If you need to get the position of a robot to combine with the robot’s ID (if you are not doing this with the machine vision), there is a technique called dead reckoning that can use data from the encoders on a robot to get a rough estimate of its position. One of our customers put up this blog post about how the algorithm works and his experiences with using the technique on our Zumo chassis.