Data output question

I am kind of new to this. I have a robot built from an Orangutan X2 with VNH2 motor drivers. It's a fairly simple target-seeking robot that avoids obstacles along the way. There are some variables in the program whose values I want to see. I have a couple of questions.

  1. What is the easiest way to see how the values of those variables changed after the robot has completed its task?
  2. Is there a way to see the values of those variables changing “live” while the robot is running?

Any suggestions will be highly appreciated.


I can think of several potential solutions:

  1. The easiest approach would probably be to use the LCD to display the data you’re interested in. You could have the X2 log the variable values over time and write that log entry by entry to the LCD as you push one of the user pushbuttons once the robot is finished.

  2. If you can keep the robot tethered, you could stream the variable values to your computer via the X2’s USB connection and monitor them with a terminal program (e.g. hyperterm). A USB connection to your PC could restrict your robot in a way that interferes with the tasks it’s trying to perform, however.

  3. Buy a wireless serial module (e.g. XBee) and use it to stream data from your X2 to your PC wirelessly as the robot runs.

  4. Buy an in-circuit debugger/emulator and monitor/manipulate your program as it runs using the mega644’s JTAG interface.

- Ben

Thanks for your reply. Actually, I want to save the values of those variables intermittently, so that at the end I can plot a graph of how they changed over time and put those graphs in my paper.
Besides the solutions you mentioned that involve buying something, is there a way I could store the values in some array and then download that array at the end? (Though I am not sure how that would be possible.)

What you want to do sounds like a mixture of my first and second suggestions. As your robot runs, store your variable values in an array in RAM or in EEPROM. When your robot finishes, connect it to a computer via the X2’s USB connection and output the saved values serially to a terminal program (such as hyperterm) in response to some trigger from you, such as a button press. You can then save the data from your terminal program to a file and import it into something like Excel to make a plot. Does this make sense? Do you know how to program something like this?

- Ben

I only know how to display data on the LCD. I am not sure how to save an array specifically to RAM or EEPROM, and I have no experience with hyperterm either.

Could you kindly guide me on these things and, if possible, point me to some relevant documentation I can get help from?
I'll be very thankful.

Can you give me a better idea of how much data you need to log? What kind of values are you recording, how often, and for how long?

- Ben

The data will be around 10 values (8 doubles and 2 ints). I am quite flexible on the sampling rate, but I need the data to be sufficient, so a sampling interval on the order of centiseconds (or at least deciseconds) is desirable. Usually the robot reaches the destination within a few minutes; the worst case is around 15 minutes, I suppose.

(The data I need to monitor is the distance to the target, the angle to the target, obstacle-avoidance parameters, neural-network weights, and a few underlying variables.)

Unfortunately, the amount of data you want to log far exceeds what can be stored on the X2. On the AVR, doubles are 4-byte values (the same size as floats) and ints are 2-byte values, so you are looking at recording 36 bytes 10 or 100 times per second for 15 minutes, which comes to roughly 325 KB or 3.2 MB of data. For this level of logging, I'd really recommend wirelessly streaming data from the X2 to a computer or using a datalogger to write the data to a microSD card.
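To make the arithmetic explicit, here is a small sketch (plain C; the byte sizes are avr-gcc's, where a double is 4 bytes):

```c
#include <stdint.h>

/* 8 four-byte doubles + 2 two-byte ints per sample on the AVR. */
#define BYTES_PER_SAMPLE (8 * 4 + 2 * 2)  /* 36 bytes */

/* Total bytes logged at a given sample rate for a given duration. */
uint32_t logSizeBytes(uint32_t samplesPerSecond, uint32_t seconds)
{
  return (uint32_t)BYTES_PER_SAMPLE * samplesPerSecond * seconds;
}
```

logSizeBytes(10, 15 * 60) works out to 324,000 bytes, and 100 samples per second pushes that to 3.24 MB.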

Also, you should be aware that working with floats and doubles is fairly processor-intensive and slow. I don't know how much experience you have with microcontrollers, but you cannot approach microcontroller programs the way you would programs for a modern PC. Too many complex operations can significantly slow down your main loop. I typically recommend avoiding floats and doubles unless it's absolutely necessary and you have determined that you can spare the processing power. It is often possible to represent decimals with integer data types, and in many applications sixteen bits is sufficient resolution. For example, if your application deals with dollar values, you can store them as integer numbers of cents rather than floating-point numbers of dollars. Your neural network computations could be orders of magnitude faster if you can rework the math to use integer values between, say, 0 and 1000 rather than 0.0 and 1.0.
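As a small illustration of that fixed-point idea (plain C, not tied to the X2 library): store each weight as an integer number of thousandths and rescale after multiplying:

```c
#include <stdint.h>

/* Weights stored in thousandths: 0..1000 represents 0.0..1.0. */
typedef int16_t weight_t;
#define WEIGHT_SCALE 1000

/* Multiply two scaled weights; the 32-bit intermediate keeps the
   raw product (up to 1,000,000) from overflowing before rescaling. */
weight_t weightMul(weight_t a, weight_t b)
{
  return (weight_t)(((int32_t)a * b) / WEIGHT_SCALE);
}
```

weightMul(500, 500) gives 250 (i.e. 0.5 × 0.5 = 0.25) using only integer operations, which the mega644 handles far faster than software floating point.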

Can you tell me more about your project and how the neural network is involved? It sounds very interesting. Also, what is the current state of your robot? Is it built or just in the designing stages? Does it work?

- Ben

Thanks for your reply. I don't have much experience programming microcontrollers, and I think I can definitely reduce the data size by changing some double values to integers. Can you kindly tell me how much data can be stored on the X2? And how can I get it out over the USB cable connection if I want to see only some of the integer values?

About the project: the robot is already built and running. I am including neural networks in the autonomous control so the robot can remember (and then avoid accordingly) different kinds of obstacles while maneuvering through an obstacle-ridden environment. I have tested this system in Matlab and was trying to implement it step by step in the software for this physical robot. I haven't put my whole neural network on the X2 yet (though its target-seeking behavior is implemented), so you could be right that I have to make my programming less resource-hungry. I just wasn't sure how I could get the data out to see whether the results are close to what I got from simulation, so that I can continue programming all the stuff inside the robot.

I am trying to avoid adding a wireless or other module, because I don't want to make things more complex for myself. Besides, that would add more time to the completion of this project, which I can hardly spare.

The X2 has 4KB of RAM. This RAM is used for the AVR’s stack, so you need to make sure you leave some room in RAM for your program’s stack needs (the amount of stack space you need will depend in part on how deeply nested your functions are and how many local variables they use). You could probably use 2 or 3KB of RAM for logging your data in global arrays.
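To put that budget in concrete terms, a quick sketch (using the 36-byte sample size from earlier in the thread):

```c
#include <stdint.h>

/* Rough RAM budget check: with ~3 KB free for logging and 36 bytes
   per sample (8 four-byte doubles + 2 two-byte ints), how many
   samples can the log hold? */
uint16_t maxSamples(uint16_t ramBudgetBytes, uint16_t bytesPerSample)
{
  return ramBudgetBytes / bytesPerSample;
}
```

maxSamples(3072, 36) works out to 85 samples, so even at one sample per second you would only cover about a minute and a half; shrinking the doubles down to 2-byte integers (18 bytes per sample) roughly doubles that.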

One way to log data in RAM is to declare a global array of structs. For example:

struct logDataStruct
{
  unsigned int targetDistance;      // 2 bytes
  unsigned int targetAngle;         // 2 bytes
  unsigned char networkWeights[5];  // 5 bytes
};

struct logDataStruct data[100];  // total size is the 9-byte struct times 100 (900 bytes in this example)

You would then have a global index to this array that starts at 0 and counts up to the data array size (100 in this example) as you log values:

if (index < 100 && timeToLog)  // timeToLog: however you detect that it's time for a sample
{
  data[index].targetDistance = curTargetDistance;
  data[index].targetAngle = curTargetAngle;
  data[index].networkWeights[0] = networkWeights[0];
  // ...and so on for the remaining weights...
  index++;
}
Once you’re ready to output the data to a computer, you can use the serial wrapper functions in SPI.c. Take a look at the UART commands section of SPI.c; all of the functions are commented in that file. Let me know if you still have questions about how to use these functions. Just following your robot with a laptop that’s plugged into the X2 via USB might be the best solution for you (if such a tether is physically possible), since then you could just stream as much data as you want in real time.
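For the dump step, one way to keep the formatting separate from the X2-specific serial calls is to build each record into a text buffer first. This is my own sketch (with a trimmed two-field version of the struct so it compiles on its own; dumpLog is a placeholder name, not an SPI.c function):

```c
#include <stdio.h>

/* Trimmed two-field version of the log struct from above. */
struct logDataStruct
{
  unsigned int targetDistance;
  unsigned int targetAngle;
};

struct logDataStruct data[100];

/* Format the first 'count' records as comma-separated lines into 'out'
   (NUL-terminated); returns the number of characters written. On the
   robot you would then send the buffer out over USB one byte at a time
   with the SPI.c serial routines and capture it in your terminal
   program as a CSV file that Excel can open directly. */
int dumpLog(unsigned char count, char *out, int outSize)
{
  int used = 0;
  for (unsigned char i = 0; i < count && used < outSize; i++)
    used += snprintf(out + used, outSize - used, "%u,%u\r\n",
                     data[i].targetDistance, data[i].targetAngle);
  return used;
}
```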

You could store a lot more data if you used the X2’s flash, but writing to flash is much more complicated on the AVR, and it’s not something I have too much experience with, so you’d mostly be on your own in figuring it out. You could try googling around for things like “writing to flash on AVR”.

So how are you training your neural network?

- Ben

Thanks for your reply. I’ll try to use this, and see if it works for me. I’ll certainly ask more questions if I get stuck somewhere.

For the neural network, I am using a reinforcement model, with the robot learning entirely from "impacts" with obstacles (the robot will learn whether it can go over an obstacle, drag it, or whether the obstacle is solid enough to stop the robot, etc.). After a certain number of impacts, the robot will have learned enough about the obstacles to avoid them accordingly in the future, so the obstacle avoidance is not totally collision-free. I am using a three-layer multilayer perceptron model, with all weights updated only at impact, as explained.


How many training runs do you expect it will take for the network to learn appropriate responses? In my past work with neural networks I used thousands of training rounds (or more); it seems much more difficult to do that in the physical world, where a single training run takes minutes rather than small fractions of a second.

- Ben

You are right, though. But there are certain mitigating factors. I have kept the number of actual and virtual weights considerably small (I try to increase the accuracy by increasing the number of perceptrons instead of the number of weights in an individual model). This gives me a faster learning response. Plus, I can always crank up the gains in the weight-update laws for speedier learning. Also, for me a "learned" robot only needs to distinguish certain crude characteristics of obstacles rather than details about them (because the robot's main task in my case is reaching the target, not mapping). For obstacle avoidance, I have also included a virtual memory (built from a filter) to avoid obstacles, especially around edges, so the load on the neural network can be reduced.
I tried that in simulation, and after playing around with the variable factors, I was able to get the required results. So now I am trying to implement it on robots (I've prepared another Lego robot too for testing) to check whether I can get the desired results on physical architectures as well. But I still have my fears, as you mentioned, about the robot's learning curve. I'll see what results I get from the implementation. (Fingers crossed.)

Well, it sounds like a really cool project. Please let us know how it turns out!

- Ben

Hello Ben,
Thanks so much for your kind and helpful answers.
You mentioned using the serial wrapper functions in SPI.c to output the data to the PC. Could you explain in more detail how to choose these functions and how to use them?
I really appreciate the help you have offered us.


Do you have an Orangutan X2? The functions in SPI.c that I mentioned above have been integrated into the latest version of the Pololu AVR Library. Take a look at the Orangutan Serial Port Communication section of the library command reference for more information on using the library to send data from your Orangutan X2 to the PC using its virtual COM port. If things are still unclear, please let me know.

- Ben