I have a MinIMU-9 and I’m trying to understand what I need to do with regard to offsets for the accelerometer.
When any of the axes is pointed up (or down), the reading is close to 1g, as you would expect. However, when an axis is horizontal, the reading is ~4g when I would have thought it would be around 0. The board is stationary when I take these readings.
According to the documentation, the zero-g level offset is supposed to be factory calibrated, so I took that to mean I don’t have to perform a calibration routine myself.
Previously, when I was getting ~4g at horizontal, I was holding the board horizontal with an alligator clip on one corner, while on the other side, I had four female jumper wires attached to the header pins. When I inserted the header pins into a breadboard instead, I could get a reading of ~0 for horizontal. Do you think that the way it was being held could have caused the problem?
I’ve got another question now, though. If I point an axis up (arrow up), I get ~1g. If I point it down (arrow down), I get ~3g. That’s still a difference of 2g, which is the same as the difference between +1g and -1g, but does it make any sense for the reading to be 3g?
The results you are reporting are unexpected. It does not make sense for it to be 3g when the axis is pointed downward. It sounds like something is wrong with your code. If you post it, we might be able to notice something.
Sorry for the delay. A couple of us looked at your code and don’t see anything wrong with it. However, since it is not your complete code, we can’t be sure about your whole program. For example, the code that turns on the accelerometer is not shown here. Could you post the complete code of the simplest program that does not behave as you expect? Also, what processor are you using?
Thanks for posting a minimal example. It’s very helpful! I noticed your clock speed is set to 400kHz. Can you try reducing it to something like 100kHz, to see if that helps? Another thing to try would be to insert the literal values for the bytes you expect to receive and verify that your program’s math works correctly for negative numbers. Also, can you try decreasing your sensitivity, to see if that helps?
I figured out what is wrong; here is a C# program that demonstrates the problem:
using System;

namespace ConsoleApplication1
{
    class Program
    {
        static void your_math()
        {
            byte high = 0xC1;
            byte low = 0x80;

            // The right shift happens while the value is still a plain int,
            // so zeros are shifted in from the left before the cast to Int16.
            var data = (Int16)(((high << 8) | low) >> 4);
            System.Console.WriteLine("your result: " + (double)data);
        }

        static void correct_math()
        {
            byte high = 0xC1;
            byte low = 0x80;

            // Casting to Int16 first reinterprets 0xC180 as the negative
            // value -16000; the shift that follows then preserves the sign.
            var data = (Int16)((high << 8) | low) >> 4;
            System.Console.WriteLine("correct result: " + (double)data);
        }

        static void Main(string[] args)
        {
            your_math();      // prints "your result: 3096"
            correct_math();   // prints "correct result: -1000"
            while (true) ;    // keep the console window open
        }
    }
}
The problem is that by shifting right before converting your number from unsigned to signed, you shift zeros in from the left instead of copies of the sign bit, which destroys the two’s complement form of negative readings. By the way, I made this program by following my earlier suggestion of simplifying your program by inserting literal values for the bytes you expect to receive.
Thanks very much for your help. I’m sorry for not taking your advice about checking the math. I probably should have explored that first, given that the program was not producing the expected negative values. Doh.