I have been playing with a few accelerometers - such as the LSM303DLHC. My purpose was simple - to detect a “sound” - or, more technically correct, a vibration caused by a sound wave. I was not so much concerned with units as with relative changes between readings.
For example, I ran an LSM303DLHC continuously for many hours, taking readings at 200 Hz. I saved the readings to a log file, and when I look at the data, I see most of the readings are the same. By that, I mean 99.99% of the X axis readings are the same value +/- 1 or 2, and the same for the Y and Z axes. There were times when I could see a dramatic change in the readings - and those times corresponded to times when I knew a “powerful”, but very brief, sound to have been present. This is exactly what I wanted to be able to see.
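To make the idea concrete, here is a rough sketch (not my actual script) of the kind of check I do on the logged data: treat the overwhelmingly common value as the quiet baseline and flag any sample that departs from it by more than a few counts. The threshold and sample values here are made up for illustration.

```python
# Hypothetical sketch: flag samples that depart from the quiet baseline.
# Assumes raw single-axis readings (signed integer counts) logged at 200 Hz.
from statistics import median

def find_events(readings, threshold=5):
    """Return indices where a reading deviates from the baseline
    by more than `threshold` counts."""
    baseline = median(readings)  # the ~99.99% quiet-state value
    return [i for i, r in enumerate(readings) if abs(r - baseline) > threshold]

# Mostly steady readings with one brief spike at index 5:
x = [512, 513, 512, 511, 512, 540, 512, 512]
print(find_events(x))  # -> [5]
```

A real log would of course be hours of data per axis, but the principle is the same: the interesting events stand out against an otherwise nearly constant baseline.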
Now, I’d like to explain to some people exactly what my data shows, and that’s where my understanding gets a little shaky.
As it has been explained to me, when the sensor is sitting still, with absolutely no vibration present (not that this can actually be achieved), its non-zero readings are due to the rotation of the earth. I’ve seen how varying the height of the sensor, while otherwise letting it remain still, varies the Z axis reading.
Can someone explain: if the sensor remains in a fixed position (with “zero” vibration/movement), why don’t the X and Y axis readings remain “constant”, regardless of where the sensor is placed?
I have my sensor run by an 8051, with the entire circuit mounted on a piece of plywood. I set that on a table and took 4 sets of readings, rotating the circuit 90 degrees between each set. The Z axis readings did remain the same, but the X and Y axis readings varied, and I’m not sure why. I was sort of expecting that if in one orientation the X readings were ‘abc’, then rotated 180 degrees the X readings would be something like ‘-abc’.
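To make that expectation concrete, here is a small sketch (in Python, with made-up numbers) of how a constant 1 g vector would project onto the sensor axes if the board sat with a small fixed tilt and were rotated about the vertical axis. Under that assumption, a 180 degree rotation should flip the sign of the X and Y components while leaving Z unchanged - which is the ‘abc’ / ‘-abc’ pattern I was expecting:

```python
import math

def tilted_readings(tilt_deg, heading_deg, g=1.0):
    """Gravity components (in g) seen by a sensor tilted `tilt_deg`
    from level, after rotating the board `heading_deg` about vertical.
    Purely illustrative geometry, not LSM303DLHC raw counts."""
    tilt = math.radians(tilt_deg)
    heading = math.radians(heading_deg)
    # The in-plane component of gravity splits between X and Y
    # according to the heading; Z sees the out-of-plane component.
    x = g * math.sin(tilt) * math.cos(heading)
    y = g * math.sin(tilt) * math.sin(heading)
    z = g * math.cos(tilt)  # unchanged by rotation about vertical
    return x, y, z

# A 2-degree tilt, rotated through the four 90-degree positions:
for h in (0, 90, 180, 270):
    x, y, z = tilted_readings(2.0, h)
    print(h, round(x, 3), round(y, 3), round(z, 3))
```

If the table were perfectly level and the sensor had no offset, X and Y would be zero at every heading; with a small tilt, they trace out equal and opposite values at 0 vs 180 degrees, so any mismatch in my actual readings presumably comes from something else (offsets, an uneven table, etc.).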