MinIMU-9 v5 and AHRS questions

I am planning to connect the IMU to an A* (micro), at least for initial testing. I haven’t actually done the connection yet, but am trying to “think ahead”. I’ve studied your AHRS library to try to understand better how to use the device. I have so little expertise in this area that I need to ask some likely dumb questions.

I need to implement a “simple AHRS” for a wheeled differential-drive robot. I really only want to know if the robot is traveling “straight” when I want it to go straight, or determine what angle it has turned when I want it to rotate. To do those things, I think I only care about heading, and don’t care about pitch and roll. Does this seem reasonable?

While I’ve not used accelerometers and gyros, I have used a magnetometer in my environment (a high-rise condo). I found that due to significant amounts of iron (rebar and stiffening cables) in my floor, a magnetometer is useless for understanding or even contributing to understanding heading. I don’t understand your AHRS code well enough to understand the underlying principles, so I can’t determine what impact no magnetometer input would have on the algorithm. I’ve followed some of the URLs in the code comments, but have not yet tracked down enough information to help. Can you point me to references to the theory from which the code was derived? Can you estimate the impact of no magnetometer?

I plan on having a Raspberry Pi interact serially with the A* for other reasons. So I thought it could be reasonable for the A* to interact with the IMU to get accelerometer and gyro information and then let the Pi do all the apparently heavy floating point mathematics. Seems like a Pi 3 should be a lot faster at floating point math. Does that sound reasonable? Once I go that far, I could in fact just let the Pi talk directly to the IMU. Right now I have that sort of flexibility. Any advice?


Well, pretty soon after I initially posted I was ashamed of myself for posting so hastily (I should not post at the end of a long day). I think I’ve now answered all of my questions.

After doing more searching today, I concluded that since I can only use the accelerometer and gyroscope, I have to use a different approach than your AHRS. I’ve found a few good sources of the proper theory and a couple of partial implementations I can test. These involve integrating the output of the gyro to produce the heading (yaw angle) and using sensor fusion (complementary filter) with the accelerometer to help improve accuracy.

This different approach should be a lot less math intensive. Thus it seems that the two choices for implementation are (1) doing all the work in the Arduino, or (2) doing all the work in the Pi. I plan to start on the A* to leverage your LSM6 library.


If you are only interested in heading, the accelerometer will probably not be very useful to you (it is mainly used as a reference for the pitch and roll angles in an AHRS).

Our Arduino library will probably make it a little easier to get started with an A-Star, but you are right that the Pi will be much more capable of doing floating point calculations quickly. It should not be too difficult to interface with the LSM6 from a Raspberry Pi; if it helps, here is some Python code for doing so. Note that it is pretty minimal and somewhat of a work in progress:

import smbus
import struct
import collections

class Regs(object):
  CTRL1_XL    = 0x10
  CTRL2_G     = 0x11
  CTRL3_C     = 0x12
  OUTX_L_G    = 0x22
  OUTX_L_XL   = 0x28

Vector = collections.namedtuple('Vector', 'x y z')

class LSM6(object):

  def __init__(self, slave_addr = 0b1101011):
    self.bus = smbus.SMBus(1) = slave_addr
    self.g = Vector(0, 0, 0)
    self.a = Vector(0, 0, 0)

  def enable(self):
    self.bus.write_byte_data(, Regs.CTRL1_XL, 0x50) # 208 Hz ODR, 2 g FS
    self.bus.write_byte_data(, Regs.CTRL2_G, 0x58) # 208 Hz ODR, 1000 dps FS
    self.bus.write_byte_data(, Regs.CTRL3_C, 0x04) # IF_INC = 1 (automatically increment register address)

  def read_gyro(self):
    byte_list = self.bus.read_i2c_block_data(, Regs.OUTX_L_G, 6)
    self.g = Vector(*struct.unpack('hhh', bytes(byte_list)))

  def read_accel(self):
    byte_list = self.bus.read_i2c_block_data(, Regs.OUTX_L_XL, 6)
    self.a = Vector(*struct.unpack('hhh', bytes(byte_list)))

  def read(self):
    self.read_gyro()
    self.read_accel()

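As an aside on those `struct.unpack('hhh', ...)` lines: the LSM6DS33 output registers hold each axis as a signed 16-bit value, low byte first, which is what the `h` format code decodes on a little-endian machine like the Pi. A hardware-free sketch of just the decoding step (the byte values here are made up):

```python
import struct

# six raw bytes as read_i2c_block_data would return them, starting at
# OUTX_L_G: x low, x high, y low, y high, z low, z high
byte_list = [0x10, 0x00, 0xF0, 0xFF, 0x34, 0x12]

# '<hhh' = three little-endian signed 16-bit integers
gx, gy, gz = struct.unpack('<hhh', bytes(byte_list))
print(gx, gy, gz)  # 16 -16 4660
```

Using `'<hhh'` instead of the native-order `'hhh'` pins the byte order explicitly, so the same code would also work on a big-endian host.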

Thanks very much for this code. Should be helpful.

I’ve attached the IMU to an A* and have been testing, looking at the output of the accelerometer and gyroscope for various types of movement.

While trying to understand the sensor and how best to use it, I found this application note. It states “Time interval Ts is critical to get accurate results. The actual value should match as closely as possible to the target value; any discrepancy will cause errors similar to non-unity gyroscope sensitivity.” Ts refers to the interval at which the sensor gets sampled. I’ve seen similar warnings/observations in other material.

This led me to wonder about a couple of things. First, the default setup for the IMU is to sample internally at 1.67 kHz. The internal FIFO is not used, so a new sample gets to the output registers at that frequency. Your “out of the box” sketch called Serial then reads the output registers at a frequency of 10 Hz. Thus it seems there could be a little jitter in the actual sample time, and the effective sample period would almost always be a bit greater than 100 ms (a rate slightly under 10 Hz).

Then I got to thinking about the delay(100) statement used to establish the 10 Hz. That seems inappropriate for precise sampling, because the loop reads the output registers, which takes time a, prints some stuff, which takes time b, and then delays 100 ms; so the actual sample period is a + b + 100 ms. I’m going to try using a true interval of some period between samples, which I think is more appropriate and would get closer to 100 ms (or whatever the period might be).
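One common fix for that a + b + 100 ms problem is to sleep until an absolute deadline rather than for a fixed amount after the work, so the per-loop overhead never accumulates. A sketch of the idea in Python (the same pattern works with millis() on the A*); `read_and_print` is just a placeholder for whatever work happens each cycle:

```python
import time

PERIOD = 0.100  # target 100 ms sample period

def read_and_print():
    pass  # placeholder for reading the IMU registers and printing

t0 = time.monotonic()
next_deadline = t0 + PERIOD
for _ in range(5):
    read_and_print()                      # takes time a + b
    # sleep until an absolute deadline instead of a fixed 100 ms
    delay = next_deadline - time.monotonic()
    if delay > 0:
        time.sleep(delay)
    next_deadline += PERIOD               # deadlines never drift

elapsed = time.monotonic() - t0
print(round(elapsed, 2))  # close to 0.5 regardless of the work time
```

Because each deadline is computed from the start time, a slow iteration eats into its own sleep instead of pushing every later sample back.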

This line of thinking made me wonder about the Pi gathering information from the IMU. It is much less of a real-time device than the Arduino. Thus I wonder if a Pi could really get sensor information on a truly periodic basis. I’ve never used a Pi for anything approaching the real-time needs the IMU appears to have. I wonder if maybe the IMU FIFO would be required.

Any comments? Am I overthinking (again)?


I’m not sure what kind of calculations you are planning to do, but since you are just trying to tell how much your differential drive robot has turned, I would recommend simply reading the vertical axis of the gyro periodically, multiplying the reading by the time since the last reading, and then adding it to an accumulator variable. The accumulator variable will then represent how much your robot has turned since the algorithm started running.
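That accumulator can be sketched in a few lines; here the sensor reading and timing are faked so the logic is visible, and `GAIN_DPS` assumes the 35 mdps/LSB sensitivity of the 1000 dps full-scale setting (substitute your own reading and timing code):

```python
GAIN_DPS = 0.035  # degrees/sec per LSB at 1000 dps full scale

def update_heading(heading_deg, raw_z, offset, dt_s):
    """Integrate one gyro sample: heading += rate * dt."""
    rate_dps = (raw_z - offset) * GAIN_DPS
    return heading_deg + rate_dps * dt_s

# fake a 2-second spin at a constant 45 deg/s (45 / 0.035 ~ 1286 LSB),
# sampled every 20 ms
heading = 0.0
for _ in range(100):
    heading = update_heading(heading, raw_z=1286, offset=0, dt_s=0.020)
print(round(heading, 1))  # ≈ 90.0
```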

I’ve used a gyro in this way to make a line following robot that learns the course. We also have example code showing how to sense turns this way on the Zumo 32U4 and on the Romi 32U4.

The Raspberry Pi works pretty well for this type of thing. My minimu9-ahrs program runs on the Raspberry Pi and reads from the MinIMU-9 approximately every 20 ms. Without doing anything too special, I was able to get my main loop to run within a few percent of the desired speed. When I switched to using a timerfd, the timing got much better; you can see my numbers on the GitHub issue. Of course, since it’s not a real-time operating system, there is not much to guarantee that your program will get to run on time and not be delayed for hundreds of milliseconds. There might be some ways to tweak the Linux scheduler that might help, but I don’t know much about that (see the GitHub issue for more info).

I don’t think you will need to use the IMU FIFO. Since you are sampling much faster than the physical system can change, it should not matter whether there is a little jitter in the exact timing of your input. Your input timing might jitter slightly because of the 1.67 kHz internal sampling rate, but the movements of the robot that you are measuring probably take at least hundreds of milliseconds to happen.

From that ST application note, you might be thinking that Ts, the time between samples, is a constant that you calculate ahead of time. I think it’s better to think of it as a dynamic thing that you measure every time you take a reading. So if your robot gets busy somehow and readings start getting delayed, and the reading rate goes down by 10%, your results will still be pretty accurate because the time measurements that you multiply by will go up by 10%. You should, of course, make sure that you are measuring time in a good way so that the sum of all the time intervals you measured is close to the real time that passed on average over a long period of time.
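That “measure Ts every time” pattern looks like the following sketch; `read_gyro_z` is a hypothetical stand-in for the actual I2C read:

```python
import time

def read_gyro_z():
    return 0.0  # hypothetical placeholder for the real sensor read (dps)

heading = 0.0
last_t = time.monotonic()
for _ in range(10):
    rate_dps = read_gyro_z()
    now = time.monotonic()
    dt = now - last_t     # measured, not assumed: a late read gets a
    last_t = now          # proportionally larger dt, so the sum stays right
    heading += rate_dps * dt
    time.sleep(0.02)
```

The key point is that `dt` comes from a clock, so even if the loop rate sags by 10%, each sample is weighted by the time it actually covered.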


Thanks for a thorough and extremely helpful response. I’d like to follow up
and will address the individual topics in order.

“How much turned”:

I’ve actually already been doing what you suggest in a simple bench setup.
I have the IMU attached via I2C to an A*. The A* connects to my Mac via a
USB cable so that I can capture the IMU data via the console. I then scale,
integrate, and plot the results using Excel (all manual but best option I
have). In my initial tests, I found that I can definitely detect rotation
around Z, but the accuracy was less than I’d hoped. For rotations of
roughly 5 degrees, I got a reported 3.5 degrees or so, and for a 90 degree
rotation something more like 82-85 degrees.

After digesting the information in your response, I made changes in my test
program so that I only reported the gyro z axis and the result of the
micros() function. I then used the actual delta T (Ts) for the integration.
Plotting that gave me very close to 90 degrees (certainly within the margin
of my ability to execute a true 90 degrees) for rotation in either
direction. A huge improvement! I have not yet tested smaller turns.

“line following robot”:

I browsed the code for the line follower. I think I understand most of it
but I have a few questions:

  • in l3g.cpp lines 27-28, why did you choose to disable the high-pass
    filter? (this may not apply to the sensor in the LSM6DS33 I use, which also
    has a high-pass filter)
  • in TurnSensor.h line 7 you mention the gyro offset; in my testing I’ve
    found that the offset “drifts” at least over a period of minutes; I’ve been
    thinking that I’d need to take a few samples while motionless to determine
    the current offset and then perform movement; did you see this “drift” in
    your experience?
  • in turn_sensor.cpp, lines 54-56 mention a “fudge factor” that does not
    appear to be used; how did you know that one is required; how did you
    determine the value; why did you not use it? (this too may not apply to
    the LSM6DS33)

I looked at the Romi code, which seems the most relevant to my components
and have some questions:

  • in TurnSensor.cpp lines 9-21 you discuss a convention for turn angle;
    I think I understand it at one level (avoiding floating point
    calculations), but not completely; can you elaborate on the purpose/value
  • in TurnSensor.cpp lines 46-48 you set the full scale reading to 1000
    dps vs the default 245; I don’t understand the reasoning, and I worry there
    is something I am missing that can help me; can you elaborate? in fact, I
    confess I am not sure how to interpret “full scale”; for example, in the
    tests I mentioned above, the maximum/minimum (raw) values were
    approximately +15000/-15000, and the (scaled) values in dps were
    approximately +130/-130; however, I was turning sort of slowly, tho perhaps
    about as fast as my robot should turn; I guess the full scale thing really
    constrains how fast a rotation happens, is that correct?
  • in TurnSensor.cpp lines 59-71 you calculate an offset (in effect
    ’zero’ the gyro); over what period of time do you expect that offset value
    to remain valid; as I said, in my experience it varies over minutes, tho
    maybe not enough to matter over minutes (this more or less duplicates a
    question above but for my sensor)
  • in TurnSensor.cpp lines 103-106 you assign a uint16_t from micros(),
    which returns an unsigned long; that means you expect a dt that is less
    than about 65 milliseconds; is that correct? further, micros() wraps in
    about 70 minutes and I see no provision for dealing with wrap; presumably
    you’d never have to worry about it as long as the A* was never powered up for
    that long; comments?

I attempted to look at your Pi code, but I never built up the skill with
C++ required to follow it well. I’d hope to use either Java or Python (I
recognize the attendant performance penalty). I may have to compromise on
some sort of hybrid approach. Or I may have to bite the bullet and learn
C++ for things like this.


Given that at least for now I don’t want to use C++ on the Pi, and it will
be even less real time using Python or Java, I am still on the fence
regarding the FIFO. I don’t yet feel comfortable that I could make it work,
but the buffering concept would seem to be helpful. On the other hand, there
could be some uncertainty about the continuity of the timing from retrieving
one FIFO batch to the next. In any case, probably not worth worrying about for now.

I agree with your last paragraph, as I believe I’ve proven it correct with
the latest tests I’ve made. And you are most certainly correct that any
expected rotation of the robot will take place over a time period of 1/2 second
or more. That said, I wonder if reducing the device sample rate to e.g.
even 52 Hz makes sense.

Also, a somewhat more general question. In the description of the LSM6DS33
carrier you sell I see “With an appropriate algorithm, the gyro can be used
to very accurately track rotation on a short timescale, while the
accelerometer can help compensate for gyro drift over time by providing an
absolute frame of reference.” Can you elaborate?

It sounds like you found my line-following code and are asking about this part:

// CTRL5: High-pass filter disabled
l3gWriteReg(0x24, 0x00);

The high-pass filter on the L3GD20H is disabled by default and I think I was just trying to make sure that the sensor was using its default settings, instead of accidentally using some settings that were applied earlier by a different version of the code.

I have not noticed the offset for any of our gyros changing over the course of many minutes, but I also have not tried to look for that effect, and I have not characterized how long a gyro offset value remains valid. I’d be interested to see the code you used to determine that the offset is changing, and the numbers you got for it.

Regarding the commented-out fudge factor for the gyro that you found in my line-following code: I think I experimented with multiplying gyro readings by that amount when I was working on dead reckoning. It seemed like my robot’s heading measurement was drifting over time so I thought I could just multiply the gyro readings by something to correct for that. I chose the fudge factor with trial and error but I don’t think it ended up being helpful.

In TurnSensor.cpp for the Romi’s RotationResist example, we need to keep a variable that represents an angle and we need to decide what units to use for that variable. You could use familiar units like degrees, millidegrees, or radians. Instead, I chose to use units such that 0x20000000 represents a 45 degree counter-clockwise rotation. This means that I can store angles in a 32-bit unsigned integer, and when the integer overflows from 0xFFFFFFFF to 0, that corresponds to my angle overflowing from approximately 359.9 degrees to 0 degrees. Heading angles and unsigned integers in C++ both use modular arithmetic, so it seemed like it could be useful to take advantage of that.

One consequence of this is that if you pick up a Romi running the RotationResist example and turn it by 270 degrees, it knows it can just turn another 90 degrees to get back to where it was facing, instead of changing direction and turning 270 degrees in the opposite direction. It doesn’t have to explicitly do any modular arithmetic to get that behavior; it happens because unsigned integers in C++ use modular arithmetic when they overflow.

On the other hand, if you want to command your robot to turn 180 degrees or more in one particular direction, it is hard to express that with these units, because an int32_t can only hold angles from -180 degrees to 180 degrees, so that is a trade-off. You might have to (internally) chop that movement up into several 90 degree segments if you are doing something like that.
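That wraparound behavior can be demonstrated without any robot; this is just Python arithmetic masked to 32 bits to mimic C++ unsigned overflow, using the same 0x20000000-per-45-degrees convention:

```python
UNITS_PER_45_DEG = 0x20000000
MASK32 = 0xFFFFFFFF

def deg_to_units(deg):
    """Angle in degrees -> 32-bit angle units (wraps like a uint32_t)."""
    return int(deg / 45 * UNITS_PER_45_DEG) & MASK32

def units_to_signed_deg(units):
    """Reinterpret the uint32 as an int32: result lands in [-180, 180)."""
    if units >= 0x80000000:
        units -= 0x100000000
    return units * 45 / UNITS_PER_45_DEG

# a full 360-degree turn wraps back to exactly 0
print(hex(deg_to_units(360)))  # 0x0

# pick the robot up and turn it 270 degrees counter-clockwise...
angle = deg_to_units(270)
# ...and the signed interpretation says the shortest way back is -90:
print(units_to_signed_deg(angle))  # -90.0
```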

Later in that file, I change the full-scale range of the LSM6DS33 gyro to 1000 dps instead of the default of 245 dps because it is possible for the robot to turn faster than 245 degrees per second, and when that happens I want to measure how fast it is turning instead of having the reading be capped. By increasing the full-scale setting, we can measure faster rotations, but we also give up some precision.
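To put numbers on that trade-off: the LSM6DS33 datasheet gives the gyro sensitivity as 8.75 mdps/LSB at the default 245 dps full scale, rising to 35 mdps/LSB at 1000 dps. A quick conversion of the ±15000 raw peak readings mentioned earlier in the thread:

```python
# gyro sensitivity in degrees/sec per LSB, per the LSM6DS33 datasheet
SENSITIVITY = {245: 0.00875, 500: 0.0175, 1000: 0.035, 2000: 0.070}

raw = 15000  # the peak raw reading from the bench test above
for fs, gain in sorted(SENSITIVITY.items()):
    print(f"{fs:4} dps full scale: {raw} LSB -> {raw * gain:.1f} dps")
```

At the default 245 dps setting, 15000 LSB works out to about 131 dps, which matches the roughly ±130 dps scaled values reported in the question.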

Yes, I am expecting turnSensorUpdate to be called as frequently as possible while the robot is performing a turn, so it should be called more often than every 65 ms.

Yes, micros() overflows from 0xFFFFFFFF to 0 roughly every 70 minutes. However, you can still subtract two different readings of micros() to tell how much time has passed and the overflow will not affect this. For example, if you start measuring some time period at time 0xFFFFFF00, and then you end your measurement at time 0x100, you would perform the subtraction 0x100 - 0xFFFFFF00 and get -0xFFFFFE00, but you would store that result in a uint32_t, which cannot hold negative numbers, so its value would end up being 0x200, which is correct. In that Romi code, we are only looking at the lower 16 bits of micros(), but this still works, because the lower 16 bits count up from 0 to 0xFFFF and then overflow back to 0. This is another instance of using modular arithmetic of integers to our advantage.
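The same trick can be checked directly in Python by masking to 16 bits (Python ints never overflow on their own, so the `& 0xFFFF` stands in for the uint16_t):

```python
def delta16(now, then):
    """Difference of two 16-bit timestamps, correct across wraparound."""
    return (now - then) & 0xFFFF

# normal case: no wrap
print(hex(delta16(0x1234, 0x1000)))  # 0x234

# the counter wrapped between readings: 0xFF00 -> 0x0100
print(hex(delta16(0x0100, 0xFF00)))  # 0x200
```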

You asked about this quote from our LSM6DS33 carrier product page:

With an appropriate algorithm, the gyro can be used to very accurately track rotation on a short timescale, while the accelerometer can help compensate for gyro drift over time by providing an absolute frame of reference.

The algorithms we’ve been discussing here are examples of how the gyro can be used to accurately track rotation on a short timescale. Over a long timescale, errors from the integration will eventually build up and you won’t have an accurate reading of the rotation of your robot. We’re just talking about using one axis of the gyro to get a heading, but many applications use all three axes so they can measure yaw, pitch, and roll. In applications like that, it can be helpful to use the accelerometer to tell which way is down, and use that to slowly correct your orientation measurement over time. The accelerometer signal is relatively noisy compared to the gyro signal, but its advantage is that it measures an absolute frame of reference instead of just measuring changes. This sort of thing is called “sensor fusion” and you can see examples of it in minimu9-ahrs and minimu-9-ahrs-arduino, which also use a magnetometer.
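As a tiny illustration of that kind of fusion (not the minimu9-ahrs algorithm itself, which is more sophisticated): a one-axis complementary filter that trusts the gyro over short intervals and lets the accelerometer's absolute but noisy angle slowly pull out accumulated drift. The 0.98 weight is a typical textbook value, not anything from the products discussed here:

```python
def complementary_filter(angle, gyro_rate_dps, accel_angle_deg, dt, alpha=0.98):
    """One-axis fusion: integrate the gyro, nudge toward the accel angle."""
    return alpha * (angle + gyro_rate_dps * dt) + (1 - alpha) * accel_angle_deg

# simulate a stationary sensor: true pitch is 10 degrees, the accelerometer
# reports it, and the gyro reads a constant 0.5 dps bias instead of zero
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate_dps=0.5,
                                 accel_angle_deg=10.0, dt=0.02)
print(round(angle, 1))  # ≈ 10.5: bounded near 10 despite the gyro bias
```

A pure gyro integration with that bias would have drifted by 5 degrees over those 10 simulated seconds and kept growing; the filter instead settles at a small, bounded offset from the accelerometer's reference.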


Thanks once again for a thorough response.

Responding to your request “I’d be interested to see the code you used to
determine that the offset is changing, and the numbers you got for it.” The
code I used is the Pololu LSM6 library for the LSM6DS33. In particular I
just ran the Serial example, which prints the device readings via the
Serial support. So I guess I’ll explain the technique. I inserted code to
wait for the console to fire up before starting to print. Then I simply
powered up the A*, started the console from the Arduino IDE and held the
device still (on a desk) for a second or two and then made the movements I
was trying to measure. I used all the defaults set up in the library; for
the gyroscope, that is 245 DPS full scale. I would then take an average of
the z axis gyro readings while still (something like 20-70 samples, or
roughly 0.5 seconds). Over the course of a few days, I found the average
"offset" ranged from -347 to -412, or about -3 to -3.6 DPS or roughly 1% to
1.3% of full scale. The time between the trials I made was as little as 5
minutes and as much as several hours.
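The averaging procedure described above can be wrapped into a small helper; here it is fed canned numbers in the range reported, but on hardware the samples would come from the z-axis gyro while the robot sits still:

```python
def estimate_offset(samples):
    """Average raw gyro readings taken while motionless."""
    return sum(samples) / len(samples)

# canned still-readings around the -347 to -412 range seen on the bench
still_samples = [-380, -375, -372, -381, -377, -379, -374, -382]
offset = estimate_offset(still_samples)
print(round(offset, 1))            # -377.5
print(round(offset * 0.00875, 2))  # -3.3 dps at 245 dps full scale
```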

I guess the offset is not huge, and probably the variation is less than
that indicated above over a period of a few hours. I hope so. I’d have to
look in more detail to determine the average change in offset over a set of
trials run over a few hours.

Sorry to take so long to get back to you. I used your Python code above and
was able to successfully calculate a total angle of rotation. Thanks very much.

While I’ve written a fair amount of Python code, I’ve recently become
concerned about its performance. As a long time (tho recently not that
active) Java programmer, I thought it would be interesting to see if I
could duplicate the Python function in Java. Using the I2C function in
Pi4J, I was able to duplicate the usage of the IMU to successfully
calculate a total angle of rotation. I have a lot of work to do to clean up
and optimize both languages, but at some point I’ll do a performance
comparison.
Would Pololu have any interest in a Java port? Or at least whatever
performance comparison I produce?

Thanks again for the help.

Yes, if you want to share your Java software, I’m sure there would be interest from other users; we could add a link to it from the Resources section of the product page if you’d like to make it available. We’d also be interested to hear the results of your performance comparison.


I include below

  • the Java equivalent (for the most part) for the Python LSM6 class mentioned in an earlier post; this class depends on the Pi4J project

  • a Java program that uses the Java LSM6 class to calculate the yaw of the device using the Z axis of the gyroscope; the program measures the time needed to do the calculation of the yaw, but ignores the acquisition time

  • a Python script that uses the Python LSM6 class to do the same thing as the Java program

  • a brief discussion of the performance comparison between the two languages

The Performance Comparison

I plan on using the IMU as part of an ambitious project, with many things happening at once, and I became concerned about the use of Python in terms of performance. I decided to create the simple tests shown in the code below to get a hint about the performance differences. I can in no way be considered a Python expert, nor, at least any longer, a Java expert. I admit that I made some choices for data forms, etc. that might not come close to optimal performance in either language. But I did attempt to implement the approach as identically as possible. So accept this comparison with some skepticism about general performance differences.

The performance tests loop 200 times. In each iteration, the test reads the gyro, gets a timestamp, calculates the current yaw angle, gets a timestamp, and outputs the difference between the two timestamps.

I used a spreadsheet to find the average time to do the calculation. For Java, the average was 6.967 microseconds. For Python 3, the average was 82.983 microseconds. Java appears to be about 12X the performance of Python 3 in this simple test. This reference suggests that one can expect Java to be between 1X and 50X faster than Python.

Now, there is a reality check. The time between samples taken was 20 milliseconds. So even 83 microseconds is “in the noise” in this context. However, in the context of a number of threads/programs running simultaneously, a 12X difference could be important.

The following code is the Java LSM6 class. It should not necessarily be considered “production ready” but certainly works.

/*
 * This class represents the LSM6DS33, containing an accelerometer and
 * a gyroscope. The access to the module is via I2C.
 * This implementation is a singleton to ensure that there is only one. It
 * is highly unlikely that will matter, but better safe than sorry.
 */
package pi4j_i2c;

import com.pi4j.io.i2c.I2CBus;
import com.pi4j.io.i2c.I2CDevice;
import com.pi4j.io.i2c.I2CFactory;

/**
 * @author gregflurry
 */
public class LSM6 {
    // the one and only instance of the class
    private static LSM6 instance = null;

    // the device used for communication
    private static I2CDevice lsm6 = null;

    /**
     * Constructor (private)
     * Gets an I2C bus and then gets the device for the LSM6DS33.
     * @throws IOException
     * @throws I2CFactory.UnsupportedBusNumberException
     */
    private LSM6() throws IOException, I2CFactory.UnsupportedBusNumberException {
        // get the I2C bus for communication
        I2CBus i2c = I2CFactory.getInstance(I2CBus.BUS_1);

        // create an I2C device for an individual device on the bus to communicate with;
        // in this case an LSM6DS33
        lsm6 = i2c.getDevice(0x6b);

        // read the "who am I" register
        int who =;

        // check the ID
        if (who != 105) {
            throw new IOException("Can't connect to LSM6DS33");
        }
    }

    /**
     * This returns the one and only instance of the class representing the LSM6DS33.
     * Instance gets created if necessary.
     * @return instance of the I2C device for the LSM6D33
     * @throws IOException
     * @throws I2CFactory.UnsupportedBusNumberException
     */
    public static synchronized LSM6 getInstance() throws IOException, I2CFactory.UnsupportedBusNumberException {
        // if not already constructed
        if (instance == null) {
            instance = new LSM6();
        }
        return instance;
    }

    /**
     * Enable the LSM6DS33.
     * The accelerometer and gyroscope can be enabled independently.
     * @param enableAccel true enables the accelerometer
     * @param enableGyro true enables the gyroscope
     * @throws IOException
     */
    public void enable(boolean enableAccel, boolean enableGyro) throws IOException {
        if (enableAccel) {
            lsm6.write(LSM6_REGS.CTRL1_XL, (byte)0x50); // 208 Hz ODR, 2 g FS
        }
        if (enableGyro) {
            lsm6.write(LSM6_REGS.CTRL2_G, (byte)0x58); // 208 Hz ODR, 1000 dps FS
        }
        lsm6.write(LSM6_REGS.CTRL3_C, (byte)0x04); // IF_INC = 1 (automatically increment register address)
    }

    /**
     * Read the gyroscope, all three axes.
     * @return a Vector containing the x,y,z readings
     * @throws IOException
     */
    public Vector readGyro() throws IOException {
        byte[] buffer = new byte[6];
        // read the data from the device, buffer, 0, 6);

        // construct each 16-bit reading as an int
        int gx = (int) (buffer[1] << 8);
        gx = gx | Byte.toUnsignedInt(buffer[0]);
        int gy = (int) (buffer[3] << 8);
        gy = gy | Byte.toUnsignedInt(buffer[2]);
        int gz = (int) (buffer[5] << 8);
        gz = gz | Byte.toUnsignedInt(buffer[4]);
        return new Vector(gx, gy, gz);
    }

    // private class for LSM6 register numbers
    private static class LSM6_REGS {
        static byte WHO_AM_I = 0x0f;
        static byte CTRL1_XL = 0x10;
        static byte CTRL2_G = 0x11;
        static byte CTRL3_C = 0x12;
        static byte OUTX_L_G = 0x22;
        static byte OUTX_L_XL = 0x28;
    }

    // public class defining a "vector" with x,y,z values
    public static class Vector {
        int x;
        int y;
        int z;
        public Vector(int x, int y, int z) {
            this.x = x;
            this.y = y;
            this.z = z;
        }
    }

    // main for testing
    public static void main(String[] args) throws I2CFactory.UnsupportedBusNumberException, IOException, InterruptedException {
        LSM6 gyro = LSM6.getInstance();
        gyro.enable(false, true);
//        Vector res = new Vector(0, 0, 0);
        for (int i = 0; i < 200; i++) {
            Vector res = gyro.readGyro();
//            System.out.println(;
//            System.out.println(System.nanoTime());
//            System.out.println(res.z + ", " + System.nanoTime()/1000);
            Thread.sleep(20);
        }
    }
}

The following code is the Java test program. It has some commented out debug statements left in it that can be ignored.

/*
 *  This does "more accurate" integration
 */
package pi4j_i2c;

import com.pi4j.io.i2c.I2CFactory;

/**
 * @author gregflurry
 */
public class TestLSM6_A {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) throws IOException, I2CFactory.UnsupportedBusNumberException, InterruptedException {

        // offset is approximately bias of z gyro
        final double offset = 98.2;
        // sf is full scale factor constant for DPS (see spec sheet p.15)
        final double sf = 35.0 / 1000.;

        // angle is the current angle of gyro (integration)
        double angle = 0.0;
        double dt = 0.0; // dt in seconds

        LSM6 gyro = LSM6.getInstance();

        gyro.enable(false, true);

        long lastTime = -1;
        long currentTime;
        double lastScaled = 0.0; // assumes start when motionless
        double scaled = 0.0;

        for (int i = 0; i < 200; i++) {
            LSM6.Vector res = gyro.readGyro();

            currentTime = System.nanoTime(); // gets timestamp in nanoseconds

            // get the offset and scaled sample
            double unscaled = (double) res.z + offset;
            scaled = unscaled * sf;

//            System.out.println(res.z + ", " + System.nanoTime());
            // make sure don't try to mess with first sample
            if (lastTime != -1) {
                // integrate to calculate current angle
                // integration sums current average dps * delta T
                dt = (double)(currentTime - lastTime) / 1000000000.0; // dt in seconds
//                System.out.println("dt: " + dt);
                angle = angle + (0.5 * (scaled + lastScaled)) * dt;
            }

            lastScaled = scaled;
            lastTime = currentTime;
            // this prints the time it takes to do all the necessary calculations
            // to do integration
            System.out.println(System.nanoTime() - currentTime);
//            System.out.println(angle);
//            System.out.println(res.z + ", " + scaled + ", " + currentTime + ", " +
//                    dt + ", " + angle);

            Thread.sleep(20); // delay 20 ms so don't sample too often
        }
    }
}


This is the Python 3 script that is theoretically identical to the Java test program, except, of course, that it uses the Python LSM6 class. It too has some extraneous debug code commented out.

import LSM6
import time

#q = LSM6.Vector(1,2,3)

#print (q)

# offset is approximately bias of z gyro
offset = 90.3
# sf is full scale factor constant for DPS (see spec sheet p.15)
sf = 35.0/1000

# angle is the current angle of gyro (integration)
angle = 0

gyro = LSM6.LSM6()

gyro.enable(False, True)
# make sure have enough time for the thing to turn on properly
time.sleep(0.5)

previous_time = -1
current_time = 0

for k in range(200):
	gyro.read_gyro()
#	print(gyro.g, " ",
#	print(gyro.g.z, ", ",

	# get time
	current_time = time.time()
	#print (current_time)

	unscaled = gyro.g.z + offset
	scaled = unscaled * sf

	# make sure don't try to mess with first sample
	if previous_time != -1:
		# integrate to calculate current angle
		# integration sums current angle + dps * delta T
#		angle = angle + s * (current_time - previous_time)
		angle = angle + (0.5 * (scaled + last_scaled)) * (current_time - previous_time)

	previous_time = current_time
	last_scaled = scaled
#	print(angle)
#	print(gyro.g.z, ", ", angle)
	print(time.time() - current_time)
	# make sure don't sample too fast
	time.sleep(0.02)