# L3GD20 and Gimbal Lock?

I’m trying to use the L3GD20 gyro to compute angular position from the angular velocity measurements returned by the gyro. Most of the time, everything works as expected. However, if I move the gyro in two directions (roll and pitch) at the same time, the computed angles are suddenly way off.

I ran across a posting on the Internet that said I should be representing angular position using quaternions rather than Euler angles, because the latter are subject to something called gimbal lock. I’ve never heard of this before, and wondered whether it really applies to MEMS gyro chips like the L3GD20.

Either you are using the wrong equations for pitch and roll, or you are confused about compound rotations. The order in which you apply rotations matters!

Post your code, using code tags. This simplified analysis gives one set of correct equations (there are many): https://www.dfrobot.com/wiki/index.php/How_to_Use_a_Three-Axis_Accelerometer_for_Tilt_Sensing

Gimbal lock is a property of any three-angle (Euler) representation of 3D orientation, and as you read, it is avoided by using quaternions instead of angles to represent body orientations.

Hi Jim,

The document you referenced is interesting, but assumes data from an accelerometer. However, I’m using a gyro, which reports angular velocity, not linear acceleration.

Also: I’m not using the yaw axis.

I know that the order in which rotations are applied matters. Maybe I wasn’t clear about the problem: if I rotate the device about only one axis (e.g., roll or pitch), the computation correctly integrates the velocity data (using trapezoidal integration) to produce the angular position, and returning the device to its original position brings the angular position back to the reference values (roll = 0, pitch = 0).

However, if I rotate the device in both roll and pitch simultaneously, the angular position values are way off. More importantly, returning the device to its original position does not return the angular position to the reference values; instead it produces quite large roll and pitch values that should be zero.

Here’s how I integrate degrees per second (dps) to degrees. This code is inside a loop whose execution time is determined by how fast the L3GD20 produces new sets of data, with the elapsed time per sample (deltaT) computed by reading a counter that runs at 168 MHz.

```cpp
GetVelocity(dps);

// Perform trapezoidal integration to get position
degr_ptch += deltaT * (dps[0] + prev_ptch_dps) / 2;
degr_roll += deltaT * (dps[1] + prev_roll_dps) / 2;
prev_ptch_dps = dps[0];
prev_roll_dps = dps[1];
```

Sorry, I did not read your post closely enough and assumed you were using an accelerometer or IMU (which many people do call a “gyro”).

You will have to think long and hard about how to treat motion about two axes simultaneously. Your computational approach assumes that the angles are independent and additive with regard to 3D orientations, but they are not.

A rotation of 45 degrees about X followed by 45 degrees about Y results in an entirely different orientation than the reverse order, or than trying to rotate 45 degrees about both axes “simultaneously”.

Then, there is potential confusion about the coordinate system itself: the sensor has its own axes, and rotations it measures are in the sensor (moving) frame of reference, not the external frame of reference.

Furthermore, even if you did the computation correctly, it is extremely unlikely that you could physically reverse such a complex motion precisely enough to end up back at the starting orientation.

The usual computational approach is to use matrix operations for the compound rotations. You start with the unit matrix I representing the starting sensor orientation, then apply successive operations to that for small successive changes in the angle about the sensor X and Y axes.

The RX rotation operator looks something like this (t is the small X rotation angle; st and ct are its sine and cosine):

[ 1    0    0 ]
[ 0   ct  -st ]
[ 0   st   ct ]

and the RY rotation operator looks something like this (p is the small Y rotation angle):

[ cp   0   sp ]
[  0   1    0 ]
[-sp   0   cp ]

I probably have the signs wrong on the sine terms, but those depend on your choice of convention for a positive rotation anyway.

For a series of tiny steps you multiply these matrices together successively to get the final orientation matrix O, which can finally be decomposed into a set of orientation angles.

O = RXn RYn … RX2 RY2 RX1 RY1 I

or the reverse, depending on whether you are considering the reference (fixed) or moving frame of reference.

Be aware that with single precision floats, roundoff errors accumulate with every multiplication, and the product drifts away from being a true rotation matrix after not very many operations.

Finally, it is worth noting that the inverse of a rotation operator is the transpose of the matrix for the forward operation.

There are tutorials on the web covering this stuff, for example https://en.wikibooks.org/wiki/Robotics_Kinematics_and_Dynamics/Description_of_Position_and_Orientation