Hi, I’m using a QTR-8A sensor and I want to read the calibrated sensor values after calibration. Ideally the calibrated values should not change once calibration is done, but these values are changing frequently. The code for the sensor readings is:
#include <QTRSensors.h>

QTRSensors qtr;

const uint8_t SensorCount = 8;
uint16_t sensorValues[SensorCount];        // filled by readLineBlack()
uint16_t calibrated_values[SensorCount];   // filled by readCalibrated()

// Pin assignments were not included in the original post; these are
// placeholders, so set them to match your wiring.
const uint8_t LED = 12;
const uint8_t RM1 = 2;
const uint8_t RM2 = 3;
const uint8_t LM1 = 4;
const uint8_t LM2 = 5;
const uint8_t SDY = 6;

void setup() {
  Serial.begin(9600);
  pinMode(13, OUTPUT);
  pinMode(LED, OUTPUT);
  pinMode(RM1, OUTPUT);
  pinMode(RM2, OUTPUT);
  pinMode(LM1, OUTPUT);
  pinMode(LM2, OUTPUT);
  pinMode(SDY, OUTPUT);
  delay(1000);
  Serial.println("Starting Calibration");
  calibrating_sensors();
  delay(1000);
  Serial.println("Calibration Completed");
}

void calibrating_sensors() {
  qtr.setTypeAnalog();
  qtr.setSensorPins((const uint8_t[]){A0, A1, A2, A3, A4, A5, A6, A7}, SensorCount);
  digitalWrite(LED, HIGH);
  // take repeated readings so the library can record each sensor's min/max
  for (uint16_t i = 0; i < 500; i++)
  {
    Serial.println("Calibrating!!!");
    // sharp_right(94,80);
    qtr.calibrate();
  }
  digitalWrite(LED, LOW);

  // print the calibration minimum values measured when emitters were on
  Serial.println("Calibration Minimum Values are");
  for (uint16_t i = 0; i < SensorCount; i++)
  {
    Serial.print(qtr.calibrationOn.minimum[i]);
    Serial.print(" ");
  }
  Serial.println();

  // print the calibration maximum values measured when emitters were on
  Serial.println("Calibration Maximum Values are");
  for (uint16_t i = 0; i < SensorCount; i++)
  {
    Serial.print(qtr.calibrationOn.maximum[i]);
    Serial.print(" ");
  }
  Serial.println();
  Serial.println();
}

int sensor_reading() {
  uint16_t position = qtr.readLineBlack(sensorValues);
  return position;
}

uint16_t *sensor_values()
{
  return sensorValues;
}

// readCalibrated() fills the array passed to it, so the parameter must be a
// pointer and the function does not need a return value
void get_calibrated_values(uint16_t *sensorValuesArray) {
  qtr.readCalibrated(sensorValuesArray);
}

void loop() {
  Serial.println("Printing Calibrated values");
  get_calibrated_values(calibrated_values);
  for (uint8_t i = 0; i < SensorCount; i++)
  {
    Serial.print(calibrated_values[i]);
    Serial.print(" ");
  }
  Serial.println();
}
I don’t understand why there is this discrepancy in the calibrated values.
My idea is to use the calibrated values to automatically calculate the set_point for the PID control. Instead of hardcoding the set_point value, I want to calculate it dynamically from the sensor’s calibrated values.
I do not see any evidence of the calibrated values changing like you suggested.
It is not clear from your post, but it sounds like you might be expecting the readCalibrated() function to return the calibrationOn.minimum and calibrationOn.maximum values. That is not the case. The readCalibrated() function takes a new set of sensor readings and processes them with the saved calibration data to return values scaled from 0-1000 based on the calibration. So, they should not be the same as the minimum and maximum values, and even if the sensor is held over a specific surface without moving, there should be a little bit of variance from reading to reading due to sensor noise. For more information, I suggest reviewing the Arduino library documentation.
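For reference, each calibrated value is computed from a fresh raw reading using the stored minimum and maximum, roughly like this (a simplified sketch of the scaling, not the library's exact code):

// Simplified sketch of the per-sensor scaling readCalibrated() performs;
// the stored calibration minimum/maximum are fixed, but "raw" is a fresh
// (and therefore slightly noisy) reading every time.
uint16_t scaleReading(uint16_t raw, uint16_t calMin, uint16_t calMax)
{
  if (calMax <= calMin) { return 0; }                   // avoid dividing by zero
  long value = ((long)raw - calMin) * 1000 / (calMax - calMin);
  if (value < 0) { value = 0; }                         // clamp below the calibrated minimum
  if (value > 1000) { value = 1000; }                   // clamp above the calibrated maximum
  return (uint16_t)value;
}

Since the minimum and maximum only change when you recalibrate, any variation you see in the output comes from the fresh raw readings, not from the calibration data.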
If that did not answer your question, can you clarify what you are expecting the output to look like?
Thanks for your response!
You cleared up my doubt: I should use the calibrationOn.minimum and calibrationOn.maximum values, not the values obtained from the readCalibrated() function, since it returns the current readings on a calibrated scale.
As there are 8 sensors in the QTR-8A sensor array, for the bot to be centered while following the black line, the set_point value should ideally be 3500. But sensor noise and other environmental factors affect it.
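As I understand it, readLineBlack() estimates the position as a weighted average of the calibrated readings, something like this rough sketch (my understanding, not the library's actual code), which is why the center of an 8-sensor array comes out at 3500:

// Rough sketch of the weighted average behind readLineBlack(); sensor i is
// weighted by i * 1000, so 8 sensors span 0..7000 with 3500 at the center.
uint32_t weightedSum = 0;
uint32_t sum = 0;
for (uint8_t i = 0; i < SensorCount; i++)
{
  weightedSum += (uint32_t)calibrated_values[i] * i * 1000;
  sum += calibrated_values[i];
}
uint16_t position = (sum > 0) ? (weightedSum / sum) : 3500;  // guard against no line detected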
So, my other doubt is this: for PID we need to define a set_point value, and instead of using a manually hardcoded value, I want to make it dynamic. By dynamic I mean that since the calibrated values from the sensor will not change until I calibrate the sensor again, I could use those values to calculate the set_point value for PID with some mathematical equation. Is that possible?
You can probably set up a program like you are describing, but I do not have any specific suggestions for how to approach that, and I do not see the value in making your set point value dependent on the results of the sensor calibration since they are not directly related.
This is probably not a good way to think about how the line detection functions in our QTR library work. Noise and environmental factors affect the individual sensor results, which is the purpose of calibrating the sensors; the calibration data is used to scale the readings from individual sensors to a specific range so that the effect of sensor-to-sensor variation is minimized. I recommend reviewing the Usage Notes in our QTR sensor library documentation for more information.
I suppose there might be some special conditions where modifying the set point could make sense, but to me it sounds more likely to add complexity and confusion. If you share more details about the circumstances that are causing you to consider this approach, I may be able to offer some more specific advice.
I understand your point; going with a dynamic set_point is not a good idea.
One thing I’m still struggling with is that the bot is not able to take turns, which are likely to be sharp turns with some curvature. I tuned the PID gains and it follows the line without any problem, but when it reaches a turn it takes a wider arc, and it should not lose the line in any case. My speed is also not too high. Below are the hardware details:
Motor: 30:1 Micro Metal Gearmotor HPCB 6V
Wheels: 3PI 44mm wheels
Microcontroller: Arduino Nano
Sensor: QTR-8A
Motor Driver: Toshiba TB6612FNG
Voltage Regulators: LM2596 and XL6009
Battery: 360mAh Lipo
It is hard to know for sure without seeing the behavior or any of the sensor/PID data from when your robot enters those turns, but it sounds like your robot just is not responding quickly/strongly enough to the tight turns. One option is to try adjusting your PID coefficients, which will probably involve increasing Kp, and may come at the expense of making your line follower less smooth on straighter sections. It may also help to try getting your robot working with the turns at a slower speed first.
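To illustrate what those coefficients act on, here is a minimal PD-style sketch built around a 3500 set point; Kp, Kd, baseSpeed, and setMotorSpeeds() are placeholder names for your own code, not part of our library:

// Minimal PD correction sketch; gains and the motor function are placeholders.
const int setPoint = 3500;   // center of the 8-sensor array
float Kp = 0.05;             // example gain; increase to react harder in turns
float Kd = 0.5;              // example gain; damps oscillation on straights
int baseSpeed = 80;          // example base PWM; lower it while tuning turns
int lastError = 0;

void setMotorSpeeds(int left, int right)
{
  // placeholder: drive the TB6612FNG here (PWM on the left/right channels)
}

void pid_step()
{
  int error = (int)qtr.readLineBlack(sensorValues) - setPoint;  // <0: line left of center
  int correction = Kp * error + Kd * (error - lastError);
  lastError = error;
  // assumes sensor 0 is on the robot's left; swap the signs if yours is reversed
  setMotorSpeeds(baseSpeed + correction, baseSpeed - correction);
}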
Alternatively, depending on how sharp the turns are, you might consider adding some other algorithm to handle those separately from your PID routine. For example, when dealing with sharp 90-degree turns, a common method is to add an if condition to either your main loop or your pid() function that checks if a middle sensor and an outer edge sensor both detect a line. If they do, instead of setting the motor speeds based on your PID, you could command the robot to stop following the line briefly and do a specific pivot routine before it resumes its PID-based line following.
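A rough sketch of that kind of check, layered on top of the PD sketch above, might look like this (the sensor indices, the 700 threshold, and the pivot_right()/pivot_left() routines are assumptions about your setup, not library code):

// Sketch of a sharp-turn check run before the normal PID step.
bool onLine(uint8_t i) { return calibrated_values[i] > 700; }  // assumed threshold

void pivot_right() { /* placeholder: rotate in place until the line is reacquired */ }
void pivot_left()  { /* placeholder: mirror of pivot_right() */ }

void follow_step()
{
  qtr.readCalibrated(calibrated_values);
  if ((onLine(3) || onLine(4)) && onLine(7))       // middle and right edge both see the line
  {
    pivot_right();                                 // handle the sharp right turn directly
  }
  else if ((onLine(3) || onLine(4)) && onLine(0))  // middle and left edge both see the line
  {
    pivot_left();
  }
  else
  {
    pid_step();                                    // otherwise, normal PID line following
  }
}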
Using conditions for the turns is an option, but I want to adopt some standard algorithm alongside the PID control. Can you suggest some error-correction algorithms that could help in this case?