This session's goals:

  • Determine relation between sensors and motors
  • Implementation of sensor module
  • Implementation of PID-controller for balance
  • Build test robot for camera tracking
  • Implement camera tracking on test robot
  • Changing balancing medium
  • Build second iteration of Max's body

The Plan

In this session we will create our balancing PID controller, build a camera tracking test robot, and construct the second iteration of Max's body. The plan can be seen in Table 1.

Table 1: Project plan for session 4.

Determine relation between sensors and motors

Plan

In this part we will look into the relation between sensor values and motor movements. This is done to gain insight into how the motor controller and the sensor module need to be programmed. To determine the relationship between the sensors and the motors we will look into a master's thesis by Magnus J. Bjärenstam and Michael Lennartsson, which can be found here. We will also look into a bachelor thesis by Péter Fankhauser and Corsin Gwerder, which can be found here.

Experiment

None

Programming

None

Results

The relation between the sensors and the motors can be described as a physical offset from the axes used by the sensors, as seen in figure 1.

Figure 1: Physical relation between sensor axis and motor axis.

What this means is that if we take the data from the sensors and convert it into vectors depicting Max's current posture, then these vectors need to be projected onto the axes of the two motors that do not align with the sensors' own axes (the left and right motors). The resulting projected vectors can then be used as the error values in our PID controller, as depicted in figure 2.

Figure 2: Projected force = cos(angle - leftMotorRadian) * force
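As a worked illustration of this projection (our own sketch; the motor axis angles and the sample readings below are assumed values for illustration, not measurements from Max):

double leftMotorRadian  = Math.toRadians(120);   // assumed axis angle
double rightMotorRadian = Math.toRadians(240);   // assumed axis angle

double forceY = 0.5, forceZ = 0.2;               // example accelerometer readings
double force  = Math.sqrt(forceY * forceY + forceZ * forceZ);   // magnitude of the tilt
double angle  = Math.atan2(forceZ, forceY);                     // direction of the tilt

// Project the tilt vector onto each motor axis (the formula from figure 2).
double leftError  = Math.cos(angle - leftMotorRadian)  * force;
double rightError = Math.cos(angle - rightMotorRadian) * force;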

Implementation of sensor module

Plan

In this part we will implement the two sensing classes used for balancing Max: one for the accelerometer and one for the gyro sensor. The implementation takes inspiration from Lesson 10, exercise 2, where each sensor type gets its own sensing class which manages the sensor and gathers sensor data in its own thread. Each class will be tailored to its sensor, while all sensing classes share the same purpose: gathering their respective data, changing offset variables, and providing values to dependent behavior classes.

Experiment

Since this is a programming exercise, most of the actual experiments for the sensors will be coupled with the experiments of making Max balance on a ball. The gyro needs a separate experiment because it – unlike the accelerometer – has to calculate the angle from the sensor data rather than just expose the raw data, and we therefore need to confirm that the calculated angle behaves as expected.

To test the getAngle method of the gyro sensing class we need to confirm two things:

  • That the calculated angle is accurate.
  • That the gyro sensor does not drift when stationary after movement.

The setup used to test that the angle is correct can be seen in figure 3.

Figure 3: Gyro sensing class testing rig.

To confirm that the calculated angle is correct, a protractor was placed behind and aligned with the pivoting arm, so that the actual angle could be read and compared to the angle shown on the LCD display. This confirmed that the angle was calculated correctly.

To check the drift of the gyro we implemented a datalogger to record the values from the sensor, which we can then analyze to see if the gyro drifts over time. We start by moving the gyro back and forth and then leave it stationary to see whether it drifts. As seen in figure 4, the gyro settles and does not drift further during the 5-minute period that we were measuring.

Figure 4: Results from gyro drift experiment.

As seen in figure 4 the gyro settles at 6 degrees instead of 0 degrees, which is the actual angle. This means that the gyro still overshoots some of the calculations, a result of the hardware not being able to keep up with the movements made at the beginning of the experiment. The overshoot is an issue that needs to be addressed, but since it is a result of the current construction and use, it would be different when mounted on Max, and the experiment would therefore need to be redone there.
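The datalogger itself is not shown in this post. The following is a minimal sketch of how such a logger could write samples to a file on the NXT for offline analysis; the class name, file name and sample rate are our assumptions, and GyroSensing stands in for the gyro sensing class whose getAngle() method is described below.

import java.io.DataOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class GyroLogger {
  // Log roughly 5 minutes of gyro angles, one sample every 100 ms.
  public static void log(GyroSensing gyro)
      throws IOException, InterruptedException {
    DataOutputStream out =
        new DataOutputStream(new FileOutputStream(new File("gyrolog.dat")));
    for (int i = 0; i < 3000; i++) {
      out.writeFloat(gyro.getAngle());
      Thread.sleep(100);
    }
    out.close();
  }
}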

Programming

The primary function of the two sensing classes is to start their own thread, gather the newest values from their respective sensor, store them locally, and make them available through get methods.

To access the data from one of the sensing classes, a get method for the respective data can be called. The gyro sensing class has three get methods, getAngularVelocity(), getRawValue() and getAngle(), which return the angular velocity, the raw data value, and the calculated angle, respectively. The accelerometer sensing class also has three get methods, getX(), getY() and getZ(), which return the force data from the axis that the method indicates (e.g. getX() is for the x-axis).
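To make this structure concrete, here is a minimal sketch of how such a threaded gyro sensing class could be organized. The class name, loop timing and the use of leJOS' GyroSensor on port S1 are assumptions for illustration; the actual classes are linked below.

import lejos.nxt.SensorPort;
import lejos.nxt.addon.GyroSensor;

public class GyroSensing extends Thread {
  private final GyroSensor gyro = new GyroSensor(SensorPort.S1);  // assumed port
  private volatile float angularVelocity = 0, angle = 0;
  private volatile float angleOffset = 0;
  private volatile boolean enabled = true;

  public void run() {
    long lastTime = System.currentTimeMillis();
    while (true) {
      long now = System.currentTimeMillis();
      if (enabled) {
        angularVelocity = gyro.getAngularVelocity();
        // Integrate the angular velocity over time to estimate the angle.
        angle += angularVelocity * (now - lastTime) / 1000f;
      }
      lastTime = now;
      try { Thread.sleep(5); } catch (InterruptedException e) { return; }
    }
  }

  public float getAngularVelocity() { return angularVelocity; }
  public float getAngle() { return angle - angleOffset; }
  public void setOffset(float offset) { angleOffset = offset; }
  public void setEnabled(boolean e) { enabled = e; }
}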

To align the sensor values to Max we need to calibrate the sensors, which is done by calling calibrateGyro() and calibrateAccelerometer(), as seen in the following code snippets:

public void calibrateAccelerometer(){
  int avgX = 0, avgY = 0, avgZ = 0;
  // Accumulate 100 readings from each axis.
  for(int i=0; i<100; i++){
    avgX += accelerometer.getXAccel();
    avgY += accelerometer.getYAccel();
    avgZ += accelerometer.getZAccel();
  }
  // Use the average of the 100 samples as the offset for each axis.
  setXOffset(avgX/100);
  setYOffset(avgY/100);
  setZOffset(avgZ/100);
}

The code snippet above is the calibration method for the accelerometer: it takes the values from the X, Y and Z axes, averages them over 100 samples, and sets the averages as the offsets.

public void calibrateGyro(){
  setEnabled(false);          // pause our data-gathering thread
  gyro.recalibrateOffset();   // let the gyro class calibrate itself
  setEnabled(true);           // resume data gathering
}

The code snippet above is the calibration method for the gyro sensor. Since the gyro class itself already has a calibration method, we only have to disable the thread that continuously gathers data, so that it does not interfere with the calibration, and enable it again once the calibration has completed.

The offsets used in the sensing classes can be set manually using the respective setOffset() methods. The gyro sensing class has one method, setOffset(), to change the angleOffset variable, while the accelerometer sensing class has three methods, setXOffset(), setYOffset() and setZOffset().

The gyro sensing class can be found here and the accelerometer sensing class can be found here.

Results

The result is two classes that continuously gather data from their respective sensors through the use of threads. The data is stored in local variables that can be accessed through get methods.
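A hypothetical usage example, tying the pieces above together (the class names and start() calls are our assumptions; the calibrate and get methods are the ones described above):

// Start both sensing classes so they gather data in their own threads.
GyroSensing gyro = new GyroSensing();
AccelerometerSensing accel = new AccelerometerSensing();
gyro.start();
accel.start();

// Calibrate while Max is held still, then read values as needed.
gyro.calibrateGyro();
accel.calibrateAccelerometer();

float angle = gyro.getAngle();
float x = accel.getX(), y = accel.getY(), z = accel.getZ();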

Implementation of PID-controller for balance

Plan

For Max to fulfill his lifelong dream of being a ball-balancing robot, he will need some help maintaining his balance. In Lesson 5 we worked on making Max balance on the ground on two wheels, using a PID controller and different input sensors. With success in – and inspiration from – that lesson, we will try implementing a two-dimensional PID controller that fits Max's three-motor construction.

Experiment

To see if the initial PID controller is implemented correctly, we will have Max run the controller without balancing on the ball: we hold Max in our hands while he is tilted in various directions, to see if the motors respond by moving in the direction opposite to the tilt. Keep in mind that this does not test Max's balance, which belongs to another exercise, but only whether the PID controller works as intended.

Programming

To implement the PID controller, a behavior class is created. This class has the controller implemented in its action method.

As seen in the motor-sensor relations described earlier, the force and direction of Max's tilt have to be projected onto each motor's axis. In the following code snippet the projected error is calculated using the cosine and arctangent relations explained earlier.

public float projectedError(float forceY, float forceZ, double radians) {
  // Magnitude of the tilt vector...
  double force = Math.sqrt(Math.pow(forceZ, 2) + Math.pow(forceY, 2));
  // ...and its direction.
  double angle = Math.atan2(forceZ, forceY);
  // Project the tilt onto the motor axis given by 'radians'.
  return (float) (Math.cos(angle - radians) * force);
}

With the projected errors calculated, a PID value for each motor is calculated, as shown in the code snippet below, using the given proportional, integral and derivative constants.

float leftPid_val  = (KP * leftError  + KI * leftInt_error  + KD * leftDeriv_error)  / SCALE;
float rearPid_val  = (KP * rearError  + KI * rearInt_error  + KD * rearDeriv_error)  / SCALE;
float rightPid_val = (KP * rightError + KI * rightInt_error + KD * rightDeriv_error) / SCALE;
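The integral and derivative error terms in the snippet are assumed to be updated on every iteration of the control loop; a minimal sketch of that update for the left motor could look like this (the update logic is our assumption, the variable names follow the snippet above):

// Per-iteration update of the left motor's error terms.
leftDeriv_error = leftError - leftLastError;  // change in error since last iteration
leftInt_error  += leftError;                  // error accumulated over time
leftLastError   = leftError;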

These PID values are afterwards converted into the power values which dictate the movement of each motor, as seen in this code snippet:

public int calculatePower(float pid_val) {
  // Clamp the PID value to the motor's power range.
  if (pid_val > 100) return 100;
  if (pid_val < -100) return -100;

  int power = (int) Math.abs(pid_val);
  // Scale into the usable power range on top of the base power
  // (the minimum power needed to move Max).
  power = baseMotorPower + (power * motorPower) / 100;

  // Restore the sign so the motor turns in the correct direction.
  if (pid_val < 0) return -power;

  return power;
}

In this code snippet, the conversion is done in the calculatePower() method, where the input PID value is first clamped and converted into an absolute value. This value is then scaled to fit the motor's power range (i.e. up to 100) on top of the motor's base power (i.e. the minimum power needed to move Max), and given as the motor's power output. Finally, if the PID value is negative, the power is also made negative, so that the motor moves in the correct direction.
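As a hypothetical usage example (assuming a leJOS NXTMotor on port A; the variable names are ours), the signed power could be applied to a motor like this:

NXTMotor leftMotor = new NXTMotor(MotorPort.A);   // leJOS NXTMotor, assumed port

int power = calculatePower(leftPid_val);
leftMotor.setPower(Math.abs(power));              // NXTMotor takes a power value 0..100
if (power >= 0) leftMotor.forward();
else leftMotor.backward();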

The code for our PID controller can be found here.

Results

The result is a single behavior class whose action method contains the PID controller. The controller uses sensor data from the accelerometer, which is read and converted into errors according to each motor's angular position. These are further converted into the power that each motor needs.

As seen in the video here, the motors move at varying speeds opposite to each motor's projected error, and when tilting Max towards a specific motor's directional position, that motor moves less and constantly switches direction.

Build test robot for camera tracking

Plan

To test the camera tracking we need to build a test robot which can drive around with the camera while we develop the tracking program. The robot needs to have the camera placed at a similar height and position as it would be on Max, so that the experience we gather and the programming we make are applicable to where it will be mounted on Max. In this section we describe how this robot is built.

Experiment

As seen in figure 5, the test robot is a simple construction, but with extra focus on making the frame solid so that movement does not cause the camera to bounce or shake and thereby interfere with our programming. Camera bounce is of course an issue that we will have to address when the camera is mounted on Max, but since this is a robot-specific calibration we will not look into it at this point.

Figure 5: Camera tracking test robot.

The height of the camera is made to match the height and angle that we intend to use when mounting it on Max, as illustrated in figure 6 (drawn in green).

Figure 6: Illustration of camera placement (the green part) on Max.

Programming

No programming was needed for this part.

Results

Figure 7: The test robot.

The test robot, as seen in figure 7, is sturdily built and geared down slightly so that it drives a bit slower than normal, making it easier to track and to adjust the tracking. The NXT was placed upside down so that the sensor ports would be as high as possible. This was done to be sure that the cable for the camera could reach them, which in the end proved irrelevant because we found a sufficiently long cable. Finally, we made certain that the camera was mounted completely centered on the robot so that the offset could be determined correctly.

Implement camera tracking on test robot

Plan

The plan is to implement camera tracking to such a level that it is possible to follow a black line on a white background using the NXT camera, much like what we did back in Lesson 4, exercise 2. The resulting sensor class needs to be usable on Max without further modification, so that all that is needed is to calibrate the camera class to Max.

Experiment

Before anything could be done we needed to configure the NXT camera from a PC, as described in this guide. When the camera was connected and the drivers installed, we made a mapping of the black color using the NXTCamView program, which can be found here.

So as not to start from scratch, we took the code used during Lesson 4, exercise 5, and modified it to use the camera as the input instead of a color sensor.

Programming

The following code snippet is a simplified segment of the CameraSensor class. It finds the center value on a registered object's X-axis. This value is then used to align the robot, like what was done with the color sensor in Lesson 4.

while(isEnabled){
  for(int i=0; i<camera.getNumberOfObjects(); i++){
    // Skip objects that are not black (colormap id 0).
    if(camera.getObjectColor(i) != 0){
      continue;
    }
    objectArea = camera.getRectangle(i);
    // Skip objects outside the area we have decided to track.
    if(!enabledArea.intersects(objectArea)){
      continue;
    }
    centerX = objectArea.getCenterX();
  }
}

This code snippet is run in a separate thread, where we continuously go through all the registered objects to determine whether they are black and inside the area that we have decided to track. For every object, we check if its color is equal to 0, which is the black color that we mapped earlier; if it is not black, we continue to the next object. When we know it is a valid object, we check whether the object's bounds intersect with the bounds that we have set. An example is illustrated in figure 8.

Figure 8: Illustration of the enabled area of the NXT camera.

The red area is the entire available area of the NXT camera, which is 176 x 144 pixels. The green area is the enabled area that we have decided objects should intersect with, which is 116 x 84 pixels. Since we only require the registered objects to intersect with the enabled area, all the yellow rectangles in figure 8 are valid, while the blue rectangles are not. This helps to focus the tracking straight in front of the robot and not out to the sides.

This way of restricting the valid objects could be made stricter by requiring the entire width of the object to be inside the enabled area. It could also be made more contextual and adaptive by starting with objects in the center of the available area and then going further and further out until a required number of objects are registered. Finally, the size of the object could be used to sort away small disturbances registered by the camera; this, however, can become a problem, as the camera sometimes registers a solid line as several smaller objects.

The reason we have not looked further into this is that it only helps to refine the results for the individual robot. Our refinement work on this test robot would therefore not be applicable to Max.
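For reference, the enabled area from figure 8 can be expressed as a rectangle centered in the camera frame; a sketch of how it could be constructed (the variable names are ours, the dimensions are the ones given above, and the type is the same java.awt.Rectangle that the camera's getRectangle() returns):

int frameWidth = 176, frameHeight = 144;   // full NXT camera frame
int areaWidth  = 116, areaHeight  = 84;    // enabled tracking area

// Center the enabled area inside the camera frame.
Rectangle enabledArea = new Rectangle(
    (frameWidth - areaWidth) / 2, (frameHeight - areaHeight) / 2,
    areaWidth, areaHeight);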

The following code snippet is the modified PD controller from Lesson 4, which now uses the NXT camera to track the line.

double kP = 5.0, kD = 2.0;   // initial gains from the Lesson 4 PD controller
while (true){
  lineCenter = cam.getCenterX();            // center of the tracked line
  error = lineCenter - desiredPos;          // how far off-center we are
  derivative = error - lastError;           // change in error since last loop
  power = (kP*error + kD*derivative) / 10;  // scaled steering correction
  lastError = error;
  // Steer by slowing one side and speeding up the other.
  car.forward((int) (baseSpeed-power), (int) (baseSpeed+power), false);
}

The values for kP and kD were the initial values used in the PD controller from Lesson 4. These values gave us a satisfactory result, so we decided not to change them or implement a full PID controller instead.

The complete code for the NXT Camera LineFollower can be found here.

Results

As seen in this video, the test robot is able to register the black line and align with it. Even though it overshoots a bit, we decided not to refine this further, as the refinement would not carry over to Max.

Changing balancing medium

Plan

In this part, we will briefly discuss our change of balancing medium and why we did so.

Experiment

We will be trying out a new ball, because we observed that our initial ball deformed around the wheels, giving an inconsistent connection.

Programming

None.

Results

We found that the previously chosen Fætter BR ball was not sufficiently firm for Max. After using it for a short while, it quickly became too soft and could therefore not be used for balancing, as the excessive friction would require too much power. We have therefore found a sturdier (and also cheaper!) ball, bought in Føtex, as can be seen in figure 9.

Figure 9: New balancing medium.

Build second iteration of Max's body

Plan

For this part we will be working on the following two aspects:

  • Making side rail stabilizers to help control the ball
  • Make a support point for the ball to decrease bounce

The rails are made to keep the ball from being pushed away from under Max. The support point keeps the ball at a certain distance from Max, because we observed that the ball could slowly work its way into Max due to the flexibility of the LEGO construction, eventually blocking the wheels through the extra force exerted on them.

Experiment

Our experiment in this regard has been quite simple. We chose to attach stabilizers to keep the ball under Max at all times, as also seen in the bachelor thesis by Fankhauser and Gwerder. This can be seen in figure 10.

Figure 10: Stabilizing legs to keep the ball underneath Max.

Furthermore, we found that even with the stabilizers added, we had problems getting enough power from our motors to actually get Max back up on the ball if he started falling. For this reason, we looked back at session 2 and how the center of mass and the object's weight distribution affect Max's balancing abilities. We realized that if we added a single support point carrying the majority of the weight, placed above the pivot point, the center of mass would be maintained while less weight pushed on the wheels. This means the motors require less torque to push the wheels around. The single support point can be seen in figure 11, in three different construction types.

Figure 11: Pivot points. From left to right: large curved top; rounded point; small curved top.

We want to create as little friction as possible, so we are using a rounded surface. Ideally, we would like to try the LEGO ball caster, as mentioned in session 3, but we will try with the LEGO bricks first.

Programming

None, as these are purely physical changes to Max's construction.

Results

To show how the stabilizers work, we have created a video which can be seen here.

To check which of the weight-distribution points we will use, we have created three videos: one for the large curved top, one for the rounded point, and one for the small curved top. So far, it is obvious that the large curved top will not work, as it pushes the ball too far away. The rounded point and the small curved top are both interesting and seem to work quite well, but the rounded point has a disadvantage, as seen in figure 12.

Figure 12: The point sticks into the ball's air valve, which can make it stuck.

For this reason, we will try to use the small curved top for further experiments.

General

Problems

We have had some challenges in this session because the balancing medium was too soft and some parts of the construction had to be changed. We have also noted that we might need more torque in our motors, and we may look into improving this in the following session. Furthermore, we have been very limited in time because of exams, which has put us a bit behind schedule. We hope to make up for this in the next session.

Conclusion

We have created the PID controller, a camera-based line-following robot, and a second iteration of Max's construction. We are now more or less ready to test Max's balancing prowess. We do, however, fear that it might not be possible to make him balance with the current construction. This means that in the following session we might also look into other ways to show how the balancing and line following would work separately from each other.