Today's goals:

Are as described in the lesson 3 exercises notes:

  • Exercise 1 – Test of the Sound Sensor
  • Exercise 2 – Data logger
  • Exercise 3 – Sound Controlled Car
  • Exercise 4 – ButtonListener
  • Exercise 5 – Clap Controlled Car
  • Exercise 6 – Party Finder Robot

Additional “Optional” Goals

  • None for now

The Plan

Because of the thorough guide described in the lesson 3 exercises notes, no further planning was necessary in that regard.

Exercise 1 – Test of the Sound Sensor


Having mounted the sound sensor on Max, we are now ready to test it. To properly test the sensor, we plan to have Max listen through the sound sensor and log its values using a DataLogger. We will then place a pair of speakers, connected to a smartphone running this Frequency Sound Generator app, at different angles and distances, playing at 200 Hz. This should result in a map of how the sensor measures sound at different positions.

Max N00b ready for sound sensing action!


Before we could start mapping the sound sensor, we wanted to do a series of mini experiments to know the following:

  1. Is the value at different frequencies the same?

  2. Does the height of the source of the sound have any impact on the sensed values?

  3. At what distances can we still get a usable reading?

To answer these questions, we made several experiments:

The first experiment began by tuning the speaker volume so that it would not simply max out the sensor, which would invalidate our measurements. This calibration was deemed necessary because of a previous test where the volume was set to its maximum and the values started increasing from 85 to 92 between 50 cm and 100 cm, then began decreasing rapidly; we therefore concluded that the level must have been too high to begin with. We then placed the speaker at the 50 cm and 100 cm marks directly in front of the sound sensor and, over 25 seconds, swept from 1 Hz to 20,000 Hz. We recorded the data with the DataLogger, with the following results:

Results from frequency experiments.

There are several interesting results to take from this experiment. First, we disregard the large spikes from 0 Hz to 200 Hz on the graph, as these were caused by the sound of starting the program on the NXT and moving our hands away from it. Then, looking at the ~14,500 Hz mark, the values seem to flatline and simply merge with the noise. This might be caused by the volume dropping so low that the sensor could not register it, or by the sensor being unable to sense frequencies above 14,500 Hz. (To avoid confusion: the speakers were set to play up to 20,000+ Hz.) To see if we could pinpoint the problem, we tried again with the speakers facing the sensor at point-blank distance. We wanted to distinguish whether (a) the sensor cannot register tones above a specific frequency, or (b) the high frequencies do not produce enough volume for the sensor to register. We were sure that up to 18,000 Hz the volume was loud enough, because we could hear the sound ourselves, so what was going on? In this test the sensor values maxed out from the start at 0 Hz, but around 17,000 Hz they quickly flatlined, suggesting that ~17,000 Hz might be the highest frequency the sound sensor can register.

The second experiment was to see if elevation has an impact on the sensor's ability to read the sound levels. We therefore placed Max with his sound sensor as shown in figure 1.

Figure 1: Setup for elevation experiments.

We then played a 3,000 Hz tone and waited for the values to somewhat stabilize. We could then read the values on the display: around 25-45 from the table setup (S1) and 40-60 from the floor setup (S2). From this we can conclude that elevation and environment have an effect on the recorded values, so to give ourselves the best chance of success in the following exercises, we should try to keep the speakers' output roughly level with the sensor.

Finally, to determine how far our sound map should go, we needed to do a distance test. This was done by playing a 2,000 Hz tone calibrated to give a reliable reading at point blank, and then moving the speakers back until the values dropped to the ambient sound level. This happened around ~250 cm from the sensor.

With all the data from our first experiments, we could now proceed to mapping out the sensor. For testing the different sound levels we positioned Max in an isolated room where we marked the floor with tape at the positions shown in figure 2.

Figure 2: Positions for sensor value mapping showing distances and angles.

Max was then placed at position 00 with his sound sensor facing along the 0-degree line.

At each of the marked positions 1-20, the speakers were placed pointing towards the 00 mark where Max' sound sensor was, and the sound value at each position was noted as shown in figure 3.

To limit the number of variables, we calibrated the speaker's volume at 200 Hz so that it would give the largest value at the shortest distance, and then started taking measurements as shown in the following image:

Measuring the value at position 10.


We were handed the programming section for reading distances using the UltrasonicSensor here.

The program segments for reading the sound levels can be found at that link. We made a small change to the code, swapping the UltrasonicSensor object for a SoundSensor object and the us.getDistance() call for s.readValue() to get the sensor value.

The final code can be downloaded here
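The leJOS classes only run on the brick, so the sampling loop can be sketched generically: an IntSupplier stands in for a SoundSensor's readValue(), and a plain list stands in for the DataLogger. The SoundSampler name is our own, for illustration, not from the course code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntSupplier;

// Illustrative sketch: on the NXT, `sensor` would be
// `() -> soundSensor.readValue()` and the list would be the DataLogger.
class SoundSampler {
    // Collect `count` readings from the given value source.
    static List<Integer> sample(IntSupplier sensor, int count) {
        List<Integer> log = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            log.add(sensor.getAsInt());
            // A real sampler would Thread.sleep(interval) here to pace readings.
        }
        return log;
    }
}
```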


As shown in figure 3 below, the results from our mapping gave us a lot of insight into the workings of the sound sensor.

Figure 3: Results from sensor mapping experiment.

We could determine that at close range, e.g. 20 cm, the values are stable no matter the angle, but they become uneven at distances from 50 cm. We can also see a significant drop in the recorded values from 20 to 50 cm, which settles slightly above the ambient sound level in the room. To better illustrate this, the development of the measurements is displayed in the following graph:

Results from frequency experiment: 1 – 20,000 Hz.

To conclude, we could determine that the sensor mainly cares about the volume of the sound it is recording, as long as the frequency is below the 17,000 Hz mark. Frequency cannot be disregarded completely, however, as different frequencies result in different volume levels at different distances.

The angle between the sound sensor and the sound source also showed some indication of affecting the readings – at least at longer ranges.

Exercise 2 – Data logger


In this exercise we want to record and log sample data of the sound levels and plot the data in a graph. We will use the code provided by the lecturer.


We executed the code for sampling the sound levels from a YouTube video clip, so that we would get both high and low values and have a little fun doing it. Max would then record the sound from the video played through a set of speakers. The log file could then be downloaded to the computer using the leJOS browser software (NXT Control) to confirm that the SoundSampling program was logging data correctly.


The code provided for the Datalogger and SoundSampling was sufficient for our needs in this exercise.


The results can be seen in Exercise 1 where it was used to log data for the frequency tests.

Exercise 3 – Sound Controlled Car


We want to see what happens when Max is being controlled by sound based on the program provided here.


We run the program to see what happens. The sounds will be clapping, humming or shouting. Based on the code, we expect the program to run in a continuous loop: driving Forward, Right, Left, and then Stop. Each step is triggered every time the soundLevel is read as 90 or more.
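This expected behavior can be modeled as a small state machine. The sketch below uses our own names (SoundCarLogic, onReading), not the lecturer's code, and returns command strings instead of driving motors:

```java
// Sketch of the expected control flow: every loud reading (>= 90)
// advances the car to the next command in the cycle.
class SoundCarLogic {
    static final String[] CYCLE = {"Forward", "Right", "Left", "Stop"};
    private int state = -1;  // -1: no loud sound heard yet

    // Process one sensor reading and return the resulting command.
    String onReading(int soundLevel) {
        if (soundLevel >= 90) {
            state = (state + 1) % CYCLE.length;  // advance on loud sound
        }
        return state < 0 ? "Stop" : CYCLE[state];
    }
}
```

Quiet readings leave the current command unchanged; only a reading of 90 or more moves the cycle along.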


No additional programming was done. We used the code provided, as explained in Plan.


The results were as expected: Max kept behaving in the exact same way, looping Forward, Right, Left and Stop whenever the soundLevel reached 90 or more. There is one issue, though: the code provided does not allow Max to exit the program (which will be addressed in the next exercise).

A video of poor Max N00b being shouted at can be seen here.

Exercise 4 – ButtonListener


We want to make it possible to exit the program from Exercise 3 when running, by implementing a ButtonListener.


We code the ButtonListener and then test whether it works. The same car functionality from Exercise 3 is used.


We made a constructor to avoid static methods, and had the class implement ButtonListener.


We had problems creating a ButtonListener in the code, since the methods already provided were static. This meant we could not register the listener from within the static main method. Instead, we create the listener in our constructor, which main initializes. The new code can be found here. Now Max will exit the program if the Back button is pressed.
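The real lejos.nxt.ButtonListener only runs on the brick, so the pattern can be sketched with a hypothetical Listener interface standing in for it. The point is that registering the listener in the constructor gives it access to instance state, which a purely static main cannot provide directly; the class and method names below are ours, for illustration.

```java
// Sketch of the constructor-registered listener pattern from this exercise.
class ExitableProgram {
    interface Listener { void buttonPressed(); }  // stand-in for lejos.nxt.ButtonListener

    private boolean running = true;
    private final Listener backButton;

    ExitableProgram() {
        // On the NXT this would be a Button.addButtonListener(...) call.
        backButton = new Listener() {
            public void buttonPressed() { running = false; }  // request exit
        };
    }

    void simulateBackPress() { backButton.buttonPressed(); }  // fakes the hardware event
    boolean isRunning() { return running; }
}
```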

Exercise 5 – Clap Controlled Car


We want to use Sivan Toledo’s method for detecting claps, which can be found here. We also want to compare the results with the program from Exercise 3. This means we will make the robot follow the same loop as described in Exercise 3, but it will change direction when registering a clap instead of just when it registers an amplitude of 90 or more.


First, we program Max to react to claps only. Second, we data-log Max' values when being clapped at. Third, we compare the results with what we found in Exercise 3.


We changed the waitForLoudSound() method in several ways. First, we changed its name to waitForClap(), and it can be found here.

Basically, we make sure that Max does not react to any sound that does not match the amplitude-and-time pattern described by Sivan Toledo. To register a clap, Max must first read an amplitude below 50; then, within the next 25 milliseconds, the amplitude has to rise above 85. If it does, Max checks whether the amplitude falls back below 50 within the next 250 milliseconds. If so, Max registers it as a clap and starts or changes his motors' power accordingly.

If any of these conditions are not met, Max will abort the new command.
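The three conditions can be captured as a pure function over timestamped samples. This is our own sketch of the pattern (times in milliseconds, each sample a {time, amplitude} pair), not the linked waitForClap() code, which reads the sensor live instead:

```java
// Detects a Toledo-style clap in a list of {timeMs, amplitude} samples:
// quiet (< 50), then > 85 within 25 ms, then back below 50 within 250 ms.
class ClapDetector {
    static boolean containsClap(int[][] samples) {
        for (int i = 0; i < samples.length; i++) {
            if (samples[i][1] >= 50) continue;           // need a quiet start
            long t0 = samples[i][0];
            for (int j = i + 1; j < samples.length; j++) {
                if (samples[j][0] - t0 > 25) break;      // attack came too late
                if (samples[j][1] > 85) {                // sharp attack found
                    long tPeak = samples[j][0];
                    for (int k = j + 1; k < samples.length; k++) {
                        if (samples[k][0] - tPeak > 250) break;   // decay too slow
                        if (samples[k][1] < 50) return true;      // fast decay: clap
                    }
                }
            }
        }
        return false;
    }
}
```

A sustained loud sound never produces the rapid rise-and-fall, so shouting or music is rejected while a hand clap passes.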


We made Max react to sounds that resemble a clap. We logged the data to verify more precisely that Max actually reacts the way we want him to – for instance by aborting new commands if the clap sound is not exactly as prescribed by our algorithm. From this, we created the following diagram:

Diagram showing recorded values when clapping.

In this diagram we can see that the four amplitude spikes in the ~9,000 to ~14,000 millisecond range made Max react and therefore change direction while driving. An easy way to see this is where the line sort of “breaks” and a small part is missing, consistently just below 50 in amplitude. This is because the value written to the log at that point is not the amplitude but the registered command (i.e. Forward, Right, Left or Stop), which is only written when Max hears a clapping sound.

It is interesting to see, though, that the spike at 8,000 milliseconds is registered as an increase in amplitude, yet it did not meet the requirement of rising from < 50 to > 85 within 25 milliseconds. We have taken a look at our data and can see this as well:

Segment showing values of a failed clap.

As we can see, the counter is triggered when the amplitude is at 3 and increasing. However, the counter passes the 25-millisecond limit after 26 milliseconds, at which point the amplitude is still only 33. It takes another 13 milliseconds to reach an amplitude of 88, which causes Max not to react and instead just register the high amplitude.

Compared to the program from Exercise 3, Max now reacts to specific sound patterns whereas before, he just reacted whenever the amplitude was above 90.

Exercise 6 – Party Finder Robot


We want to make Max N00b drive to a location where a party is going on, by using two sound sensors. Also, when Max gets close to a party, he will start showing off his awesome dance moves.



We made two experiments. The first was to create a party and have Max drive to it. We built the party from LEGO and a loudspeaker, as seen in this image:

The party – obviously – “sponsored” by The Village People!

Now we had a location for Max. As seen in the picture, a loudspeaker generated sound for Max to drive towards. Max had to drive almost 2 meters to get to the party, as depicted in this image:

It is obvious that Max is in a party mood – he is wearing his favorite party hat!

The second experiment was to move the loudspeaker around and have Max follow it, to see if he could also track a moving source and not just a stationary one.


Our programming was based on the previous exercises. However, we have made a lot of changes in our final code which can be found here.

We created a delta to capture the difference between the inputs of the two sound sensors.

We made five variables to control Max' party-finding and partying capabilities:

int partyThreshold = 15;
int shuffleThreshold = 90;
int danceDelay = 180;
int correctionLeft = 6;
int correctionRight = 0;

The partyThreshold is for sorting out noise.

The shuffleThreshold is for determining at which amplitude level Max should start showing his epic dance moves.

The danceDelay is also for epic dance moves from Max.

The correctionLeft and correctionRight variables are for synchronizing the microphones.

We also created a delta variable, used to determine which way Max should drive:

soundLevelDelta = soundLevelLeft - soundLevelRight;
if (Math.abs(soundLevelDelta) <= 5 || (soundLevelLeft + soundLevelRight) / 2 < partyThreshold) {
  Car.forward(75, 75);
} else if (soundLevelDelta < 0) {
  Car.forward(100, 0);
} else {
  Car.forward(0, 100);
}
This means that if the difference between the two sensed values is between -5 and +5, Max treats them as equal and drives forward. If the delta is less than 0 (actually less than -5), the right sound sensor has a higher value than the left, so Max drives to the right. For any other value (really just > 5), Max drives to the left.

The partyThreshold here is for sorting out noise so that if there is only noise, Max should ignore it and drive straight ahead. This noise can be, for instance, the sounds from the motors.
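The branch logic above boils down to a pure decision function over one pair of corrected readings. Here is a sketch using the thresholds from the post; the PartyFinder and steer names are ours, and it returns a direction string instead of driving motors:

```java
// Steering decision: dead band of +/-5 around equal readings,
// and a party threshold of 15 to ignore motor noise.
class PartyFinder {
    static final int PARTY_THRESHOLD = 15;

    // Returns "forward", "right" or "left" for one pair of sensor readings.
    static String steer(int left, int right) {
        int delta = left - right;
        if (Math.abs(delta) <= 5 || (left + right) / 2 < PARTY_THRESHOLD) {
            return "forward";   // near-equal readings, or only background noise
        } else if (delta < 0) {
            return "right";     // right sensor louder: turn towards it
        } else {
            return "left";      // left sensor louder: turn towards it
        }
    }
}
```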

Also, as a final note, the corrections were added to the raw readings of the sound sensors so that we would get consistent values:

soundLevelLeft = soundLeft.readValue() + correctionLeft;
soundLevelRight = soundRight.readValue() + correctionRight;


The end result can be seen in the videos linked further down. However, we did not get there easily.

Through a lot of testing we realized that our left microphone consistently reported lower values than the right one, which made Max drift to the right and struggle to turn left. We therefore had to investigate the exact difference between the two microphones' readings. By adding an approximation of that difference to the left microphone's value, we could bring the two readings close to even – a correction value that better synchronizes the sensors and makes Max react properly while driving. Finding this correction value took several trial-and-error experiments, where adjusting it would cause Max to veer right or left, as seen in this video where he insists on driving into the wall.

However, by tweaking the values we eventually made Max go to the party as we wanted him to in the first experiment. The results from our first experiment can be found here. Also, notice the fabulous dance move Max did at the end!

Furthermore, Max would also be lured to a moving source of music as we wanted him to in the second experiment. The results from our second experiment can be found here. Here, we see – again – Max’ infamous dance moves at 00:07, 00:11, 00:19, and 00:23.

Max would start dancing if the amplitude got above 90, which we determined was equal to being at a party.



We ran into several problems when trying to make Max N00b a sound-following robot. First off, there is the problem of echoes. We are using a sound sensor, which means a detected sound can easily be an echo leading Max astray. This happened occasionally during our initial experiments and in Exercise 6, either because we positioned ourselves so that the loudspeaker's sound could echo off us into Max' sensors, or because the furniture in the room caused the echoes.

Second, the varying quality of the sensors proved an issue. This was particularly evident in Exercise 6, where the left sensor consistently registered lower values than the right sensor no matter what. We corrected this problem by adding correction values to the left and right sensors' measurements.

Third, we believe the noise from Max’ motors also had an impact on his measured sound levels. We tried to create a workaround to this problem as well by adding a fixed threshold that had to be overcome.

We also realized, when finished with this lesson, that the sensor itself has two different sensing modes – called DB and DBA. The default is DB mode. DBA mode, however, changes the sensitivity across frequencies, giving better values in what is called the human hearing range, roughly 3,000 to 6,000 Hz. We speculate that had we set the sensor to DBA instead of DB, we might not have needed the minimum threshold for sorting out noise.


Using a sound sensor is a lot of fun – but it also comes with a lot of challenges. We found that the sound sensor mainly reacts to the volume of the sound it registers, but the angle from which the sound is played also has an impact. Another interesting challenge is making a robot follow different sounds: it takes a lot of work to measure and analyze the sound itself in order to write an algorithm with acceptable coverage to “understand” the sound and make the robot react to it and/or follow it. Furthermore, using several sound sensors can be challenging, depending on the differences in the hardware itself. We would say it is not an effective sensor for navigational purposes such as following a sound, but it does have its merits in other areas. For instance, it is a nice sensor for reacting to different sound commands. It can also be useful for alarm-triggering purposes – for instance, the sound of a breaking window will most likely have a specific frequency pattern and volume that can be used to trigger an alarm.