This session's goals:

  • Initial idea generation
  • Projects considered
  • Our project idea
  • Initial planning for project

The Plan

Today's session will be used to discuss various project ideas, determine which project we want to work on, and then plan how we want to realize it. This is based on the lesson 11 description, found here.

Initial idea generation


We will have a brainstorming session where we come up with various project ideas. We will then briefly discuss the ideas and select three projects for further work in the following exercises.


Our brainstorming session was used to come up with many different project ideas. At first, we talked about some of the projects that were presented during the course and drew inspiration from those. We also talked about various games, and how autonomous each should be (controlled by the users or completely autonomous). While discussing the ideas, we also considered which parts of the course could be applied to each project, e.g. behaviour control, PID control, etc. A complete mindmap of project ideas can be found in figure 1.

Figure 1: Mindmap of brainstormed project ideas.


Having created a list of project ideas, we started discussing what the projects would be about and, overall, what they would require technically from what we have learned in the course so far. The following lists the ideas and our thoughts about them.

  • Traffic drone
To make it easy for drivers to choose a route through the city and avoid congestion. The drone will fly above traffic and send information to the drivers about which roads to avoid.
  • Solar replace bot
Trying to solve the WRO Competition challenge. The challenge is inspired by the problem of changing and adjusting solar panels in space. The rules of the competition can be found here
  • Balancing robot
    • This will be a robot that can balance on a ball. However, we think that just creating a robot for balancing on a ball is not enough in itself – we also think that the robot should have a reason for balancing (or it should impact something). This could e.g. be a robot that can drive around a room, serving drinks to people. Or perhaps two balancing robots that battle each other and make the opponent lose balance and fall off their balls.
  • Carmageddon
A LEGO remake of the classic computer game Carmageddon. You drive through the circuit battling for first place. You gain additional points for hitting pedestrians and damaging your opponents.
  • Guard Robot
    • This will be an autonomous robot that will patrol a given area and shoot detected intruders.
  • Dr. Who Bots
This project will have several robots constructed to look like characters from the TV series Doctor Who. The various robots will have different behaviours: the bad robots do evil things, and the good robot has to stop them somehow.
  • Naval transport
A robot that can navigate through water, and then pick up and deliver packages. It could be quite a challenge to make a floating, waterproof robot that has to avoid e.g. ducks in a lake.
  • Bluetooth hunt
    • Robots that hunt bluetooth connections from mobile phones or laptops. This can also be used to induce different behavior, for instance if the mobile phone is attached to a robot so it has to run away from the other robots. Another way to look at this is a zombie-game where an active bluetooth connection is the same as having a dangerous virus, and then the other robots have to get away from the one with a virus. If the virus-bot gets close to another robot it will be infected and also start hunting the non-virus robots (A bit like the embodied evolution robots described here).
  • Swing bot
    • Build a robot that can jump from one rope to another rope like Tarzan.
  • Counter Strike
Robots battling each other in teams. One of the teams has a bomb which has to be placed at the bomb site; if it detonates, that team wins. The other team has to stop the first team and disarm the bomb.
  • Jenga robot
The player has to compete against a robot in the classic game of Jenga.
  • Labyrinth
    • Two or more robots have to help each other out of a maze. They map where they have been and exchange their knowledge with each other to avoid exploring the same dead-end twice, and in that way together find the quickest route out of the maze.
  • Tetris
This project is about creating an augmented Tetris game, where four robots will form the Tetris bricks. The robots will drive from one end of a track to the other, looking like one of the various bricks in the Tetris game. When they reach the end, the brick the robots represent will be projected down on the track, and the robots will disperse and drive back to the top, forming a new kind of Tetris brick. We see two options here: either the system is fully autonomous and the robots remember the previous bricks laid at the end of the track, or a player can change the shape of the bricks and move them from side to side, as in a normal Tetris game.

Projects considered


Here we describe some alternative projects from the list seen above that we also considered for the final project.



Based on the original game, Throw ‘n Go Jenga, one of the players will be a robot that is able to map where the colored blocks are positioned and thereafter move a block and position it on top of the Jenga tower.

A way to do this is by having a robot with several abilities. For starters, it will have to be able to scan the tower from a distance in order to know what the tower looks like (for instance, which blocks are missing and which are placed at what position). This will be done at the beginning of the robot's turn. The robot will also need a sensor to know which block to push, and a linear actuator with a touch sensor (used to "feel" the blocks and decide whether a block is movable or not) that can push the blocks a small distance. Once the block has been pushed a bit, the robot will have to drive to the other side of the tower, grab the block and pull it all the way out. The robot will then have to put it on top of the tower in the right position. To grab and place the block, we imagine creating a motorized claw. Also, in order to account for the height differences, the sensors and claw can be attached to a vertical conveyor belt system. To have the robot drive around the tower, we can create an Express bot base that follows a line drawn on the table, aligning the robot around the tower. We also imagine that the die usually used in this game will be a software component on the NXT that all players use. In this way, the die roll becomes a variable the NXT can use in its strategy.

Hardware/Software platform (e.g. number of NXT’s, program on a PC, sensors, actuators)

For this, we can use two NXT's for the robot, providing more processing power by sharing the different jobs between the two. The robot will have several motors attached: two for driving, one for the conveyor belt, one for the claw, and one for the linear actuator. Furthermore, it will need at least one light sensor for following the line around the tower (possibly two for better alignment), and a color sensor to register which blocks to push. For mapping out the Jenga tower, we believe a webcam connected to a Raspberry Pi can do the required image recognition. We could use an external device such as a laptop instead of a Raspberry Pi; however, we want the robot to be independent of external devices, to give an easy-to-use approach for the user. The Raspberry Pi will then be connected to one of the NXT's for the mapping of the Jenga tower. In order for the conveyor belt to work, we imagine using e.g. the tacho counter in the motor to know at which height the claw is and which height it must go to.
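As a rough sketch of the tacho-based height control, assuming a hypothetical motor object with a tacho counter and a made-up gearing ratio (the real ratio would have to be measured on the robot), the conveyor logic could look like this:

```python
# Sketch of tacho-based height control for the conveyor belt. The motor
# object and DEGREES_PER_MM gearing ratio are assumptions for illustration.

DEGREES_PER_MM = 12  # hypothetical: tacho degrees per mm of vertical travel

class ConveyorLift:
    def __init__(self, motor):
        # `motor` is any object with a .tacho_count attribute and a
        # .rotate_to(degrees) method, mirroring a regulated NXT motor.
        self.motor = motor

    def go_to_height(self, target_mm):
        """Convert a target height in mm to a tacho angle and drive there."""
        self.motor.rotate_to(int(target_mm * DEGREES_PER_MM))

    def current_height_mm(self):
        """Read the current height back from the tacho counter."""
        return self.motor.tacho_count / DEGREES_PER_MM
```

The same idea extends to presets, e.g. storing one tacho target per Jenga layer and driving between them.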

Software architecture (e.g. a behaviour-based architecture on each NXT and a server architecture on a PC. Use the course literature as reference.)

For this project, we imagine we can use a sequential strategy for the NXT. This sequence could look as depicted in figure 2.

Figure 2: Control sequence for Jenga robot.

The Jenga robot will use a sequential strategy because it will always perform the same sequence of events: scan the tower, drive to the side of the tower where the robot wants to push a block, drive to the opposite side of the tower to grab the block, pull it out, put it on top of the tower, and then drive back. There are no external stimuli that would require a reactive strategy.
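A minimal sketch of this sequential strategy, where each step is a named function executed strictly in order with no arbitration between them (the actions are placeholders for the real motor and sensor routines):

```python
# Sketch of the sequential control strategy: steps run one after another,
# with no reactive behaviour switching. Step names follow the sequence in
# the text; the lambda actions stand in for real routines.

def run_turn(steps):
    """Execute each (name, action) step in order; return the names run."""
    executed = []
    for name, action in steps:
        action()
        executed.append(name)
    return executed

JENGA_SEQUENCE = [
    ("scan tower", lambda: None),
    ("drive to push side", lambda: None),
    ("push block", lambda: None),
    ("drive to opposite side", lambda: None),
    ("pull block out", lambda: None),
    ("place block on top", lambda: None),
    ("drive back", lambda: None),
]
```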

Most difficult problems

The most difficult problem will be having precise and sufficiently smooth movements for the claw when pulling out the blocks and placing them on top of the tower, without the risk of toppling over the tower.

Another challenge worth mentioning is running effective image-recognition software on a Raspberry Pi.

What we could expect to have at the end

A robot capable of driving up to the tower, pushing out a block, driving to the opposite side, grabbing the block, pulling it out and placing it on top. We are not sure, however, whether we will in the end be able to have a robot that can pull out a block without the risk of toppling the tower.



In this idea we will remake the classic game of Tetris with LEGO robots. Tetris is a game where the player has to stack different figures. Each figure is made of four squares. The game board is 10 squares wide and 20 high. When the game runs, a figure appears (spawns) at the top of the board and is slowly dragged toward the bottom. The player can control the orientation of the figure and also move it left, right, and down. The goal of the game is to place the figures in horizontal lines at the bottom. When there are 10 squares on the same horizontal line, this line disappears and everything above moves one square down. The player gets points for every line cleared. The game ends when the figures stack so high that there is not enough room for a new figure to spawn.

Our idea is to replace each of the squares that together form the different figures with a robot. We will then make the robots drive in formations as described by Jakob Fredslund et al. in "A General, Local Algorithm for Robot Formations", which should make the robots resemble the figures used in Tetris. We will then let the player control the figures as in Tetris to battle for the best high score. When one or more lines are completed in the game, the robots involved have to drive out of the game setup, to the top of the game and to the back of a queue, where they wait until they are needed to form a new figure.

Hardware/Software platform (e.g. number of NXT’s, program on a PC, sensors, actuators)

To ensure we have enough robots to fill the board, we will need at least (9*18)=162 NXT's. This will ensure that there is always a robot standing by to join the game. Each robot will need a way to track the other robots near it, which is required for driving in formations. We think the best solution is to equip each robot with an IR beacon that the other robots in the formation can follow. This will ensure the flexibility we need for the game to work, and the robots will be able to switch between being slave or master in the formations.

The robots also have to have a bumper sensor to stop when they reach the top of the stack, and a light sensor for following lines, making it easier to align the robots at the end of the stack.

To control the robots, we will need a computer to handle the connections to the robots. The player can control the figure in play with a gamepad connected to the PC.

Software architecture (e.g. a behaviour-based architecture on each NXT and a server architecture on a PC. Use the course literature as reference.)

As described, we will make use of the "Local Algorithm for Robot Formations" by Jakob Fredslund. This will enable us to ensure that the robots drive in the right formations. We will also need each robot to have a behaviour-based architecture as described by Brooks, R. in "A robust layered control system for a mobile robot". We will use this to control the different behaviours depending on where the robot is in the game process; e.g. when the robot is in the line-following behaviour, it has to give higher priority to the behaviour of stopping when the bumper sensor is pressed.
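A sketch of this kind of fixed-priority arbitration, in the spirit of Brooks' layered control: behaviours are ordered by priority, and the highest-priority behaviour that wants control wins. The behaviour names follow the text; representing the sensor state as a plain dict is an assumption of this sketch.

```python
# Sketch of fixed-priority behaviour arbitration (subsumption-style).
# `behaviours` is ordered from highest to lowest priority; each entry pairs
# a trigger predicate with the action that behaviour would take.

def arbitrate(behaviours, sensors):
    """Return the action of the highest-priority behaviour that fires."""
    for wants_control, action in behaviours:
        if wants_control(sensors):
            return action
    return "idle"

TETRIS_BOT_BEHAVIOURS = [
    (lambda s: s["bumper_pressed"], "stop"),   # highest: reached the stack
    (lambda s: s["on_line"], "follow_line"),   # align along the track lines
    (lambda s: True, "drive_forward"),         # default behaviour
]
```

With this ordering, a pressed bumper suppresses line following, exactly the priority relation described above.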

Most difficult problems

It is not possible for us to get all the NXT's we need, so we will have to make the game with far fewer robots and a smaller game board. The Bluetooth specification says that a maximum of seven devices can be connected at once, which can make it difficult for us to have one computer that handles all the NXT's and the connection to the player.

Because of the game structure, the robots used to make one figure will not necessarily be used in the same figure the next time they come into play. This requires every robot to function independently.

What we could expect to have at the end

We can maybe get 12 NXT's. This will allow us to test the gameplay but not play a full game. With 12 NXT's it will not be possible to test the line-clearing process with more than one line at a time.



This idea is about having a couple of robots help each other escape a maze. The robots will be put into the maze, then split up and start mapping out the maze together. Whenever necessary, e.g. when one of the robots finds a road that leads to a dead end, information will be shared between the robots to reduce repeated mapping of areas. In this way, the robots will know where each other have been and thereby optimize their search for the exit. In case one of the robots finds the way out, it will tell the other robots by sharing its mapping of the maze.

One can also imagine how this could be extended into several types of games, for instance one where some areas must first be visited by all robots before they are allowed to leave the maze, or a game where the players make the mazes and the player whose maze takes the longest to escape from is the winner.

Hardware/Software platform (e.g. number of NXT’s, program on a PC, sensors, actuators)

We will need a minimum of two NXT's – one for each robot driving in the labyrinth. Depending on whether the maze has 2D obstacles (just lines drawn on the floor) or 3D obstacles (physical walls put up), the sensors for each robot will vary. If it is just 2D, we imagine a single light sensor can be used to map the maze. Whenever the robot faces a 2D wall, it will take a look to the right and then to the left to sense if there are any other 2D walls. However, it is also possible to attach several light sensors so the robot will not have to make these turns to register walls at the sides. If it is 3D walls, we can utilize the fact that we can measure distance, and mount three ultrasonic sensors on the robot: one left, one right and one straight ahead. This will allow the robot to map the values directly.
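For the 3D case, the three distance readings can be turned directly into wall flags for the current maze cell. A minimal sketch, where the threshold is an assumption about the cell size rather than a measured value:

```python
# Sketch: converting the three ultrasonic readings (left, front, right)
# into wall flags for the current maze cell. WALL_THRESHOLD_CM is a
# hypothetical value that would depend on the actual cell dimensions.

WALL_THRESHOLD_CM = 20  # assumed: anything closer than this is a wall

def detect_walls(left_cm, front_cm, right_cm):
    """Return which sides of the current cell are blocked by walls."""
    return {
        "left": left_cm < WALL_THRESHOLD_CM,
        "front": front_cm < WALL_THRESHOLD_CM,
        "right": right_cm < WALL_THRESHOLD_CM,
    }
```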

The robots will have to drive around in the maze, and we imagine an Express bot design will do well here. This design has two motors, one on each side of the robot, and a third support wheel.

In case two robots happen to bump into each other (which might happen in a 2D labyrinth), we imagine having four bumpers around the robot. These bumpers will be connected to pressure sensors, so if a pressure sensor is activated, we know another robot is close. Whenever that happens, the robots can share each other's mappings of the maze. This also means we need some form of communication between the robots – for instance Bluetooth communication.
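The map exchange itself can be sketched as a simple union of the two robots' knowledge. Representing a map as a dict from grid cell to a label is our assumption here; the real encoding over Bluetooth would still need to be designed.

```python
# Sketch of merging two robots' maze maps when they meet. Each map is a
# dict from a grid cell (x, y) to a label such as "open" or "dead_end";
# this grid representation is an assumption, not a settled design.

def merge_maps(own, other):
    """Union of both robots' knowledge; on conflict keep our own entry,
    since each cell should only ever receive one consistent label."""
    merged = dict(other)
    merged.update(own)
    return merged
```

After merging, both robots know every dead end either of them has explored, which is exactly what prevents repeated mapping of the same areas.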

Software architecture (e.g. a behaviour-based architecture on each NXT and a server architecture on a PC. Use the course literature as reference.)

For this to work, the robots will need several things. First, they need a behaviour-based architecture programmed on their NXT, such as the one introduced by Flynn & Jones. We see several behaviours, such as Drive Forwards, Scan for Wall, Map Route, Share Mapping When Close To Other Robot, and Share Mapping Every 5 Minutes.

Also, we need the NXT's to map out the maze. For this, we can use the architecture proposed by Hellstrom for accurate navigation as well as for mapping out where the robots have been.

Furthermore, if we want to share the mapping with the other robots, we need the NXT's not only to communicate, but also to process external information from other NXT's. For this, we can look into Embodied Evolution architectures. Through this, it will be possible to make the NXT's tell each other which path is "best".

Most difficult problems

Some of the most difficult problems in this project will be mapping the maze, sharing each part of the mapping between robots, and then having the robots know where to go and where not to go.

What we could expect to have at the end

If we created this project, we believe we would be able to have several robots driving around the maze, mapping it out relatively precisely. The sharing of the mapping, however, might be the biggest challenge, and we might not be able to achieve that in the end.

Our project idea – Balancing robot


Here we will describe our final project idea and our initial considerations on how it should work and the set project milestones.


Our project idea is based on the balancing robot concept described earlier, which we have refined: our goal is to have our robot balance on a ball, and we also want it to drive on a "tightrope". The tightrope will in our case be a wooden beam painted white, with a black line in the center for our robot to follow and align against.

To make this project more manageable, we have divided it into a series of milestones:

  1. Have the first iteration of the physical robot built
  2. Be capable of balancing on the ball
  3. Be capable of moving while balancing
  4. Be capable of driving on a “tightrope”

Our first milestone is to have built a physical robot capable of being mounted on top of a ball, and to have control over the motors and sensors needed.

The second milestone is making the robot balance on top of the ball without external assistance or excessive movement to correct its posture. This also implies that it is able to withstand some amount of external disturbance, such as a light push.

The third milestone is making the robot able to move while balancing on the ball, for example through external controls such as remote control or a predefined behaviour as seen in previous exercises.

The final milestone is having the robot drive on a “tightrope”, which requires minimal movement when adjusting posture and precise movement so as not to run off the “tightrope” when moving forward.

Hardware/Software platform (e.g. number of NXT’s, program on a PC, sensors, actuators)

This project should be able to run from a single NXT unit, which effectively allows control of 3 motors and 4 sensors without having to use multiplexing units, which are rumored to decrease the refresh rate. A decreased refresh rate would have a serious impact on our robot's ability to stay in balance, as changes can occur too fast. Furthermore, our robot requires 2-3 motors depending on design. A 2-motor design can be seen here and a 3-motor design can be seen here (although not made with the NXT and LEGO).

As for sensing changes in balance (which we will call posture, as it can be seen in how the robot leans forwards/backwards/sideways etc.), we have a series of possibilities which we need to investigate more thoroughly, such as a gyroscopic sensor, an accelerometer, a combination of both, or something completely different. For sensing the line when driving on the “tightrope”, we have had great success with the color sensor, but since this needs to be approximately 5-10 mm from the surface, it might cause a problem and require us to find some other way of following the line. A sketch of our project can be seen in figure 3.

Figure 3: Simple sketch of “tightrope”-walking ballbot.

Software architecture (e.g. a behaviour-based architecture on each NXT and a server architecture on a PC. Use the course literature as reference.)

The robot will need several things. First, we would use the behaviour-based architecture introduced by Flynn & Jones. We will use this to make several behaviours, such as Maintain Balance, Drive Forward and Follow Line. We believe a behaviour-based architecture could look like the one in figure 4.

Figure 4: Behavioural architecture.

We will now explain the different behaviours.

Maintain Balance

The highest priority will be given to maintaining balance. Data from the accelerometer and/or gyroscopic sensor will be used to maintain the balance of the robot: if it is tilting too much in any direction, it will adjust for this, but if it is in an upright position, it will release control to the other behaviours.
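One plausible way to realize this adjustment is a PID controller, where the tilt angle reported by the sensors is the error and the output is a motor power correction. A minimal sketch; the gains here are illustrative only and would need tuning on the actual robot:

```python
# Sketch of a PID controller for the Maintain Balance behaviour. Input:
# tilt error (degrees from upright); output: a motor power correction.
# All gains are illustrative, not tuned values.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error (I term)
        self.prev_error = 0.0    # last error, for the D term

    def update(self, error):
        """Advance the controller one time step and return the correction."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

In the control loop, the correction would be applied symmetrically to the drive motors each cycle, with dt matching the loop period.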

Controlled Movement

This behaviour is triggered from an external source, e.g. a user giving a command through a remote control to move forward; the robot will then perform the move, releasing control once the move has finished.

Align to Line

This behaviour is for when the robot is driving on the “tightrope” and needs to follow the line in the middle of the beam. It will then align to the black line and release control when it is parallel with it.

Go Through Waypoints Then Stop

This is the default behaviour for the robot when every other condition is met: a series of waypoints or “tasks” that it should perform. This can e.g. be maintaining position, or going to position A, then B, etc., and repeating.

Most difficult problems

The biggest problem we will encounter is getting a stable balance over time. For example, if we use the gyroscopic sensor, we have to take into account that it has a natural drift causing imprecise readings, which needs to be corrected so the robot will not simply topple over.
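A common way to correct gyro drift is a complementary filter: the gyro rate is integrated for short-term accuracy, while the noisy but drift-free accelerometer angle slowly pulls the estimate back. A minimal sketch, with the blend factor chosen arbitrarily for illustration:

```python
# Sketch of a complementary filter fusing gyro and accelerometer readings
# into a drift-corrected tilt estimate. ALPHA is an assumed blend factor;
# the right value would have to be found experimentally.

ALPHA = 0.98  # weight on the integrated gyro estimate

def complementary_filter(angle, gyro_rate, accel_angle, dt):
    """Fuse one gyro sample (deg/s) and one accelerometer angle (deg)
    into a new tilt estimate, starting from the previous estimate."""
    gyro_estimate = angle + gyro_rate * dt        # short-term: integrate gyro
    return ALPHA * gyro_estimate + (1 - ALPHA) * accel_angle
```

Called every control cycle, the (1 - ALPHA) term continuously bleeds off accumulated gyro drift instead of letting it grow without bound.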

What we could expect to have at the end

We should at least have a robot able to balance on a ball for a period of time.

Initial planning for project


We will create an initial plan for the entire project.


We have created a plan for the remainder of the project, as seen in figure 5.

Figure 5: Project plan for week 20.

Our BallBot requires several parts for completion. As such, we have divided the plan into overall parts: Body Construction, Software Components, Sensors, Motors and Actuators, and Milestones. Every part (except Milestones) will begin with a research phase, where we investigate optimal ways of doing the different parts by looking at literature, other researchers' work and our own experiments.