Abstract:
Factories of the future need more flexible material transport systems that can dynamically
schedule transport tasks under uncertain conditions. Popular methods to solve this task
allocation problem include a range of algorithms that apply reinforcement learning. This
research aims to develop multi-agent reinforcement learning algorithms for dispatching
transport tasks in an autonomous mobile robot fleet. A Deep Q-Network (DQN) was
used in conjunction with a multi-agent system to realise the model. A novel "look-ahead"
parameter was introduced to encourage an efficient task-allocation scheme
and to prevent the model from behaving like a first-in, first-out (FIFO) scheduler. Extensive
simulations were executed on a range of devices to ensure that the model was
viable across varying hardware and software specifications. Each simulation consisted
of a set number of agents and tasks, paired with parameters varied to determine the
viability of the look-ahead parameter and validate the performance of the model under
different environments. To further validate the model, experiments were conducted on
a physical mobile robot fleet consisting of two TurtleBots. The results show that the
model improved substantially between its initial, untrained performance and its
trained performance at the end of each simulation. Furthermore, the results indicated
that the "look-ahead" parameter improved performance in terms of distance travelled.
However, it increased overall execution time owing to the additional calculations required.
These results suggest that the proposed model is viable and has the potential to increase
the performance of mobile robot fleets in real-world applications. Modifications to the
neural network and the overall model could yield further improvements and allow it to
outperform traditional methods.