Abstract:
Conventional harvesting and pollination methods are no longer efficient due to the
shortage of seasonal labour and high labour costs. Automation therefore offers a way to
provide healthy, good-quality food. This project is part of a larger orchard programme
called "multipurpose orchard robotics". An autonomous multipurpose mobile platform
was designed, consisting of kiwifruit harvesting and pollinating platforms. Furthermore,
the potential of the current method was demonstrated on an apple harvester. A list of
milestones was defined at the start of the project. The goal of this thesis is to design
and develop a vision system for orchard robots that fulfils all of these milestones.
The vision system was integrated into the robots and tested in the real world. It comprises
sensing, detection, and matching-and-localisation modules. The sensing module was designed
to work reliably under challenging conditions (e.g. midday in a kiwifruit orchard).
Its performance was measured in terms of fruit visibility, robustness to challenging lighting
conditions, and calibration errors. The fruit detection method used was Faster
R-CNN, which met all milestones for detecting kiwifruit, kiwifruit flowers, apples, apple
flowers, apple buds and apple fruitlets. Detection methods face difficulties such as
dynamic lighting conditions and fruit occlusion; to address these challenges, a comprehensive
dataset was collected for each module. Faster R-CNN successfully detected
87%, 91%, and 99% of kiwifruit, apples, and kiwifruit flowers, respectively.
Moreover, a simple matcher was adapted for fruit, capable of matching fruit using
various features. The accuracy of the fruit stereo-matching method on kiwifruit,
apples and kiwifruit flowers was 0.99, 0.98, and 0.95, respectively. Overall, the vision system
was capable of detecting and localising 87% of kiwifruit, kiwifruit flowers and apples
in a real-world orchard.