Abstract:
Traffic monitoring systems provide valuable traffic information that transport engineers need to enhance transportation planning and traffic management. Recently, Unmanned Aerial Vehicles (UAVs), also known as drones, have opened up opportunities for a range of traffic monitoring applications, from traffic surveillance to surrogate safety measures. UAVs hold definite advantages over traditional traffic sensors: they are easy to manoeuvre, low cost, offer a wider field of view and cause no disturbance to traffic, which translates into a safer and quicker data collection strategy. In parallel, with the rapid advancement of deep learning, the use of computer vision to automatically extract traffic flow data from drone videos has become a promising option for UAV-based applications. Several systems that exploit different computer vision approaches for traffic data extraction have been proposed in the literature. These methods can be categorised as flow-based, appearance-based and object-based. Most of them have focused on extracting traffic data from fixed surveillance (CCTV) cameras located on highways. However, all of the reviewed methods have limitations and may not suit complex data collection situations such as signalised intersections or roundabouts. This thesis proposes a novel method to automatically extract lane-by-lane traffic flow data from drone video footage. The deep learning-based YOLO-v3 detector and the Sparse Lucas-Kanade Optical Flow technique are employed to detect, categorise (light vehicle/heavy vehicle) and track vehicles, while the Open Source Computer Vision library (OpenCV) is used to implement the extraction of vehicle count, headway and queue length data from the video footage. The proposed methods are verified for computational efficiency and accuracy using drone video footage taken at a signalised intersection in Auckland, New Zealand, and the results are compared with those reported in the literature.
The proposed methods demonstrate the potential to improve both the computational efficiency and the accuracy of traffic data extraction from drone video footage.