Adversarial Attacks, Defences and Visualisation for AI-based NIDS

Degree Grantor

The University of Auckland

Abstract

Network Intrusion Detection Systems (NIDSes) are crucial in safeguarding computer networks against malicious activities. Recent NIDS architectures have increasingly adopted Deep Neural Networks (DNNs) for highly accurate network traffic detection. However, DNNs are vulnerable to adversarial attacks, where imperceptible perturbations are injected into inputs to deceive the DNN. Adversarial attacks compromise the trustworthiness of NIDSes and pose significant risks to network security. However, many existing adversarial learning and visualisation techniques in the NIDS domain are derived primarily from supervised classification tasks in Computer Vision (CV), making them impractical for unsupervised outlier detection tasks in NIDS. This thesis aims to develop adversarial learning and visualisation methods tailored for DNN-based NIDSes. We begin with a comprehensive literature review of existing adversarial attacks and defences in the NIDS domain, revealing unique characteristics of NIDS that render adversarial attacks and defences proposed in CV impractical. To address this gap, we introduce a practical adversarial attack called Liuer Mihou (LM) that generates replayable network packets. Our results show that LM can completely evade DNN-based NIDSes such as Kitsune but is less effective against conventional NIDS models. In response to LM, we propose Moving Target Defense as Adversarial Defense (MTD-AD), which stochastically modifies the NIDS’ decision boundary to reduce the chances of a successful attack. Results show that, under MTD-AD, increasing the evasiveness of adversarial attacks leads to a significant decrease in maliciousness, making it challenging to craft attacks that are both evasive and malicious. To gain a fundamental understanding of adversarial attacks, we introduce NIDS-Vis, a black-box decision boundary traversal algorithm. Using NIDS-Vis, we discover that complex decision boundaries offer more accurate detection at the cost of lower adversarial robustness. To improve the adversarial robustness of NIDSes, we propose two methods to reduce their adversarial risk: feature-space partitioning and a distribution loss function. Overall, this thesis establishes a solid foundation for practical adversarial learning in the NIDS domain. The findings contribute to developing effective adversarial attacks, defences, and visualisation techniques tailored for NIDSes, ultimately strengthening the security of NIDSes against adversarial threats.
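
To make the moving-target idea behind MTD-AD concrete, the following is a minimal illustrative sketch, not the thesis implementation: it assumes a generic anomaly detector that maps a feature vector to an anomaly score (e.g. a reconstruction error) and re-randomises its decision threshold on every query, so an attacker cannot tune perturbations against a fixed boundary. All names (StochasticThresholdDetector, score_fn, jitter) are hypothetical.

```python
import numpy as np

class StochasticThresholdDetector:
    """Toy anomaly detector whose decision boundary is re-randomised per query,
    illustrating the moving-target intuition behind MTD-AD (sketch only)."""

    def __init__(self, score_fn, base_threshold, jitter=0.1, seed=None):
        self.score_fn = score_fn            # maps a feature vector to an anomaly score
        self.base_threshold = base_threshold
        self.jitter = jitter                # relative width of the random threshold shift
        self.rng = np.random.default_rng(seed)

    def is_malicious(self, x):
        # Shift the threshold by a fresh random offset on every query, so the
        # effective decision boundary the attacker probes keeps moving.
        offset = self.rng.uniform(-self.jitter, self.jitter) * self.base_threshold
        return self.score_fn(x) > self.base_threshold + offset


# Example usage with a stand-in scoring function (a norm in place of, say,
# an autoencoder's reconstruction error).
detector = StochasticThresholdDetector(
    score_fn=lambda x: float(np.linalg.norm(x)),
    base_threshold=1.0,
    jitter=0.2,
    seed=42,
)
print(detector.is_malicious(np.array([0.9, 0.5])))
```

Under this kind of stochastic boundary, an adversarial example crafted to sit just below the nominal threshold may still be flagged on some queries, which is consistent with the abstract's observation that pushing evasiveness up forces maliciousness down.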
