
Improving the Robustness of DNNs-based Network Intrusion Detection Systems through Adversarial Training / Lella, E.; Macchiarulo, N.; Pazienza, A.; Lofù, D.; Abbatecola, A.; Noviello, P. - (2023), pp. -6. (Paper presented at the 8th International Conference on Smart and Sustainable Technologies, SpliTech 2023, held at the University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture (FESB) and Hotel Elaphusa, Croatia, in 2023) [10.23919/SpliTech58164.2023.10193009].

Improving the Robustness of DNNs-based Network Intrusion Detection Systems through Adversarial Training

Lella E.; Pazienza A.; Lofù D.; Noviello P.
2023-01-01

Abstract

The increasing number and variety of cyber attacks in recent years have made intrusion detection systems (IDS) a critical component of computer network defense, monitoring network traffic to identify malicious activities. Machine learning (ML) and deep learning (DL) techniques have been increasingly used in anomaly-based network IDS (NIDS) to detect new and unknown attacks, but they have been proven vulnerable to adversarial attacks, which can significantly reduce detection performance. In this paper, we investigate the robustness of a DNN-based NIDS, implemented for the Secure Safe Apulia Project, against untargeted white-box adversarial attacks. We employ the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) as adversarial attacks to evaluate the decrease in model accuracy. The results show that adversarial training is an effective defense strategy against these types of attacks, allowing the model to achieve F1-score values of 93%, 99%, 85%, and 83% for the classification of benign instances and of Backdoor, Ransomware, and XSS malicious instances, respectively. This work aims to contribute to the challenge of handling adversarial attacks in the domain of NIDS, where research is still in its early stages.
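To illustrate the FGSM attack mentioned in the abstract, here is a minimal, hypothetical sketch: an untargeted FGSM perturbation computed against a toy logistic-regression classifier with an analytic gradient. The weights, features, and epsilon value are illustrative assumptions, not the paper's actual NIDS model or data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Untargeted FGSM: step in the sign of the loss gradient w.r.t. the input.

    For binary cross-entropy loss with logit z = w.x + b, the input
    gradient is dL/dx = (sigmoid(z) - y) * w.
    """
    grad_x = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical trained parameters and a normalized traffic-feature vector.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.4, 0.1])
y = 0.0  # true label: benign

# Each feature is shifted by exactly +/- eps in the loss-increasing direction.
x_adv = fgsm_perturb(x, y, w, b, eps=0.3)
```

Adversarial training, the defense evaluated in the paper, would generate such perturbed points during training and add them back to the training set with their true labels.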
2023
8th International Conference on Smart and Sustainable Technologies, SpliTech 2023
978-953-290-128-3
Files in this product:
No files are associated with this product.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/264426
Citations
  • Scopus: 0