ID 67499
Full-text URL
Authors
Musthafa, Muhammad Bisri Graduate School of Environmental, Life, Natural Science and Technology, Okayama University
Huda, Samsul Green Innovation Center, Okayama University
Kodera, Yuta Graduate School of Environmental, Life, Natural Science and Technology, Okayama University
Ali, Md. Arshad Faculty of CSE, Hajee Mohammad Danesh Science and Technology University
Araki, Shunsuke Graduate School of Computer Science and Systems Engineering, Kyushu Institute of Technology
Mwaura, Jedidah Graduate School of Computer Science and Systems Engineering, Kyushu Institute of Technology
Nogami, Yasuyuki Graduate School of Environmental, Life, Natural Science and Technology, Okayama University
Abstract
Internet of Things (IoT) devices are driving advances in innovation, efficiency, and sustainability across various industries. However, as the number of connected IoT devices grows, the risk of intrusion has become a major concern in IoT security. Intrusion detection systems (IDSs) are a critical component of cybersecurity infrastructure, designed to detect and respond to malicious activities within a network or system. Traditional IDS methods rely on predefined signatures or rules to identify known threats, but they may struggle to detect novel or sophisticated attacks. Machine learning (ML) and deep learning (DL) techniques have therefore been proposed to improve IDSs' ability to detect attacks and to strengthen overall cybersecurity posture and resilience. However, ML and DL models face several issues that can degrade their performance and effectiveness, such as overfitting and the influence of unimportant features on finding meaningful patterns. To ensure that ML models in IDSs perform reliably against new and unseen threats, the models must be optimized by addressing overfitting and applying feature selection. In this paper, we propose a scheme that optimizes IoT intrusion detection by using class balancing and feature selection in preprocessing. We evaluate the scheme on the UNSW-NB15 and NSL-KDD datasets with two different ensemble models: a support vector machine (SVM) with bagging and a long short-term memory (LSTM) network with stacking. The performance results and confusion matrices show that the stacked LSTM with analysis of variance (ANOVA) feature selection is the superior model for classifying network attacks, achieving remarkable accuracies of 96.92% and 99.77% with overfitting values of 0.33% and 0.04% on the two datasets, respectively. Its ROC curve also bends sharply toward the top-left corner, with AUC values of 0.9665 and 0.9971 on the UNSW-NB15 and NSL-KDD datasets, respectively.
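The abstract names standard, openly available building blocks: class balancing, ANOVA feature selection, and an ensemble classifier. As a rough illustration of how such a preprocessing-plus-ensemble pipeline can be assembled, the Python sketch below combines scikit-learn and imbalanced-learn; the use of SMOTE for class balancing, the choice of k = 20 selected features, and every hyperparameter are assumptions made for the example, not details taken from the paper.

# Illustrative sketch only: ANOVA feature selection + class balancing
# feeding a bagged SVM ensemble, as named in the abstract. SMOTE, k=20,
# and all hyperparameters below are placeholders, not the paper's values.
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif  # ANOVA F-test
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from imblearn.over_sampling import SMOTE

def build_and_evaluate(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    # 1) Class balancing: oversample minority classes on the training
    #    split only, so the test set keeps its natural distribution.
    X_tr, y_tr = SMOTE(random_state=42).fit_resample(X_tr, y_tr)

    # 2) ANOVA feature selection: keep the k features with the largest
    #    F-statistic (between-class vs. within-class variance).
    sel = SelectKBest(score_func=f_classif, k=20).fit(X_tr, y_tr)
    X_tr, X_te = sel.transform(X_tr), sel.transform(X_te)

    # 3) Standardize features; SVMs are sensitive to feature scale.
    scaler = StandardScaler().fit(X_tr)
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

    # 4) Bagging ensemble of SVMs: each base learner trains on a
    #    bootstrap sample; predictions are aggregated by voting.
    model = BaggingClassifier(
        SVC(kernel="rbf", probability=True),
        n_estimators=10, n_jobs=-1, random_state=42)
    model.fit(X_tr, y_tr)

    y_prob = model.predict_proba(X_te)[:, 1]  # assumes binary labels
    print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
    print("ROC AUC :", roc_auc_score(y_te, y_prob))
    return model

The paper's preferred model swaps the bagged SVM in step 4 for an LSTM-based stacking ensemble; the preprocessing in steps 1 through 3 plays the same role in either case.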
Keywords
intrusion detection system
feature selection
class balancing
ensemble technique
stacked long short-term memory
Publication Date
2024-07-01
Journal Title
Sensors
Volume 24
Issue 13
Publisher
MDPI
Start Page
4293
ISSN
1424-8220
Document Type
Journal Article
Language
English
OAI-PMH Set
Okayama University
Copyright Holder
© 2024 by the authors.
Article Version
publisher
PubMed ID
DOI
10.3390/s24134293
Web of Science KeyUT
Related URL
isVersionOf https://doi.org/10.3390/s24134293
License
https://creativecommons.org/licenses/by/4.0/
Citation
Musthafa, M.B.; Huda, S.; Kodera, Y.; Ali, M.A.; Araki, S.; Mwaura, J.; Nogami, Y. Optimizing IoT Intrusion Detection Using Balanced Class Distribution, Feature Selection, and Ensemble Machine Learning Techniques. Sensors 2024, 24, 4293. https://doi.org/10.3390/s24134293