LAMBERT: Leveraging Attention Mechanisms to Improve the BERT Fine-Tuning Model for Encrypted Traffic Classification
Abstract
Encrypted traffic classification is a crucial part of privacy-preserving research. With the great success of artificial intelligence technology in fields such as image recognition and natural language processing, classifying encrypted traffic with AI techniques has become an attractive topic in information security. Owing to their strong generalization ability and high accuracy, pre-training-based encrypted traffic classification methods have become the preferred option. The accuracy of such methods depends heavily on the fine-tuning model. However, existing fine-tuning models struggle to effectively integrate the packet- and byte-level feature representations extracted during pre-training. A novel fine-tuning model, LAMBERT, is proposed in this article. By introducing an attention mechanism to capture the relationship between BiGRU hidden states and byte sequences, LAMBERT not only mitigates the sequence information loss of BiGRU but also improves the performance of encrypted traffic classification. LAMBERT can quickly and accurately classify multiple types of encrypted traffic. The experimental results show that our model performs well on datasets with imbalanced sample distributions, without pre-training, and on large-scale classification tasks. LAMBERT was tested on four datasets, namely, ISCX-VPN-Service, ISCX-VPN-APP, USTC-TFC and CSTNET-TLS 1.3, and the F1 scores reached 99.15%, 99.52%, 99.30%, and 97.41%, respectively.
Keywords: information security; encrypted traffic classification; privacy protection; fine-tuning model; attention mechanism
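The abstract describes a fine-tuning head that passes pre-trained BERT token representations through a BiGRU and then applies attention over the resulting sequence before classification. The following is a minimal, hypothetical PyTorch sketch of such a head; the class name, dimensions, and additive attention form are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of a BERT -> BiGRU -> attention -> classifier head,
# as outlined in the abstract. Names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class LambertHead(nn.Module):
    def __init__(self, bert_dim=768, hidden_dim=256, num_classes=12):
        super().__init__()
        # Bidirectional GRU over the byte/token representations from BERT.
        self.bigru = nn.GRU(bert_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Additive attention: one score per time step over BiGRU outputs.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, bert_hidden_states):
        # bert_hidden_states: (batch, seq_len, bert_dim)
        gru_out, _ = self.bigru(bert_hidden_states)          # (B, T, 2H)
        weights = torch.softmax(self.attn(gru_out), dim=1)   # (B, T, 1)
        context = (weights * gru_out).sum(dim=1)             # (B, 2H)
        return self.classifier(context)                      # (B, num_classes)

# Usage example with dummy BERT outputs: 4 flows, 128 tokens each.
head = LambertHead()
dummy_bert_output = torch.randn(4, 128, 768)
logits = head(dummy_bert_output)   # shape (4, 12)
```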
Author
Kayhan Zrar Ghafor
Publisher
Mathematics
ISSN
Publish Date: