2023.emnlp-industry.12@ACL


A Pretrained Language Model for Cyber Threat Intelligence

Authors: Youngja Park; Weiqiu You

We present a new BERT model for the cybersecurity domain, CTI-BERT, which improves the accuracy of cyber threat intelligence (CTI) extraction, enabling organizations to better defend against potential cyber threats. We provide detailed information about the domain corpus collection, the training methodology, and the model's effectiveness on a variety of NLP tasks in the cybersecurity domain. The experiments show that CTI-BERT significantly outperforms several general-domain and security-domain models on these cybersecurity applications, indicating that the training data and methodology have a significant impact on model performance.
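The abstract describes domain-specific BERT pretraining. Standard BERT pretraining corrupts input text with masked-language-modeling (MLM), replacing a sampled token with `[MASK]` 80% of the time, a random token 10% of the time, and leaving it unchanged 10% of the time. The sketch below illustrates this generic 80/10/10 masking rule on security-flavored text; it is not the paper's exact procedure, and the tiny vocabulary is a placeholder.

```python
import random

MASK = "[MASK]"
# Placeholder vocabulary for random-token replacement (illustrative only).
VOCAB = ["malware", "exploit", "phishing", "botnet", "patch"]

def mlm_mask(tokens, mask_prob=0.15, rng=None):
    """Apply BERT-style MLM corruption to a token sequence.

    Each token is selected with probability `mask_prob`. A selected token
    becomes [MASK] 80% of the time, a random vocabulary token 10% of the
    time, and stays unchanged 10% of the time. Returns the corrupted
    sequence and per-position labels (the original token where a prediction
    target exists, None elsewhere).
    """
    rng = rng or random.Random(0)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)          # model must predict this token
            r = rng.random()
            if r < 0.8:
                corrupted.append(MASK)
            elif r < 0.9:
                corrupted.append(rng.choice(VOCAB))
            else:
                corrupted.append(tok)   # kept unchanged, still a target
        else:
            labels.append(None)         # not a prediction target
            corrupted.append(tok)
    return corrupted, labels
```

Domain-adaptive pretraining changes the corpus fed to this objective, not the objective itself, which is why corpus collection (as the abstract emphasizes) matters so much for downstream CTI tasks.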