A SPAM TRANSFORMER MODEL FOR SMS SPAM DETECTION
DOI:
https://doi.org/10.64751/7sp0x034

Keywords:
Transformer, SMS Spam Detection, Natural Language Processing, Attention Mechanism, Deep Learning, Text Classification

Abstract
With the increasing volume of mobile communication, SMS spam has become a prevalent security issue, exposing users to fraudulent messages, scams, and unwanted advertisements. Traditional machine learning approaches such as Naïve Bayes, SVM, and classical neural networks achieve reasonable accuracy but struggle with long-range dependencies, contextual understanding, and evolving spam patterns. This paper introduces a Transformer-based SMS spam detection model designed to capture semantic meaning and contextual relationships within text messages. Unlike recurrent models, Transformers rely on self-attention, enabling the system to focus on significant words within a message and to recognize subtle cues commonly used in modern spam. The proposed architecture incorporates tokenization, position embeddings, multi-head attention, and feed-forward layers for efficient text classification. The model was trained on publicly available SMS datasets containing labeled ham (legitimate) and spam messages. Experimental results show that the Transformer model achieved higher precision, recall, and F1-score than traditional classifiers and LSTM-based systems, and generalized well across different message styles and languages. This work highlights the effectiveness of Transformers in text classification and establishes a scalable, high-accuracy solution for SMS spam detection.
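The architecture outlined in the abstract (tokenization, learned position embeddings, multi-head self-attention, and feed-forward layers feeding a classification head) can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; all hyperparameters (vocabulary size, model width, head count, layer count) are assumptions chosen for brevity.

```python
# Minimal sketch of a Transformer-based SMS spam classifier.
# Hyperparameters are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class SpamTransformer(nn.Module):
    def __init__(self, vocab_size=10000, d_model=64, nhead=4,
                 num_layers=2, max_len=64, num_classes=2):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)   # learned position embeddings
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead,
            dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(d_model, num_classes)  # ham vs. spam logits

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer token indices from a tokenizer
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        x = self.token_emb(token_ids) + self.pos_emb(positions)
        x = self.encoder(x)                    # multi-head attention + feed-forward
        return self.classifier(x.mean(dim=1))  # mean-pool over tokens, then classify

model = SpamTransformer()
logits = model(torch.randint(0, 10000, (8, 32)))  # batch of 8 messages, 32 tokens each
print(logits.shape)  # torch.Size([8, 2])
```

In practice the model would be trained with cross-entropy loss on the labeled ham/spam pairs described above; mean pooling is one common choice of sequence summary (a dedicated classification token is another).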
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.