NEWS HEADLINES SARCASM DETECTION USING NEURAL NETWORKS

Authors

  • Aleksandar Vujinović

DOI:

https://doi.org/10.24867/19BE16Vujinovic

Keywords:

sarcasm detection, news articles, deep learning, neural networks

Abstract

Sarcasm is the use of a remark that means the opposite of what is literally said, made to criticize something in a humorous way. Understanding sarcasm is essential to avoiding misinterpretation. Detecting sarcasm in written text is particularly difficult because all non-verbal cues are missing. In this paper, I present sarcasm detection in news headlines using (1) embedding-based neural networks and (2) transformer-based models. An LSTM (Long Short-Term Memory) network with convolutional layers achieved the best performance among the embedding-based models, reaching 86% accuracy, while RoBERTa reached 94% accuracy among the transformer-based models. All transformer-based models outperformed the embedding-based models.
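To make the embedding-based approach concrete, the following is a minimal sketch of an LSTM combined with convolutional layers for binary headline classification, written with Keras (Keras and Keras Tuner appear in the references). The vocabulary size, sequence length, layer sizes, and dummy data are illustrative assumptions, not the paper's tuned configuration.

    # Minimal CNN-LSTM sketch in Keras; all hyperparameters are
    # illustrative assumptions, not the paper's tuned values.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    VOCAB_SIZE = 20000  # assumed vocabulary size
    MAX_LEN = 30        # assumed maximum headline length in tokens

    model = keras.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB_SIZE, 128),        # learned word embeddings
        layers.Conv1D(64, 3, activation="relu"),  # convolution over n-grams
        layers.MaxPooling1D(2),
        layers.LSTM(64),                          # sequence modelling
        layers.Dense(1, activation="sigmoid"),    # sarcastic / not sarcastic
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Random dummy data, only to show the expected input shapes; the
    # paper trains on a news headlines dataset.
    x = np.random.randint(0, VOCAB_SIZE, size=(8, MAX_LEN))
    y = np.random.randint(0, 2, size=(8, 1))
    model.fit(x, y, epochs=1, verbose=0)

The design intuition behind such hybrids is that the convolutional layer extracts local n-gram features from the embeddings, which the LSTM then combines across the whole headline.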
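On the transformer side, fine-tuning RoBERTa for binary sarcasm classification can be sketched with the Hugging Face transformers library (huggingface.co is cited in the references). The example headlines, labels, learning rate, and sequence length below are illustrative assumptions; the paper's exact fine-tuning setup is not reproduced here.

    # Minimal RoBERTa fine-tuning step with Hugging Face transformers;
    # the data and hyperparameters are illustrative assumptions.
    import torch
    from transformers import RobertaTokenizer, RobertaForSequenceClassification

    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
    model = RobertaForSequenceClassification.from_pretrained(
        "roberta-base", num_labels=2)  # sarcastic vs. non-sarcastic

    # Hypothetical example headlines, for illustration only.
    headlines = [
        "local man shocked to discover consequences of his own actions",
        "city council approves new budget for road repairs",
    ]
    labels = torch.tensor([1, 0])  # 1 = sarcastic, 0 = not sarcastic

    batch = tokenizer(headlines, padding=True, truncation=True,
                      max_length=64, return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    model.train()
    outputs = model(**batch, labels=labels)  # loss computed internally
    outputs.loss.backward()
    optimizer.step()
    print(f"loss: {outputs.loss.item():.4f}")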

References

[1] Devlin, J., Chang, M.W., Lee, K. and Toutanova, K., 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
[2] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L. and Stoyanov, V., 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
[3] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D. and Sutskever, I., 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8), p.9.
[4] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W. and Liu, P.J., 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
[5] Misra, R. and Arora, P., 2019. Sarcasm detection using hybrid neural network. arXiv preprint arXiv:1908.07414.
[6] Felbo, B., Mislove, A., Søgaard, A., Rahwan, I. and Lehmann, S., 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. arXiv preprint arXiv:1708.00524.
[7] https://twitter.com
[8] Akula, R. and Garibay, I., 2021. Interpretable Multi-Head Self-Attention Architecture for Sarcasm Detection in Social Media. Entropy, 23(4), p.394.
[9] https://reddit.com
[10] https://keras.io/
[11] https://keras.io/keras_tuner/
[12] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł. and Polosukhin, I., 2017. Attention is all you need. Advances in neural information processing systems, 30.
[13] https://huggingface.co/
[14] Sun, C., Qiu, X., Xu, Y. and Huang, X., 2019, October. How to fine-tune BERT for text classification? In China national conference on Chinese computational linguistics (pp. 194-206). Springer, Cham.
[15] Mandal, P.K. and Mahto, R., 2019. Deep CNN-LSTM with word embeddings for news headline sarcasm detection. In 16th International Conference on Information Technology-New Generations (ITNG 2019) (pp. 495-498). Springer, Cham.

Published

2022-09-07

Section

Electrotechnical and Computer Engineering