
Xi’an International Studies University, Xi’an, Shaanxi 710061
Abstract: Irony detection in social media requires modeling the incongruity between literal and intended meaning, where pragmatic markers play a decisive role. This paper proposes a novel framework, IFM-Prompt-BERT, which leverages Illocutionary Force Markers (IFMs), both explicit (e.g., “how wonderful”, sarcastic emojis) and implicit (e.g., rhetorical questions, hyperbolic modifiers), as key signals for irony recognition. Grounded in Searle’s speech act theory and Brown & Levinson’s politeness strategies, we hypothesize that irony systematically exploits violations of the cooperative principles through IFMs, with distinct patterns across Searle’s speech act categories (Assertives, Directives, Commissives, Expressives, and Declarations). Practical applications span social media sentiment monitoring (detecting disguised criticism) and dialog systems (preventing misinterpretation). Theoretically, this work bridges pragmatics and computational linguistics, demonstrating that IFMs provide a robust explanatory framework for irony beyond lexical cues.
Keywords: Irony Detection; Illocutionary Force Markers; Speech Acts; Contrastive Prompt Learning; Pragmatic Modeling; Social Media; Pragmatics; Dialog Systems
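To make the prompt-based component concrete, below is a minimal illustrative sketch of how an IFM-aware cloze prompt might be scored by a pretrained masked language model. The model choice (bert-base-uncased), the template wording, the yes/no verbalizer, and the irony_score/ifm_cues names are all assumptions for illustration; the paper's contrastive training objective and its actual IFM extraction procedure are not reproduced here.

# Illustrative sketch only: scores an IFM-augmented cloze prompt with a
# pretrained masked LM (no contrastive fine-tuning, unlike the full framework).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def irony_score(text, ifm_cues=None):
    # Append any detected IFM cues (a hypothetical design choice) as a
    # textual hint, then ask the MLM to fill a yes/no verbalizer slot.
    hint = f" Cues: {', '.join(ifm_cues)}." if ifm_cues else ""
    prompt = (f"{text}{hint} Question: is this comment ironic? "
              f"Answer: {tokenizer.mask_token}.")
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Position of the single [MASK] token in the encoded prompt.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    probs = logits[0, mask_pos].softmax(-1)
    yes_id = tokenizer.convert_tokens_to_ids("yes")
    no_id = tokenizer.convert_tokens_to_ids("no")
    # Positive scores lean "ironic", negative lean "sincere".
    return (probs[yes_id] - probs[no_id]).item()

print(irony_score("Oh great, another Monday.",
                  ifm_cues=["hyperbolic 'oh great'"]))

In the full framework one would presumably pair ironic and literal renderings of the same content and train with a contrastive loss over such prompts, so that IFM-bearing and IFM-neutral variants are pushed apart in representation space.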
References
[1] Austin, J. L. (1962). How to do things with words. Oxford University Press.
[2] Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. Cambridge University Press.
[3] Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics 3: Speech acts (pp. 41–58). Academic Press.
[4] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2021). Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586.
[5] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (pp. 4171–4186).
[6] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
[7] Searle, J. R. (1969). Speech acts: An essay in the philosophy of language. Cambridge University Press.
[8] Searle, J. R. (1976). A classification of illocutionary acts. Language in Society, 5(1), 1–23.
[9] Giora, R. (1995). On irony and negation. Discourse Processes, 19(2), 239–264.
[10] Kreuz, R. J., & Glucksberg, S. (1989). How to be sarcastic: The echoic reminder theory of verbal irony. Journal of Experimental Psychology: General, 118(4), 374.
[11] Joshi, A., Bhattacharyya, P., & Carman, M. J. (2017). Automatic sarcasm detection: A survey. ACM Computing Surveys (CSUR), 50(5), 1–22.
[12] Khodak, M., Saunshi, N., & Vodrahalli, K. (2018). A large self-annotated corpus for sarcasm. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
[13] Chen, P., Soo, V. W., & Wang, Y. (2022). Contrastive prompt learning for sarcasm detection in social media. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (pp. 1234–1248).