Achiam J, Adler S, Agarwal S, Ahmad L, Akkaya I, Aleman FL, Almeida D, Altenschmidt J, Altman S, Anadkat S, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Bao H, He K, Yin X, Li X, Bao X, Zhang H, Wu J, Gao Z. BERT-based meta-learning approach with looking back for sentiment analysis of literary book reviews. In: Natural Language Processing and Chinese Computing: 10th CCF International Conference, NLPCC 2021, Qingdao, China, October 13–17, 2021, Proceedings, Part II 10, p. 235–247. Springer, 2021.
Berry DM. Introduction: Understanding the digital humanities. In: Understanding digital humanities, p. 1–20. Springer, 2012.
Brennan C. Digital humanities, digital methods, digital history, and digital outputs: history writing and the digital revolution. History Compass. 2018;16(10):e12492.
Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, et al. Language models are few-shot learners. Adv Neural Inf Process Syst. 2020;33:1877–901.
Cameron L, Maslen R. Metaphor analysis. London: Equinox; 2010. p. 97–115.
Chang Y, Wang X, Wang J, Wu Y, Yang L, Zhu K, Chen H, Yi X, Wang C, Wang Y, et al. A survey on evaluation of large language models. ACM Trans Intell Syst Technol. 2023.
Devlin J, Chang M-W, Lee K, Toutanova K. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Duan S, Wang J, Yang H, Qi S. Disentangling the cultural evolution of ancient China: a digital humanities perspective. Humanit Soc Sci Commun. 2023;10(1):1–15.
Eve MP. The digital humanities and literary studies. Oxford University Press, 2022.
Ge M, Mao R, Cambria E. Explainable metaphor identification inspired by conceptual metaphor theory. Proc AAAI Conf Artif Intell. 2022;36:10681–9.
Ge M, Mao R, Cambria E. Discovering the cognitive bias of toxic language through metaphorical concept mappings. Cogn Comput. 2025;17(1):1–21.
He K, Hong N, Lapalme-Remis S, Lan Y, Huang M, Li C, Yao L. Understanding the patient perspective of epilepsy treatment through text mining of online patient support groups. Epilepsy Behav. 2019;94:65–71.
He K, Huang Y, Mao R, Gong T, Li C, Cambria E. Virtual prompt pre-training for prototype-based few-shot relation extraction. Expert Syst Appl. 2023;213:118927.
He K, Mao R, Gong T, Cambria E, Li C. JCBIE: a joint continual learning neural network for biomedical information extraction. BMC Bioinformatics. 2022;23:549.
He K, Mao R, Gong T, Li C, Cambria E. Meta-based self-training and re-weighting for aspect-based sentiment analysis. IEEE Trans Affect Comput. 2023;14(3):1731–42.
He K, Mao R, Huang Y, Gong T, Li C, Cambria E. Template-free prompting for few-shot named entity recognition via semantic-enhanced contrastive learning. IEEE Trans Neural Netw Learn Syst. 2024;35(12):18357–69.
He K, Mao R, Lin Q, Ruan Y, Lan X, Feng M, Cambria E. A survey of large language models for healthcare: from data, technology, and applications to accountability and ethics. arXiv preprint arXiv:2310.05694, 2023.
Ichien N, Stamenković D, Holyoak KJ. Large language model displays emergent ability to interpret novel literary metaphors. arXiv preprint arXiv:2308.01497, 2023.
Imran MM, Chatterjee P, Damevski K. Shedding light on software engineering-specific metaphors and idioms. In: Proceedings of the IEEE/ACM 46th international conference on software engineering, p. 1–13, 2024.
Joseph R, Liu T, Ng AB, See S, Rai S. NewsMet: a ‘do it all’ dataset of contemporary metaphors in news headlines. In: Findings of the Association for Computational Linguistics: ACL 2023, p. 10090–10104, 2023.
Lakoff G, Johnson M. Metaphors we live by. University of Chicago Press, 2008.
Liu A. The meaning of the digital humanities. PMLA. 2013;128(2):409–23.
Luo E. Utilizing computational linguistics tools for enhanced poetic interpretation. J Student Res. 2023;12(4).
Manjavacas E, Fonteyn L. Adapting vs. pre-training language models for historical languages. J Data Mining Digital Humanities (special issue on digital humanities in languages), 2022.
Mao R, Du K, Ma Y, Zhu L, Cambria E. Discovering the cognition behind language: financial metaphor analysis with MetaPro. In: 2023 IEEE International Conference on Data Mining (ICDM), p. 1211–1216. IEEE, 2023.
Mao R, He K, Ong CB, Liu Q, Cambria E. MetaPro 2.0: computational metaphor processing on the effectiveness of anomalous language modeling. In: Findings of the Association for Computational Linguistics: ACL 2024, p. 9891–9908, Bangkok, Thailand, 2024. Association for Computational Linguistics.
Mao R, Li X. Bridging towers of multi-task learning with a gating mechanism for aspect-based sentiment analysis and sequential metaphor identification. Proc AAAI Conf Artif Intell. 2021;35:13534–42.
Mao R, Li X, Ge M, Cambria E. MetaPro: a computational metaphor processing model for text pre-processing. Inf Fusion. 2022;86–87:30–43.
Mao R, Li X, He K, Ge M, Cambria E. MetaPro Online: a computational metaphor processing online system. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), volume 3, p. 127–135, Toronto, Canada, 2023. Association for Computational Linguistics.
Mao R, Lin Q, Liu Q, Mengaldo G, Cambria E. Understanding public perception towards weather disasters through the lens of metaphor. In: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24), p. 7394–7402, Jeju, South Korea, 2024. International Joint Conferences on Artificial Intelligence Organization.
Mao R, Liu Q, He K, Li W, Cambria E. The biases of pre-trained language models: an empirical study on prompt-based sentiment analysis and emotion detection. IEEE Trans Affect Comput. 2023;14(3):1743–53.
Mao R, Zhang T, Liu Q, Hussain A, Cambria E. Unveiling diplomatic narratives: Analyzing United Nations Security Council debates through metaphorical cognition. In: Proceedings of the Annual Meeting of the Cognitive Science Society (CogSci), volume 46, p. 1709–1716, Rotterdam, the Netherlands, 2024.
Min B, Ross H, Sulem E, Veyseh APB, Nguyen TH, Sainz O, Agirre E, Heintz I, Roth D. Recent advances in natural language processing via large pre-trained language models: a survey. ACM Comput Surv. 2023;56(2):1–40.
Radford A, Narasimhan K, Salimans T, Sutskever I. Improving language understanding by generative pre-training. OpenAI, 2018.
Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I. Language models are unsupervised multitask learners. OpenAI Blog. 2019;1(8):9.
Raffel C, Shazeer N, Roberts A, Lee K, Narang S, Matena M, Zhou Y, Li W, Liu PJ. Exploring the limits of transfer learning with a unified text-to-text transformer. J Mach Learn Res. 2020;21(140):1–67.
Rayson P. Wmatrix: a web-based corpus processing environment. Computing Department, Lancaster University. http://ucrel.lancs.ac.uk/wmatrix, 2009.
Schroeder CT. The digital humanities as cultural capital: implications for biblical and religious studies. J Religion, Media and Digital Culture. 2016;5(1):21–49.
Smith AL, Greaves F, Panch T. Hallucination or confabulation? Neuroanatomy as metaphor in large language models. PLOS Digital Health. 2023;2(11):e0000388.
Steen GJ, Dorst AG, Krennmayr T, Kaal AA, Herrmann JB. A method for linguistic metaphor identification: from MIP to MIPVU. Amsterdam: John Benjamins, 2010.
Suissa O, Elmalech A, Zhitomirsky-Geffet M. Text analysis using deep neural networks in digital humanities and information science. J Assoc Inf Sci Technol. 2022;73(2):268–87.
Van Den Berg H, Betti A, Castermans T, Koopman R, Speckmann B, Verbeek KAB, Van der Werf T, Wang S, Westenberg MA. A philosophical perspective on visualization for digital humanities. 2018.
Yuan A, Coenen A, Reif E, Ippolito D. Wordcraft: story writing with large language models. In: 27th International conference on intelligent user interfaces, p. 841–852, 2022.
Zhang X, Mao R, He K, Cambria E. Neurosymbolic sentiment analysis with dynamic word sense disambiguation. In: Findings of the association for computational linguistics: EMNLP 2023, p. 8772–8783, Singapore, 2023.
Ziems C, Held W, Shaikh O, Chen J, Zhang Z, Yang D. Can large language models transform computational social science? Comput Linguist. 2024:1–55.