Abstract
Literature mirrors a nation's ideology and language system; British and American literary works (BALW) differ greatly from Chinese literature because of distinct historical, linguistic, and cultural backgrounds. As a result, Chinese-language systems struggle to retrieve and classify BALW accurately. To address this, we propose a BALW appreciation model. Built on a base model akin to DocBERT, it uses the pre-trained BERT model in its text-representation module to capture contextual semantics, and a heterogeneous graph attention network (HAN) with word, feature-word, and label nodes to extract local semantics from the text. The two feature sets are then fused for multi-label classification. Experiments on a curated dataset show that our model outperforms the base model, reducing Hamming Loss by 0.009 and improving Macro-F1 and Micro-F1 by 1.9% and 5.38%, respectively. These gains support intelligent classification and retrieval of BALW, benefiting literary appreciation and cross-cultural literary exchange.
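The pipeline described above (contextual encoder, graph attention over word/feature-word/label nodes, fused multi-label head) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding layer stands in for the BERT encoder, the single-head attention layer stands in for the full HAN, and all class names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class GraphAttentionLayer(nn.Module):
    """Single-head attention over a heterogeneous node graph
    (word, feature-word, and label nodes share one feature space here)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, h, adj):
        # h: (N, dim) node features; adj: (N, N) 0/1 adjacency mask
        z = self.proj(h)
        n = z.size(0)
        # Score every ordered node pair, then mask non-edges before softmax.
        pairs = torch.cat(
            [z.unsqueeze(1).expand(n, n, -1), z.unsqueeze(0).expand(n, n, -1)],
            dim=-1,
        )
        e = torch.tanh(self.attn(pairs)).squeeze(-1)          # (N, N) scores
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=-1)                      # attention weights
        return alpha @ z                                      # aggregated nodes

class BALWClassifier(nn.Module):
    """Hypothetical sketch: fuse global contextual and local graph semantics."""
    def __init__(self, vocab, dim, num_labels):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)   # stand-in for the BERT encoder
        self.gat = GraphAttentionLayer(dim)
        self.head = nn.Linear(2 * dim, num_labels)

    def forward(self, token_ids, node_feats, adj):
        ctx = self.embed(token_ids).mean(dim=0)        # global contextual vector
        local = self.gat(node_feats, adj).mean(dim=0)  # local graph semantics
        logits = self.head(torch.cat([ctx, local]))
        return torch.sigmoid(logits)   # independent per-label probabilities
```

A sigmoid head (rather than softmax) is the standard choice for multi-label classification, since each label is predicted independently; thresholding the returned probabilities yields the final label set.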
