Abstract
To address label dependency, class imbalance, long-text analysis, and multi-label classification in legal contexts, this paper presents the BERT-CNN framework. The model was developed and tuned on real court cases to identify the best hyperparameter configuration for the BERT-CNN architecture. For a thorough assessment, the collected dataset is split into three parts: training, validation, and testing sets, with performance measured on the testing set. On this set, the BERT-CNN model achieves a precision of 0.83, recall of 0.82, F1 score of 0.83, and AUC-PR of 0.88. These empirical results demonstrate the framework's suitability for classifying legal texts, showing strong robustness and accuracy on multi-label classification tasks involving court cases.
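The precision, recall, and F1 figures reported above are standard multi-label metrics. As a point of reference, a minimal micro-averaged computation in plain Python is sketched below; this is a generic illustration, not the authors' evaluation code, and the label vectors are invented for the example.

```python
def micro_prf(y_true, y_pred):
    """Micro-averaged precision, recall, and F1 for multi-label
    predictions, each given as a list of 0/1 indicator vectors
    (one vector of label flags per document)."""
    tp = fp = fn = 0
    for t_row, p_row in zip(y_true, y_pred):
        for t, p in zip(t_row, p_row):
            tp += 1 if t and p else 0          # label present and predicted
            fp += 1 if (not t) and p else 0    # predicted but absent
            fn += 1 if t and (not p) else 0    # present but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative example: 3 cases, 4 hypothetical legal-topic labels.
y_true = [[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]]
y_pred = [[1, 0, 0, 0], [0, 1, 0, 1], [1, 1, 0, 1]]
p, r, f = micro_prf(y_true, y_pred)
```

Micro-averaging pools true/false positives across all labels before computing the ratios, which is a common choice when label frequencies are imbalanced, as the abstract notes they are here.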
