Text classification of COVID-19 reviews based on pre-training language model
2nd IEEE International Conference on Power, Electronics and Computer Applications, ICPECA 2022 ; : 1179-1183, 2022.
Article in English | Scopus | ID: covidwho-1788729
ABSTRACT
This experiment analyzed 100,000 epidemic-related microblog posts officially provided by the CCF. Enhanced Representation through Knowledge Integration (ERNIE) was used to improve the pre-trained model's ability to extract Chinese semantic information. A deep pyramid convolutional neural network (DPCNN) was then merged with ERNIE to reduce computing cost and to enhance feature extraction for long-distance dependencies in text. In a comparative test of six models on the three-class sentiment task, this model was the most effective, improving accuracy over the BERT pre-trained model by 7%. © 2022 IEEE.
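The abstract describes feeding ERNIE encoder outputs into a DPCNN for three-class sentiment classification. A minimal PyTorch sketch of a DPCNN-style head is below; the hidden size, channel count, and the random tensor standing in for ERNIE token embeddings are all assumptions for illustration, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class DPCNNHead(nn.Module):
    """DPCNN-style classification head, sketched after the paper's
    ERNIE + DPCNN setup. Channel count (250) and kernel sizes follow the
    original DPCNN paper's common defaults; the ERNIE encoder itself is
    stubbed out with random embeddings in the usage example."""
    def __init__(self, hidden_size=768, channels=250, num_classes=3):
        super().__init__()
        # Region embedding: project encoder outputs into conv channels.
        self.region = nn.Conv1d(hidden_size, channels, kernel_size=3, padding=1)
        # Two-layer pre-activation conv block, reused at every pyramid level.
        self.conv = nn.Sequential(
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )
        # Stride-2 pooling halves the sequence length at each level.
        self.pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, x):                     # x: (batch, seq_len, hidden)
        x = self.region(x.transpose(1, 2))    # (batch, channels, seq_len)
        x = x + self.conv(x)                  # residual conv block
        while x.size(2) > 1:                  # the "pyramid": halve, conv, add
            x = self.pool(x)
            x = x + self.conv(x)
        return self.fc(x.squeeze(2))          # (batch, num_classes)

# Toy usage: random stand-in for ERNIE embeddings of 128-token microblogs.
logits = DPCNNHead()(torch.randn(4, 128, 768))
print(logits.shape)
```

The repeated downsampling is what makes DPCNN cheap on long text: each level covers twice the token span of the previous one at roughly constant cost, which matches the abstract's claims about saving computation and capturing long-distance features.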
Full text: Available Collection: Databases of international organizations Database: Scopus Language: English Journal: 2nd IEEE International Conference on Power, Electronics and Computer Applications, ICPECA 2022 Year: 2022 Document Type: Article
