Results 1 - 20 of 20
1.
Sci Rep ; 14(1): 14646, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38918461

ABSTRACT

Aspect-Based Sentiment Analysis (ABSA) represents a fine-grained approach to sentiment analysis, aiming to pinpoint and evaluate sentiments associated with specific aspects within a text. ABSA encompasses a set of sub-tasks that together facilitate a detailed understanding of the multifaceted sentiment expressions. These tasks include aspect and opinion terms extraction (ATE and OTE), classification of sentiment at the aspect level (ALSC), the coupling of aspect and opinion terms extraction (AOE and AOPE), and the challenging integration of these elements into sentiment triplets (ASTE). Our research introduces a comprehensive framework capable of addressing the entire gamut of ABSA sub-tasks. This framework leverages the contextual strengths of BERT for nuanced language comprehension and employs a biaffine attention mechanism for the precise delineation of word relationships. To address the relational complexity inherent in ABSA, we incorporate a Multi-Layered Enhanced Graph Convolutional Network (MLEGCN) that utilizes advanced linguistic features to refine the model's interpretive capabilities. We also introduce a systematic refinement approach within MLEGCN to enhance word-pair representations, which leverages the implicit outcomes of aspect and opinion extractions to ascertain the compatibility of word pairs. We conduct extensive experiments on benchmark datasets, where our model significantly outperforms existing approaches. Our contributions establish a new paradigm for sentiment analysis, offering a robust tool for the nuanced extraction of sentiment information across diverse text corpora. This work is anticipated to have significant implications for the advancement of sentiment analysis technology, providing deeper insights into consumer preferences and opinions for a wide range of applications.
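As a rough illustration of the biaffine attention idea described above, the following sketch scores every word pair over BERT-style token representations; the class name, dimensions, and scoring details are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    """Scores every (head, dependent) word pair with one bilinear form per label."""
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.bilinear = nn.Parameter(torch.empty(num_labels, hidden_dim, hidden_dim))
        self.linear = nn.Linear(2 * hidden_dim, num_labels)
        nn.init.xavier_uniform_(self.bilinear)

    def forward(self, head: torch.Tensor, dep: torch.Tensor) -> torch.Tensor:
        # head, dep: (batch, seq_len, hidden_dim), e.g. BERT token states
        # Bilinear term: scores[b, i, j, l] = head[b, i] @ W_l @ dep[b, j]
        bilinear = torch.einsum("bih,lhk,bjk->bijl", head, self.bilinear, dep)
        n = head.size(1)
        pair = torch.cat([head.unsqueeze(2).expand(-1, -1, n, -1),
                          dep.unsqueeze(1).expand(-1, n, -1, -1)], dim=-1)
        return bilinear + self.linear(pair)  # (batch, seq_len, seq_len, num_labels)

scores = BiaffineScorer(hidden_dim=768, num_labels=4)(torch.randn(2, 10, 768),
                                                      torch.randn(2, 10, 768))
print(scores.shape)  # torch.Size([2, 10, 10, 4])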

2.
Comput Biol Med ; 159: 106960, 2023 06.
Article in English | MEDLINE | ID: mdl-37099973

ABSTRACT

Medical image segmentation enables doctors to observe lesion regions better and make accurate diagnostic decisions. Single-branch models such as U-Net have achieved great progress in this field. However, the complementary local and global pathological semantics of heterogeneous neural networks have not yet been fully explored. The class-imbalance problem remains a serious issue. To alleviate these two problems, we propose a novel model called BCU-Net, which leverages the advantages of ConvNeXt in global interaction and U-Net in local processing. We propose a new multilabel recall loss (MRL) module to relieve the class imbalance problem and facilitate deep-level fusion of local and global pathological semantics between the two heterogeneous branches. Extensive experiments were conducted on six medical image datasets including retinal vessel and polyp images. The qualitative and quantitative results demonstrate the superiority and generalizability of BCU-Net. In particular, BCU-Net can handle diverse medical images with diverse resolutions. It has a flexible structure owing to its plug-and-play characteristics, which promotes its practicality.
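To illustrate the recall-oriented loss idea for class imbalance, here is a minimal sketch of a soft per-class recall loss in PyTorch; it is an assumed stand-in, not the paper's exact multilabel recall loss (MRL).

import torch
import torch.nn.functional as F

def soft_recall_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # logits: (batch, num_classes, H, W); target: (batch, H, W) integer class labels
    num_classes = logits.size(1)
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    tp = (probs * one_hot).sum(dim=(0, 2, 3))          # soft true positives per class
    fn = ((1.0 - probs) * one_hot).sum(dim=(0, 2, 3))  # soft false negatives per class
    recall = (tp + eps) / (tp + fn + eps)
    return 1.0 - recall.mean()  # minimizing this pushes up the average per-class recall

loss = soft_recall_loss(torch.randn(2, 2, 64, 64), torch.randint(0, 2, (2, 64, 64)))
print(float(loss))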


Subject(s)
Neural Networks, Computer , Retinal Vessels , Semantics , Image Processing, Computer-Assisted
3.
IEEE Trans Neural Netw Learn Syst ; 34(9): 5544-5556, 2023 Sep.
Article in English | MEDLINE | ID: mdl-34860655

ABSTRACT

Aspect-based sentiment triplet extraction (ASTE) aims at recognizing the joint triplets from texts, i.e., aspect terms, opinion expressions, and correlated sentiment polarities. As a newly proposed task, ASTE depicts the complete sentiment picture from different perspectives to better facilitate real-world applications. Unfortunately, several major challenges, such as the overlapping issue and long-distance dependency, have not been addressed effectively by the existing ASTE methods, which limits the performance of the task. In this article, we present an innovative encoder-decoder framework for end-to-end ASTE. Specifically, the ASTE task is first modeled as an unordered triplet set prediction problem, which is handled with a nonautoregressive decoding paradigm equipped with a pointer network. Second, a novel high-order aggregation mechanism is proposed for fully integrating the underlying interactions between the overlapping structure of aspect and opinion terms. Third, a bipartite matching loss is introduced for facilitating the training of our nonautoregressive system. Experimental results on benchmark datasets show that our proposed framework significantly outperforms the state-of-the-art methods. Further analysis demonstrates the advantages of the proposed framework in handling the overlapping issue, relieving long-distance dependencies, and improving decoding efficiency.
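The bipartite matching loss relies on an optimal assignment between predicted and gold triplets; a minimal sketch using the Hungarian algorithm from SciPy follows, with a placeholder cost matrix (the paper's cost combines span and sentiment terms).

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_triplets(cost: np.ndarray) -> list[tuple[int, int]]:
    """cost[i, j]: cost of assigning prediction i to gold triplet j (lower is better)."""
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

cost = np.array([[0.2, 1.5, 0.9],
                 [1.1, 0.3, 2.0],
                 [0.8, 1.2, 0.1]])
print(match_triplets(cost))  # [(0, 0), (1, 1), (2, 2)]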

4.
PLoS One ; 17(9): e0272353, 2022.
Article in English | MEDLINE | ID: mdl-36166421

ABSTRACT

The task of event extraction consists of three subtasks, namely entity recognition, trigger identification, and argument role classification. Recent work tackles these subtasks jointly with multi-task learning for better extraction performance. Despite being effective, existing attempts typically treat labels of event subtasks as uninformative and independent one-hot vectors, ignoring the potential loss of useful label information, thereby making it difficult for these models to incorporate interactive features on the label level. In this paper, we propose a joint label space framework to improve Chinese event extraction. Specifically, the model converts labels of all subtasks into a dense matrix, giving each Chinese character a shared label distribution via an incrementally refined attention mechanism. Then the learned label embeddings are also used as the weight of the output layer for each subtask, hence adjusted along with model training. In addition, we incorporate the word lexicon into the character representation in a soft probabilistic manner, hence alleviating the impact of word segmentation errors. Extensive experiments on Chinese and English benchmarks demonstrate that our model outperforms state-of-the-art methods.
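A hedged sketch of one ingredient described above: reusing a shared label-embedding matrix as the output-layer weights, so the label vectors are adjusted along with training. Names and sizes are assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class TiedLabelOutput(nn.Module):
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, hidden_dim)  # shared, dense label space

    def forward(self, char_states: torch.Tensor) -> torch.Tensor:
        # char_states: (batch, seq_len, hidden_dim)
        # Logits are similarities to the label embeddings, so the label vectors
        # are updated together with the rest of the model.
        return char_states @ self.label_emb.weight.t()  # (batch, seq_len, num_labels)

out = TiedLabelOutput(hidden_dim=256, num_labels=9)(torch.randn(2, 20, 256))
print(out.shape)  # torch.Size([2, 20, 9])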


Subject(s)
Machine Learning , Space Simulation , China
5.
Comput Biol Med ; 150: 106198, 2022 11.
Article in English | MEDLINE | ID: mdl-37859292

ABSTRACT

Convolutional neural networks (CNNs), especially numerous U-shaped models, have achieved great progress in retinal vessel segmentation. However, a great quantity of global information in fundus images has not been fully explored, and the class imbalance between background and blood vessels remains serious. To alleviate these issues, we design a novel multi-layer multi-scale dilated convolution network (MMDC-Net) based on U-Net. We propose an MMDC module to capture sufficient global information under diverse receptive fields through a cascaded mode. Then, we place a new multi-layer fusion (MLF) module behind the decoder, which can not only fuse complementary features but also filter noisy information. This enables MMDC-Net to capture blood vessel details after continuous up-sampling. Finally, we employ a recall loss to resolve the class imbalance problem. Extensive experiments have been conducted on diverse fundus color image datasets, including STARE, CHASEDB1, DRIVE, and HRF. HRF has a large resolution of 3504 × 2336, whereas the others have a small resolution of slightly more than 512 × 512. Qualitative and quantitative results verify the superiority of MMDC-Net. Notably, our model attains satisfactory accuracy and sensitivity, sharpening key blood vessel details. In addition, a large number of further validations and discussions demonstrate the effectiveness and generalization of the proposed MMDC-Net.
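As an illustration of cascaded multi-scale dilated convolutions, the following sketch stacks 3x3 convolutions with increasing dilation rates and fuses their outputs; channel counts and rates are assumptions, not MMDC-Net's exact configuration.

import torch
import torch.nn as nn

class DilatedMultiScaleBlock(nn.Module):
    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates)
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outs = []
        for conv in self.branches:
            x = torch.relu(conv(x))  # cascaded: each dilation refines the previous output
            outs.append(x)
        # Concatenating the intermediate outputs keeps information from every scale.
        return self.fuse(torch.cat(outs, dim=1))

y = DilatedMultiScaleBlock(32)(torch.randn(1, 32, 64, 64))
print(y.shape)  # torch.Size([1, 32, 64, 64])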


Subject(s)
Neural Networks, Computer , Retinal Vessels , Retinal Vessels/diagnostic imaging , Fundus Oculi , Image Processing, Computer-Assisted , Algorithms
6.
IEEE Trans Neural Netw Learn Syst ; 33(8): 3612-3621, 2022 08.
Article in English | MEDLINE | ID: mdl-33566767

ABSTRACT

Attention has been shown to be highly effective for modeling sequences, capturing the more informative parts in learning a deep representation. However, recent studies show that the attention values do not always coincide with intuition in tasks such as machine translation and sentiment classification. In this study, we consider using deep reinforcement learning to automatically optimize the attention distribution during the minimization of end-task training losses. With more sufficient environment states, iterative actions are taken to adjust attention weights so that more informative words automatically receive more attention. Results on different tasks and different attention networks demonstrate that our model is highly effective in improving end-task performance, yielding more reasonable attention distributions. More in-depth analysis further reveals that our retrofitting method can help bring explainability to baseline attention.


Subject(s)
Neural Networks, Computer , Reinforcement, Psychology , Learning , Machine Learning
7.
PLoS One ; 16(4): e0250519, 2021.
Article in English | MEDLINE | ID: mdl-33857250

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0235796.].

8.
PLoS One ; 16(3): e0247704, 2021.
Article in English | MEDLINE | ID: mdl-33647054

ABSTRACT

Implicit sentiment analysis is a challenging task because the sentiment of a text is expressed in a connotative manner. To tackle this problem, we propose to use textual events as a knowledge source to enrich network representations. To consider task interactions, we present a novel lightweight joint learning paradigm that can pass task-related messages between tasks during training iterations. This is distinct from previous methods that involve multi-task learning by simple parameter sharing. Moreover, human-annotated corpora with implicit sentiment labels and event labels are scarce, which hinders practical applications of deep neural models. Therefore, we further investigate a back-translation approach to expand training instances. Experimental results on a public benchmark demonstrate the effectiveness of both the proposed multi-task architecture and the data augmentation strategy.


Subject(s)
Data Mining , Natural Language Processing , Neural Networks, Computer , Humans , Learning , Multitasking Behavior
9.
Bioinformatics ; 37(11): 1581-1589, 2021 07 12.
Article in English | MEDLINE | ID: mdl-33245108

ABSTRACT

MOTIVATION: Entity relation extraction is one of the fundamental tasks in biomedical text mining, which is usually solved by models from natural language processing. Compared with traditional pipeline methods, joint methods can avoid the error propagation from entity to relation, giving better performances. However, the existing joint models are built upon a sequential scheme and fail to detect overlapping entities and relations, which are ubiquitous in biomedical texts. The main reason is that sequential models have relatively weaker power in capturing long-range dependencies, which results in lower performance in encoding longer sentences. In this article, we propose a novel span-graph neural model for jointly extracting overlapping entity relations in biomedical texts. Our model treats the task as relation triplet prediction and builds the entity graph by enumerating possible candidate entity spans. The proposed model captures the relationship between the correlated entities via a span scorer and a relation scorer, respectively, and finally outputs all valid relational triplets. RESULTS: Experimental results on two biomedical entity relation extraction tasks, including drug-drug interaction detection and protein-protein interaction detection, show that the proposed method outperforms previous models by a substantial margin, demonstrating the effectiveness of the span-graph-based method for overlapping relation extraction in biomedical texts. Further in-depth analysis proves that our model is more effective than sequential models in capturing the long-range dependencies for relation extraction. AVAILABILITY AND IMPLEMENTATION: Related codes are made publicly available at http://github.com/Baxelyne/SpanBioER.
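The span-based formulation starts from enumerating candidate entity spans; a minimal sketch of that step is shown below (the maximum span width is an assumed hyperparameter).

def enumerate_spans(num_tokens: int, max_width: int = 8) -> list[tuple[int, int]]:
    """Return all (start, end) index pairs, end-inclusive, of width <= max_width."""
    return [(i, j)
            for i in range(num_tokens)
            for j in range(i, min(i + max_width, num_tokens))]

spans = enumerate_spans(6, max_width=3)
print(len(spans), spans[:5])  # 15 [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2)]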


Subject(s)
Data Mining , Natural Language Processing , Drug Interactions , Language , Research Design
10.
Brief Bioinform ; 22(3)2021 05 20.
Article in English | MEDLINE | ID: mdl-32591802

ABSTRACT

Biomedical information extraction (BioIE) is an important task. The aim is to analyze biomedical texts and extract structured information such as named entities and the semantic relations between them. In recent years, pre-trained language models have largely improved the performance of BioIE. However, they neglect to incorporate external structural knowledge, which can provide rich factual information to support the underlying understanding and reasoning for biomedical information extraction. In this paper, we first evaluate current extraction methods, including vanilla neural networks, general language models and pre-trained contextualized language models, on biomedical information extraction tasks, including named entity recognition, relation extraction and event extraction. We then propose to enrich a contextualized language model by integrating large-scale biomedical knowledge graphs (namely, BioKGLM). In order to effectively encode knowledge, we explore a three-stage training procedure and introduce different fusion strategies to facilitate knowledge injection. Experimental results on multiple tasks show that BioKGLM consistently outperforms state-of-the-art extraction models. A further analysis proves that BioKGLM can capture the underlying relations between biomedical knowledge concepts, which are crucial for BioIE.


Subject(s)
Data Mining , Natural Language Processing , Neural Networks, Computer , Semantics
11.
PLoS One ; 15(7): e0235796, 2020.
Article in English | MEDLINE | ID: mdl-32667950

ABSTRACT

Chinese information extraction is traditionally performed as a pipeline of word segmentation, entity recognition, relation extraction and event detection. This pipelined approach suffers from two limitations: 1) it is prone to propagating errors from upstream tasks to subsequent applications; 2) the mutual benefits of cross-task dependencies are hard to introduce in non-overlapping models. To address these two challenges, we propose a novel transition-based model that jointly performs entity recognition, relation extraction and event detection as a single task. In addition, we incorporate subword-level information into the character sequence with the use of a hybrid lattice structure, removing the reliance on external word tokenizers. Results on standard ACE benchmarks show the benefits of the proposed joint model and lattice network, which gives the best result in the literature.


Subject(s)
Data Mining/methods , Language , Algorithms , China , Humans , Neural Networks, Computer
12.
PLoS One ; 15(5): e0232547, 2020.
Article in English | MEDLINE | ID: mdl-32413094

ABSTRACT

Scientific information extraction is a crucial step for understanding scientific publications. In this paper, we focus on scientific keyphrase extraction, which aims to identify keyphrases from scientific articles and classify them into predefined categories. We present a neural network based approach for this task, which employs a bidirectional long short-term memory (LSTM) network to represent the sentences in the article. On top of the bidirectional LSTM layer in our neural model, a conditional random field (CRF) is used to predict the label sequence for the whole sentence. Considering the expense of annotated data for supervised learning methods, we introduce a self-training method into our neural model to leverage unlabeled articles. Experimental results on the ScienceIE corpus and the ACL keyphrase corpus show that our neural model achieves promising performance without any hand-designed features or external knowledge resources. Furthermore, it efficiently incorporates the unlabeled data and achieves competitive performance compared with previous state-of-the-art systems.
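A hedged sketch of the self-training loop described above: train on labeled data, pseudo-label the unlabeled pool, keep only confident predictions, and retrain. The callables train and predict_with_confidence are hypothetical placeholders, not an existing API.

def self_train(labeled, unlabeled, train, predict_with_confidence,
               threshold: float = 0.9, rounds: int = 3):
    """train(data) -> model; predict_with_confidence(model, pool) -> iterable of (x, y, conf)."""
    data = list(labeled)
    model = train(data)
    for _ in range(rounds):
        confident = [(x, y) for x, y, conf in predict_with_confidence(model, unlabeled)
                     if conf >= threshold]
        if not confident:
            break
        data.extend(confident)       # add confidently pseudo-labeled articles
        model = train(data)          # retrain on the enlarged training set
    return model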


Subject(s)
Deep Learning , Information Storage and Retrieval/methods , Neural Networks, Computer , Models, Statistical , Natural Language Processing , Publications
13.
Neural Netw ; 117: 295-306, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31207482

ABSTRACT

Extracting knowledge from time series provides important tools for many real applications. However, many challenging problems remain open due to the stochastic nature of large amounts of time series data. Considering this scenario, new data mining and machine learning techniques have been continuously developed. In this paper, we study time series based on their topological features, observed on a complex network generated from the time series data. Specifically, we present a trend detection algorithm for stochastic time series based on community detection and network metrics. The proposed model presents some advantages over traditional time series analysis, such as an adaptive number of classes with measurable strength and better noise absorption. The appealing feature of this work is to pave a new way to represent time series trends by communities of complex networks in topological space instead of physical space (spatial-temporal space or frequency spectrum) as traditional techniques do. Experimental results on artificial and real datasets show that the proposed method is able to classify the time series into local and global patterns. As a consequence, it improves the predictability of time series.
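One common way to map a time series onto a complex network is a natural visibility graph; the sketch below builds such a graph and applies modularity-based community detection with networkx. The construction and metrics here are illustrative and may differ from the paper's.

import networkx as nx

def visibility_graph(series):
    g = nx.Graph()
    g.add_nodes_from(range(len(series)))
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            # i and j are linked if every point between them lies strictly below
            # the straight line joining (i, series[i]) and (j, series[j]).
            if all(series[k] < series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                   for k in range(i + 1, j)):
                g.add_edge(i, j)
    return g

series = [1.0, 3.0, 2.0, 4.0, 1.5, 3.5, 2.5]
g = visibility_graph(series)
communities = nx.algorithms.community.greedy_modularity_communities(g)
print([sorted(c) for c in communities])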


Subject(s)
Forecasting/methods , Machine Learning , Data Mining
14.
BMC Med Inform Decis Mak ; 19(Suppl 2): 51, 2019 04 04.
Article in English | MEDLINE | ID: mdl-30961614

ABSTRACT

BACKGROUND: Disease prediction based on Electronic Health Records (EHR) has become a hot research topic in the biomedical community. Existing work mainly focuses on the prediction of one target disease, and little work has been proposed for predicting multiple associated diseases. Meanwhile, an EHR usually contains two main types of information: the textual description and physical indicators. However, existing work largely adopts statistical models with discrete features from numerical physical indicators in EHR, and fails to make full use of the textual description information. METHODS: In this paper, we study the problem of kidney disease prediction in hypertension patients by using a neural network model. Specifically, we first model the prediction problem as a binary classification task. Then we propose a hybrid neural network which incorporates Bidirectional Long Short-Term Memory (BiLSTM) and Autoencoder networks to fully capture the information in EHR. RESULTS: We construct a dataset based on a large number of raw EHR data. The dataset consists of a total of 35,332 records from hypertension patients. Experimental results show that the proposed neural model achieves 89.7% accuracy for the task. CONCLUSIONS: A hybrid neural network model was presented. Based on the constructed dataset, the comparison results of different models demonstrated the effectiveness of the proposed neural model. The proposed model outperformed traditional statistical models with discrete features and neural baseline systems.
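A hedged sketch of a hybrid model in the spirit described above: a BiLSTM encodes the textual description, an autoencoder-style bottleneck compresses the numeric indicators, and both representations feed a binary classifier. Vocabulary size, dimensions, and the fusion scheme are assumptions.

import torch
import torch.nn as nn

class HybridEHRClassifier(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=128, hidden=64, num_indicators=20, code_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.encoder = nn.Sequential(nn.Linear(num_indicators, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, num_indicators)   # reconstruction head
        self.classifier = nn.Linear(2 * hidden + code_dim, 1)

    def forward(self, text_ids, indicators):
        _, (h, _) = self.bilstm(self.embed(text_ids))
        text_repr = torch.cat([h[0], h[1]], dim=-1)          # final states, both directions
        code = self.encoder(indicators)
        recon = self.decoder(code)                           # trained with an extra MSE term
        logit = self.classifier(torch.cat([text_repr, code], dim=-1))
        return logit.squeeze(-1), recon

logit, recon = HybridEHRClassifier()(torch.randint(1, 5000, (4, 50)), torch.randn(4, 20))
print(logit.shape, recon.shape)  # torch.Size([4]) torch.Size([4, 20])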


Subject(s)
Electronic Health Records , Hypertension , Kidney Diseases , Neural Networks, Computer , Forecasting , Humans , Hypertension/complications , Kidney Diseases/complications , Kidney Diseases/diagnosis , Risk Factors
15.
BMC Bioinformatics ; 18(1): 462, 2017 Oct 30.
Article in English | MEDLINE | ID: mdl-29084508

ABSTRACT

BACKGROUND: Biomedical named entity recognition (BNER) is a crucial initial step of information extraction in the biomedical domain. The task is typically modeled as a sequence labeling problem. Various machine learning algorithms, such as Conditional Random Fields (CRFs), have been successfully used for this task. However, these state-of-the-art BNER systems largely depend on hand-crafted features. RESULTS: We present a recurrent neural network (RNN) framework based on word embeddings and character representations. On top of the neural network architecture, we use a CRF layer to jointly decode labels for the whole sentence. In our approach, contextual information from both directions and long-range dependencies in the sequence, which are useful for this task, can be well modeled by the bidirectional structure and the long short-term memory (LSTM) unit, respectively. Although our models use word embeddings and character embeddings as the only features, the bidirectional LSTM-RNN (BLSTM-RNN) model achieves state-of-the-art performance - 86.55% F1 on the BioCreative II gene mention (GM) corpus and 73.79% F1 on the JNLPBA 2004 corpus. CONCLUSIONS: Our neural network architecture can be successfully used for BNER without any manual feature engineering. Experimental results show that domain-specific pre-trained word embeddings and character-level representations can improve the performance of the LSTM-RNN models. On the GM corpus, we achieve comparable performance to other systems that use complex hand-crafted features. On the JNLPBA corpus, our model achieves the best results, outperforming the previously top performing systems. The source code of our method is freely available under GPL at https://github.com/lvchen1989/BNER .
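For reference, decoding with the CRF layer amounts to Viterbi search over emission and transition scores; a minimal NumPy sketch with illustrative shapes follows.

import numpy as np

def viterbi_decode(emissions: np.ndarray, transitions: np.ndarray) -> list[int]:
    # emissions: (seq_len, num_labels); transitions[i, j]: score of moving from label i to j
    seq_len, num_labels = emissions.shape
    score = emissions[0].copy()
    backpointers = []
    for t in range(1, seq_len):
        total = score[:, None] + transitions + emissions[t][None, :]
        backpointers.append(total.argmax(axis=0))  # best previous label for each current label
        score = total.max(axis=0)
    best = [int(score.argmax())]
    for bp in reversed(backpointers):
        best.append(int(bp[best[-1]]))
    return best[::-1]

print(viterbi_decode(np.random.randn(6, 3), np.random.randn(3, 3)))  # e.g. [2, 0, 1, 1, 0, 2]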


Subject(s)
Biomedical Research/instrumentation , Neural Networks, Computer , Algorithms , Biomedical Research/methods , Information Storage and Retrieval , Machine Learning
16.
BMC Bioinformatics ; 18(1): 198, 2017 Mar 31.
Article in English | MEDLINE | ID: mdl-28359255

ABSTRACT

BACKGROUND: Extracting biomedical entities and their relations from text has important applications in biomedical research. Previous work primarily utilized feature-based pipeline models to process this task. Considerable effort must be spent on feature engineering when feature-based models are employed. Moreover, pipeline models may suffer from error propagation and are not able to utilize the interactions between subtasks. Therefore, we propose a neural joint model to extract biomedical entities as well as their relations simultaneously, which can alleviate the problems above. RESULTS: Our model was evaluated on two tasks, i.e., the task of extracting adverse drug events between drug and disease entities, and the task of extracting resident relations between bacteria and location entities. Compared with the state-of-the-art systems in these tasks, our model improved the F1 scores of the first task by 5.1% in entity recognition and 8.0% in relation extraction, and that of the second task by 9.2% in relation extraction. CONCLUSIONS: The proposed model achieves competitive performance with less work on feature engineering. We demonstrate that the model based on neural networks is effective for biomedical entity and relation extraction. In addition, parameter sharing is an alternative method for neural models to jointly process this task. Our work can facilitate research on biomedical text mining.
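To illustrate the parameter-sharing design mentioned in the conclusions, the sketch below uses one shared encoder with separate heads for entity tagging and pairwise relation classification; dimensions and the pairing scheme are simplified assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class JointExtractor(nn.Module):
    def __init__(self, vocab=10000, emb=100, hidden=128, n_ent=9, n_rel=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.entity_head = nn.Linear(2 * hidden, n_ent)                   # token-level tags
        self.relation_head = nn.Bilinear(2 * hidden, 2 * hidden, n_rel)   # pair-level labels

    def forward(self, ids):
        states, _ = self.encoder(self.embed(ids))   # shared representation for both tasks
        ent_logits = self.entity_head(states)
        n, d = states.size(1), states.size(-1)
        left = states.unsqueeze(2).expand(-1, -1, n, -1).reshape(-1, d)
        right = states.unsqueeze(1).expand(-1, n, -1, -1).reshape(-1, d)
        rel_logits = self.relation_head(left, right).view(states.size(0), n, n, -1)
        return ent_logits, rel_logits

ent, rel = JointExtractor()(torch.randint(0, 10000, (2, 12)))
print(ent.shape, rel.shape)  # torch.Size([2, 12, 9]) torch.Size([2, 12, 12, 5])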


Subject(s)
Biomedical Research , Data Mining , Neural Networks, Computer , Databases, Factual , Models, Theoretical , Monocytes/cytology , Monocytes/metabolism
17.
Bioinformatics ; 33(15): 2363-2371, 2017 Aug 01.
Article in English | MEDLINE | ID: mdl-28369171

ABSTRACT

MOTIVATION: Disease named entities play a central role in many areas of biomedical research, and automatic recognition and normalization of such entities have received increasing attention in biomedical research communities. Existing methods typically used pipeline models with two independent phases: (i) a disease named entity recognition (DER) system is used to find the boundaries of mentions in text and (ii) a disease named entity normalization (DEN) system is used to connect the mentions recognized to concepts in a controlled vocabulary. The main problems of such models are: (i) there is error propagation from DER to DEN and (ii) DEN is useful for DER, but pipeline models cannot utilize this. METHODS: We propose a transition-based model to jointly perform disease named entity recognition and normalization, casting the output construction process into an incremental state transition process, learning sequences of transition actions globally, which correspond to joint structural outputs. Beam search and online structured learning are used, with learning being designed to guide search. Compared with the only existing method for joint DEN and DER, our method allows non-local features to be used, which significantly improves the accuracies. RESULTS: We evaluate our model on two corpora: the BioCreative V Chemical Disease Relation (CDR) corpus and the NCBI disease corpus. Experiments show that our joint framework achieves significantly higher performances compared to competitive pipeline baselines. Our method compares favourably to other state-of-the-art approaches. AVAILABILITY AND IMPLEMENTATION: Data and code are available at https://github.com/louyinxia/jointRN. CONTACT: dhji@whu.edu.cn.


Subject(s)
Data Mining/methods , Disease/classification , Vocabulary, Controlled , Humans
18.
BMC Bioinformatics ; 18(1): 75, 2017 Jan 31.
Article in English | MEDLINE | ID: mdl-28143488

ABSTRACT

BACKGROUND: Information extraction from clinical texts enables medical workers to identify patients' problems faster and makes intelligent diagnosis possible in the future. There has been a lot of work on disorder mention recognition in clinical narratives, but recognition of more complicated disorder mentions, such as overlapping ones, is still an open issue. This paper proposes a multi-label structured Support Vector Machine (SVM) based method for disorder mention recognition. We present a multi-label scheme which could be used in complicated entity recognition tasks. RESULTS: We performed three sets of experiments to evaluate our model. Our best F1-score on the 2013 Conference and Labs of the Evaluation Forum data set is 0.7343. There are six types of labels in our multi-label scheme, all of which are represented by 24-bit binary numbers. The binary digits of each label contain information about different disorder mentions. Our multi-label method can recognize not only disorder mentions in the form of contiguous or discontiguous words but also mentions whose spans overlap with each other. The experiments indicate that our multi-label structured SVM model outperforms the conditional random field (CRF) model for this disorder mention recognition task, and that our multi-label scheme surpasses the baseline. In particular, for overlapping disorder mentions, the F1-score of our multi-label scheme is 0.1428 higher than the baseline BIOHD1234 scheme. CONCLUSIONS: This multi-label structured SVM based approach is demonstrated to work well for this disorder recognition task. The novel multi-label scheme we presented is superior to the baseline and can be used in other models to solve various types of complicated entity recognition tasks as well.


Subject(s)
Data Mining/methods , Disease , Support Vector Machine , Humans
19.
J Cheminform ; 7(Suppl 1 Text mining for chemistry and the CHEMDNER track): S4, 2015.
Article in English | MEDLINE | ID: mdl-25810775

ABSTRACT

BACKGROUND: Chemical compound and drug name recognition plays an important role in chemical text mining and is the basis for automatic relation extraction and event identification in chemical information processing. Therefore, a high-performance named entity recognition system for chemical compound and drug names is necessary. METHODS: We developed a CHEMDNER system based on mixed conditional random fields (CRF) with word clustering for chemical compound and drug name recognition. For the word clustering, we used the Brown hierarchical clustering algorithm and the skip-gram model based on deep learning with massive PubMed articles, including titles and abstracts. RESULTS: This system achieved the highest F-score of 88.20% for the CDI task and the second highest F-score of 87.11% for the CEM task in BioCreative IV. The performance was further improved by multi-scale clustering based on deep learning, achieving an F-score of 88.71% for CDI and 88.06% for CEM. CONCLUSIONS: The mixed CRF model represents both the internal complexity and the external contexts of the entities, and the model is integrated with word clustering to capture domain knowledge from PubMed articles, including titles and abstracts. The domain knowledge helps to ensure the performance of the entity recognition, even without fine-grained linguistic features and manually designed rules.
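A hedged sketch of deriving cluster-id features at several granularities, in the spirit of the multi-scale clustering described above; the vectors here are random stand-ins for skip-gram embeddings trained on PubMed, and k-means replaces the paper's specific clustering choices.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vocab = ["aspirin", "ibuprofen", "benzene", "ethanol", "kinase", "receptor"]
vectors = rng.normal(size=(len(vocab), 50))   # stand-in for skip-gram vectors trained on PubMed

features = {}
for k in (2, 3):                              # several cluster granularities ("multi-scale")
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)
    for word, cluster in zip(vocab, labels):
        features.setdefault(word, []).append(f"cluster{k}={cluster}")

print(features["aspirin"])  # e.g. ['cluster2=1', 'cluster3=0']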

20.
J Cheminform ; 7(Suppl 1 Text mining for chemistry and the CHEMDNER track): S2, 2015.
Article in English | MEDLINE | ID: mdl-25810773

ABSTRACT

The automatic extraction of chemical information from text requires the recognition of chemical entity mentions as one of its key steps. When developing supervised named entity recognition (NER) systems, the availability of a large, manually annotated text corpus is desirable. Furthermore, large corpora permit the robust evaluation and comparison of different approaches that detect chemicals in documents. We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators, following annotation guidelines specifically defined for this task. The abstracts of the CHEMDNER corpus were selected to be representative of all major chemical disciplines. Each of the chemical entity mentions was manually labeled according to its structure-associated chemical entity mention (SACEM) class: abbreviation, family, formula, identifier, multiple, systematic and trivial. The difficulty and consistency of tagging chemicals in text were measured using an agreement study between annotators, obtaining a percentage agreement of 91%. For a subset of the CHEMDNER corpus (the test set of 3,000 abstracts) we provide not only the Gold Standard manual annotations, but also mentions automatically detected by the 26 teams that participated in the BioCreative IV CHEMDNER chemical mention recognition task. In addition, we release the CHEMDNER silver standard corpus of automatically extracted mentions from 17,000 randomly selected PubMed abstracts. A version of the CHEMDNER corpus in the BioC format has been generated as well. We propose a standard for required minimum information about entity annotations for the construction of domain-specific corpora on chemical and drug entities. The CHEMDNER corpus and annotation guidelines are available at: http://www.biocreative.org/resources/biocreative-iv/chemdner-corpus/.
