1.
Nat Commun ; 15(1): 3972, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38730241

ABSTRACT

The advancement of long-read sequencing (LRS) techniques has increased read lengths to several kilobases, facilitating the identification of alternative splicing events and isoform expression. Numerous computational tools for isoform detection from long-read sequencing data have recently been developed. Nevertheless, there remains a shortage of comparative studies that systematically evaluate the performance of these tools, which are implemented with different algorithms, under simulations covering the potential influencing factors. In this study, we benchmarked thirteen methods implemented in nine tools capable of identifying isoform structures from long-read RNA-seq data. We evaluated their performance using simulated data representing diverse sequencing platforms (generated by an in-house simulator), RNA sequins (sequencing spike-in) data, and experimental data. Our findings show that IsoQuant is a highly effective tool for isoform detection with LRS, with Bambu and StringTie2 also performing strongly. These results offer valuable guidance for future research on alternative splicing analysis and for the ongoing improvement of isoform detection tools for LRS data.
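
As a rough illustration only (not the benchmark's actual evaluation code), the sketch below scores a tool's detected isoforms against a reference annotation by exact intron-chain matching, the kind of precision/recall comparison such benchmarks typically report; the function names and the exon-list input format are assumptions.

```python
# Illustrative sketch: score detected multi-exon isoforms against a reference
# annotation by exact intron-chain match. Inputs map transcript IDs to sorted
# lists of (start, end) exon coordinates on the same chromosome and strand.

def intron_chain(exons):
    """Return the tuple of (donor, acceptor) introns implied by sorted exons."""
    return tuple((exons[i][1], exons[i + 1][0]) for i in range(len(exons) - 1))

def evaluate(detected, reference):
    """Precision and recall of detected multi-exon isoforms vs. the reference."""
    det = {intron_chain(e) for e in detected.values() if len(e) > 1}
    ref = {intron_chain(e) for e in reference.values() if len(e) > 1}
    tp = len(det & ref)
    precision = tp / len(det) if det else 0.0
    recall = tp / len(ref) if ref else 0.0
    return precision, recall
```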


Subject(s)
Algorithms; Alternative Splicing; RNA, Messenger; Sequence Analysis, RNA; Humans; RNA, Messenger/genetics; RNA, Messenger/analysis; Sequence Analysis, RNA/methods; RNA Isoforms/genetics; Software; Computational Biology/methods; High-Throughput Nucleotide Sequencing/methods; Protein Isoforms/genetics
2.
IEEE Trans Med Imaging ; 43(3): 1089-1101, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37874703

ABSTRACT

Cortical cataract, a common type of cataract, is particularly difficult to diagnose automatically because of the complex features of its lesions. Many methods based on edge detection or deep learning have recently been proposed for automatic cataract grading. However, these methods suffer a large performance drop on cortical cataract grading because of the more complex cortical opacities and uncertain data. In this paper, we propose a novel Transformer-based Knowledge Distillation Network, called TKD-Net, for cortical cataract grading. To tackle the complex opacities, we first devise a zone decomposition strategy to extract more refined features and introduce dedicated sub-scores that capture the critical factors of clinical cortical opacity assessment (location, area, and density) for comprehensive quantification. Next, we develop a multi-modal mix-attention Transformer to efficiently fuse the sub-score and image modalities for complex feature learning. Obtaining the sub-score modality in the clinic is challenging, however, which can leave this modality missing. To simultaneously alleviate missing modalities and uncertain data, we further design a Transformer-based knowledge distillation method in which a teacher model trained on complete data guides a student model trained on modality-missing and uncertain data. Extensive experiments on a dataset of commonly used slit-lamp images annotated with the LOCS III grading system demonstrate that TKD-Net outperforms state-of-the-art methods and confirm the effectiveness of its key components. Code is available at https://github.com/wjh892521292/Cataract_TKD-Net.
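
For orientation, the snippet below sketches the standard temperature-scaled knowledge-distillation loss in PyTorch, in which a teacher's soft predictions guide a student alongside the hard labels; TKD-Net's actual Transformer-based formulation is in the linked repository and differs in how teacher and student are coupled, so treat this only as a generic illustration.

```python
# Minimal sketch of the classic knowledge-distillation objective (soft targets
# from a teacher blended with hard-label cross-entropy); not TKD-Net's exact loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend temperature-scaled KL(teacher || student) with hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),   # student log-probabilities
        F.softmax(teacher_logits / T, dim=1),       # teacher soft targets
        reduction="batchmean",
    ) * (T * T)                                     # rescale gradients for temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```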


Subject(s)
Cataract; Humans; Cataract/diagnostic imaging
3.
Eur J Orthod ; 46(1)2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37824439

ABSTRACT

OBJECTIVES: This study aimed to provide a universal and reliable reference system for quantifying temporomandibular joint (TMJ) morphological and positional changes. METHODS: Large field-of-view (FOV) cone-beam computed tomography (CBCT) images (20 TMJs) from 10 pre-orthognathic-surgery patients and limited-FOV CBCT images (40 TMJs) from 20 splint-therapy-treated patients with temporomandibular disorders were collected. A TMJ-specific reference system, comprising a TMJ horizontal reference plane (TMJHP) and a local coordinate system (TMJCS), was constructed from landmarks on the cranial base. Its application to TMJ measurements and its spatial relationship to the common Frankfort horizontal plane (FHP) and maxillofacial coordinate system (MFCS) were evaluated. RESULTS: Five relevant landmarks were selected to optimally construct the TMJ-specific reference system. General parallelism between the TMJHP and FHP was demonstrated by a minimal angular deviation and a constant distance offset (1.714 ± 0.811°; 2.925 ± 0.817 mm). Additionally, small axial orientation deviations (0.181 ± 6.805°) suggested that the TMJCS is comparable to the MFCS. Moreover, small deviations in orientation and distance (1.232 ± 0.609°; 0.310 ± 0.202 mm) indicated high reliability of TMJCS construction, with intraclass correlation coefficients (ICCs) ranging from 0.999 to 1.000. Lastly, slight discrepancies in translations and rotations revealed high reliability for condylar positional and morphological measurements (ICC, 0.918-0.999). LIMITATIONS: The TMJ-specific reference system was tested in only two representative FOVs. CONCLUSIONS: This study provides a universal and reliable reference system for TMJ assessment that is applicable to both limited- and large-FOV CBCT. It should improve comparability among studies and enable comprehensive evaluation of TMJ positional and morphological changes during TMJ-related treatment follow-up, such as splint therapy, and during disease progression.
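
As a generic illustration of how a landmark-based reference system can be constructed (not the study's exact definition), the sketch below builds an orthonormal local coordinate frame from three cranial-base landmarks with NumPy; the landmark roles and the choice of origin are assumptions.

```python
# Hedged sketch: derive an orthonormal local coordinate frame from landmarks,
# the general construction behind a landmark-defined horizontal plane and
# coordinate system. Landmark roles and origin choice are illustrative only.
import numpy as np

def local_frame(origin, p_lateral, p_anterior):
    """Return a 3x3 matrix whose columns are the frame's x/y/z unit axes."""
    origin = np.asarray(origin, dtype=float)
    x = np.asarray(p_lateral, dtype=float) - origin       # lateral axis
    x /= np.linalg.norm(x)
    y = np.asarray(p_anterior, dtype=float) - origin      # anterior direction
    y -= np.dot(y, x) * x                                  # orthogonalize against x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)                                     # completes a right-handed frame
    return np.column_stack([x, y, z])

def to_local(points, origin, frame):
    """Express world-space coordinates in the local landmark-based frame."""
    return (np.asarray(points, dtype=float) - np.asarray(origin, dtype=float)) @ frame
```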


Subject(s)
Mandibular Condyle; Temporomandibular Joint Disorders; Humans; Mandibular Condyle/diagnostic imaging; Reproducibility of Results; Temporomandibular Joint/diagnostic imaging; Temporomandibular Joint Disorders/diagnostic imaging; Temporomandibular Joint Disorders/therapy; Cone-Beam Computed Tomography/methods
4.
Patterns (N Y) ; 4(9): 100825, 2023 Sep 08.
Article in English | MEDLINE | ID: mdl-37720330

ABSTRACT

High-fidelity three-dimensional (3D) models of tooth-bone structures are valuable for virtual dental treatment planning; however, they require integrating data from cone-beam computed tomography (CBCT) and intraoral scans (IOS) using methods that are either error-prone or time-consuming. Hence, this study presents Deep Dental Multimodal Fusion (DDMF), an automatic multimodal framework that reconstructs 3D tooth-bone structures from CBCT and IOS. Specifically, the DDMF framework comprises CBCT and IOS segmentation modules as well as a multimodal reconstruction module with novel pixel representation learning architectures, prior-knowledge-guided losses, and geometry-based 3D fusion techniques. Experiments on large-scale real-world datasets showed that DDMF achieves superior segmentation performance on both CBCT and IOS and a 0.17 mm average symmetric surface distance (ASSD) for 3D fusion, with a substantial reduction in processing time. Clinical applicability studies further demonstrated DDMF's potential for accurately simulating tooth-bone structures throughout the orthodontic treatment process.
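
For reference, the snippet below sketches how the average symmetric surface distance (ASSD) quoted above can be computed between two surface point sets with SciPy; the paper's own evaluation code may differ in surface sampling and units, so this is only an illustrative implementation.

```python
# Illustrative sketch of the average symmetric surface distance (ASSD) metric,
# computed between two sampled surfaces with a KD-tree nearest-neighbor query.
import numpy as np
from scipy.spatial import cKDTree

def assd(surface_a, surface_b):
    """ASSD between (N, 3) and (M, 3) arrays of surface points, in input units."""
    surface_a = np.asarray(surface_a, dtype=float)
    surface_b = np.asarray(surface_b, dtype=float)
    d_ab, _ = cKDTree(surface_b).query(surface_a)   # each A point -> nearest B point
    d_ba, _ = cKDTree(surface_a).query(surface_b)   # each B point -> nearest A point
    return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))
```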

5.
IEEE Trans Med Imaging ; 42(2): 467-480, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36378797

ABSTRACT

Accurately delineating individual teeth and the gingiva in three-dimensional (3D) intraoral scan (IOS) mesh data plays a pivotal role in many digital dental applications, e.g., orthodontics. Recent research shows that deep-learning-based methods can achieve promising results for 3D tooth segmentation; however, most of them rely on high-quality labeled datasets, which are usually small because annotating IOS meshes requires intensive human effort. In this paper, we propose a novel self-supervised learning framework, named STSNet, to boost the performance of 3D tooth segmentation by leveraging large-scale unlabeled IOS data. The framework follows two-stage training: pre-training and fine-tuning. In pre-training, contrastive losses at three hierarchical levels (point-level, region-level, and cross-level) are proposed for unsupervised representation learning on a set of predefined matched points from different augmented views. The pre-trained segmentation backbone is then fine-tuned in a supervised manner with a small number of labeled IOS meshes. With the same number of annotated samples, our method achieves an mIoU of 89.88%, significantly outperforming the supervised counterparts. The performance gain becomes more remarkable when only a small number of labeled samples is available. Furthermore, STSNet achieves better performance than the fully supervised baselines with only 40% of the annotated samples. To the best of our knowledge, this is the first attempt at unsupervised pre-training for 3D tooth segmentation, demonstrating its strong potential to reduce the human effort required for annotation and verification.
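
As a minimal illustration of the contrastive idea (not STSNet's exact losses), the sketch below computes a point-level InfoNCE loss over matched point features from two augmented views in PyTorch; the region-level and cross-level terms described above would be built analogously on pooled features.

```python
# Hedged sketch: point-level InfoNCE contrastive loss between matched points
# from two augmented views of the same mesh; a generic building block, not the
# paper's full three-level objective.
import torch
import torch.nn.functional as F

def point_infonce(feat_view1, feat_view2, temperature=0.07):
    """feat_view1, feat_view2: (N, C) features of N matched points in two views."""
    z1 = F.normalize(feat_view1, dim=1)
    z2 = F.normalize(feat_view2, dim=1)
    logits = z1 @ z2.t() / temperature                      # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)
```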


Subject(s)
Prostheses and Implants; Surgical Mesh; Humans; Image Processing, Computer-Assisted; Radionuclide Imaging; Supervised Machine Learning
6.
Nucleic Acids Res ; 50(D1): D1244-D1254, 2022 Jan 07.
Article in English | MEDLINE | ID: mdl-34606616

ABSTRACT

T-cell receptors (TCRs) and B-cell receptors (BCRs) are critical for recognizing antigens and activating the adaptive immune response. Stochastic V(D)J recombination generates massive TCR/BCR repertoire diversity. Single-cell immune profiling with transcriptome analysis allows high-throughput study of individual TCR/BCR clonotypes and functions under both normal and pathological settings. However, a comprehensive database linking these data is not yet readily available. Here, we present the human Antigen Receptor database (huARdb), a large-scale human single-cell immune profiling database containing 444,794 high-confidence T or B cells (hcT/B cells) with full-length TCR/BCR sequences and transcriptomes from 215 datasets. All datasets were processed in a uniform workflow, including sequence alignment, cell subtype prediction, unsupervised cell clustering, and clonotype definition. We also developed a multi-functional and user-friendly web interface that provides interactive visualization modules for biologists to analyze transcriptome and TCR/BCR features at the single-cell level. HuARdb is freely available at https://huarc.net/database, with functions for data querying, browsing, downloading, and depositing. In conclusion, huARdb is a comprehensive and multi-perspective atlas of human antigen receptors.
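
As a generic illustration of the unsupervised clustering step in such a workflow (not huARdb's actual pipeline), the sketch below clusters a single-cell expression matrix with Scanpy; the input file name and parameter values are assumptions.

```python
# Generic single-cell clustering sketch with Scanpy; illustrative only, not the
# database's processing code. Requires scanpy and leidenalg to be installed.
import scanpy as sc

adata = sc.read_h5ad("tcells_expression.h5ad")     # hypothetical input file
sc.pp.normalize_total(adata, target_sum=1e4)       # library-size normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)             # kNN graph on PCA space
sc.tl.leiden(adata, resolution=1.0)                # unsupervised cell clustering
sc.tl.umap(adata)                                  # embedding for visualization
```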


Subject(s)
Databases, Genetic; Receptors, Antigen, B-Cell/classification; Receptors, Antigen, T-Cell/classification; Software; B-Lymphocytes; Humans; Receptors, Antigen, B-Cell/immunology; Receptors, Antigen, T-Cell/immunology; Single-Cell Analysis; Transcriptome/genetics; V(D)J Recombination/genetics