1.
Cureus ; 15(2): e34933, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36938204

ABSTRACT

Mutations in the phosphodiesterase 6B (PDE6B) gene are a rare cause of autosomal recessive retinitis pigmentosa (arRP). We report on a non-consanguineous family with pseudodominant inheritance of RP due to PDE6B mutations. We conducted a chart review of four members of a Puerto Rican family who underwent a comprehensive ophthalmic evaluation by at least one of the authors. Mutational screening was done using a genotyping microarray provided by Invitae Corporation, using next-generation sequencing (NGS) technology. Genomic DNA obtained from saliva samples was enriched for targeted regions using a hybridization-based protocol and sequenced using Illumina technology. A descriptive analysis was done. Patient 1A had a normal ophthalmic examination and a heterozygous pathogenic variant in the PDE6B gene, c.1540del p.Leu514Trpfs*61. Patients 1B, 2A, and 2B had mid-peripheral retinitis pigmentosa, concentric visual field ring scotomata in both eyes (OU), an extinguished electroretinogram (ERG), and a homozygous pathogenic variant in the PDE6B gene, c.1540del p.Leu514Trpfs*61. Even though mutations in the PDE6B gene usually lead to arRP, they may be inherited in a pseudodominant pattern in geographically isolated populations. Genotyping studies in patients with RP are warranted to classify the mode of inheritance correctly.

2.
Cognit Comput ; 15(2): 590-612, 2023.
Article in English | MEDLINE | ID: mdl-36341132

ABSTRACT

In scientific literature and industry, semantic and context-aware Natural Language Processing-based solutions have been gaining importance in recent years. The possibilities and performance shown by these models when dealing with complex Human Language Understanding tasks are unquestionable, from conversational agents to the fight against disinformation in social networks. In addition, considerable attention is being paid to developing multilingual models to tackle the language bottleneck. The growing need for more complex models implementing all these features has been accompanied by an increase in size, without being conservative in the number of dimensions required. This paper aims to provide a comprehensive account of the impact of a wide variety of dimensionality reduction techniques on the performance of different state-of-the-art multilingual Siamese transformers, including unsupervised techniques such as linear and nonlinear feature extraction, feature selection, and manifold techniques. To evaluate the effects of these techniques, we considered the multilingual extended version of the Semantic Textual Similarity Benchmark (mSTSb) and two different baseline approaches, one using the embeddings from the pre-trained version of five models and another using their fine-tuned STS version. The results show that it is possible to achieve an average reduction of 91.58% ± 2.59% in the number of dimensions of embeddings from pre-trained models, requiring a fitting time 96.68% ± 0.68% faster than the fine-tuning process. In addition, we achieve a 54.65% ± 32.20% dimensionality reduction in embeddings from fine-tuned models. The results of this study will significantly contribute to the understanding of how different tuning approaches affect performance on semantic-aware tasks and how dimensionality reduction techniques deal with the high-dimensional embeddings computed for the STS task, as well as their potential for other highly demanding NLP tasks.
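The kind of pipeline the abstract describes can be sketched as follows. This is a minimal illustration with synthetic embeddings, not the paper's actual models or benchmark: PCA (implemented here via SVD) stands in for the family of unsupervised linear feature-extraction techniques, the 768-dimensional vectors mimic a common transformer embedding size, and the STS-style scoring is a plain cosine similarity between a sentence pair before and after reduction.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X onto the top principal components (linear feature extraction)."""
    Xc = X - X.mean(axis=0)                          # center the data
    # SVD of the centered matrix: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                  # shape: (n_samples, n_components)

def cosine(a, b):
    """Cosine similarity, the usual STS scoring function for embedding pairs."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic "sentence embeddings": 100 sentences, 768 dimensions
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 768))

# Reduce to 64 dimensions (~92% fewer, in the ballpark the abstract reports)
reduced = pca_reduce(emb, 64)
print(reduced.shape)                                 # (100, 64)

# Compare a sentence pair's similarity in the original and reduced spaces
print(cosine(emb[0], emb[1]), cosine(reduced[0], reduced[1]))
```

In practice the reduction would be fitted on real embeddings (e.g. from a multilingual Siamese transformer) and evaluated by correlating the cosine scores with human similarity judgments, which is what the mSTSb comparison in the paper measures.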
