Results 1 - 5 of 5
1.
Neural Netw ; 166: 188-203, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37499604

ABSTRACT

Subspace distance is a valuable tool used in a wide range of feature selection methods. Its power lies in identifying a representative subspace: a group of features that efficiently approximates the space spanned by the original features. At the same time, exploiting the intrinsic statistical information of the data can play a significant role in feature selection, yet most existing subspace-distance-based methods fall short of this objective. To fill this gap, we propose a framework built on a subspace distance we call the "Variance-Covariance subspace distance". The approach benefits from the correlations among the features of the data and determines the feature subsets whose corresponding variance-covariance matrix has the minimum-norm property. Based on this distance, we introduce a novel yet efficient unsupervised feature selection framework that handles both dimensionality reduction and subspace learning, and that can exclude the features with the least variance from the original feature set. Moreover, an efficient update algorithm, together with its convergence analysis, is provided to solve the resulting optimization problem. Extensive experiments on nine benchmark datasets demonstrate that the method outperforms a variety of state-of-the-art unsupervised feature selection methods. The source code is available at https://github.com/SaeedKarami/VCSDFS.
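A minimal sketch of the underlying idea, not the authors' VCSDFS algorithm (that is in the linked repository): greedily pick the feature subset whose variance-covariance structure best reproduces that of the full feature space. The greedy search and the Frobenius-norm reconstruction error below are simplifying assumptions.

```python
# Illustrative sketch only; see https://github.com/SaeedKarami/VCSDFS for the real method.
import numpy as np

def greedy_covariance_feature_selection(X, k):
    """Greedily pick k columns of X (n_samples x n_features) whose span
    reconstructs the full variance-covariance matrix with the smallest
    Frobenius-norm error."""
    X = X - X.mean(axis=0)             # center the data
    C = np.cov(X, rowvar=False)        # full variance-covariance matrix
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in remaining:
            Xs = X[:, selected + [j]]
            # reconstruct every feature from the candidate subset by least squares
            X_hat = Xs @ np.linalg.lstsq(Xs, X, rcond=None)[0]
            err = np.linalg.norm(C - np.cov(X_hat, rowvar=False), "fro")
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Example: 200 samples, 10 features, keep the 3 most representative ones.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
print(greedy_covariance_feature_selection(X, 3))
```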


Subjects
Algorithms; Pattern Recognition, Automated; Pattern Recognition, Automated/methods; Learning; Software; Benchmarking
2.
Front Med (Lausanne) ; 10: 1109411, 2023.
Article in English | MEDLINE | ID: mdl-37064042

ABSTRACT

Background: Artificial intelligence (AI) and machine learning (ML) models continue to advance clinical decision support systems (CDSS). However, challenges arise when AI/ML is integrated into clinical scenarios. In this systematic review, we followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA); the population, intervention, comparator, outcome, and study design (PICOS) framework; and the medical AI life cycle guidelines to investigate studies and tools that address AI/ML-based approaches to clinical decision support (CDS) for monitoring cardiovascular patients in intensive care units (ICUs). We also discuss recent advances, pitfalls, and future perspectives for the effective integration of AI into routine practice, as identified during an extensive selection process for state-of-the-art manuscripts.
Methods: Studies with an available English full text, published between January 2018 and August 2022, were retrieved from PubMed and Google Scholar using combinations of search keywords including AI, ML, reinforcement learning (RL), deep learning, clinical decision support, cardiovascular critical care, and patient monitoring. The manuscripts were analyzed and filtered using qualitative and quantitative criteria such as target population, proper study design, cross-validation, and risk of bias.
Results: More than 100 queries across the two medical search engines, complemented by manual literature research, identified 89 studies. After extensive technical and medical assessment, 21 studies were selected for the final qualitative analysis.
Discussion: Clinical time series and electronic health record (EHR) data were the most common input modalities, while gradient boosting, recurrent neural networks (RNNs), and RL were the methods most frequently used for analysis. Seventy-five percent of the selected papers lacked validation against external datasets, highlighting the generalizability issue. Interpretability of AI decisions was also identified as a central obstacle to the effective integration of AI in healthcare.

3.
IEEE Access ; 9: 21192-21205, 2021.
Article in English | MEDLINE | ID: mdl-34786306

ABSTRACT

A "Sleeping Beauty" (SB) in science is a metaphor for a scholarly publication that remains relatively unnoticed by the related communities for a long time; - the publication is "sleeping". However, suddenly due to the appearance of some phenomenon, such a "forgotten" publication may become a center of scientific attention; - the SB is "awakened". Currently, there are specific scientific areas for which sleeping beauties (SBs) are awakened. For example, as the world is experiencing the COVID-19 global pandemic (triggered by SARS-CoV-2), publications on coronaviruses appear to be awakened. Thus, one can raise questions of scientific interest: are these publications coronavirus related SBs? Moreover, while much literature exists on other coronaviruses, there seems to be no comprehensive investigation on COVID-19, - in particular in the context of SBs. Nowadays, such SB papers can be even used for sustaining literature reviews and/or scientific claims about COVID-19. In our study, in order to pinpoint pertinent SBs, we use the "beauty score" (B-score) measure. The Activity Index (AI) and the Relative Specialization Index (RSI) are also calculated to compare countries where such SBs appear. Results show that most of these SBs were published previously to the present epidemic time (triggered by SARS-CoV or SARS-CoV-1), and are awakened in 2020. Besides outlining the most important SBs, we show from what countries and institutions they originate, and the most prolific author(s) of such SBs. The citation trend of SBs that have the highest B-score is also discussed.

4.
Scientometrics ; 126(9): 8129-8151, 2021.
Article in English | MEDLINE | ID: mdl-34276109

ABSTRACT

The publish-or-perish culture of scholarly communication causes quality and relevance to become subordinate to quantity. Scientific events such as conferences play an important role in scholarly communication and knowledge exchange. Researchers in many fields, such as computer science, often need to search for events at which to publish their results, to establish connections for collaboration with other researchers, and to stay up to date with recent work. They need a meta-research understanding of event quality in order to publish in high-quality venues. However, many diverse and complex criteria must be explored to evaluate events, so finding events that satisfy quality-related criteria becomes a time-consuming task and often ends in an experience-based, subjective evaluation. OpenResearch.org is a crowd-sourcing platform that provides features for exploring past and upcoming computer science events, based on a knowledge graph. In this paper, we devise an ontology representing scientific event metadata. Furthermore, we present an analytical study of the evolution of computer science events that leverages the OpenResearch.org knowledge graph. We identify common characteristics of these events, formalize them, and combine them into a group of metrics that potential authors can use to identify high-quality events. On top of the improved ontology, we analyze the metadata of renowned conferences in various computer science communities, such as VLDB, ISWC, ESWC, WIMS, and SEMANTiCS, in order to inspect the potential of these characteristics as event metrics.
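As a purely hypothetical illustration of what event metadata in such a knowledge graph might look like, the sketch below builds a toy RDF record with rdflib; the namespace, property names, and values are invented and are not the actual OpenResearch.org ontology terms.

```python
# Toy RDF record; every term and value below is invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/scievents#")    # hypothetical vocabulary
g = Graph()
ev = EX.ExampleConf2021                            # a fictional conference edition
g.add((ev, RDF.type, EX.ConferenceEvent))
g.add((ev, EX.acronym, Literal("EXCONF")))
g.add((ev, EX.year, Literal(2021)))
g.add((ev, EX.inEventSeries, EX.ExampleConfSeries))
g.add((ev, EX.submittedPapers, Literal(200)))
g.add((ev, EX.acceptedPapers, Literal(50)))
g.add((ev, EX.acceptanceRate, Literal(0.25)))
print(g.serialize(format="turtle"))
```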

5.
Scientometrics ; 126(1): 641-682, 2021.
Article in English | MEDLINE | ID: mdl-33169040

ABSTRACT

Systematic assessment of scientific events has become increasingly important for research communities. A range of metrics (e.g., citations, the h-index) have been developed by different research communities to make such assessments effective. However, most metrics for assessing the quality of less formal publication venues and events have not yet been deeply investigated. Developing such metrics is also rather challenging, because each research community has its own formal and informal rules of communication and quality standards. In this article, we develop a comprehensive framework of assessment metrics for evaluating scientific events and the stakeholders involved. The resulting quality metrics are determined with respect to three general categories: events, persons, and bibliometrics. Our assessment methodology is applied empirically to several series of computer science events, such as conferences and workshops, using publicly available data to determine the quality metrics. We show that the metric values coincide with the community's intuitive agreement on its "top conferences". Our results demonstrate that highly ranked events share similar profiles, including outstanding reviews, diverse locations, the involvement of reputed people, and renowned sponsors.
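To make the combination of metrics across the three categories concrete, here is a toy composite score in the same spirit; the indicator names, normalization caps, and weights are invented for illustration and are not the paper's framework.

```python
# Toy composite event score; all indicators, caps, and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class EventProfile:
    acceptance_rate: float          # event category: lower usually means more selective
    years_running: int              # event category: continuity of the series
    pc_member_h_index_avg: float    # person category: reputation of the committee
    avg_citations_per_paper: float  # bibliometric category: citation impact

def event_score(e: EventProfile, weights=(0.3, 0.2, 0.25, 0.25)) -> float:
    """Weighted sum of indicators normalized to [0, 1]; caps are illustrative."""
    indicators = (
        1.0 - min(e.acceptance_rate, 1.0),           # selectivity
        min(e.years_running / 20.0, 1.0),            # longevity, capped at 20 years
        min(e.pc_member_h_index_avg / 50.0, 1.0),    # committee reputation, capped
        min(e.avg_citations_per_paper / 30.0, 1.0),  # citation impact, capped
    )
    return sum(w * x for w, x in zip(weights, indicators))

print(event_score(EventProfile(0.18, 19, 42.0, 25.0)))  # a "top conference" profile
print(event_score(EventProfile(0.60, 3, 10.0, 2.0)))    # a young, less selective event
```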
