1.
Clin Res Cardiol ; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38829411

ABSTRACT

AIM: To examine the performance of a simple echocardiographic "Killip score" (eKillip) in predicting heart failure (HF) hospitalizations and mortality after an index hospitalization for decompensated HF. METHODS: HF patients hospitalized at our facility between 03/2019 and 03/2021 who underwent echocardiography during their index admission were included in this retrospective analysis. The cohort was divided into four eKillip classes according to stroke volume index (SVI, threshold < 35 ml/m2) and E/E' ratio (threshold > 15). eKillip Class I was defined as SVI ≥ 35 ml/m2 and E/E' ≤ 15 and was used as the reference. RESULTS: A total of 751 patients were included: median age 78.1 (IQR 69.3-86) years, 59% men, left ventricular ejection fraction 45 (IQR 30-60)%, brain natriuretic peptide levels 634 (IQR 331-1222) pg/ml. Compared with eKillip Class I, a graded increase in the combined endpoint of 30-day mortality and rehospitalization rates was noted (Class II: HR 1.77, CI 0.95-3.33, p = 0.07; Class III: HR 1.94, CI 1.05-3.6, p = 0.034; Class IV: HR 2.9, CI 1.64-5.13, p < 0.001), which overall persisted after correction for clinical (Class II: HR 1.682, CI 0.9-3.15, p = 0.105; Class III: HR 2.104, CI 1.13-3.9, p = 0.019; Class IV: HR 2.74, CI 1.54-4.85, p = 0.001) or echocardiographic parameters (Class II: HR 1.92, CI 1.02-3.63, p = 0.045; Class III: HR 1.54, CI 0.81-2.95, p = 0.189; Class IV: HR 2.04, CI 1.1-3.76, p = 0.023). Specifically, the eKillip Class IV group comprised one-third of the patient population and persistently showed an increased risk of 30-day HF hospitalization or mortality in multivariate analysis. CONCLUSION: A simple echocardiographic score can assist in identifying decompensated HF patients at high risk of recurrent hospitalizations and mortality.
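For illustration, below is a minimal sketch of how such a two-parameter score could be computed. The thresholds (SVI 35 ml/m2, E/E' 15) are taken from the abstract; the mapping of the two single-abnormality combinations onto Classes II and III is an assumption for illustration only, since the abstract defines only Class I explicitly.

```python
def ekillip_class(svi_ml_m2: float, e_over_e_prime: float) -> int:
    """Assign a hypothetical eKillip class from stroke volume index (SVI)
    and E/E' ratio, using the thresholds reported in the abstract
    (SVI 35 ml/m2, E/E' 15).

    NOTE: the abstract only defines Class I (SVI >= 35 and E/E' <= 15);
    which single-abnormality combination maps to Class II vs. Class III
    is an assumption made here for illustration.
    """
    low_svi = svi_ml_m2 < 35            # reduced forward flow
    high_filling = e_over_e_prime > 15  # elevated filling pressure

    if not low_svi and not high_filling:
        return 1  # Class I: both parameters normal (reference group)
    if high_filling and not low_svi:
        return 2  # assumed Class II: elevated E/E' only
    if low_svi and not high_filling:
        return 3  # assumed Class III: reduced SVI only
    return 4      # Class IV: both abnormal


# Example: a patient with SVI 28 ml/m2 and E/E' of 18 falls in Class IV.
print(ekillip_class(28, 18))
```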

2.
Sci Rep ; 14(1): 5881, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38467786

ABSTRACT

Recently, the underlying mechanism for successful deep learning (DL) was presented based on a quantitative method that measures the quality of a single filter in each layer of a DL model, particularly VGG-16 trained on CIFAR-10. This method exemplifies that each filter identifies small clusters of possible output labels, with additional noise selected as labels outside the clusters. This feature is progressively sharpened with each layer, resulting in an enhanced signal-to-noise ratio (SNR), which leads to an increase in the accuracy of the DL network. In this study, this mechanism is verified for VGG-16 and EfficientNet-B0 trained on the CIFAR-100 and ImageNet datasets, and the main results are as follows. First, the accuracy and SNR progressively increase with the layers. Second, for a given deep architecture, the maximal error rate increases approximately linearly with the number of output labels. Third, similar trends were obtained for dataset labels in the range [3, 1000], thus supporting the universality of this mechanism. Understanding the performance of a single filter and its dominating features paves the way to highly dilute the deep architecture without affecting its overall accuracy, and this can be achieved by applying the filter's cluster connections (AFCC).
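The per-filter quality measure described above can be illustrated on synthetic data. The sketch below is not the authors' implementation; it assumes a score in which the "signal" is the mean activation of the small label cluster a filter prefers and the "noise" is the mean activation of the remaining labels, giving a simple SNR per filter.

```python
import numpy as np

rng = np.random.default_rng(0)

def filter_snr(mean_activation_per_label: np.ndarray, cluster_size: int = 3) -> float:
    """Toy per-filter quality score: ratio between the average activation of
    the filter's preferred label cluster and that of all other labels.
    This is an illustrative proxy for the signal-to-noise ratio discussed
    in the abstract, not the paper's exact definition."""
    order = np.argsort(mean_activation_per_label)[::-1]
    cluster = mean_activation_per_label[order[:cluster_size]]  # preferred labels
    rest = mean_activation_per_label[order[cluster_size:]]     # "noise" labels
    return float(cluster.mean() / (rest.mean() + 1e-12))

# Synthetic example: 100 labels (as in CIFAR-100). A "sharpened" late-layer
# filter responds strongly to a small label cluster; an early-layer filter
# responds broadly and weakly.
labels = 100
early_layer_filter = rng.normal(1.0, 0.1, size=labels)
late_layer_filter = rng.normal(1.0, 0.1, size=labels)
late_layer_filter[:3] += 5.0  # strong response to a 3-label cluster

print("early-layer SNR:", round(filter_snr(early_layer_filter), 2))
print("late-layer  SNR:", round(filter_snr(late_layer_filter), 2))
```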

3.
Sci Rep ; 13(1): 13385, 2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37652973

ABSTRACT

Learning classification tasks of [Formula: see text] inputs typically consists of [Formula: see text] max-pooling (MP) operators along the entire feedforward deep architecture. Here we show, using the CIFAR-10 database, that pooling decisions adjacent to the last convolutional layer significantly enhance accuracies. In particular, the average accuracies of the advanced-VGG architectures with m layers (A-VGGm) are 0.936, 0.940, 0.954, 0.955, and 0.955 for m = 6, 8, 14, 13, and 16, respectively. The results indicate that A-VGG8's accuracy is superior to VGG16's, and that the accuracies of A-VGG13 and A-VGG16 are equal and comparable to that of Wide-ResNet16. In addition, replacing the three fully connected (FC) layers with one FC layer (A-VGG6 and A-VGG14), or with several linear-activation FC layers, yielded similar accuracies. These significantly enhanced accuracies stem from training the most influential input-output routes, in comparison with the inferior routes selected following multiple MP decisions along the deep architecture. In addition, the accuracies are sensitive to the order of the non-commutative MP and average pooling operators adjacent to the output layer, which varies the number and location of training routes. The results call for a reexamination of previously proposed deep architectures and their accuracies, utilizing the proposed pooling strategy adjacent to the output layer.
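To make the pooling-placement idea concrete, here is a minimal PyTorch sketch of a toy CIFAR-scale network in which the pooling operators are concentrated adjacent to the last convolutional layer instead of being interleaved after every block. The layer widths, depths, and pooling sizes are illustrative choices, not the A-VGGm configurations from the paper.

```python
import torch
import torch.nn as nn

class TinyDeferredPoolingNet(nn.Module):
    """Toy CNN where pooling decisions are deferred to the layers adjacent to
    the output, as a sketch of the strategy described in the abstract."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),   # 32x32, no pooling yet
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # 32x32
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),  # 32x32
            # Pooling concentrated adjacent to the last convolutional layer.
            # The relative order of max and average pooling here is the
            # non-commutative choice the abstract says accuracies are
            # sensitive to.
            nn.MaxPool2d(4),   # 32x32 -> 8x8
            nn.AvgPool2d(2),   # 8x8 -> 4x4
        )
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)  # single FC layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Example forward pass on a CIFAR-10-sized batch.
logits = TinyDeferredPoolingNet()(torch.randn(8, 3, 32, 32))
print(logits.shape)  # torch.Size([8, 10])
```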

4.
Sci Rep ; 13(1): 5423, 2023 Apr 20.
Article in English | MEDLINE | ID: mdl-37080998

ABSTRACT

The realization of complex classification tasks requires training of deep learning (DL) architectures consisting of tens or even hundreds of convolutional and fully connected hidden layers, which is far from the reality of the human brain. According to the DL rationale, the first convolutional layer reveals localized patterns in the input and large-scale patterns in the following layers, until it reliably characterizes a class of inputs. Here, we demonstrate that with a fixed ratio between the depths of the first and second convolutional layers, the error rates of the generalized shallow LeNet architecture, consisting of only five layers, decay as a power law with the number of filters in the first convolutional layer. The extrapolation of this power law indicates that the generalized LeNet can achieve small error rates that were previously obtained for the CIFAR-10 database using DL architectures. A power law with a similar exponent also characterizes the generalized VGG-16 architecture. However, this results in a significantly increased number of operations required to achieve a given error rate with respect to LeNet. This power law phenomenon governs various generalized LeNet and VGG-16 architectures, hinting at its universal behavior and suggesting a quantitative hierarchical time-space complexity among machine learning architectures. Additionally, the conservation law along the convolutional layers, which is the square-root of their size times their depth, is found to asymptotically minimize error rates. The efficient shallow learning that is demonstrated in this study calls for further quantitative examination using various databases and architectures and its accelerated implementation using future dedicated hardware developments.
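The reported power-law decay of error rates can be checked with an ordinary least-squares fit in log-log space. The sketch below uses made-up (filter count, test error) pairs purely to demonstrate the fitting and extrapolation step; the exponent and prefactor are placeholders, not the paper's values.

```python
import numpy as np

# Hypothetical (number of first-layer filters, test error) pairs;
# placeholders for illustration, not measurements from the paper.
filters = np.array([8, 16, 32, 64, 128])
errors = np.array([0.32, 0.26, 0.21, 0.17, 0.14])

# Fit error = A * filters**(-rho) via linear regression in log-log space.
slope, intercept = np.polyfit(np.log(filters), np.log(errors), 1)
rho, A = -slope, np.exp(intercept)
print(f"fitted exponent rho = {rho:.3f}, prefactor A = {A:.3f}")

# Extrapolate the power law: predicted error for a 1024-filter first layer.
print("predicted error at 1024 filters:", A * 1024 ** (-rho))
```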

5.
Sci Rep ; 13(1): 962, 2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36717568

ABSTRACT

Advanced deep learning architectures, consisting of tens of fully connected and convolutional hidden layers and currently extended to hundreds, are far from their biological realization. Their biologically implausible dynamics rely on the backpropagation technique, which changes a weight in a non-local manner, since the number of routes between an output unit and a weight is typically large. Here, a 3-layer tree architecture inspired by experimentally based dendritic tree adaptations is developed and applied to offline and online learning of the CIFAR-10 database. The proposed architecture outperforms the achievable success rates of the 5-layer convolutional LeNet. Moreover, the highly pruned tree backpropagation approach of the proposed architecture, where a single route connects an output unit and a weight, represents an efficient form of dendritic deep learning.
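As a schematic of the "single route" property, the sketch below builds a toy tree in which the input is split into disjoint segments, each segment feeds exactly one hidden branch unit, and the branch units feed the outputs. With this wiring, the gradient of any input-to-branch weight passes through a single branch unit, i.e. a single route, unlike in a fully connected layer. This illustrates the idea only; it is not the paper's dendritic architecture, and the sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy tree: 12 inputs split into 4 disjoint segments of 3 inputs each,
# one branch (hidden) unit per segment, 2 output units.
segments, seg_size, outputs = 4, 3, 2
W1 = rng.normal(size=(segments, seg_size))  # each row touches only its own segment
W2 = rng.normal(size=(outputs, segments))

def forward(x):
    x_seg = x.reshape(segments, seg_size)
    h = np.tanh(np.sum(W1 * x_seg, axis=1))  # branch activations
    return W2 @ h, h, x_seg

x = rng.normal(size=segments * seg_size)
y_target = np.array([1.0, -1.0])

y, h, x_seg = forward(x)
err = y - y_target  # gradient of 0.5*||y - y_target||^2 w.r.t. the output

# Tree backpropagation: the gradient of W1[k] flows through branch k only,
# i.e. a single route between any output unit and that weight.
grad_W2 = np.outer(err, h)
grad_h = W2.T @ err
grad_W1 = ((1 - h**2) * grad_h)[:, None] * x_seg

lr = 0.1
W1 -= lr * grad_W1
W2 -= lr * grad_W2
print("loss after one step:", float(0.5 * np.sum((forward(x)[0] - y_target) ** 2)))
```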

6.
Behav Sci (Basel) ; 13(1)2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36661633

ABSTRACT

Entrepreneurship catalyzes economic growth; it generates jobs, advances the economy, and helps solve global challenges. Hence, it is crucial to understand the factors contributing to entrepreneurship and entrepreneurs' development. While many studies have investigated intrapersonal factors underlying entrepreneurial tendencies, the present study focuses on a critical yet often overlooked interpersonal aspect: attachment orientations. Specifically, this article examines the relationship between adult attachment orientations and entrepreneurial tendencies. Three studies across three countries (Israel, the UK, and Singapore) indicated that an anxious attachment orientation in close relationships is negatively associated with enterprising tendencies. In Israel (Study 1) and Singapore (Study 2), avoidant attachment in close relationships was also negatively correlated with such tendencies. Overall, the more secure people feel in close relationships (lower scores on attachment anxiety or avoidance), the higher their enterprising tendencies. Limitations and suggestions for future research are discussed.

7.
Sci Rep ; 12(1): 16003, 2022 09 29.
Article in English | MEDLINE | ID: mdl-36175466

ABSTRACT

Real-time sequence identification is a core use-case of artificial neural networks (ANNs), ranging from recognizing temporal events to identifying verification codes. Existing methods apply recurrent neural networks, which suffer from training difficulties; however, performing this function without feedback loops remains a challenge. Here, we present an experimental neuronal long-term plasticity mechanism for high-precision feedforward sequence identification networks (ID-nets) without feedback loops, wherein input objects have a given order and timing. This mechanism temporarily silences neurons following their recent spiking activity. Therefore, transitory objects act on different dynamically created feedforward sub-networks. ID-nets are demonstrated to reliably identify 10 handwritten digit sequences, and are generalized to deep convolutional ANNs with continuous activation nodes trained on image sequences. Counterintuitively, their classification performance, even with a limited number of training examples, is high for sequences but low for individual objects. ID-nets are also implemented for writer-dependent recognition, and suggested as a cryptographic tool for encrypted authentication. The presented mechanism opens new horizons for advanced ANN algorithms.
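The silencing mechanism can be sketched in a few lines: once a unit's activation crosses a threshold, it is masked out for the next few time steps, so consecutive items in a sequence are processed by different effective sub-networks. The threshold, refractory length, and layer sizes below are arbitrary illustration values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

n_inputs, n_hidden, refractory_steps, threshold = 16, 8, 2, 0.5
W = rng.normal(scale=0.5, size=(n_hidden, n_inputs))
silenced_for = np.zeros(n_hidden, dtype=int)  # remaining silent steps per unit

def step(x):
    """Feedforward step with temporary neuronal silencing: units that spiked
    recently are masked out, so each item in the sequence is processed by a
    different dynamically created sub-network."""
    global silenced_for
    active = silenced_for == 0
    h = np.maximum(W @ x, 0.0) * active        # ReLU; silenced units forced to 0
    spiked = h > threshold
    silenced_for = np.maximum(silenced_for - 1, 0)
    silenced_for[spiked] = refractory_steps    # newly spiked units go silent
    return h

sequence = [rng.normal(size=n_inputs) for _ in range(5)]
for t, x in enumerate(sequence):
    h = step(x)
    print(f"t={t}: active sub-network units = {np.flatnonzero(h > 0).tolist()}")
```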


Subjects
Brain , Neurons , Neural Networks (Computer) , Neuronal Plasticity , Receptor Protein-Tyrosine Kinases , Recognition (Psychology)
8.
Clin Imaging ; 77: 213-218, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33992882

ABSTRACT

OBJECTIVES: To assess the potential role of low monoenergetic images in the evaluation of acute appendicitis. METHODS: A retrospective study of 42 patients with pathology-proven acute appendicitis who underwent contrast-enhanced CT on a single-source DECT before surgery. Attenuation, SNR, and CNR were calculated on both monoenergetic and conventional images and compared with 24 abdominal CT scans with a normal appendix. Representative conventional and monoenergetic images were randomized and presented side by side to three abdominal radiologists to determine the preferred images for detecting inflammation. Additionally, six individual acute inflammatory characteristics were graded by two abdominal radiologists on a 1-5 scale to determine the factors contributing to differences between conventional and monoenergetic images. Paired t-tests, Wilcoxon and McNemar tests, and intra-observer error statistics were performed. RESULTS: For the inflamed appendixes, monoenergetic images had overall increased attenuation (average ratio 1.7; P < 0.05), signal-to-noise ratio (6.7 ± 3.1 vs 4.2 ± 1.6; P < 0.001), and contrast-to-noise ratio (12.1 ± 3 vs 9 ± 2.1; P < 0.001). This increase was not found in normal appendixes (P < 0.001 vs p = 0.28-0.44). Subjectively, radiologists showed a significant preference for monoenergetic images (P < 0.001), with inter-reader agreement of 0.84. Two parameters, diffuse bowel wall and mucosal enhancement, received significantly higher scores on monoenergetic images (average 4.3 vs. 3.0, P < 0.001 and 2.8 vs. 2.3, P < 0.03, respectively, with interobserver agreements of 62% and 52%). CONCLUSION: Increased bowel wall conspicuity from enhanced attenuation, SNR, and CNR on low monoenergetic CT images results in a significant preference by radiologists for these images when assessing acutely inflamed appendixes. Thus, close inspection of low monoenergetic images may improve the visualization of acute inflammatory bowel processes.
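The quantitative comparison rests on standard region-of-interest (ROI) statistics. The sketch below shows common SNR and CNR definitions computed from ROI mean attenuation and image noise; the exact ROI placement and noise reference used in the study are assumptions here, and the numbers in the example call are illustrative, not study data.

```python
def snr(roi_mean_hu: float, roi_sd_hu: float) -> float:
    """Signal-to-noise ratio of an ROI: mean attenuation (HU) divided by its
    standard deviation (image noise). A common convention; the study's exact
    definition may differ."""
    return roi_mean_hu / roi_sd_hu

def cnr(roi_mean_hu: float, background_mean_hu: float, noise_sd_hu: float) -> float:
    """Contrast-to-noise ratio: attenuation difference between the appendix
    ROI and an adjacent background ROI, divided by the image noise."""
    return (roi_mean_hu - background_mean_hu) / noise_sd_hu

# Hypothetical values for one inflamed appendix on conventional vs. low-keV
# monoenergetic reconstructions (illustration only).
print("conventional :", round(snr(90, 21), 1), round(cnr(90, 25, 21), 1))
print("monoenergetic:", round(snr(160, 24), 1), round(cnr(160, 40, 24), 1))
```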


Subjects
Appendicitis , Dual-Energy Scanned Projection Radiography , Appendicitis/diagnostic imaging , Contrast Media , Humans , Retrospective Studies , Signal-to-Noise Ratio , X-Ray Computed Tomography
9.
Sci Rep ; 10(1): 19628, 2020 11 12.
Article in English | MEDLINE | ID: mdl-33184422

ABSTRACT

Power-law scaling, a central concept in critical phenomena, is found to be useful in deep learning, where optimized test errors on handwritten digit examples converge as a power law to zero with database size. For rapid decision making with one training epoch, in which each example is presented only once to the trained network, the power-law exponent increased with the number of hidden layers. For the largest dataset, the obtained test error was estimated to be in the proximity of state-of-the-art algorithms for large epoch numbers. Power-law scaling assists with key challenges found in current artificial intelligence applications and facilitates an a priori estimation of the dataset size needed to achieve a desired test accuracy. It establishes a benchmark for measuring training complexity and a quantitative hierarchy of machine learning tasks and algorithms.
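One practical use of such a scaling law is an a priori estimate of the dataset size needed for a target test error: if error ≈ A·N^(−β), then N ≈ (A/ε)^(1/β). The constants in the sketch below are placeholders for illustration, not values fitted in the paper.

```python
def required_dataset_size(target_error: float, prefactor: float, exponent: float) -> int:
    """Invert the power law error = prefactor * N**(-exponent) to estimate the
    number of training examples N needed to reach a target test error.
    prefactor and exponent must be fitted from pilot runs; the values in the
    example call below are placeholders."""
    return int((prefactor / target_error) ** (1.0 / exponent))

# Example: with assumed A = 0.6 and beta = 0.35, reaching a 2% test error
# would require roughly this many training examples.
print(required_dataset_size(target_error=0.02, prefactor=0.6, exponent=0.35))
```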

10.
Sci Rep ; 10(1): 9356, 2020 Jun 04.
Article in English | MEDLINE | ID: mdl-32493994

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

11.
Sci Rep ; 10(1): 6923, 2020 04 23.
Article in English | MEDLINE | ID: mdl-32327697

ABSTRACT

Attempting to imitate the brain's functionalities, researchers have bridged neuroscience and artificial intelligence for decades; however, experimental neuroscience has not directly advanced the field of machine learning (ML). Here, using neuronal cultures, we demonstrate that increased training frequency accelerates neuronal adaptation processes. This mechanism was implemented on artificial neural networks, where a local learning step size increases for coherent consecutive learning steps, and was tested on a simple dataset of handwritten digits, MNIST. Based on our online learning results with a few handwriting examples, the success rates of the brain-inspired algorithms substantially outperform those of commonly used ML algorithms. We speculate that this emerging bridge from slow brain function to ML will promote ultrafast decision making under limited examples, which is the reality in many aspects of human activity, robotic control, and network optimization.
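The brain-inspired rule of enlarging a local learning step when consecutive updates are coherent can be sketched as a per-weight step-size adaptation, loosely reminiscent of sign-based schemes such as Rprop. The increase and decrease factors and the sign-coherence test below are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

rng = np.random.default_rng(3)

n_weights = 5
w = rng.normal(size=n_weights)
step = np.full(n_weights, 0.01)   # local (per-weight) learning rates
prev_grad = np.zeros(n_weights)

def local_step_update(grad, step, prev_grad, up=1.2, down=0.5):
    """Increase a weight's local step size when consecutive gradients agree in
    sign (coherent learning steps), shrink it otherwise. Illustrative rule."""
    coherent = grad * prev_grad > 0
    return np.where(coherent, step * up, step * down)

# Toy quadratic objective 0.5 * ||w - w_target||^2 to drive the example.
w_target = np.ones(n_weights)
for _ in range(20):
    grad = w - w_target
    step = local_step_update(grad, step, prev_grad)
    w -= step * grad
    prev_grad = grad

print("distance to target after 20 steps:", float(np.linalg.norm(w - w_target)))
```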


Subjects
Physiological Adaptation , Algorithms , Artificial Intelligence , Brain/physiology , Computer Simulation , Humans , Machine Learning