Results 1 - 7 of 7
1.
Artif Intell Med; 149: 102804, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38462275

ABSTRACT

Sepsis is a common syndrome in intensive care units (ICUs), and severe sepsis and septic shock are among the leading causes of death worldwide. The purpose of this study is to develop a deep learning model that supports clinicians in efficiently managing sepsis patients in the ICU by predicting mortality, ICU length of stay (>14 days), and hospital length of stay (>30 days). The proposed model was developed using 591 retrospective patient records, each containing 16 tabular features related to the sequential organ failure assessment (SOFA) score. To analyze the tabular data, we designed a modified version of the transformer, an architecture that has achieved extraordinary success in natural language and computer vision tasks in recent years. The main idea of the proposed model is to use a skip-connected token, which combines local (feature-wise tokens) and global (classification token) information as the output of the transformer encoder. The proposed model was compared with three machine learning models (ElasticNet, Extreme Gradient Boosting [XGBoost], and Random Forest) and three deep learning models (Multi-Layer Perceptron [MLP], transformer, and Feature-Tokenizer transformer [FT-Transformer]) and achieved the best performance (mortality, area under the receiver operating characteristic curve [AUROC] 0.8047; ICU length of stay, AUROC 0.8314; hospital length of stay, AUROC 0.7342). We anticipate that the proposed model architecture will provide a promising approach to predicting various clinical endpoints from tabular data such as electronic health and medical records.
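
A minimal PyTorch sketch of the skip-connected token idea, assuming the fusion simply adds the classification token to the mean-pooled feature tokens at the encoder output; the class name, hyperparameters, and feature tokenizer below are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class SkipTokenTransformer(nn.Module):  # hypothetical name
    def __init__(self, n_features=16, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Feature tokenizer: one learned embedding (weight + bias) per tabular feature.
        self.weight = nn.Parameter(torch.randn(n_features, d_model) * 0.02)
        self.bias = nn.Parameter(torch.zeros(n_features, d_model))
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)  # one binary endpoint, e.g. mortality

    def forward(self, x):                                    # x: (batch, n_features)
        tokens = x.unsqueeze(-1) * self.weight + self.bias   # (B, F, d)
        tokens = torch.cat([self.cls.expand(x.size(0), -1, -1), tokens], dim=1)
        h = self.encoder(tokens)
        # Skip-connected token: global [CLS] info plus local feature-wise info.
        fused = h[:, 0] + h[:, 1:].mean(dim=1)
        return self.head(fused)
```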


Subjects
Sepsis; Humans; Retrospective Studies; Prognosis; Sepsis/diagnosis; Organ Dysfunction Scores; ROC Curve; Intensive Care Units
2.
Article in English | MEDLINE | ID: mdl-38347691

ABSTRACT

The generator in a generative adversarial network (GAN) learns image generation in a coarse-to-fine manner, in which the earlier layers learn the overall structure of the image and the later ones refine the details. To propagate the coarse information well, recent works usually build their generators by stacking multiple residual blocks. Although the residual block can produce a high-quality image and be trained stably, it often impedes the information flow in the network. To alleviate this problem, this brief introduces a novel generator architecture that produces the image by combining features obtained through two different branches: the main and auxiliary branches. The main branch produces the image by passing through multiple residual blocks, whereas the auxiliary branch conveys the coarse information from the earlier layers to the later ones. To combine the features of the main and auxiliary branches successfully, we also propose a gated feature fusion module (GFFM) that controls the information flow between those branches. To prove the superiority of the proposed method, this brief provides extensive experiments on various standard datasets including CIFAR-10, CIFAR-100, LSUN, CelebA-HQ, AFHQ, and tiny-ImageNet. Furthermore, we conducted various ablation studies to demonstrate the generalization ability of the proposed method. Quantitative evaluations show that the proposed method achieves impressive GAN performance in terms of Inception score (IS) and Frechet inception distance (FID). For instance, on the tiny-ImageNet dataset, the proposed method improves the FID from 35.13 to 25.00 and the IS from 20.23 to 25.57.
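
A minimal PyTorch sketch of a gated fusion of the two branches, assuming the gate is a sigmoid-activated convolution over the concatenated main/auxiliary features; the exact GFFM design in the brief may differ.

```python
import torch
import torch.nn as nn

class GatedFeatureFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Gate computed from the concatenated main/auxiliary features.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, main, aux):
        g = self.gate(torch.cat([main, aux], dim=1))
        # The gate decides, per position and channel, how much coarse
        # auxiliary information flows into the refined main branch.
        return g * main + (1.0 - g) * aux
```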

3.
Article in English | MEDLINE | ID: mdl-35584073

ABSTRACT

Despite rapid advancements over the past several years, conditional generative adversarial networks (cGANs) are still far from perfect. Although how to provide conditional information to the generator is one of their major concerns, no approach is considered the optimal solution, and related research remains scarce. This brief presents a novel convolution layer, called the conditional convolution (cConv) layer, which incorporates conditional information into the generator of a generative adversarial network (GAN). Unlike the most common cGAN framework, which uses conditional batch normalization (cBN) to transform the normalized feature maps after convolution, the proposed method directly produces conditional features by adjusting the convolutional kernels depending on the conditions. More specifically, in each cConv layer, the weights are conditioned in a simple but effective way through filter-wise scaling and channel-wise shifting operations. In contrast to conventional methods, the proposed method can effectively handle condition-specific characteristics with a single generator. Experimental results on the CIFAR, LSUN, and ImageNet datasets show that a generator with the proposed cConv layer achieves higher-quality conditional image generation than one with the standard convolution layer.
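
A minimal PyTorch sketch of a conditional convolution, assuming "filter-wise scaling" multiplies each output filter by a learned per-class factor and "channel-wise shifting" adds a learned per-class offset to the output; the per-sample loop is for clarity only, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, n_classes, kernel_size=3, padding=1):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.scale = nn.Embedding(n_classes, out_ch)  # filter-wise scale per class
        self.shift = nn.Embedding(n_classes, out_ch)  # channel-wise shift per class
        nn.init.ones_(self.scale.weight)
        nn.init.zeros_(self.shift.weight)
        self.padding = padding

    def forward(self, x, y):                 # y: (batch,) integer class labels
        outs = []
        for xi, yi in zip(x.split(1), y.split(1)):
            # Scale each of the out_ch filters by the class-specific factor.
            w = self.weight * self.scale(yi).view(-1, 1, 1, 1)
            o = F.conv2d(xi, w, padding=self.padding)
            outs.append(o + self.shift(yi).view(1, -1, 1, 1))
        return torch.cat(outs, dim=0)
```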

4.
Neural Netw; 152: 370-379, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35605302

ABSTRACT

This paper introduces a novel convolution method, called generative convolution (GConv), which is simple yet effective for improving generative adversarial network (GAN) performance. Unlike the standard convolution, GConv first selects useful kernels compatible with the given latent vector and then linearly combines the selected kernels to make latent-specific kernels. Using the latent-specific kernels, the proposed method produces latent-specific features that encourage the generator to produce high-quality images. This approach is simple but surprisingly effective. First, the GAN performance is significantly improved with little additional hardware cost. Second, GConv can be employed in existing state-of-the-art generators without modifying the network architecture. To demonstrate the superiority of GConv, this paper provides extensive experiments on various standard datasets including CIFAR-10, CIFAR-100, LSUN-Church, CelebA, and tiny-ImageNet. Quantitative evaluations show that GConv significantly boosts the performance of unconditional and conditional GANs in terms of Frechet inception distance (FID) and Inception score (IS). For example, on the tiny-ImageNet dataset, the proposed method improves the FID from 35.13 to 29.76 and the IS from 20.23 to 22.64.
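
A minimal PyTorch sketch of latent-specific kernels, assuming the latent vector is mapped to softmax weights that linearly combine a bank of candidate kernels, with a grouped convolution applying each sample's kernel; the paper's selection rule and kernel-bank size are not specified here, so these are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GenerativeConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, z_dim, n_kernels=4, kernel_size=3):
        super().__init__()
        # Bank of candidate kernels to be combined per latent vector.
        self.bank = nn.Parameter(
            torch.randn(n_kernels, out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.select = nn.Linear(z_dim, n_kernels)  # latent -> kernel scores
        self.pad = kernel_size // 2

    def forward(self, x, z):                       # z: (batch, z_dim)
        # Softmax scores decide how much each candidate kernel contributes.
        alpha = F.softmax(self.select(z), dim=1)   # (B, K)
        w = torch.einsum('bk,koihw->boihw', alpha, self.bank)
        # Grouped conv applies each sample's latent-specific kernel.
        b, c, h, wd = x.shape
        out = F.conv2d(x.reshape(1, b * c, h, wd),
                       w.reshape(-1, c, *w.shape[3:]),
                       padding=self.pad, groups=b)
        return out.reshape(b, -1, h, wd)
```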


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
5.
IEEE Trans Neural Netw Learn Syst; 33(4): 1811-1818, 2022 Apr.
Article in English | MEDLINE | ID: mdl-33385312

ABSTRACT

In adversarial learning, the discriminator often fails to guide the generator successfully because it distinguishes between real and generated images using trivial or nonrobust features. To alleviate this problem, this brief presents a simple but effective way to improve the performance of the generative adversarial network (GAN) without imposing training overhead or modifying the network architectures of existing methods. The proposed method employs a novel cascading rejection (CR) module for the discriminator, which extracts multiple nonoverlapping features in an iterative manner using the vector rejection operation. Since the extracted diverse features prevent the discriminator from concentrating on meaningless features, the discriminator can guide the generator effectively to produce images that are more similar to the real images. In addition, since the proposed CR module requires only a few simple vector operations, it can be readily applied to existing frameworks with marginal training overhead. Quantitative evaluations on various datasets, including CIFAR-10, CelebA, CelebA-HQ, LSUN, and tiny-ImageNet, confirm that the proposed method significantly improves the performance of GANs and conditional GANs in terms of the Frechet inception distance (FID), which reflects the diversity and visual quality of the generated images.
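
A minimal PyTorch sketch of cascading rejection, where each stage scores the feature with a linear head and then removes, via vector rejection, the component of the feature parallel to that head's weight before the next stage; the stage count and head shapes are illustrative.

```python
import torch
import torch.nn as nn

class CascadingRejection(nn.Module):
    def __init__(self, feat_dim, n_stages=3):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(feat_dim, 1) for _ in range(n_stages))

    def forward(self, f):                    # f: (batch, feat_dim)
        logits = []
        for head in self.heads:
            logits.append(head(f))
            w = head.weight.squeeze(0)       # (feat_dim,)
            # Vector rejection: strip the part of f parallel to w, so the
            # next stage must rely on different, nonoverlapping features.
            proj = (f @ w / w.dot(w)).unsqueeze(1) * w
            f = f - proj
        return torch.cat(logits, dim=1)      # one real/fake score per stage
```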

6.
IEEE Trans Neural Netw Learn Syst; 32(1): 252-265, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32203033

ABSTRACT

Among the various generative adversarial network (GAN)-based image inpainting methods, a coarse-to-fine network with a contextual attention module (CAM) has shown remarkable performance. However, owing to its two stacked generative networks, the coarse-to-fine network requires substantial computational resources, such as convolution operations and network parameters, which results in low speed. To address this problem, we propose a novel network architecture called the parallel extended-decoder path for semantic inpainting (PEPSI) network, which aims at reducing hardware costs while improving inpainting performance. PEPSI consists of a single shared encoding network and parallel decoding networks called the coarse and inpainting paths. The coarse path produces a preliminary inpainting result that trains the encoding network to predict features for the CAM. Simultaneously, the inpainting path generates a higher-quality inpainting result using the refined features reconstructed via the CAM. In addition, we propose Diet-PEPSI, which significantly reduces the network parameters while maintaining performance. In Diet-PEPSI, to capture global contextual information at low hardware cost, we propose novel rate-adaptive dilated convolutional layers that share common weights but produce dynamic features depending on the given dilation rate. Extensive experiments comparing the performance with state-of-the-art image inpainting methods demonstrate that both PEPSI and Diet-PEPSI improve the quantitative scores, i.e., the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), while significantly reducing hardware costs such as computational time and the number of network parameters.
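
A minimal PyTorch sketch of a rate-adaptive dilated convolution, assuming a single shared weight tensor is applied at several dilation rates with a small learned per-rate modulation; the paper's exact modulation and aggregation are assumptions here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RateAdaptiveDilatedConv(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8), kernel_size=3):
        super().__init__()
        # One shared kernel reused at every dilation rate.
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        # A learned per-rate scale modulates the shared kernel (assumption).
        self.scales = nn.ParameterDict(
            {str(r): nn.Parameter(torch.ones(out_ch, 1, 1, 1)) for r in rates})
        self.rates = rates
        self.half_k = kernel_size // 2

    def forward(self, x):
        outs = []
        for r in self.rates:
            w = self.weight * self.scales[str(r)]
            # padding = rate * (k // 2) keeps the spatial size unchanged.
            outs.append(F.conv2d(x, w, padding=r * self.half_k, dilation=r))
        return sum(outs)  # aggregate multi-rate context into one feature map
```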

7.
Article in English | MEDLINE | ID: mdl-31751239

ABSTRACT

Various power-constrained contrast enhancement (PCCE) techniques have been applied to organic light emitting diode (OLED) displays to reduce the power demands of the display while preserving image quality. In this paper, we propose a new deep learning-based PCCE scheme that constrains the power consumption of OLED displays while enhancing the contrast of the displayed image. In the proposed method, the power consumption is constrained by simply reducing the brightness by a certain ratio, whereas the perceived visual quality is preserved as much as possible by enhancing the contrast of the image using a convolutional neural network (CNN). Furthermore, our CNN can learn the PCCE technique without a reference image through unsupervised learning. Experimental results show that the proposed method is superior to conventional ones in terms of image quality assessment metrics such as the visual saliency-induced index (VSI) and the measure of enhancement (EME).
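
A minimal PyTorch sketch of the dim-then-enhance idea, assuming power is constrained by uniformly scaling pixel brightness and a small CNN then restores contrast; the network shape and the unsupervised loss are not reproduced, so everything below is illustrative.

```python
import torch
import torch.nn as nn

class PCCENet(nn.Module):  # hypothetical name
    def __init__(self, channels=3, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, img, power_ratio=0.8):
        # img in [0, 1]; OLED power roughly tracks emitted luminance, so a
        # uniform brightness reduction saves power (assumption: the paper's
        # power model is more refined than this simple scaling).
        dimmed = img * power_ratio
        # The CNN enhances contrast to compensate for the lost brightness.
        return self.body(dimmed)
```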
