Results 1 - 5 of 5
1.
Comput Biol Med; 150: 106148, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36252363

ABSTRACT

Dermoscopic images depict pigmentation attributes of the skin surface and are widely used in the medical community for detecting skin abnormalities, disease, and cancer. Identifying such abnormalities, however, requires trained eyes, and accurate detection is time-intensive; computerized detection schemes, particularly those based on deep learning, have therefore become essential. In this paper, a convolutional deep neural network, S2C-DeLeNet, is proposed, which (i) segments lesion regions from the unaffected skin tissue in dermoscopic images using a segmentation sub-network, and (ii) classifies each image by medical condition type using parameters transferred from that segmentation sub-network. The segmentation sub-network uses an EfficientNet-B4 backbone as the encoder, and the classification sub-network contains a 'Classification Feature Extraction' system that draws on the trained segmentation feature maps for lesion prediction. The classification architecture further includes (i) a 'Feature Coalescing Module' that tracks and mixes features of each dimension from both encoder and decoder, and (ii) a '3D-Layer Residuals' block that creates a parallel pathway of low-dimensional, high-variance features for better classification. After fine-tuning on a publicly accessible dataset, the network achieves a mean dice score of 0.9494 for segmentation, beating existing segmentation strategies, and a mean accuracy of 0.9103 for classification, outperforming conventional and well-known classifiers. The fine-tuned network also performs well on other skin cancer segmentation datasets under cross-inference. Extensive experiments demonstrate the efficacy of the network not only for dermoscopic images but also for other medical imaging modalities, showing its potential as a systematic diagnostic solution in dermatology and possibly beyond.
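
For readers who want a concrete picture of the shared-encoder idea in this abstract, the following is a minimal PyTorch sketch, not the authors' implementation: a tiny convolutional encoder stands in for the EfficientNet-B4 backbone, a small decoder produces the lesion mask, and a pooled classification head reuses the encoder's feature maps. All layer sizes and module names are illustrative assumptions.

```python
# Minimal sketch: a segmentation sub-network whose encoder features also feed a
# classifier. Shapes and widths are assumptions, not the S2C-DeLeNet design.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        f1 = self.block1(x)      # lower-level feature map
        f2 = self.block2(f1)     # higher-level feature map
        return f1, f2

class SegClassNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = TinyEncoder()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),   # 1-channel lesion-mask logits
        )
        # stand-in for the 'Classification Feature Extraction' path:
        # pool the shared feature maps and classify
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        f1, f2 = self.encoder(x)
        return self.decoder(f2), self.classifier(f2)

mask_logits, class_logits = SegClassNet()(torch.randn(1, 3, 128, 128))
print(mask_logits.shape, class_logits.shape)  # (1, 1, 128, 128) and (1, 2)
```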


Subjects
Dermoscopy , Skin Neoplasms , Humans , Dermoscopy/methods , Skin Neoplasms/diagnostic imaging , Neural Networks, Computer , Skin/diagnostic imaging , Image Processing, Computer-Assisted/methods
2.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 3392-3395, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086237

ABSTRACT

An ambulatory respiration signal extraction system is required for continuous surveillance of patients with respiratory deficiency. The capnograph signal has received much attention in recent years as a valuable indicator of respiratory conditions. However, the typical capnograph extraction method is quite expensive and also unpleasant for the patient because it involves a nasal cannula. With the advent of wearable sensor technology, there has been significant research on using photoplethysmogram (PPG) signals as a less expensive way to extract respiratory information. In this paper, we propose CapNet, a novel deep learning-based framework that takes the regular PPG signal as input and estimates the capnograph signal as output. Training, validation, and testing of the networks in CapNet are done on the IEEE TBME Respiratory Rate Benchmark dataset using the reference capnograph respiration signals. With lower MSE and higher cross-correlation values, CapNet outperforms two traditional signal processing algorithms and a recently proposed deep neural network, RespNet. The proposed framework is expected to be implementable and feasible for continuous monitoring of patients with respiratory ailments.
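
As a rough illustration of the signal-in/signal-out formulation described above (PPG in, estimated capnograph out, trained with MSE), here is a minimal PyTorch sketch; it does not reproduce the actual CapNet architecture, and all layer sizes and the 1024-sample window are assumptions.

```python
# Hedged sketch of a PPG-to-capnograph sequence regressor (not CapNet itself).
import torch
import torch.nn as nn

class PPG2Capno(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),   # estimated capnograph
        )

    def forward(self, ppg):          # ppg: (batch, 1, samples)
        return self.net(ppg)

model = PPG2Capno()
ppg = torch.randn(4, 1, 1024)         # dummy 1024-sample PPG windows
capno_hat = model(ppg)
loss = nn.functional.mse_loss(capno_hat, torch.randn_like(capno_hat))
loss.backward()                       # standard MSE training step
print(capno_hat.shape)                # torch.Size([4, 1, 1024])
```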


Subjects
Deep Learning , Photoplethysmography , Capnography , Humans , Photoplethysmography/methods , Respiratory Rate , Signal Processing, Computer-Assisted
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 1024-1027, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086584

ABSTRACT

Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia, and the electrocardiogram (ECG) is a powerful non-invasive tool for its clinical diagnosis. Automatic AF detection remains a very challenging task due to the high inter-patient variability of ECGs. In this paper, an automatic AF detection scheme is proposed based on a deep learning network that utilizes both the raw ECG signal and its discrete wavelet transform (DWT). To exploit the time-frequency characteristics of the ECG signal, a first-level DWT is applied, and both the high- and low-frequency components are fed to the 1D CNN network in parallel. If only the transformed data were used, the original variations in the signal, which also carry useful information for identifying abnormalities, might not be explored. A multi-phase training scheme is proposed that facilitates parallel optimization for efficient gradient propagation. In the proposed network, features are extracted directly from the raw ECG and the DWT coefficients, followed by two fully connected layers that further process the features and detect arrhythmia in the recordings. The classification performance of the proposed method is tested on the PhysioNet-2017 dataset, and it offers superior performance in detecting AF from normal, alternating, and noisy cases in comparison to some state-of-the-art methods.
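
The parallel raw-ECG/DWT idea can be sketched as follows with PyTorch and PyWavelets; this is not the authors' network, and the wavelet choice ('db4'), layer widths, four output classes, and the 1024-sample single-lead window are assumptions for illustration.

```python
# Illustrative two-branch 1D CNN: one branch sees the raw ECG, the other the
# level-1 DWT approximation/detail coefficients; pooled features are merged by
# fully connected layers. Not the paper's architecture.
import numpy as np
import pywt                      # PyWavelets
import torch
import torch.nn as nn

class TwoBranchAF(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.raw_branch = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
        self.dwt_branch = nn.Sequential(
            nn.Conv1d(2, 16, 7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
        self.fc = nn.Sequential(nn.Linear(32, 32), nn.ReLU(),
                                nn.Linear(32, num_classes))

    def forward(self, raw, dwt):
        f = torch.cat([self.raw_branch(raw).flatten(1),
                       self.dwt_branch(dwt).flatten(1)], dim=1)
        return self.fc(f)

ecg = np.random.randn(1024).astype(np.float32)             # dummy single-lead ECG
cA, cD = pywt.dwt(ecg, 'db4')                               # level-1 DWT
raw = torch.from_numpy(ecg)[None, None, :]                  # (1, 1, 1024)
dwt = torch.from_numpy(np.stack([cA, cD]))[None].float()    # (1, 2, len(cA))
print(TwoBranchAF()(raw, dwt).shape)                        # torch.Size([1, 4])
```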


Subjects
Atrial Fibrillation , Deep Learning , Atrial Fibrillation/diagnosis , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Humans , Wavelet Analysis
4.
Ultrasonics; 110: 106283, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33166787

ABSTRACT

Ultrasound-based non-invasive elasticity imaging modalities have received significant attention for tissue characterization over the last few years. Although substantial advances have been made, conventional Shear Wave Elastography (SWE) methods still suffer from poor image quality in regions far from the push location, particularly methods that rely on a single focused ultrasound push beam to generate shear waves. In this study, we propose DSWE-Net, a novel deep learning-based approach that constructs Young's modulus maps from ultrasonically tracked tissue velocity data resulting from a single acoustic radiation force (ARF) push. The proposed network employs a 3D convolutional encoder, followed by a recurrent block consisting of several Convolutional Long Short-Term Memory (ConvLSTM) layers, to extract high-level spatio-temporal features from the different time frames of the input velocity data. Finally, a pair of coupled 2D convolutional decoder blocks reconstructs the modulus image and additionally performs inclusion segmentation by generating a binary mask. We also propose a multi-task learning loss function for end-to-end training of the network on 1260 data samples obtained from a simulation environment, including both bi-level and multi-level phantom structures. The performance of the proposed network is evaluated on 140 synthetic test samples, and the results are compared both qualitatively and quantitatively with those of the current state-of-the-art method, Local Phase Velocity Based Imaging (LPVI). With an average SSIM of 0.90, RMSE of 0.10, and PSNR of 20.69 dB, DSWE-Net performs much better than LPVI on the imaging task. Our method also achieves an average IoU score of 0.81 on the segmentation task, which makes it suitable for localizing inclusions as well. In this initial study, we also show that our method gains an overall improvement of 0.09 in SSIM, 4.81 dB in PSNR, 2.02 dB in CNR, and 0.09 in RMSE over LPVI on a completely unseen set of CIRS tissue-mimicking phantom data. This demonstrates its better generalization capability and shows its potential for use in real-world clinical practice.
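
The multi-task objective described above (modulus-map reconstruction plus inclusion segmentation) can be illustrated with a minimal PyTorch sketch; the specific loss terms (MSE + BCE) and the weighting factor are assumptions for illustration, not the loss defined in the paper.

```python
# Minimal sketch of a two-term multi-task loss: one term for the reconstructed
# Young's modulus map, one for the binary inclusion mask.
import torch
import torch.nn.functional as F

def multitask_loss(pred_modulus, true_modulus, pred_mask_logits, true_mask,
                   seg_weight=0.5):
    recon = F.mse_loss(pred_modulus, true_modulus)               # modulus-map term
    seg = F.binary_cross_entropy_with_logits(pred_mask_logits,   # inclusion-mask term
                                             true_mask)
    return recon + seg_weight * seg

# dummy 64x64 outputs for one sample
pred_map = torch.rand(1, 1, 64, 64)
true_map = torch.rand(1, 1, 64, 64)
pred_logits = torch.randn(1, 1, 64, 64)
true_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
print(multitask_loss(pred_map, true_map, pred_logits, true_mask).item())
```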

5.
Nephron Clin Pract; 128(1-2): 166-70, 2014.
Article in English | MEDLINE | ID: mdl-25412642

ABSTRACT

BACKGROUND: Acute kidney injury (AKI) is common in hospitalized patients. Despite the progress made in the last decade, early identification of AKI cases remains a challenge. In recent years, electronic AKI alert (e-AKI alert) systems have been tested, usually based on changes in serum creatinine (Cr) values. However, these methods do not cover one common scenario: when no preadmission Cr value is available for a patient to compare against, an e-AKI alert cannot be issued. An alternative algorithm is therefore essential to produce e-AKI alerts in such scenarios. METHOD: We have developed e-AKI alert algorithms that compare serum Cr values at presentation with previous results, within the classifications specified by the KDIGO AKI guideline. Where a comparator is not available, we have produced an age- and sex-matched 'population-based reference Cr value' from 137,000 serum Cr values extracted from general practice blood tests in our Telepath system. RESULTS: Cr results were split by gender, and within each group the Cr values were stratified by year of age. The median Cr for each individual year of age was identified and plotted against age to give separate graphs for males and females, which gave excellent fits (R²) to cubic regressions. CONCLUSION: Population-based estimated reference Cr measurements from community blood test results are a more robust method of baseline Cr estimation for generating potential e-AKI alerts, helping early recognition and treatment of AKI cases and leading to improved outcomes.
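
The baseline-estimation idea can be sketched as follows in Python; the creatinine values are synthetic placeholders rather than the 137,000 Telepath results, one cubic fit would be produced per sex in practice, and the 1.5x ratio stands in for the KDIGO creatinine criterion, so local alert rules may differ.

```python
# Hedged sketch: cubic regression of median serum creatinine vs. age, used as a
# population-based reference when no preadmission creatinine exists.
import numpy as np

ages = np.arange(18, 91)                                               # years
median_cr = 60 + 0.4 * (ages - 18) + np.random.normal(0, 2, ages.size) # µmol/L, synthetic

coeffs = np.polyfit(ages, median_cr, deg=3)    # cubic fit (one per sex in practice)
reference_cr = np.poly1d(coeffs)

def e_aki_alert(presenting_cr, age, prior_cr=None):
    """Return True if an e-AKI alert should fire for this creatinine result."""
    baseline = prior_cr if prior_cr is not None else reference_cr(age)
    return presenting_cr >= 1.5 * baseline     # assumed KDIGO-style ratio threshold

print(e_aki_alert(presenting_cr=160.0, age=70))                 # population baseline
print(e_aki_alert(presenting_cr=95.0, age=70, prior_cr=90.0))   # prior result available
```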


Subjects
Acute Kidney Injury/blood , Clinical Alarms , Creatinine/blood , Adult , Algorithms , Female , Humans , Kidney Function Tests/instrumentation , Kidney Function Tests/methods , Male , Reference Values , Young Adult