Results 1 - 20 of 169
1.
J Contam Hydrol ; 267: 104437, 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-39341165

ABSTRACT

The application of the simulation-optimization method for groundwater contamination source identification (GCSI) encounters two main challenges: the substantial time cost of calling the simulation model, and the limitations on the accuracy of identification results due to the complexity, nonlinearity, and ill-posed nature of the inverse problem. To address these issues, we have innovatively developed an inversion framework based on ensemble learning strategies. This framework comprises a stacking ensemble model (SEM), which integrates three distinct machine learning models (Extremely Randomized Trees, Adaptive Boosting, and Bidirectional Gated Recurrent Unit), and an ensemble optimizer (E-GKSEEFO), which combines two newly proposed swarm intelligence optimizers (Genghis Khan Shark Optimizer and Electric Eel Foraging Optimizer). Specifically, the SEM serves as a surrogate model for the groundwater numerical simulation model. Compared to the original simulation model, it significantly reduces time cost while maintaining accuracy. The E-GKSEEFO, functioning as the search strategy for the optimization model, greatly enhances the accuracy of the optimization results. We have verified the performance of the SEM-E-GKSEEFO ensemble inversion framework through two hypothetical scenarios derived from an actual coal gangue pile. The results are as follows. (1) The SEM exhibits improved fitting performance compared to single machine learning models when dealing with high-dimensional nonlinear data from GCSI. (2) The E-GKSEEFO achieves significantly higher accuracy in the identification results of GCSI than individual optimizers. These findings affirm the effectiveness and superiority of the proposed SEM-E-GKSEEFO ensemble inversion framework.
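As a rough illustration of the stacking-ensemble surrogate idea described above, the following Python sketch stacks Extremely Randomized Trees and Adaptive Boosting regressors under a linear meta-learner; the synthetic inputs, the MLP standing in for the BiGRU base learner, and all hyperparameters are assumptions for demonstration, not the authors' configuration.

```python
# Minimal sketch of a stacking-ensemble surrogate model, assuming synthetic data in
# place of the paper's groundwater simulation outputs; the BiGRU base learner is
# replaced here by an MLPRegressor so the example stays self-contained in scikit-learn.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, AdaBoostRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))                           # hypothetical source parameters (location, release strength, ...)
y = X @ rng.normal(size=6) + 0.1 * np.sin(5 * X[:, 0])   # stand-in for simulated concentrations

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

surrogate = StackingRegressor(
    estimators=[
        ("ert", ExtraTreesRegressor(n_estimators=200, random_state=0)),
        ("ada", AdaBoostRegressor(n_estimators=200, random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)),
    ],
    final_estimator=Ridge(),   # meta-learner that combines the base predictions
)
surrogate.fit(X_tr, y_tr)
print("surrogate R^2 on held-out data:", surrogate.score(X_te, y_te))
```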

2.
Int J Med Sci ; 21(12): 2252-2260, 2024.
Article in English | MEDLINE | ID: mdl-39310268

ABSTRACT

Background: The early detection of arteriovenous (AV) access dysfunction is crucial for maintaining the patency of vascular access. This study aimed to use deep learning to predict AV access malfunction necessitating further vascular management. Methods: This prospective cohort study enrolled prevalent hemodialysis (HD) patients with an AV fistula or AV graft from a single HD center. Their AV access bruit sounds were recorded weekly using an electronic stethoscope from three different sites (arterial needle site, venous needle site, and the midpoint between the arterial and venous needle sites) before HD sessions. The audio signals were converted to Mel spectrograms using Fourier transformation and utilized to develop deep learning models. Three deep learning models, (1) Convolutional Neural Network (CNN), (2) Convolutional Recurrent Neural Network (CRNN), and (3) Vision Transformer-Gated Recurrent Unit (ViT-GRU), were trained and compared to predict the likelihood of dysfunctional AV access. Results: A total of 437 audio recordings were obtained from 84 patients. The CNN model outperformed the other models in the test set, with an F1 score of 0.7037 and an area under the receiver operating characteristic curve (AUROC) of 0.7112. The ViT-GRU model had high performance in out-of-fold predictions, with an F1 score of 0.7131 and AUROC of 0.7745, but low generalization ability in the test set, with an F1 score of 0.5225 and AUROC of 0.5977. Conclusions: The CNN model based on Mel spectrograms could predict malfunctioning AV access requiring vascular intervention within 10 days. This approach could serve as a useful screening tool for high-risk AV access.
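To make the audio-to-spectrogram step concrete, here is a hedged sketch of converting a bruit recording into a Mel spectrogram and scoring it with a small CNN; the synthetic signal, network layout, and sizes are illustrative assumptions rather than the study's trained model.

```python
# Hedged sketch: convert a recording to a log-Mel spectrogram and score it with a tiny CNN,
# using a synthetic 5-second signal instead of real stethoscope data.
import numpy as np
import librosa
import torch
import torch.nn as nn

sr = 8000
y = np.random.default_rng(0).normal(size=5 * sr).astype(np.float32)   # placeholder audio

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
mel_db = librosa.power_to_db(mel, ref=np.max)                         # log-scaled Mel spectrogram

class BruitCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # score for dysfunctional access (sigmoid applied outside)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

x = torch.from_numpy(mel_db.astype(np.float32)).unsqueeze(0).unsqueeze(0)  # (1, 1, n_mels, frames)
prob = torch.sigmoid(BruitCNN()(x))
print("predicted dysfunction probability:", float(prob))
```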


Subjects
Arteriovenous Shunt, Surgical , Deep Learning , Renal Dialysis , Humans , Female , Male , Middle Aged , Prospective Studies , Aged , Renal Dialysis/methods , ROC Curve , Sound Spectrography/methods , Neural Networks, Computer
3.
Comput Biol Med ; 182: 109138, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39305732

ABSTRACT

Numerous automatic sleep stage classification systems have been developed, but none have become effective assistive tools for sleep technicians due to issues with generalization. Four key factors that hinder the generalization of these models are the instrument, the recording montage, the subject type, and the scoring manual. This study aimed to develop a deep learning model that addresses generalization problems by integrating enzyme-inspired specificity and employing separate training approaches. Subject type and scoring manual factors were controlled, while the focus was on the instrument and recording montage factors. The proposed model consists of three sets of signal-specific models, including EEG-, EOG-, and EMG-specific models. The EEG-specific models further include three sets of channel-specific models. All signal-specific and channel-specific models were established with data manipulation and weighted loss strategies, resulting in three sets of data manipulation models and class-specific models, respectively. These models were CNNs. Additionally, BiLSTM models were applied to the EEG- and EOG-specific models to obtain temporal information. Finally, the sleep stage classification task was handled by 'the-last-dense' layer. The optimal sampling frequency for each physiological signal was identified and used during the training process. The proposed model was trained on the MGH dataset and evaluated both within the dataset and across datasets. On the MGH dataset, it achieved an overall accuracy of 81.05 %, MF1 of 79.05 %, Kappa of 0.7408, and per-class F1-scores of W (84.98 %), N1 (58.06 %), N2 (84.82 %), N3 (79.20 %), and REM (88.17 %). Performances on cross-datasets are as follows: SHHS1 (200 records) reached 79.54 %, 70.56 %, and 0.7078; SHHS2 (200 records) achieved 76.77 %, 66.30 %, and 0.6632; Sleep-EDF (153 records) gained 78.52 %, 72.13 %, and 0.7031; and BCI-MU (local dataset, 94 records) achieved 83.57 %, 82.17 %, and 0.7769 for overall accuracy, MF1, and Kappa, respectively. Additionally, the proposed model has approximately 9.3 M trainable parameters and takes around 26 s to process one PSG record. The results indicate that the proposed model demonstrates generalizability in sleep stage classification and shows potential as a feasible tool for real-world applications. Additionally, enzyme-inspired specificity effectively addresses the challenges posed by varying recording montages, while the identified optimal frequencies mitigate instrument-related issues.
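The weighted-loss strategy mentioned above can be illustrated with a short sketch that assigns inverse-frequency class weights to a cross-entropy loss over the five sleep stages; the stage proportions below are hypothetical.

```python
# Minimal sketch of a class-weighted loss for imbalanced sleep stages (W, N1, N2, N3, REM);
# the counts and weighting scheme are illustrative assumptions, not the paper's exact recipe.
import torch
import torch.nn as nn

stage_counts = torch.tensor([30.0, 5.0, 45.0, 12.0, 8.0])          # hypothetical % of epochs per stage
weights = stage_counts.sum() / (len(stage_counts) * stage_counts)  # inverse-frequency weights
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(16, 5)               # batch of 16 epochs, 5 stage scores each
targets = torch.randint(0, 5, (16,))      # ground-truth stages
print("weighted loss:", float(criterion(logits, targets)))
```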

4.
Biomimetics (Basel) ; 9(9)2024 Sep 22.
Article in English | MEDLINE | ID: mdl-39329597

ABSTRACT

The prediction of total ionospheric electron content (TEC) is of great significance for space weather monitoring and wireless communication. Recently, deep learning models have become increasingly popular in TEC prediction. However, these deep learning models usually contain a large number of hyperparameters. Finding the optimal hyperparameters (also known as hyperparameter optimization) is currently a great challenge, directly affecting the predictive performance of the deep learning models. The Beluga Whale Optimization (BWO) algorithm is a swarm intelligence optimization algorithm that can be used to optimize hyperparameters of deep learning models. However, it is easy to fall into local minima. This paper analyzed the drawbacks of BWO and proposed an improved BWO algorithm, named FAMBWO (Firefly Assisted Multi-strategy Beluga Whale Optimization). Our proposed FAMBWO was compared with 11 state-of-the-art swarm intelligence optimization algorithms on 30 benchmark functions, and the results showed that our improved algorithm had faster convergence speed and better solutions on almost all benchmark functions. Then we proposed an automated machine learning framework FAMBWO-MA-BiLSTM for TEC prediction, where MA-BiLSTM is for TEC prediction and FAMBWO for hyperparameters optimization. We compared it with grid search, random search, Bayesian optimization algorithm and beluga whale optimization algorithm. Results showed that the MA-BiLSTM model optimized by FAMBWO is significantly better than the MA-BiLSTM model optimized by grid search, random search, Bayesian optimization algorithm, and BWO.
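For readers unfamiliar with swarm-style hyperparameter search, the sketch below shows a generic population-based optimizer on a benchmark sphere function; it is not the FAMBWO algorithm itself, and the update rule and settings are simplified assumptions (in practice the objective would wrap a model's validation loss).

```python
# Illustrative population-based (swarm-style) search on a benchmark function;
# a generic random-attraction scheme, not the FAMBWO algorithm.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))   # benchmark objective; replace with validation loss of MA-BiLSTM in real use

rng = np.random.default_rng(0)
dim, pop_size, iters = 5, 20, 200
pop = rng.uniform(-5, 5, size=(pop_size, dim))
fitness = np.array([sphere(p) for p in pop])

for t in range(iters):
    best = pop[fitness.argmin()]
    step = 0.9 * (1 - t / iters)   # shrinking step size
    # each candidate moves toward the current best with a small random perturbation
    pop = pop + step * (best - pop) + 0.1 * rng.normal(size=pop.shape)
    fitness = np.array([sphere(p) for p in pop])

print("best solution:", pop[fitness.argmin()], "objective:", fitness.min())
```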

5.
Proc Natl Acad Sci U S A ; 121(40): e2413462121, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39320916

ABSTRACT

Pore structures provide extra degrees of freedom for the design of porous media, leading to desirable properties, such as high catalytic rate, energy storage efficiency, and specific strength. This unfortunately makes the porous media susceptible to failure. Deep understanding of the failure mechanism in microstructures is key to customizing high-performance crack-resistant porous media. However, solving the fracture problem of porous materials is computationally intractable due to the highly complicated configurations of microstructures. To bridge the structural configurations and fracture responses of random porous media, a unique generative deep learning model is developed. A two-step strategy is proposed to deconstruct the fracture process, which sequentially corresponds to elastic deformation and crack propagation. The geometry of the microstructure is translated into a scalar elastic field as an intermediate variable, and then the crack path is predicted. The neural network precisely characterizes the strong interactions among pore structures, the multiscale behaviors of fracture, and the discontinuous essence of crack propagation. Crack paths in random porous media are accurately predicted by simply inputting the images of targets, without any additional physical information. The prediction model enjoys an outstanding performance with a prediction accuracy of 90.25% and possesses a robust generalization capability. The accuracy of the present model sets a new record, and the prediction is accomplished within a second. This study opens an avenue to high-throughput evaluation of the fracture behaviors of heterogeneous materials with complex geometries.

6.
Bioengineering (Basel) ; 11(8)2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39199800

ABSTRACT

Beat-by-beat monitoring of hemodynamic parameters in the left ventricle contributes to the early diagnosis and treatment of heart failure, valvular heart disease, and other cardiovascular diseases. Current accurate measurement methods for ventricular hemodynamic parameters are inconvenient for monitoring hemodynamic indexes in daily life. The objective of this study is to propose a method for estimating intraventricular hemodynamic parameters in a beat-by-beat manner based on non-invasive PCG (phonocardiogram) and PPG (photoplethysmography) signals. Three beagle dogs were used as subjects. PCG, PPG, electrocardiogram (ECG), and invasive blood pressure signals in the left ventricle were synchronously collected while various doses of epinephrine were injected intravenously to produce hemodynamic variations. A total of 40 records (over 12,000 cardiac cycles) were obtained. A deep neural network was built to simultaneously estimate four hemodynamic parameters of one cardiac cycle from the PCG and PPG of that cycle. The outputs of the network were four hemodynamic parameters: left ventricular systolic blood pressure (SBP), left ventricular diastolic blood pressure (DBP), maximum rate of left ventricular pressure rise (MRR), and maximum rate of left ventricular pressure decline (MRD). The model built in this study consisted of a residual convolutional module and a bidirectional recurrent neural network module, which learned local features and context relations, respectively. The network was trained as a regression model with a mean-square-error loss. When the network was trained and tested on one subject using a five-fold validation scheme, the performance was very good. The average correlation coefficients (CCs) between the estimated values and measured values were generally greater than 0.90 for SBP, DBP, MRR, and MRD. However, when the network was trained with one subject's data and tested with another subject's data, the performance degraded somewhat. The average CCs reduced from over 0.9 to 0.7 for SBP, DBP, and MRD; however, MRR had higher consistency, with the average CC reducing from over 0.9 to about 0.85 only. The generalizability across subjects could be improved if individual differences were considered. The performance indicates the possibility that hemodynamic parameters could be estimated from PCG and PPG signals collected on the body surface. With the rapid development of wearable devices, this approach has promising applications for self-monitoring in home healthcare environments.
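A minimal sketch of the described architecture, a residual 1-D convolutional block followed by a bidirectional recurrent layer regressing the four hemodynamic parameters with an MSE loss, is shown below; layer sizes and the input length are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: residual 1-D convolutions for local features, a bidirectional GRU for
# context, and a 4-output regression head (SBP, DBP, MRR, MRD) trained with MSE.
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(ch, ch, 5, padding=2), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 5, padding=2), nn.BatchNorm1d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.conv(x))   # residual connection

class HemoNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv1d(2, 32, 7, padding=3)   # 2 input channels: PCG and PPG
        self.res = ResBlock1d(32)
        self.rnn = nn.GRU(32, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, 4)                # SBP, DBP, MRR, MRD
    def forward(self, x):                            # x: (batch, 2, samples)
        h = self.res(self.stem(x)).transpose(1, 2)   # -> (batch, time, features)
        out, _ = self.rnn(h)
        return self.head(out[:, -1])                 # use the last time step

model, loss_fn = HemoNet(), nn.MSELoss()
x = torch.randn(8, 2, 1000)                          # one cardiac cycle per sample (placeholder length)
print("loss:", float(loss_fn(model(x), torch.randn(8, 4))))
```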

7.
Sci Total Environ ; 950: 175233, 2024 Nov 10.
Article in English | MEDLINE | ID: mdl-39102955

ABSTRACT

Accurate forecast of fine particulate matter (PM2.5) is crucial for city air pollution control, yet remains challenging due to the complex urban atmospheric chemical and physical processes. Recently deep learning has been routinely applied for better urban PM2.5 forecasts. However, their capacity to represent the spatiotemporal urban atmospheric processes remains underexplored, especially compared with traditional approaches such as chemistry-transport models (CTMs) and shallow statistical methods other than deep learning. Here we probe such urban-scale representation capacity of a spatiotemporal deep learning (STDL) model for 24-hour short-term PM2.5 forecasts at six urban stations in Rizhao, a coastal city in China. Compared with two operational CTMs and three statistical models, the STDL model shows its superiority with improvements in all five evaluation metrics, notably in root mean square error (RMSE) for forecasts at lead times within 12 h with reductions of 49.8 % and 47.8 % respectively. This demonstrates the STDL model's capacity to represent nonlinear small-scale phenomena such as street-level emissions and urban meteorology that are in general not well represented in either CTMs or shallow statistical models. This gain of small-scale representation in forecast performance decreases at increasing lead times, leading to similar RMSEs to the statistical methods (linear shallow representations) at about 12 h and to the CTMs (mesoscale representations) at 24 h. The STDL model performs especially well in winter, when complex urban physical and chemical processes dominate the frequent severe air pollution, and in moisture conditions fostering hygroscopic growth of particles. The DL-based PM2.5 forecasts align with observed trends under various humidity and wind conditions. Such investigation into the potential and limitations of deep learning representation for urban PM2.5 forecasting could hopefully inspire further fusion of distinct representations from CTMs and deep networks to break the conventional limits of short-term PM2.5 forecasts.
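The lead-time-wise RMSE comparison used in the evaluation can be reproduced with a few lines of NumPy; the arrays below are synthetic stand-ins for observed and forecast hourly PM2.5.

```python
# Small sketch of computing RMSE per forecast lead time, with made-up arrays standing in
# for observed and predicted hourly PM2.5 concentrations.
import numpy as np

rng = np.random.default_rng(0)
obs = rng.uniform(10, 150, size=(100, 24))                              # 100 forecast issues x 24 lead hours
pred = obs + rng.normal(scale=np.linspace(5, 25, 24), size=(100, 24))   # error grows with lead time

rmse_per_lead = np.sqrt(np.mean((pred - obs) ** 2, axis=0))
for h in (1, 6, 12, 24):
    print(f"RMSE at {h:2d} h lead time: {rmse_per_lead[h - 1]:.1f} ug/m3")
```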

8.
Pol J Radiol ; 89: e368-e377, 2024.
Article in English | MEDLINE | ID: mdl-39139256

ABSTRACT

Purpose: To detect foot ulcers in diabetic patients by analysing thermal images of the foot using a deep learning model and to estimate the effectiveness of the proposed model by comparing it with existing studies. Material and methods: Open-source thermal images were used for the study. The dataset consists of two types of images of diabetic patients' feet: normal and abnormal foot images. It contains 1055 images in total; 543 are normal foot images, and the remaining 512 are abnormal foot images. The dataset was converted into a new, pre-processed dataset by applying Canny edge detection and watershed segmentation. This pre-processed dataset was then balanced and enlarged using data augmentation, after which a deep learning model was applied to diagnose foot ulcers. This pre-processing with Canny edge detection and segmentation enhances the model's predictive performance and reduces the computational cost. Results: Our proposed model, utilizing ResNet50 and EfficientNetB0, was tested on both the original dataset and the pre-processed dataset after applying edge detection and segmentation. The results were highly promising, with ResNet50 achieving 89% and 89.1% accuracy for the two datasets, respectively, and EfficientNetB0 surpassing this with 96.1% and 99.4% accuracy, respectively. Conclusions: Our study offers a practical solution for foot ulcer detection, particularly in situations where expert analysis is not readily available. The efficacy of our models was tested using real images, and they outperformed other available models, demonstrating their potential for real-world application.
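A hedged sketch of the preprocessing plus transfer-learning pipeline is given below: a Canny edge map is computed per image and an EfficientNet-B0 head is adapted to the two classes. The watershed step is omitted, the input is a random placeholder, and the layer replacement follows the standard torchvision API rather than the authors' code.

```python
# Hedged sketch: Canny edges from a placeholder thermogram plus an EfficientNet-B0 with a
# two-class head; weights=None keeps the example offline (assumes torchvision >= 0.13 API).
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

img = np.random.default_rng(0).uniform(0, 255, size=(224, 224)).astype(np.uint8)  # placeholder thermal image
edges = cv2.Canny(img, threshold1=50, threshold2=150)                             # edge map adds structural cues

model = models.efficientnet_b0(weights=None)   # use pretrained ImageNet weights when fine-tuning for real
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)   # normal vs. abnormal foot
model.eval()

x = torch.from_numpy(np.stack([img, edges, img]) / 255.0).float().unsqueeze(0)    # fake 3-channel input
print("class logits:", model(x).detach().numpy())
```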

9.
Am J Otolaryngol ; 45(6): 104474, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39137696

ABSTRACT

OBJECTIVE: Early diagnosis of laryngeal cancer (LC) is crucial, particularly in rural areas. Despite existing studies on deep learning models for LC identification, challenges remain in selecting suitable models for rural areas with shortages of laryngologists and limited computing resources. We present the intelligent laryngeal cancer detection system (ILCDS), a deep learning-based solution tailored for effective LC screening in resource-constrained rural areas. METHODS: We compiled a dataset comprising 2023 laryngoscopic images and applied data augmentation techniques for dataset expansion. Subsequently, we utilized eight deep learning models-AlexNet, VGG, ResNet, DenseNet, MobileNet, ShuffleNet, Vision Transformer, and Swin Transformer-for LC identification. A comprehensive evaluation of their performance and efficiency was conducted, and the most suitable model was selected to assemble the ILCDS. RESULTS: Regarding performance, all models attained an average accuracy exceeding 90 % on the test set. Particularly noteworthy are VGG, DenseNet, and MobileNet, which exceeded an accuracy of 95 %, with scores of 95.32 %, 95.75 %, and 95.99 %, respectively. Regarding efficiency, MobileNet excels owing to its compact size and fast inference speed, making it an ideal model for integration into the ILCDS. CONCLUSION: The ILCDS demonstrated promising accuracy in LC detection while maintaining modest computational resource requirements, indicating its potential to enhance LC screening accuracy and alleviate the workload on otolaryngologists in rural areas.

10.
Sci Rep ; 14(1): 18868, 2024 08 14.
Article in English | MEDLINE | ID: mdl-39143122

ABSTRACT

Ovarian cysts pose significant health risks, including torsion, infertility, and cancer, necessitating rapid and accurate diagnosis. Ultrasonography is commonly employed for screening, yet its effectiveness is hindered by challenges such as weak contrast, speckle noise, and hazy boundaries in images. This study proposes an adaptive deep learning-based segmentation technique using a database of ovarian ultrasound cyst images. A Guided Trilateral Filter (GTF) is applied for noise reduction in pre-processing. Segmentation utilizes an Adaptive Convolutional Neural Network (AdaResU-net) for precise cyst size identification and benign/malignant classification, optimized via the Wild Horse Optimization (WHO) algorithm. The Dice loss and weighted cross-entropy objective functions are optimized to enhance segmentation accuracy. Classification of cyst types is performed using a Pyramidal Dilated Convolutional (PDC) network. The method achieves a segmentation accuracy of 98.87%, surpassing existing techniques and thereby promising improved diagnostic accuracy and patient care outcomes.
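The combined objective can be sketched as a soft Dice loss plus a weighted cross-entropy term, as below; the mixing weight and positive-class weight are illustrative assumptions, and the WHO optimization step is not reproduced.

```python
# Minimal sketch of combining a soft Dice loss with weighted cross-entropy for binary
# cyst segmentation; coefficients and weights are illustrative, not the study's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1 - ((2 * inter + eps) / (union + eps)).mean()

def combined_loss(logits, target, w_dice=0.5, pos_weight=3.0):
    wce = F.binary_cross_entropy_with_logits(
        logits, target, pos_weight=torch.tensor(pos_weight))   # up-weight the (rarer) cyst pixels
    return w_dice * dice_loss(logits, target) + (1 - w_dice) * wce

logits = torch.randn(2, 1, 64, 64)                  # raw network outputs
target = (torch.rand(2, 1, 64, 64) > 0.8).float()   # synthetic ground-truth masks
print("combined loss:", float(combined_loss(logits, target)))
```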


Subjects
Algorithms , Deep Learning , Ovarian Cysts , Ultrasonography , Female , Humans , Ultrasonography/methods , Ovarian Cysts/diagnostic imaging , Neural Networks, Computer , Image Processing, Computer-Assisted/methods
11.
J Nanobiotechnology ; 22(1): 482, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39135039

ABSTRACT

Treatment-induced ototoxicity and the accompanying hearing loss are a major concern associated with chemotherapeutic or antibiotic drug regimens. Thus, prophylactic or early treatment by local delivery to the inner ear is desirable. In this study, we examined a novel intratympanically delivered sustained-release nanoformulation using crosslinked hybrid nanoparticles (cHy-NPs) in a thermoresponsive hydrogel (thermogel) that can potentially provide a safe and effective treatment for treatment-induced or drug-induced ototoxicity. Prophylactic treatment of ototoxicity can be achieved by using two therapeutic molecules, Flunarizine (FL, a T-type calcium channel blocker) and Honokiol (HK, an antioxidant), co-encapsulated in the same delivery system. Here we investigated FL and HK as cytoprotective molecules against cisplatin-induced toxic effects in House Ear Institute - Organ of Corti 1 (HEI-OC1) cells and assessed neuromast hair cell protection in vivo in the zebrafish lateral line. We observed that the cytoprotective effect can be enhanced by using FL and HK in combination and by developing a robust drug delivery formulation. Therefore, FL- and HK-loaded crosslinked hybrid nanoparticles (FL-cHy-NPs and HK-cHy-NPs) were synthesized using a quality-by-design (QbD) approach, in which a design-of-experiments central composite design (DoE-CCD) following the standard least-squares model was used for nanoformulation optimization. The physicochemical characterization of the FL- and HK-loaded NPs indicated the successful synthesis of spherical NPs with a polydispersity index < 0.3, drug encapsulation > 75%, drug loading ~ 10%, stability > 2 months in neutral solution, and appropriate cryoprotectant selection. We assessed the caspase 3/7 apoptotic pathway in vitro, which showed significantly reduced caspase 3/7 activation after treatment with FL-cHy-NPs and HK-cHy-NPs (alone or in combination) compared to CisPt. The final formulation, i.e., crosslinked hybrid nanoparticles embedded in thermogel, was developed by incorporating the drug-loaded cHy-NPs in a poloxamer-407-, poloxamer-188-, and carbomer-940-based hydrogel. A combination of artificial intelligence (AI)-based qualitative and quantitative image analysis determined the particle size and distribution throughout the visible segment. The developed formulation was able to release FL and HK for at least a month. Overall, a highly stable nanoformulation was successfully developed for combating treatment-induced or drug-induced ototoxicity via local administration to the inner ear.


Subjects
Nanoparticles , Zebrafish , Animals , Nanoparticles/chemistry , Ear, Inner/drug effects , Hydrogels/chemistry , Cisplatin/pharmacology , Cisplatin/chemistry , Cell Line , Biphenyl Compounds/chemistry , Drug Delivery Systems/methods , Lignans/chemistry , Lignans/pharmacology , Lignans/administration & dosage , Mice , Cell Survival/drug effects
12.
J Med Syst ; 48(1): 67, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39028354

ABSTRACT

Medical advances prolonging life have led to more permanent pacemaker implants. Although pacemaker implantation (PMI) is commonly caused by sick sinus syndrome or conduction disorders, predicting PMI is challenging, as patients often experience related symptoms. This study was designed to create a deep learning model (DLM) for predicting future PMI from ECG data and to assess its ability to predict future cardiovascular events. In this study, a DLM was trained on a dataset of 158,471 ECGs from 42,903 academic medical center patients, with additional validation involving 25,640 medical center patients and 26,538 community hospital patients. The primary analysis focused on predicting PMI within 90 days, while all-cause mortality, cardiovascular disease (CVD) mortality, and the development of various cardiovascular conditions were addressed in secondary analyses. The study's raw ECG DLM achieved area under the curve (AUC) values of 0.870, 0.878, and 0.883 for PMI prediction within 30, 60, and 90 days, respectively, along with sensitivities exceeding 82.0% and specificities over 81.9% in the internal validation. Significant ECG features included the PR interval, corrected QT interval, heart rate, QRS duration, P-wave axis, T-wave axis, and QRS complex axis. The AI-predicted PMI group had higher risks of PMI after 90 days (hazard ratio [HR]: 7.49, 95% CI: 5.40-10.39), all-cause mortality (HR: 1.91, 95% CI: 1.74-2.10), CVD mortality (HR: 3.53, 95% CI: 2.73-4.57), and new-onset adverse cardiovascular events. External validation confirmed the model's accuracy. Through ECG analyses, our AI DLM can alert clinicians and patients to the possibility of future PMI and the related mortality and cardiovascular risks, aiding in timely patient intervention.


Subjects
Cardiovascular Diseases , Deep Learning , Electrocardiography , Pacemaker, Artificial , Humans , Electrocardiography/methods , Female , Male , Aged , Middle Aged , Artificial Intelligence , Sick Sinus Syndrome
13.
Cancer Res Treat ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38993092

ABSTRACT

Purpose: The genomic characteristics of uterine sarcomas (USs) have not been fully elucidated. This study aimed to explore the genomic landscape of USs. Materials and Methods: Comprehensive genomic analysis through RNA sequencing was conducted. Gene fusions, differentially expressed genes (DEGs), signaling pathway enrichment, immune cell infiltration, and prognosis were analyzed. A deep learning model was constructed to predict the survival of US patients. Results: A total of 71 US samples were examined, including 47 endometrial stromal sarcomas (ESS), 18 uterine leiomyosarcomas (uLMS), 3 adenosarcomas, 2 carcinosarcomas, and 1 uterine tumor resembling an ovarian sex-cord tumor (UTROSCT). ESS (including high-grade ESS [HGESS] and low-grade ESS [LGESS]) and uLMS showed distinct gene fusion signatures; a novel gene fusion site, MRPS18A - PDC-AS1, could be a potential diagnostic marker for the pathological differential diagnosis of uLMS and ESS; 797 and 477 uDEGs were identified in the ESS vs. uLMS and HGESS vs. LGESS comparisons, respectively. The uDEGs were enriched in multiple pathways. Fifteen genes, including LAMB4, were confirmed to have prognostic value in USs; immune infiltration analysis revealed the prognostic value of myeloid dendritic cells, plasmacytoid dendritic cells, natural killer cells, M1 macrophages, monocytes, and hematopoietic stem cells in USs; the deep learning model, named MMN-MIL, showed satisfactory performance in predicting the survival of US patients, with an area under the receiver operating characteristic curve of 0.909 and an accuracy of 0.804. Conclusion: USs harbor distinct gene fusion characteristics and gene expression features among HGESS, LGESS, and uLMS. The MMN-MIL model could effectively predict the survival of US patients.

14.
Article in English | MEDLINE | ID: mdl-39054663

ABSTRACT

OBJECTIVES: We aimed to construct an artificial intelligence-enabled electrocardiogram (ECG) algorithm that can accurately predict the presence of left atrial low-voltage areas (LVAs) in patients with persistent atrial fibrillation. METHODS: The study included 587 patients with persistent atrial fibrillation who underwent catheter ablation procedures between March 2012 and December 2023 and 942 scanned images of 12-lead ECGs obtained before the ablation procedures were performed. Artificial intelligence-based algorithms were used to construct models for predicting the presence of LVAs. The DR-FLASH and APPLE clinical scores for LVA prediction were calculated. We used a receiver operating characteristic (ROC) curve, calibration curve, and decision curve analysis to evaluate model performance. RESULTS: The data obtained from the participants were split into training (n = 469), validation (n = 58), and test sets (n = 60). LVAs were detected in 53.7% of all participants. Using ECG alone, the deep learning algorithm achieved an area under the ROC curve (AUROC) of 0.752, outperforming both the DR-FLASH score (AUROC = 0.610) and the APPLE score (AUROC = 0.510). The random forest classification model, which integrated a probabilistic deep learning model and clinical features, showed a maximum AUROC of 0.759. Moreover, the ECG-based deep learning algorithm for predicting extensive LVAs achieved an AUROC of 0.775, with a sensitivity of 0.816 and a specificity of 0.896. The random forest classification model for predicting extensive LVAs achieved an AUROC of 0.897, with a sensitivity of 0.862, and a specificity of 0.935. CONCLUSION: The deep learning model based exclusively on ECG data and the machine learning model that combined a probabilistic deep learning model and clinical features both predicted the presence of LVAs with a higher degree of accuracy than the DR-FLASH and the APPLE risk scores.
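The hybrid classifier idea, appending the deep model's predicted LVA probability to clinical features and training a random forest, can be sketched as follows; the data and feature names are synthetic placeholders, not the study's variables.

```python
# Sketch of a random forest over clinical features plus the deep model's LVA probability.
# Everything here is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 587
clinical = rng.normal(size=(n, 4))     # placeholder clinical variables (e.g. age, LA size, ...)
dl_prob = rng.uniform(size=(n, 1))     # probability output of the ECG deep learning model
y = (0.6 * dl_prob[:, 0] + 0.2 * clinical[:, 0] + rng.normal(scale=0.3, size=n) > 0.5).astype(int)

X = np.hstack([clinical, dl_prob])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```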

15.
Sensors (Basel) ; 24(14)2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39065848

ABSTRACT

Proton-exchange membrane fuel cells (PEMFCs) play a crucial role in the transition to sustainable energy systems. Accurately estimating the state of health (SOH) of PEMFCs under dynamic operating conditions is essential for ensuring their reliability and longevity. This study designed dynamic operating conditions for fuel cells and conducted durability tests using both crack-free fuel cells and fuel cells with uniform cracks. Utilizing deep learning methods, we estimated the SOH of PEMFCs under dynamic operating conditions and investigated the performance of long short-term memory networks (LSTM), gated recurrent units (GRU), temporal convolutional networks (TCN), and transformer models for SOH estimation tasks. We also explored the impact of different sampling intervals and training set proportions on the predictive performance of these models. The results indicated that shorter sampling intervals and higher training set proportions significantly improve prediction accuracy. The study also highlighted the challenges posed by the presence of cracks. Cracks cause more frequent and intense voltage fluctuations, making it more difficult for the models to accurately capture the dynamic behavior of PEMFCs, thereby increasing prediction errors. However, under crack-free conditions, due to more stable voltage output, all models showed improved predictive performance. Finally, this study underscores the effectiveness of deep learning models in estimating the SOH of PEMFCs and provides insights into optimizing sampling and training strategies to enhance prediction accuracy. The findings make a significant contribution to the development of more reliable and efficient PEMFC systems for sustainable energy applications.
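A minimal sketch of windowed SOH regression with an LSTM is shown below; the synthetic voltage trace, window length, and layer sizes are assumptions used only to illustrate the kind of setup compared in the study.

```python
# Hedged sketch of windowed SOH regression with an LSTM over a synthetic, slowly
# degrading cell-voltage sequence; sampling interval and sizes are illustrative only.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
voltage = 0.7 - 1e-5 * np.arange(20_000) + 0.01 * rng.normal(size=20_000)  # degrading voltage signal

win = 200
X = np.stack([voltage[i:i + win] for i in range(0, len(voltage) - win, win)])
soh = np.linspace(1.0, 0.8, len(X))    # stand-in SOH labels, one per window

class SOHLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)
    def forward(self, x):              # x: (batch, win, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

model = SOHLSTM()
x = torch.from_numpy(X).float().unsqueeze(-1)
loss = nn.MSELoss()(model(x), torch.from_numpy(soh).float())
print("initial MSE:", float(loss))
```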

16.
J Clin Med ; 13(13)2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38999416

ABSTRACT

Background: Chest radiography is the standard method for detecting rib fractures. Our study aims to develop an artificial intelligence (AI) model that, with only a relatively small amount of training data, can identify rib fractures on chest radiographs and accurately mark their precise locations, thereby achieving a diagnostic accuracy comparable to that of medical professionals. Methods: For this retrospective study, we developed an AI model using 540 chest radiographs (270 normal and 270 with rib fractures) labeled for use with Detectron2, which incorporates a Faster Region-based Convolutional Neural Network (Faster R-CNN) enhanced with a Feature Pyramid Network (FPN). The model's ability to classify radiographs and detect rib fractures was assessed. Furthermore, we compared the model's performance to that of 12 physicians, including six board-certified anesthesiologists and six residents, through an observer performance test. Results: Regarding the radiographic classification performance of the AI model, the sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were 0.87, 0.83, and 0.89, respectively. In terms of rib fracture detection performance, the sensitivity, false-positive rate, and jackknife alternative free-response receiver operating characteristic (JAFROC) figure of merit (FOM) were 0.62, 0.3, and 0.76, respectively. The AI model showed no statistically significant difference in the observer performance test compared to 11 of 12 and 10 of 12 physicians, respectively. Conclusions: We developed an AI model trained on a limited dataset that demonstrated rib fracture classification and detection performance comparable to that of an experienced physician.
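A typical Detectron2 setup for single-class rib-fracture detection with a Faster R-CNN + FPN backbone looks like the sketch below; the dataset names, annotation paths, and solver settings are assumptions, not the authors' actual training configuration.

```python
# Sketch of standard Detectron2 usage for one-class detection; paths and hyperparameters
# are hypothetical, and a GPU environment is assumed by the default config.
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Hypothetical COCO-format annotations for the labeled chest radiographs.
register_coco_instances("rib_fx_train", {}, "annotations/train.json", "images/train")
register_coco_instances("rib_fx_val", {}, "annotations/val.json", "images/val")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("rib_fx_train",)
cfg.DATASETS.TEST = ("rib_fx_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1        # single "rib fracture" class
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.MAX_ITER = 3000
cfg.OUTPUT_DIR = "./output_rib_fx"
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```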

17.
Animals (Basel) ; 14(14)2024 Jul 09.
Article in English | MEDLINE | ID: mdl-39061490

ABSTRACT

Since pig vocalization is an important indicator for monitoring pig conditions, pig vocalization detection and recognition using deep learning play a crucial role in the management and welfare of modern pig livestock farming. However, collecting pig sound data for deep learning model training takes time and effort. Acknowledging this challenge, this study introduces a deep convolutional neural network (DCNN) architecture for pig vocalization and non-vocalization classification with a real pig farm dataset. Various audio feature extraction methods were evaluated individually to compare performance differences, including Mel-frequency cepstral coefficients (MFCC), Mel-spectrogram, Chroma, and Tonnetz. This study proposes a novel feature extraction method called Mixed-MMCT that improves classification accuracy by integrating MFCC, Mel-spectrogram, Chroma, and Tonnetz features. These feature extraction methods were applied to extract relevant features from the pig sound dataset for input into a deep learning network. For the experiment, three datasets were collected from three actual pig farms: Nias, Gimje, and Jeongeup. Each dataset consists of 4000 WAV files (2000 pig vocalization and 2000 pig non-vocalization) with a duration of three seconds. Various audio data augmentation techniques were utilized on the training set to improve model performance and generalization, including pitch shifting, time shifting, time stretching, and background noise addition. In this study, the performance of the predictive deep learning model was assessed using the k-fold cross-validation (k = 5) technique on each dataset. In rigorous experiments, Mixed-MMCT showed superior accuracy on Nias, Gimje, and Jeongeup, with rates of 99.50%, 99.56%, and 99.67%, respectively. Robustness experiments were performed to prove the effectiveness of the model by using two farm datasets as the training set and the remaining farm as the testing set. The average performance of Mixed-MMCT in terms of accuracy, precision, recall, and F1-score reached 95.67%, 96.25%, 95.68%, and 95.96%, respectively. All results demonstrate that the proposed Mixed-MMCT feature extraction method outperforms other methods in pig vocalization and non-vocalization classification in real pig livestock farming.
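The four feature families combined in Mixed-MMCT can be extracted with librosa as in the sketch below, which time-averages each feature and concatenates them into one vector; this fusion scheme is a simplification and may differ from the paper's.

```python
# Hedged sketch of extracting MFCC, Mel-spectrogram, Chroma, and Tonnetz features from a
# synthetic 3-second clip and concatenating their time-averaged summaries.
import numpy as np
import librosa

sr = 22050
y = np.random.default_rng(0).normal(size=3 * sr).astype(np.float32)   # placeholder 3 s recording

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64))
chroma = librosa.feature.chroma_stft(y=y, sr=sr)
tonnetz = librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr)

# Average over time so the four feature families can be concatenated into one vector.
mixed = np.concatenate([f.mean(axis=1) for f in (mfcc, mel, chroma, tonnetz)])
print("mixed feature vector length:", mixed.shape[0])   # 13 + 64 + 12 + 6 = 95
```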

18.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38980373

ABSTRACT

Inferring gene regulatory networks (GRNs) allows us to obtain a deeper understanding of cellular function and disease pathogenesis. Recent advances in single-cell RNA sequencing (scRNA-seq) technology have improved the accuracy of GRN inference. However, many methods for inferring individual GRNs from scRNA-seq data are limited because they overlook intercellular heterogeneity and similarities between different cell subpopulations, which are often present in the data. Here, we propose a deep learning-based framework, DeepGRNCS, for jointly inferring GRNs across cell subpopulations. We follow the commonly accepted hypothesis that the expression of a target gene can be predicted based on the expression of transcription factors (TFs) due to underlying regulatory relationships. We initially processed scRNA-seq data by discretizing data scattering using the equal-width method. Then, we trained deep learning models to predict target gene expression from TFs. By individually removing each TF from the expression matrix, we used pre-trained deep model predictions to infer regulatory relationships between TFs and genes, thereby constructing the GRN. Our method outperforms existing GRN inference methods for various simulated and real scRNA-seq datasets. Finally, we applied DeepGRNCS to non-small cell lung cancer scRNA-seq data to identify key genes in each cell subpopulation and analyzed their biological relevance. In conclusion, DeepGRNCS effectively predicts cell subpopulation-specific GRNs. The source code is available at https://github.com/Nastume777/DeepGRNCS.
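The leave-one-TF-out idea behind the inference step can be illustrated with a simplified sketch: a regressor predicts a target gene from TF expression, and the accuracy drop when a TF is shuffled is read as evidence of regulation. The data are synthetic and the model is not DeepGRNCS itself.

```python
# Simplified illustration of leave-one-TF-out importance for GRN inference; synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_cells, n_tfs = 500, 8
tf_expr = rng.normal(size=(n_cells, n_tfs))
target = 2.0 * tf_expr[:, 0] - 1.5 * tf_expr[:, 3] + 0.2 * rng.normal(size=n_cells)  # TF0 and TF3 regulate it

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(tf_expr, target)
baseline = r2_score(target, model.predict(tf_expr))

for j in range(n_tfs):
    perturbed = tf_expr.copy()
    perturbed[:, j] = rng.permutation(perturbed[:, j])   # break the TF-target relationship
    drop = baseline - r2_score(target, model.predict(perturbed))
    print(f"TF{j}: importance (R^2 drop) = {drop:.3f}")
```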


Subjects
Deep Learning , Gene Regulatory Networks , Single-Cell Analysis , Humans , Single-Cell Analysis/methods , Transcription Factors/genetics , Transcription Factors/metabolism , Computational Biology/methods , Sequence Analysis, RNA/methods , RNA-Seq/methods
19.
Diagnostics (Basel) ; 14(12)2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38928694

ABSTRACT

OBJECTIVE: This study aimed to assess the impact of artificial intelligence (AI)-driven noise reduction algorithms on metal artifacts and image quality parameters in cone-beam computed tomography (CBCT) images of the oral cavity. MATERIALS AND METHODS: This retrospective study included 70 patients, 61 of whom were analyzed after excluding those with severe motion artifacts. CBCT scans, performed using a Hyperion X9 PRO 13 × 10 CBCT machine, included images with dental implants, amalgam fillings, orthodontic appliances, root canal fillings, and crowns. Images were processed with the ClariCT.AI deep learning model (DLM) for noise reduction. Objective image quality was assessed using metrics such as the differentiation between voxel values (ΔVVs), the artifact index (AIx), and the contrast-to-noise ratio (CNR). Subjective assessments were performed by two experienced readers, who rated overall image quality and artifact intensity on predefined scales. RESULTS: Compared with native images, DLM reconstructions significantly reduced the AIx and increased the CNR (p < 0.001), indicating improved image clarity and artifact reduction. Subjective assessments also favored DLM images, with higher ratings for overall image quality and lower artifact intensity (p < 0.001). However, the ΔVV values were similar between the native and DLM images, indicating that while the DLM reduced noise, it maintained the overall density distribution. Orthodontic appliances produced the most pronounced artifacts, while implants generated the least. CONCLUSIONS: AI-based noise reduction using ClariCT.AI significantly enhances CBCT image quality by reducing noise and metal artifacts, thereby improving diagnostic accuracy and treatment planning. Further research with larger, multicenter cohorts is recommended to validate these findings.
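For reference, a contrast-to-noise ratio can be computed as in the short sketch below, using one common definition (absolute mean difference over background standard deviation); the study's exact ROI placement and artifact-index formula are not reproduced here.

```python
# Small sketch of a CNR computation on two regions of interest with synthetic voxel values.
import numpy as np

rng = np.random.default_rng(0)
roi_tissue = rng.normal(loc=300, scale=25, size=(20, 20))      # voxel values in a tissue ROI
roi_background = rng.normal(loc=100, scale=40, size=(20, 20))  # voxel values in a reference ROI

cnr = abs(roi_tissue.mean() - roi_background.mean()) / roi_background.std()
print(f"CNR = {cnr:.2f}")
```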

20.
China CDC Wkly ; 6(21): 487-492, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38854462

ABSTRACT

Introduction: Accurately filling out death certificates is essential for death surveillance. However, manually determining the underlying cause of death is often imprecise. In this study, we investigate the Wide and Deep framework as a method to improve the accuracy and reliability of inferring the underlying cause of death. Methods: Death report data from national-level cause-of-death surveillance sites in Fujian Province from 2016 to 2022, involving 403,547 deaths, were analyzed. A Wide and Deep model embedded with a Convolutional Neural Network (CNN) was developed. Model performance was assessed using weighted accuracy, weighted precision, weighted recall, and weighted area under the curve (AUC). A comparison was made with XGBoost, CNN, Gated Recurrent Unit (GRU), Transformer, and GRU with Attention. Results: The Wide and Deep model achieved strong performance metrics on the test set: precision of 95.75%, recall of 92.08%, F1 score of 93.78%, and an AUC of 95.99%. The model also displayed specific F1 scores for different cause-of-death chain lengths: 97.13% for single causes, 95.08% for double causes, 91.24% for triple causes, and 79.50% for quadruple causes. Conclusions: The Wide and Deep model significantly enhances the ability to determine the underlying cause of death, providing a valuable tool for improving cause-of-death surveillance quality. Integrating artificial intelligence (AI) in this field is anticipated to streamline death registration and reporting procedures, thereby boosting the precision of public health data.
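A hedged sketch of a Wide and Deep architecture with a CNN branch over tokenized cause-of-death text is shown below; the vocabulary size, feature counts, and class count are placeholders rather than the study's configuration.

```python
# Hedged sketch: the wide part is a linear layer over sparse indicator features and the
# deep part applies an embedding + 1-D convolution over the tokenized cause-of-death chain.
import torch
import torch.nn as nn

class WideDeepCNN(nn.Module):
    def __init__(self, n_wide=200, vocab=5000, n_classes=100, emb=32):
        super().__init__()
        self.wide = nn.Linear(n_wide, n_classes)                     # memorization over indicator features
        self.emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.conv = nn.Sequential(nn.Conv1d(emb, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveMaxPool1d(1))
        self.deep = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, n_classes))
    def forward(self, x_wide, tokens):                               # tokens: (batch, seq_len) of term ids
        d = self.conv(self.emb(tokens).transpose(1, 2)).squeeze(-1)  # CNN over the cause-of-death text
        return self.wide(x_wide) + self.deep(d)                      # wide and deep logits are summed

model = WideDeepCNN()
x_wide = torch.rand(4, 200)                      # e.g. one-hot demographic / certificate indicators
tokens = torch.randint(1, 5000, (4, 20))         # tokenized cause-of-death entries
print("logits shape:", model(x_wide, tokens).shape)   # (4, 100) candidate underlying causes
```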
