Results 1 - 20 of 3,084
1.
J Neural Eng ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38986464

ABSTRACT

Eye-tracking research has proven valuable for understanding numerous cognitive functions. Recently, Frey et al. introduced a promising deep learning method for decoding eye movements from functional magnetic resonance imaging (fMRI) data. Their approach relied on multi-step co-registration of the fMRI data into a group template to extract the eyeball signal, and thus required additional templates and was time-consuming. To resolve this issue, in this paper we propose a framework named MRGazer for predicting eye-gaze points from fMRI in individual space. MRGazer consists of an eyeball-extraction module and a residual-network-based eye-gaze prediction module. Compared with the previous method, the proposed framework skips the fMRI co-registration step, simplifies the processing protocol, and achieves end-to-end eye-gaze regression. The proposed method achieved better eye-fixation regression (Euclidean error, EE = 2.04°) than the co-registration-based method (EE = 2.89°) and delivered results in less time (~0.02 s/volume vs. ~0.3 s/volume). The code is available at https://github.com/ustc-bmec/MRGazer.
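The Euclidean error (EE) metric above is the mean Euclidean distance, in degrees of visual angle, between predicted and ground-truth gaze points. A minimal sketch of the metric (the gaze values below are hypothetical, not from the paper):

```python
import math

def euclidean_error(pred, true):
    """Mean Euclidean distance (degrees of visual angle) between
    predicted and ground-truth gaze points, each given as (x, y)."""
    dists = [math.hypot(px - tx, py - ty)
             for (px, py), (tx, ty) in zip(pred, true)]
    return sum(dists) / len(dists)

# Hypothetical gaze points in degrees of visual angle.
pred = [(1.0, 2.0), (3.0, 4.0)]
true = [(1.0, 0.0), (0.0, 0.0)]
print(euclidean_error(pred, true))  # (2.0 + 5.0) / 2 = 3.5
```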

2.
Int Immunopharmacol ; 138: 112608, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38981221

ABSTRACT

BACKGROUND: Abdominal aortic aneurysm (AAA) poses a significant health risk and is influenced by various compositional features. This study aimed to develop an artificial intelligence-driven multiomics predictive model for AAA subtypes to identify heterogeneous immune cell infiltration and predict disease progression. Additionally, we investigated neutrophil heterogeneity in patients with different AAA subtypes to elucidate the relationship between the immune microenvironment and AAA pathogenesis. METHODS: This study enrolled 517 patients with AAA, who were clustered using the k-means algorithm to identify AAA subtypes and stratify risk. We utilized a 200-layer residual convolutional neural network to annotate and extract contrast-enhanced computed tomography angiography images of AAA. A predictive model for AAA subtypes was established using clinical, imaging, and immunological data. We performed a comparative analysis of neutrophil levels in the different subgroups and an immune cell infiltration analysis to explore the associations between neutrophil levels and AAA. Quantitative polymerase chain reaction, Western blotting, and enzyme-linked immunosorbent assays were performed to elucidate the interplay between CXCL1, neutrophil activation, and the nuclear factor (NF)-κB pathway in AAA pathogenesis. Furthermore, the effect of silencing CXCL1 with small interfering RNA was investigated. RESULTS: Two distinct AAA subtypes were identified, one of which was clinically more severe and more likely to require surgical intervention. The network effectively detected AAA-associated lesion regions on computed tomography angiography, and the predictive model discriminated well between patients with the two identified AAA subtypes (area under the curve, 0.927). Neutrophil activation, AAA pathology, CXCL1 expression, and the NF-κB pathway were significantly correlated. CXCL1, NF-κB, IL-1β, and IL-8 were upregulated in AAA. Silencing CXCL1 downregulated NF-κB, interleukin-1β, and interleukin-8. CONCLUSION: The predictive model for AAA subtypes enabled accurate and reliable risk stratification to support clinical management. CXCL1 overexpression activated neutrophils through the NF-κB pathway, contributing to AAA development; this pathway may therefore be a therapeutic target in AAA.

3.
Sensors (Basel) ; 24(13)2024 Jun 25.
Article in English | MEDLINE | ID: mdl-39000903

ABSTRACT

The South-to-North Water Diversion Project in China is an extensive inter-basin water transfer project, for which ensuring the safe operation and maintenance of infrastructure poses a fundamental challenge. In this context, structural health monitoring is crucial for the safe and efficient operation of hydraulic infrastructure. Currently, most health monitoring systems for hydraulic infrastructure rely on commercial software or algorithms that only run on desktop computers. This study developed for the first time a lightweight convolutional neural network (CNN) model specifically for early detection of structural damage in water supply canals and deployed it as a tiny machine learning (TinyML) application on a low-power microcontroller unit (MCU). The model uses damage images of the supply canals that we collected as input and the damage types as output. With data augmentation techniques to enhance the training dataset, the deployed model is only 7.57 KB in size and demonstrates an accuracy of 94.17 ± 1.67% and a precision of 94.47 ± 1.46%, outperforming other commonly used CNN models in terms of performance and energy efficiency. Moreover, each inference consumes only 5610.18 µJ of energy, allowing a standard 225 mAh button cell to run continuously for nearly 11 years and perform approximately 4,945,055 inferences. This research not only confirms the feasibility of deploying real-time supply canal surface condition monitoring on low-power, resource-constrained devices but also provides practical technical solutions for improving infrastructure security.
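The battery-life claim above follows from a simple energy budget. A sketch of that arithmetic, assuming a 3.0 V nominal cell voltage (not stated in the abstract) and ignoring sleep current and duty cycling, so the resulting count is illustrative rather than a reproduction of the paper's figure:

```python
# Back-of-the-envelope battery budget for an always-on TinyML node.
# Assumptions (not from the paper): 3.0 V nominal cell voltage and that
# inference is the only energy consumer; the paper's own figures likely
# account for duty cycling and sleep currents as well.
CAPACITY_MAH = 225.0                  # button cell capacity, from the paper
VOLTAGE_V = 3.0                       # assumed nominal voltage
ENERGY_PER_INFERENCE_J = 5610.18e-6   # 5610.18 uJ per inference, from the paper

battery_energy_j = CAPACITY_MAH / 1000.0 * 3600.0 * VOLTAGE_V  # mAh -> joules
n_inferences = int(battery_energy_j / ENERGY_PER_INFERENCE_J)
print(f"{battery_energy_j:.0f} J of stored energy -> ~{n_inferences} inferences")
```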

4.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000977

ABSTRACT

(1) Background: The objective of this study was to predict the vascular health status of elderly women during exercise using pulse wave data and temporal convolutional networks (TCNs); (2) Methods: A total of 492 healthy elderly women aged 60-75 years were recruited for this cross-sectional study. Vascular endothelial function was assessed non-invasively via flow-mediated dilation (FMD). Pulse wave characteristics were quantified using photoplethysmography (PPG) sensors, and motion-induced noise in the PPG signals was mitigated with a recursive least squares (RLS) adaptive filtering algorithm. A fixed-load cycling exercise protocol was employed. A TCN was constructed to classify FMD into "optimal", "impaired", and "at risk" levels; (3) Results: The TCN achieved average accuracies of 79.3%, 84.8%, and 83.2% in predicting FMD at the "optimal", "impaired", and "at risk" levels, respectively. An analysis of variance (ANOVA) comparison demonstrated that the accuracy of the TCN in predicting FMD at the impaired and at-risk levels was significantly higher than that of long short-term memory (LSTM) networks and random forest algorithms; (4) Conclusions: Pulse wave data collected during exercise, combined with a TCN, predicted the vascular health status of elderly women with high accuracy, particularly at the impaired and at-risk FMD levels. The integration of exercise pulse wave data with TCNs can therefore serve as an effective tool for assessing and monitoring the vascular health of elderly women.
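The core building block of a TCN is a causal, dilated 1-D convolution, which lets each output sample depend only on current and past inputs; a minimal pure-Python sketch of that operation (illustrative only, not the study's architecture):

```python
def causal_dilated_conv1d(x, kernel, dilation=1):
    """Causal dilated 1-D convolution: output[t] depends only on
    x[t], x[t - d], x[t - 2d], ... (missing past samples are skipped,
    i.e. implicit zero-padding on the left)."""
    k = len(kernel)
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            j = t - (k - 1 - i) * dilation  # index into the past
            if j >= 0:
                acc += w * x[j]
        out.append(acc)
    return out

# Toy signal; a 2-tap averaging kernel with dilation 2 mixes x[t] with x[t-2].
signal = [1.0, 2.0, 3.0, 4.0, 5.0]
print(causal_dilated_conv1d(signal, [0.5, 0.5], dilation=2))
# [0.5, 1.0, 2.0, 3.0, 4.0]
```

Stacking such layers with exponentially growing dilations is what gives TCNs a long effective memory over the pulse waveform.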


Subject(s)
Exercise , Neural Networks, Computer , Photoplethysmography , Pulse Wave Analysis , Humans , Female , Photoplethysmography/methods , Aged , Pulse Wave Analysis/methods , Exercise/physiology , Middle Aged , Cross-Sectional Studies , Algorithms
5.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000985

ABSTRACT

(1) Background: The objective of this study was to recognize tai chi movements using inertial measurement units (IMUs) and temporal convolutional networks (TCNs) and to provide precise interventions for elderly people. (2) Methods: This study consisted of two parts: first, 70 skilled tai chi practitioners took part in a movement-recognition study; second, 60 elderly males took part in an intervention study. IMU data were collected from the skilled practitioners performing Bafa Wubu, and TCN models were constructed and trained to classify these movements. Elderly participants were divided into a precision intervention group and a standard intervention group, with the former receiving weekly real-time IMU feedback. Outcomes measured included balance, grip strength, quality of life, and depression. (3) Results: The TCN model identified tai chi movements with high accuracy, ranging from 82.6% to 94.4%. After eight weeks of intervention, both groups showed significant improvements in grip strength, quality of life, and depression. However, only the precision intervention group showed a significant increase in balance and higher post-intervention scores than the standard intervention group. (4) Conclusions: This study successfully employed IMUs and TCNs to identify tai chi movements and provide targeted feedback to older participants. Real-time IMU feedback can improve health outcomes in elderly males.


Subject(s)
Movement , Neural Networks, Computer , Quality of Life , Tai Ji , Humans , Tai Ji/methods , Aged , Male , Movement/physiology , Hand Strength/physiology , Postural Balance/physiology , Female , Depression/therapy
6.
Sensors (Basel) ; 24(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001152

ABSTRACT

The search for structural and microstructural defects using simple human vision is associated with significant errors in identifying voids, large pores, and violations of the integrity and compactness of particle packing in the micro- and macrostructure of concrete. Computer vision methods, in particular convolutional neural networks, have proven to be reliable tools for the automatic detection of defects during visual inspection of building structures. The study's objective is to create and compare computer vision algorithms that use convolutional neural networks to identify and analyze damaged sections in concrete samples from different structures. Networks of the following architectures were selected: U-Net, LinkNet, and PSPNet. The analyzed images are photos of concrete samples obtained in laboratory tests assessing quality with respect to defects in the integrity and compactness of the structure. During implementation, quality metrics were monitored: macro-averaged precision, recall, and F1-score, as well as IoU (Jaccard coefficient) and accuracy. The best metrics were demonstrated by the U-Net model supplemented by a cellular automaton algorithm: precision = 0.91, recall = 0.90, F1 = 0.91, IoU = 0.84, and accuracy = 0.90. The developed segmentation algorithms are universal and perform well in highlighting areas of interest under any shooting conditions and for different volumes of defective zones, regardless of their localization. Automated calculation of the damaged area, together with a recommendation in a "critical/uncritical" format, can be used to assess the condition of concrete in various types of structures, adjust the formulation, and change the technological parameters of production.
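The quality metrics monitored above can be computed pixel-wise from binary defect masks; a minimal sketch (the masks below are hypothetical):

```python
def seg_metrics(pred, truth):
    """Pixel-wise precision, recall, F1 and IoU (Jaccard coefficient)
    for binary segmentation masks given as flat 0/1 lists."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f1, iou

# Hypothetical 3x3 predicted and ground-truth defect masks, flattened.
pred  = [1, 1, 0, 0, 1, 0, 0, 0, 0]
truth = [1, 0, 0, 0, 1, 1, 0, 0, 0]
print(seg_metrics(pred, truth))  # precision, recall, F1 all 2/3; IoU 0.5
```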

7.
Diagnostics (Basel) ; 14(13)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-39001248

ABSTRACT

Deep learning with convolutional neural networks (CNNs) stands out among state-of-the-art procedures in computer-aided medical diagnosis. The method proposed in this paper consists of two key stages. In the first stage, the proposed deep sequential CNN model preprocesses images to isolate regions of interest from skin lesions and extracts features, capturing the relevant patterns and detecting multiple lesions. The second stage incorporates a web tool that visualizes the model's output to support patient diagnosis. The proposed model was thoroughly trained, validated, and tested on the HAM10000 dataset. It achieved an accuracy of 96.25% in classifying skin lesions, demonstrating considerable strength. The results, validated by evaluation methods and user feedback, indicate a substantial improvement over current state-of-the-art methods for skin lesion classification (malignant/benign). The sequential CNN surpasses CNN transfer learning (87.9%), VGG-19 (86%), ResNet-50 + VGG-16 (94.14%), Inception v3 (90%), Vision Transformers (RGB images) (92.14%), and the Entropy-NDOELM method (95.7%). These findings demonstrate the potential of deep learning and sequential CNNs in disease detection and classification, with the prospect of improving melanoma detection and, thus, patient care.

8.
Diagnostics (Basel) ; 14(13)2024 Jun 29.
Article in English | MEDLINE | ID: mdl-39001283

ABSTRACT

The rapid advancement of artificial intelligence (AI) and robotics has led to significant progress in various medical fields including interventional radiology (IR). This review focuses on the research progress and applications of AI and robotics in IR, including deep learning (DL), machine learning (ML), and convolutional neural networks (CNNs) across specialties such as oncology, neurology, and cardiology, aiming to explore potential directions in future interventional treatments. To ensure the breadth and depth of this review, we implemented a systematic literature search strategy, selecting research published within the last five years. We conducted searches in databases such as PubMed and Google Scholar to find relevant literature. Special emphasis was placed on selecting large-scale studies to ensure the comprehensiveness and reliability of the results. This review summarizes the latest research directions and developments, ultimately analyzing their corresponding potential and limitations. It furnishes essential information and insights for researchers, clinicians, and policymakers, potentially propelling advancements and innovations within the domains of AI and IR. Finally, our findings indicate that although AI and robotics technologies are not yet widely applied in clinical settings, they are evolving across multiple aspects and are expected to significantly improve the processes and efficacy of interventional treatments.

9.
Diagnostics (Basel) ; 14(13)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39001292

ABSTRACT

Breast cancer diagnosis from histopathology images is often time consuming and prone to human error, impacting treatment and prognosis. Deep learning diagnostic methods offer the potential for improved accuracy and efficiency in breast cancer detection and classification. However, they struggle with limited data and subtle variations within and between cancer types. Attention mechanisms provide feature refinement capabilities that have shown promise in overcoming such challenges. To this end, this paper proposes the Efficient Channel Spatial Attention Network (ECSAnet), an architecture built on EfficientNetV2 and augmented with a convolutional block attention module (CBAM) and additional fully connected layers. ECSAnet was fine-tuned using the BreakHis dataset, employing Reinhard stain normalization and image augmentation techniques to minimize overfitting and enhance generalizability. In testing, ECSAnet outperformed AlexNet, DenseNet121, EfficientNetV2-S, InceptionNetV3, ResNet50, and VGG16 in most settings, achieving accuracies of 94.2% at 40×, 92.96% at 100×, 88.41% at 200×, and 89.42% at 400× magnifications. The results highlight the effectiveness of CBAM in improving classification accuracy and the importance of stain normalization for generalizability.

10.
Biomed Eng Lett ; 14(4): 663-675, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38946814

ABSTRACT

Schizophrenia (SZ) is a severe, chronic mental disorder without a specific treatment. Because of the increasing prevalence of SZ and the similarity of its characteristics to those of other mental illnesses such as bipolar disorder, many sufferers are unaware of having it. Early detection of the disease allows the sufferer to seek treatment or at least control it. Previous machine learning studies of SZ detection require feature extraction and selection before classification. This study develops a novel end-to-end approach based on a 15-layer convolutional neural network (CNN) and a 16-layer CNN-long short-term memory (LSTM) network to help psychiatrists automatically diagnose SZ from electroencephalogram (EEG) signals. The deep models use CNN layers to learn the temporal properties of the signals, while the LSTM layers provide the sequence-learning mechanism. In addition, a data augmentation method based on generative adversarial networks is applied to the training set to increase the diversity of the data. Results on a large EEG dataset show the high diagnostic potential of both proposed methods, which achieve remarkable accuracies of 98% and 99%. This study shows that the proposed framework can accurately discriminate SZ patients from healthy subjects and is potentially useful for developing diagnostic tools for SZ.

11.
J Cancer ; 15(13): 4275-4286, 2024.
Article in English | MEDLINE | ID: mdl-38947386

ABSTRACT

Malignant gliomas tend to grow rapidly and infiltrate surrounding tissues, making them a major public health problem of global concern. Accurate tumor grading determines the degree of malignancy and informs the best treatment plan, which can eliminate the tumor or limit its metastasis, saving the patient's life and improving their prognosis. To predict glioma grade more accurately, we propose a novel method that combines the advantages of 2D and 3D convolutional neural networks for tumor grading from multimodal magnetic resonance imaging. The core innovation lies in combining tumor 3D information extracted from the multimodal data with features obtained from a 2D ResNet-50 architecture. This compensates for the spatial information that 2D convolutional neural networks lose when applied to 3D imaging, while avoiding the noise and serious overfitting that 3D convolutional neural networks incur from excess information. Incorporating explicit tumor 3D information, such as tumor volume and surface area, enhances the grading model's performance and addresses the limitations of both approaches. By fusing information from multiple modalities, the model achieves a more precise and accurate characterization of tumors. The model is trained and evaluated on two publicly available brain glioma datasets, achieving an AUC of 0.9684 on the validation set. The model's interpretability is enhanced through heatmaps that highlight the tumor region. The proposed method holds promise for clinical application in tumor grading and contributes to the field of medical diagnostics.

12.
Cureus ; 16(5): e61379, 2024 May.
Article in English | MEDLINE | ID: mdl-38947677

ABSTRACT

Leukemia is a rare but fatal cancer of the blood. It arises from abnormal bone marrow cells and requires prompt diagnosis for effective treatment and a positive patient prognosis. Traditional diagnostic methods (e.g., microscopy, flow cytometry, and biopsy) pose challenges in both accuracy and time, motivating investigation into the development and use of deep learning (DL) models, such as convolutional neural networks (CNNs), which could allow a faster and more exact diagnosis. By relying on specific, objective criteria, DL holds promise as a tool for physicians diagnosing leukemia. The purpose of this review was to report the relevant published literature on using DL to diagnose leukemia. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, articles published between 2010 and 2023 were retrieved from Embase, Ovid MEDLINE, and Web of Science using the terms "leukemia" AND "deep learning" OR "artificial neural network" OR "neural network" AND "diagnosis" OR "detection". After screening the retrieved articles against pre-determined eligibility criteria, 20 articles were included in the final review and reported chronologically, reflecting the nascent nature of the field. The initial studies laid the groundwork for subsequent innovations, illustrating the transition from specialized methods to more generalized approaches that capitalize on DL technologies for leukemia detection. This summary of recent DL models reveals a paradigm shift toward integrated architectures, with notable gains in accuracy and efficiency. The continuous refinement of models and techniques, coupled with an emphasis on simplicity and efficiency, positions DL as a promising tool for leukemia detection. With the help of these neural networks, leukemia detection could be hastened, improving long-term outlook and prognosis. Further research using real-life scenarios is warranted to confirm the transformative effects DL models could have on leukemia diagnosis.

13.
Endosc Ultrasound ; 13(2): 65-75, 2024.
Article in English | MEDLINE | ID: mdl-38947752

ABSTRACT

Artificial intelligence (AI) is an epoch-making technology whose two most advanced branches are machine learning and, built upon it, deep learning; both have been partially applied to assist EUS diagnosis. AI-assisted EUS diagnosis has been reported to be of great value in diagnosing pancreatic tumors and chronic pancreatitis, gastrointestinal stromal tumors, early esophageal cancer, and biliary tract and liver lesions. The application of AI in EUS diagnosis still faces several urgent problems. First, developing sensitive AI diagnostic tools requires a large amount of high-quality training data. Second, current AI algorithms suffer from overfitting and bias, leading to poor diagnostic reliability. Third, the value of AI still needs to be determined in prospective studies. Fourth, the ethical risks of AI need to be considered and avoided.

14.
Methods Mol Biol ; 2780: 303-325, 2024.
Article in English | MEDLINE | ID: mdl-38987475

ABSTRACT

Antibodies are a class of proteins that recognize and neutralize pathogens by binding to their antigens. They are the most significant category of biopharmaceuticals for both diagnostic and therapeutic applications. Understanding how antibodies interact with their antigens plays a fundamental role in drug and vaccine design and helps to elucidate complex antigen-binding mechanisms. Computational methods for predicting antibody-antigen interaction sites are of great value given the overall cost of experimental methods, and machine learning and deep learning techniques have obtained promising results. In this work, we predict antibody interface sites by applying HSS-PPI, a hybrid method designed to predict the interface sites of general proteins. The approach abstracts proteins in terms of a hierarchical representation and uses a graph convolutional network to classify amino acids as interface or non-interface. Moreover, we equipped the amino acids with different sets of physicochemical features together with structural ones to describe the residues. Analyzing the results, we observe that the structural features play a fundamental role in the amino acid descriptions. We compare the obtained performances, evaluated using standard metrics, with those obtained with an SVM with 3D Zernike descriptors, Parapred, Paratome, and Antibody i-Patch.
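The graph convolutional step in such interface predictors aggregates each residue's features with those of its neighbours in the protein graph; a toy one-hop mean-aggregation sketch (illustrative, not HSS-PPI's actual layer; the graph and features below are hypothetical):

```python
def gcn_layer(features, adjacency):
    """One mean-aggregation message-passing step: each node's new
    feature vector is the average of its own and its neighbours'
    feature vectors (a self-loop is added explicitly)."""
    new = []
    for node, feats in enumerate(features):
        neigh = adjacency[node] + [node]  # neighbours plus self-loop
        dim = len(feats)
        agg = [sum(features[n][d] for n in neigh) / len(neigh)
               for d in range(dim)]
        new.append(agg)
    return new

# Toy residue graph (contacts 0-1 and 1-2) with 1-D node features.
features = [[1.0], [2.0], [3.0]]
adjacency = {0: [1], 1: [0, 2], 2: [1]}
print(gcn_layer(features, adjacency))  # [[1.5], [2.0], [2.5]]
```

A real layer would follow the aggregation with a learned linear map and nonlinearity; stacking layers propagates information over larger neighbourhoods.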


Subject(s)
Computational Biology , Computational Biology/methods , Antigens/immunology , Binding Sites, Antibody , Antibodies/immunology , Antibodies/chemistry , Humans , Antigen-Antibody Complex/chemistry , Antigen-Antibody Complex/immunology , Protein Binding , Machine Learning , Databases, Protein , Algorithms
15.
PeerJ Comput Sci ; 10: e2103, 2024.
Article in English | MEDLINE | ID: mdl-38983199

ABSTRACT

Images and videos containing fake faces are the most common type of digital manipulation. Such content can spread false information, with negative consequences. The use of machine learning algorithms to produce fake face images has made it challenging to distinguish genuine from fake content. Face manipulations fall into four basic groups: entire face synthesis, face identity manipulation (deepfake), facial attribute manipulation, and facial expression manipulation. This study used lightweight convolutional neural networks to detect fake face images generated by entire face synthesis with generative adversarial networks. The dataset comprised the 70,000 real images of the FFHQ dataset and 70,000 fake images produced with StyleGAN2 trained on FFHQ; 80% of the dataset was used for training and 20% for testing. Initially, the MobileNet, MobileNetV2, EfficientNetB0, and NASNetMobile convolutional neural networks were trained separately, each pre-trained on ImageNet and reused via transfer learning. Of these, EfficientNetB0 reached the highest accuracy, 93.64%. The EfficientNetB0 model was then revised by adding two dense layers (256 neurons) with ReLU activation, two dropout layers, one flattening layer, one dense layer (128 neurons) with ReLU activation, and a final two-node classification dense layer with softmax activation, raising its accuracy to 95.48%. Finally, this revised model was combined with the MobileNet and MobileNetV2 models using stacking ensemble learning, achieving the highest accuracy of 96.44%.

16.
Sensors (Basel) ; 24(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39000829

ABSTRACT

This paper presents a new deep-learning architecture designed to enhance the spatial synchronization between CMOS and event cameras by harnessing their complementary characteristics. While CMOS cameras produce high-quality imagery, they struggle in rapidly changing environments, a limitation that event cameras overcome due to their superior temporal resolution and motion clarity. However, effective integration of these two technologies relies on achieving precise spatial alignment, a challenge unaddressed by current algorithms. Our architecture leverages a dynamic graph convolutional neural network (DGCNN) to process event data directly, improving synchronization accuracy. We found that synchronization precision strongly correlates with the spatial concentration and density of events, with denser distributions yielding better alignment results. Our empirical results demonstrate that areas with denser event clusters enhance calibration accuracy, with calibration errors increasing in more uniformly distributed event scenarios. This research pioneers scene-based synchronization between CMOS and event cameras, paving the way for advancements in mixed-modality visual systems. The implications are significant for applications requiring detailed visual and temporal information, setting new directions for the future of visual perception technologies.

17.
Cureus ; 16(6): e62264, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39011227

ABSTRACT

INTRODUCTION: Oral tumors necessitate a dependable computer-assisted pathological diagnosis system considering their rarity and diversity. Content-based image retrieval (CBIR) systems using deep neural networks have been successfully devised for digital pathology, but none has been investigated for oral pathology because of the lack of an extensive image database and feature extractors tailored to oral pathology. MATERIALS AND METHODS: This study uses a large CBIR database constructed from 30 categories of oral tumors to compare deep learning methods as feature extractors. RESULTS: The highest average area under the receiver operating characteristic curve (AUC) was achieved by models trained on the database images using self-supervised learning (SSL) methods (0.900 with SimCLR and 0.897 with TiCo). The generalizability of the models was validated using query images from the same cases taken with smartphones; with smartphone images as queries, the two models again yielded the highest mean AUCs (0.871 with SimCLR and 0.857 with TiCo). To confirm that retrieved results are readily interpretable, we also evaluated the top-10 mean accuracy, checking for the exact diagnostic category and its differential diagnostic categories. CONCLUSION: Training deep learning models with SSL methods on image data specific to the target site is beneficial for CBIR tasks in oral tumor histology, yielding histologically meaningful results and high performance. This provides insight into the effective development of CBIR systems to improve the accuracy and speed of histopathological diagnosis and to advance oral tumor research.
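The AUC values reported above can be computed directly from scores via the Mann-Whitney formulation of ROC AUC; a minimal sketch with hypothetical scores and labels:

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: the probability that
    a randomly chosen positive outscores a randomly chosen negative
    (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical retrieval scores (1 = relevant category, 0 = not).
scores = [0.9, 0.8, 0.4, 0.3]
labels = [1, 0, 1, 0]
print(roc_auc(scores, labels))  # 3 of 4 positive/negative pairs correct: 0.75
```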

18.
Sensors (Basel) ; 24(13)2024 Jun 24.
Article in English | MEDLINE | ID: mdl-39000868

ABSTRACT

Diabetes has emerged as a worldwide health crisis, affecting approximately 537 million adults. Maintaining blood glucose requires careful observation of diet, physical activity, and adherence to medications if necessary. Diet monitoring historically involves keeping food diaries; however, this process can be labor-intensive, and recollection of food items may introduce errors. Automated technologies such as food image recognition systems (FIRS) can make use of computer vision and mobile cameras to reduce the burden of keeping diaries and improve diet tracking. These tools provide various levels of diet analysis, and some offer further suggestions for improving the nutritional quality of meals. The current study is a systematic review of mobile computer vision-based approaches for food classification, volume estimation, and nutrient estimation. Relevant articles published over the last two decades are evaluated, and both future directions and issues related to FIRS are explored.


Subject(s)
Diabetes Mellitus , Smartphone , Humans , Diet Records , Blood Glucose/analysis
19.
Article in English | MEDLINE | ID: mdl-38963298

ABSTRACT

Metal-organic frameworks (MOFs) are among the most promising hydrogen-storage materials due to their rich specific surface area, adjustable topological and pore structures, and modifiable functional groups. In this work, we developed automatically parallelized computational workflows for high-throughput screening of ~11,600 MOFs from the CoRE database and discovered 69 top-performing MOF candidates with a work capacity greater than 1.00 wt% at 298.5 K under a pressure swing between 100 and 0.1 bar, at least twice that of MOF-5. In particular, ZITRUP, OQFAJ01, WANHOL, and VATYIZ showed excellent hydrogen storage performance of 4.48, 3.16, 2.19, and 2.16 wt%. We analyzed the relationships between pore-limiting diameter, largest cavity diameter, void fraction, open metal sites, metal or nonmetal elements, and deliverable capacity, and found that not only the geometrical and physical features of the crystal but also the chemical properties of the adsorption sites determine the room-temperature H2 storage capacity of MOFs. Notably, we propose modified crystal graph convolutional neural networks that incorporate the obtained geometrical and physical features into the convolutional high-dimensional feature vectors of periodic crystal structures for predicting H2 storage performance; this improves the network's mean absolute error (MAE) from 0.064 wt% to 0.047 wt% and shortens the computing time to roughly 10⁻⁴ that of high-throughput computational screening. This work opens a new avenue for high-throughput screening of MOFs for H2 adsorption capacity, which can be extended to the screening and discovery of other functional materials.
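Deliverable (pressure-swing) work capacity is the uptake at the adsorption pressure (100 bar) minus the uptake retained at the desorption pressure (0.1 bar). A toy single-site Langmuir sketch of that definition (the isotherm parameters are hypothetical; the actual screening would rely on molecular simulation rather than this closed form):

```python
def langmuir_uptake(pressure_bar, q_max_wt, k_per_bar):
    """Single-site Langmuir isotherm: uptake = q_max * K*P / (1 + K*P)."""
    kp = k_per_bar * pressure_bar
    return q_max_wt * kp / (1.0 + kp)

def deliverable_capacity(q_max_wt, k_per_bar, p_high=100.0, p_low=0.1):
    """Pressure-swing work capacity between p_high and p_low in wt%."""
    return (langmuir_uptake(p_high, q_max_wt, k_per_bar)
            - langmuir_uptake(p_low, q_max_wt, k_per_bar))

# Hypothetical MOF: 5 wt% saturation capacity, K = 0.05 per bar.
print(f"{deliverable_capacity(5.0, 0.05):.2f} wt%")  # prints "4.14 wt%"
```

Note how a very strong adsorbent (large K) can have a poor pressure-swing capacity because it retains most of its uptake at 0.1 bar, which is why work capacity, not raw uptake, was the screening target.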

20.
Front Oncol ; 14: 1320220, 2024.
Article in English | MEDLINE | ID: mdl-38962264

ABSTRACT

Background: Our previous studies demonstrated that Raman spectroscopy can detect skin cancer with good sensitivity and specificity. The objective of this study is to determine whether skin cancer detection can be further improved by combining deep neural networks with Raman spectroscopy. Patients and methods: Raman spectra of 731 skin lesions were included, comprising 340 cancerous and precancerous lesions (melanoma, basal cell carcinoma, squamous cell carcinoma, and actinic keratosis) and 391 benign lesions (melanocytic nevus and seborrheic keratosis). One-dimensional convolutional neural networks (1D-CNN) were developed for Raman spectral classification. The stratified samples were divided randomly into training (70%), validation (10%), and test (20%) sets, and the split was repeated 56 times using parallel computing. Different data augmentation strategies were applied to the training dataset, including added random noise, spectral shift, spectral combination, and Raman spectra synthesized with one-dimensional generative adversarial networks (1D-GAN). The area under the receiver operating characteristic curve (ROC AUC) was used as the measure of diagnostic performance. Conventional machine learning approaches, including partial least squares discriminant analysis (PLS-DA), principal component and linear discriminant analysis (PC-LDA), support vector machine (SVM), and logistic regression (LR), were evaluated for comparison with the same data-splitting scheme as the 1D-CNN. Results: The test-set ROC AUCs based on the original training spectra were 0.886±0.022 (1D-CNN), 0.870±0.028 (PLS-DA), 0.875±0.033 (PC-LDA), 0.864±0.027 (SVM), and 0.525±0.045 (LR), which improved to 0.909±0.021, 0.899±0.022, 0.895±0.022, 0.901±0.020, and 0.897±0.021, respectively, after augmentation of the training dataset (p<0.0001, Wilcoxon test).
Paired analyses of 1D-CNN with conventional machine learning approaches showed that 1D-CNN had a 1-3% improvement (p<0.001, Wilcoxon test). Conclusions: Data augmentation not only improved the performance of both deep neural networks and conventional machine learning techniques by 2-4%, but also improved the performance of the models on spectra with higher noise or spectral shifting. Convolutional neural networks slightly outperformed conventional machine learning approaches for skin cancer detection by Raman spectroscopy.
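Two of the augmentation strategies listed, added random noise and spectral shift, can be sketched in a few lines (parameter values below are hypothetical, not the study's):

```python
import random

def augment_spectrum(spectrum, max_shift=2, noise_sd=0.01, rng=None):
    """Two simple Raman-spectrum augmentations: a small random spectral
    shift (edge-padded so length is preserved) plus additive Gaussian noise."""
    rng = rng or random.Random()
    shift = rng.randint(-max_shift, max_shift)
    if shift > 0:
        shifted = [spectrum[0]] * shift + spectrum[:-shift]
    elif shift < 0:
        shifted = spectrum[-shift:] + [spectrum[-1]] * (-shift)
    else:
        shifted = list(spectrum)
    return [v + rng.gauss(0.0, noise_sd) for v in shifted]

rng = random.Random(0)  # fixed seed for reproducibility
spectrum = [0.1, 0.5, 1.0, 0.5, 0.1]  # hypothetical intensities
print(augment_spectrum(spectrum, rng=rng))
```

Generating several such perturbed copies per training spectrum also teaches the model to tolerate noisier or shifted spectra at test time, consistent with the robustness result reported in the conclusions.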
