Results 1 - 20 of 2,636
1.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38990514

ABSTRACT

Protein-peptide interactions (PPepIs) are vital to understanding cellular functions and can facilitate the design of novel drugs. As an essential component in forming a PPepI, protein-peptide binding sites are the basis for understanding the mechanisms involved in PPepIs, so accurately identifying these binding sites is a critical task. Traditional experimental methods for studying binding sites are labor-intensive and time-consuming, and several computational tools have been developed to supplement them. However, these tools are limited in generality or accuracy by their need for ligand information, complex feature construction, or their reliance on residue-level modeling. To address these drawbacks, we describe a geometric attention-based network for peptide binding site identification (GAPS). The proposed model uses geometric feature engineering to construct atom representations and incorporates multiple attention mechanisms to update relevant biological features. In addition, a transfer learning strategy leverages protein-protein binding site information to enhance protein-peptide binding site recognition, exploiting the structural and biological similarities between proteins and peptides. GAPS demonstrates state-of-the-art performance and excellent robustness on this task. Moreover, the model performs well across several extended experiments, including prediction of apo protein-peptide, protein-cyclic peptide, and AlphaFold-predicted protein-peptide binding sites. These results confirm that GAPS is a powerful, versatile, and stable method suitable for diverse binding site predictions.


Subjects
Peptides, Binding Sites, Peptides/chemistry, Peptides/metabolism, Protein Binding, Computational Biology/methods, Algorithms, Proteins/chemistry, Proteins/metabolism, Machine Learning
2.
PeerJ Comput Sci ; 10: e2103, 2024.
Article in English | MEDLINE | ID: mdl-38983199

ABSTRACT

Images and videos containing fake faces are the most common type of digital manipulation, and such content can spread false information with harmful consequences. The use of machine learning algorithms to produce fake face images has made it challenging to distinguish genuine from fake content. Face manipulations are categorized into four basic groups: entire face synthesis, face identity manipulation (deepfake), facial attribute manipulation, and facial expression manipulation. This study used lightweight convolutional neural networks to detect fake face images generated by entire face synthesis with generative adversarial networks. The training data comprised 70,000 real images from the FFHQ dataset and 70,000 fake images produced with StyleGAN2 trained on FFHQ; 80% of the dataset was used for training and 20% for testing. Initially, the MobileNet, MobileNetV2, EfficientNetB0, and NASNetMobile convolutional neural networks were trained separately, each pre-trained on ImageNet and reused with transfer learning. Among these first runs, EfficientNetB0 reached the highest accuracy, 93.64%. The EfficientNetB0 model was then revised to raise its accuracy by adding two dense layers (256 neurons) with ReLU activation, two dropout layers, one flattening layer, one dense layer (128 neurons) with ReLU activation, and a final two-node classification dense layer with softmax activation. This revision raised the accuracy of EfficientNetB0 to 95.48%. Finally, the revised model was combined with the MobileNet and MobileNetV2 models using stacking ensemble learning, yielding the highest accuracy of 96.44%.
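The revised classifier head lends itself to a short sketch. Below is a hedged Keras sketch of an ImageNet-pretrained EfficientNetB0 with a head assembled from the layers listed above; the input size, dropout rates, optimizer, and loss are assumptions rather than the authors' settings.

```python
# Hedged sketch: ImageNet-pretrained EfficientNetB0 with a custom
# classification head similar to the one described in the abstract.
# Input size, dropout rates, and training settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.Flatten(),                       # flattening layer
    layers.Dense(256, activation="relu"),   # dense layer, 256 neurons
    layers.Dropout(0.5),
    layers.Dense(256, activation="relu"),   # second 256-neuron dense layer
    layers.Dropout(0.5),
    layers.Dense(128, activation="relu"),   # dense layer, 128 neurons
    layers.Dense(2, activation="softmax"),  # real vs. fake classification
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```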

3.
PeerJ Comput Sci ; 10: e2107, 2024.
Article in English | MEDLINE | ID: mdl-38983235

ABSTRACT

Fine-tuning is an important technique in transfer learning that has achieved significant success in tasks that lack training data. However, when the data distribution gap between the source and target domains is large, single-source fine-tuning struggles to extract effective features. To address this issue, we propose a multi-source transfer learning framework called adaptive multi-source domain collaborative fine-tuning (AMCF). AMCF uses multiple source-domain models for collaborative fine-tuning, thereby improving the model's feature extraction capability on the target task. Specifically, AMCF employs an adaptive multi-source domain layer selection strategy that customizes an appropriate layer fine-tuning scheme for the target task across the source-domain models, aiming to extract more effective features. Furthermore, a novel multi-source domain collaborative loss function is designed to help each source-domain model extract target data features precisely while minimizing the output differences among the source-domain models, thereby enhancing their adaptability to the target data. To validate AMCF, we applied it to seven public visual classification datasets commonly used in transfer learning and compared it with widely used single-source fine-tuning methods. Experimental results show that, compared with existing fine-tuning methods, our method not only improves the accuracy of feature extraction but also provides precise layer fine-tuning schemes for the target task, significantly improving fine-tuning performance.
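A hedged PyTorch sketch of a collaborative fine-tuning objective of the kind described above: each source-pretrained model fits the target labels while a penalty discourages their outputs from diverging. The discrepancy measure and its weight are illustrative assumptions, not the paper's exact loss.

```python
# Hedged sketch of a multi-source collaborative fine-tuning loss:
# each source-pretrained model fits the target labels, while an extra
# term discourages their predictions from diverging. The MSE-based
# disagreement term and its weight are illustrative assumptions.
import torch
import torch.nn.functional as F

def collaborative_loss(models, x, y, align_weight=0.1):
    logits = [m(x) for m in models]                # one output per source model
    task = sum(F.cross_entropy(z, y) for z in logits) / len(logits)
    probs = [F.softmax(z, dim=1) for z in logits]
    disagreement = 0.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            disagreement = disagreement + F.mse_loss(probs[i], probs[j])
    return task + align_weight * disagreement
```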

4.
Plant Methods ; 20(1): 101, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38970029

ABSTRACT

BACKGROUND: The occurrence, development, and outbreak of tea diseases and pests pose a significant challenge to tea quality and yield, necessitating prompt identification and control measures. Given the vast array of tea diseases and pests and the intricacies of the tea planting environment, accurate and rapid diagnosis remains elusive. The present study investigates transfer learning with convolutional neural networks for identifying tea diseases and pests, aiming to enable accurate and rapid detection of diseases and pests affecting the Yunnan big-leaf tea variety within its complex ecological niche. RESULTS: We first gathered 1,878 images covering 10 prevalent types of tea diseases and pests from complex environments within tea plantations, compiling a comprehensive dataset, and applied data augmentation techniques to enrich sample diversity. Using ImageNet pre-trained models, we conducted a comprehensive evaluation and identified the Xception architecture as the most effective. Notably, integrating an attention mechanism into the Xception model did not improve recognition performance. Subsequently, through transfer learning with a core-freezing strategy, we achieved a test accuracy of 98.58% and a validation accuracy of 98.2310%. CONCLUSIONS: These outcomes represent a significant stride toward accurate and timely detection and hold promise for enhancing the sustainability and productivity of Yunnan tea. Our findings provide a theoretical foundation and technical guidance for developing online detection technologies for tea diseases and pests in Yunnan.
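A hedged Keras sketch of the transfer-learning setup described above: data augmentation followed by an ImageNet-pretrained Xception whose convolutional core is frozen so that only the new classification head is trained. All hyperparameters are assumptions.

```python
# Hedged sketch: data augmentation plus an ImageNet-pretrained Xception
# with its convolutional core frozen, as a stand-in for the transfer-
# learning / core-freezing setup described above. Hyperparameters and
# head design are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False                                # freeze the pre-trained core

inputs = tf.keras.Input(shape=(299, 299, 3))
x = augment(inputs)
x = base(x, training=False)
outputs = layers.Dense(10, activation="softmax")(x)   # 10 disease/pest classes
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```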

5.
J Cheminform ; 16(1): 79, 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38972994

ABSTRACT

BACKGROUND: Previous deep learning methods for predicting protein binding pockets mainly employ 3D convolution, yet an abundance of convolution operations may lead a model to prioritize local information excessively and overlook global information. Moreover, it is essential to account for the influence of diverse protein fold structural classes, because proteins in different structural classes exhibit different biological functions, whereas those within the same structural class share similar functional attributes. RESULTS: We propose LVPocket, a novel method that synergistically captures both local and global information of protein structure through the integration of Transformer encoders, which helps the model achieve better performance in binding pocket prediction. We then tailored prediction models for four distinct structural classes of proteins using transfer learning; the four fine-tuned models were initialized from the baseline LVPocket model, which was trained on the sc-PDB dataset. LVPocket exhibits superior performance on three independent datasets compared with current state-of-the-art methods, and the fine-tuned models outperform the baseline model. SCIENTIFIC CONTRIBUTION: We present a novel model structure for predicting protein binding pockets that addresses the tendency of heavily convolutional models to neglect global information about protein structure. Furthermore, we tackle the impact of different protein fold structures on binding pocket prediction through transfer learning.

6.
J Neural Eng ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38986468

ABSTRACT

OBJECTIVE: Electroencephalography (EEG) is widely recognized as an effective method for detecting fatigue. However, practical application of EEG-based fatigue detection in real-world scenarios is often challenging, particularly for subjects not included in the training datasets, owing to bio-individual differences and noisy labels. This study aims to develop an effective framework for cross-subject fatigue detection that addresses these challenges. APPROACH: We propose a novel framework, termed DP-MP, for cross-subject fatigue detection, which uses a Domain-Adversarial Neural Network (DANN)-based prototypical representation in conjunction with Mix-up pairwise learning. The DP-MP framework mitigates the impact of bio-individual differences by encoding fatigue-related semantic structures within EEG signals and exploring shared fatigue prototype features across individuals. Notably, to the best of our knowledge, this work is the first to conceptualize fatigue detection as a pairwise learning task, thereby effectively reducing interference from noisy labels. Furthermore, we propose the Mix-up pairwise learning (MixPa) approach for fatigue detection, which broadens the advantages of pairwise learning by introducing more diverse and informative relationships among samples. RESULTS: Cross-subject experiments on two benchmark databases, SEED-VIG and FTEF, achieved state-of-the-art performance with average accuracies of 88.14% and 97.41%, respectively, demonstrating the model's effectiveness and generalization capability. SIGNIFICANCE: This is the first time EEG-based fatigue detection has been conceptualized as a pairwise learning task, offering a novel perspective on this field. The proposed DP-MP framework effectively tackles the challenges of bio-individual differences and noisy labels and demonstrates superior performance, providing valuable insights for future research and promoting the application of brain-computer interfaces for fatigue detection in real-world scenarios.
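DANN-style training relies on a gradient-reversal layer; the hedged PyTorch sketch below shows that generic building block, not the authors' DP-MP implementation.

```python
# Hedged sketch: the gradient-reversal layer used in DANN-style
# domain-adversarial training (the backbone DP-MP builds on).
# The forward pass is the identity; the backward pass flips the gradient
# sign, pushing the feature extractor toward domain- (here, subject-)
# invariant representations. Generic illustration, not the paper's code.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None

def grad_reverse(x, alpha=1.0):
    return GradReverse.apply(x, alpha)

# Usage: domain_logits = domain_classifier(grad_reverse(features, alpha))
```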

7.
Comput Biol Med ; 179: 108734, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964243

ABSTRACT

Artificial intelligence (AI) has played a vital role in computer-aided drug design (CADD). This development has been further accelerated by the increasing use of machine learning (ML), mainly deep learning (DL), and by advances in computing hardware and software. As a result, initial doubts about the application of AI in drug discovery have been dispelled, leading to significant benefits in medicinal chemistry. At the same time, it is crucial to recognize that AI is still in its infancy and faces limitations that must be addressed to harness its full potential in drug discovery. Notable limitations include insufficient, unlabeled, and non-uniform data; the resemblance of some AI-generated molecules to existing molecules; the lack of adequate benchmarks; intellectual property rights (IPR) hurdles in data sharing; poor understanding of biology; a focus on proxy data and ligands; and the lack of holistic methods for representing input molecular structures that avoid manual pre-processing (feature engineering). The major component of AI infrastructure is input data, as most of the success of AI-driven efforts to improve drug discovery depends on the quality and quantity of the data used to train and test AI algorithms, among other factors. Moreover, data-hungry DL approaches may fail to live up to their promise without sufficient data. The current literature suggests several methods that, to a certain extent, effectively handle low-data settings and improve the output of AI models in drug discovery: transfer learning (TL), active learning (AL), single- or one-shot learning (OSL), multi-task learning (MTL), data augmentation (DA), and data synthesis (DS). A further method, federated learning (FL), enables holders of proprietary data to jointly train an ML model without compromising data privacy. In this review, we compare and discuss these methods, their recent applications, and their limitations for modeling small-molecule data to improve the output of AI methods in drug discovery. The article also summarizes other novel methods for handling inadequate data.

8.
Article in English | MEDLINE | ID: mdl-38965165

ABSTRACT

PURPOSE: Cardiac perfusion MRI is vital for disease diagnosis, treatment planning, and risk stratification, with anomalies serving as markers of underlying ischemic pathologies. AI-assisted methods and tools enable accurate and efficient left ventricular (LV) myocardium segmentation on all DCE-MRI timeframes, offering a solution to the challenges posed by the multidimensional nature of the data. This study aims to develop and assess an automated method for LV myocardial segmentation on DCE-MRI data from a local hospital. METHODS: The study used retrospective DCE-MRI data from 55 subjects acquired at the local hospital with a 1.5 T MRI scanner; the dataset included subjects with and without cardiac abnormalities. The timepoint for the reference frame (post-contrast LV myocardium) was identified using the standard deviation across the temporal sequences. Iterative image registration of the other temporal images to this reference image was performed using the demons algorithm. The registered stack was fed to a model built on the U-Net framework to predict the LV myocardium at all timeframes of the DCE-MRI. RESULTS: The Dice similarity coefficient (DSC) for myocardial segmentation was 0.78 ± 0.04 using the pre-trained network Net_cine and 0.78 ± 0.03 for the fine-tuned network Net_dyn, which predicts masks on all timeframes individually. The DSC for Net_dyn ranged from 0.71 to 0.93, and the average DSC for the reference frame was 0.82 ± 0.06. CONCLUSION: The study proposes a fast and fully automated AI-assisted method to segment the LV myocardium on all timeframes of DCE-MRI data. The method is robust, its performance is independent of the intra-temporal sequence registration, and it can accommodate timeframes with potential registration errors.
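A hedged NumPy sketch of two steps implied above: choosing the reference timeframe as the frame with the highest temporal standard deviation (one plausible reading of the selection rule) and computing the Dice similarity coefficient used for evaluation.

```python
# Hedged sketch: reference-frame selection by temporal standard deviation
# (one plausible reading of the step described above) and a Dice score
# for evaluating predicted myocardium masks. Illustrative only.
import numpy as np

def select_reference_frame(series: np.ndarray) -> int:
    """series: (T, H, W) dynamic contrast-enhanced image stack."""
    per_frame_std = series.std(axis=(1, 2))      # contrast spread per frame
    return int(per_frame_std.argmax())

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)
```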

9.
J Med Imaging (Bellingham) ; 11(4): 044502, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38988991

ABSTRACT

Purpose: Lung cancer is the second most common cancer and the leading cause of cancer death globally. Low-dose computed tomography (LDCT) is the recommended imaging tool for lung cancer screening, and a fully automated computer-aided detection method for LDCT would greatly improve the existing clinical workflow. Most existing methods for lung nodule detection are designed for high-dose CTs (HDCTs) and cannot be applied directly to LDCTs because of domain shift and the inferior quality of LDCT images. In this work, we describe a semi-automated, transfer learning-based approach for the early detection of lung nodules using LDCTs. Approach: We developed an algorithm based on the object detection model You Only Look Once (YOLO) to detect lung nodules. The YOLO model was first trained on CTs, and the pre-trained weights were used as initial weights when retraining the model on LDCTs in a medical-to-medical transfer learning approach. The dataset for this study came from a screening trial of LDCTs acquired from 50 biopsy-confirmed lung cancer patients over three consecutive years (T1, T2, and T3); HDCTs from about 60 lung cancer patients were obtained from a public dataset. The developed model was evaluated on a hold-out test set of 15 patient cases (93 slices with cancerous nodules) using precision, specificity, recall, and F1-score, reported patient-wise on a per-year basis and averaged over the 3 years. For comparison, the proposed detection model was also trained using COCO pre-trained weights as the initial weights. A paired t-test and chi-squared test with an alpha of 0.05 were used for statistical significance testing. Results: Comparing the model developed with HDCT pre-trained weights against the one developed with COCO pre-trained weights, the former versus the latter obtained a precision of 0.982 versus 0.93 in detecting cancerous nodules, specificity of 0.923 versus 0.849 in identifying slices with no cancerous nodules, recall of 0.87 versus 0.886, and F1-score of 0.924 versus 0.903. As the nodules progressed, the former approach achieved a precision of 1, specificity of 0.92, and sensitivity of 0.930. The statistical analysis yielded a p-value of 0.0054 for precision and 0.00034 for specificity. Conclusions: In this study, a semi-automated method was developed to detect lung nodules in LDCTs by using HDCT pre-trained weights as initial weights and retraining the model, and the results were compared with those obtained using COCO pre-trained weights. The proposed method may identify early lung nodules during screening, reduce overdiagnosis and follow-ups due to misdiagnosis in LDCTs, enable earlier treatment of affected patients, and lower the mortality rate.
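A hedged sketch of the slice-level evaluation metrics reported above, computed from confusion counts; these are the generic formulas, not the authors' evaluation code.

```python
# Hedged sketch: precision, recall (sensitivity), specificity, and F1
# computed from confusion counts, as reported in the abstract above.
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0          # sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}
```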

10.
Front Plant Sci ; 15: 1409194, 2024.
Article in English | MEDLINE | ID: mdl-38966142

ABSTRACT

Introduction: Cotton yield estimation is crucial in the agricultural process, and the accuracy of boll detection during the flocculation period significantly influences yield estimates in cotton fields. Unmanned aerial vehicles (UAVs) are frequently employed for plant detection and counting owing to their cost-effectiveness and adaptability. Methods: To address the challenges of small cotton boll targets and the low resolution of UAV imagery, this paper introduces a transfer learning method based on the YOLOv8 framework, named YOLO small-scale pyramid depth-aware detection (SSPD). The method combines a space-to-depth, non-strided convolution (SPD-Conv) module with a small-target detection head and integrates a simple, parameter-free attention mechanism (SimAM) that significantly improves boll detection accuracy. Results: YOLO SSPD achieved a boll detection accuracy of 0.874 on UAV-scale imagery, with a coefficient of determination (R²) of 0.86, a root mean square error (RMSE) of 12.38, and a relative root mean square error (RRMSE) of 11.19% for boll counts. Discussion: The findings indicate that YOLO SSPD can significantly improve cotton boll detection accuracy on UAV imagery, thereby supporting the cotton production process. The method offers a robust solution for high-precision cotton monitoring and enhances the reliability of cotton yield estimates.
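A hedged PyTorch sketch of the SimAM parameter-free attention module cited above, in its commonly published energy-based form; how it is wired into YOLOv8 and SPD-Conv is not shown and would be an assumption.

```python
# Hedged sketch: SimAM parameter-free attention in its commonly published
# form, weighting each neuron by an energy-based saliency score.
# Integration with YOLOv8 / SPD-Conv is not reproduced here.
import torch
import torch.nn as nn

class SimAM(nn.Module):
    def __init__(self, eps: float = 1e-4):
        super().__init__()
        self.eps = eps

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)) ** 2
        v = d.sum(dim=(2, 3), keepdim=True) / n  # channel-wise variance
        e_inv = d / (4 * (v + self.eps)) + 0.5   # inverse energy per neuron
        return x * torch.sigmoid(e_inv)
```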

11.
J Neural Eng ; 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38968936

ABSTRACT

Objective. Domain adaptation has been recognized as a potent solution to the challenge of limited training data for electroencephalography (EEG) classification tasks. Existing studies primarily focus on homogeneous environments; however, the heterogeneous properties of EEG data arising from device diversity cannot be overlooked. This motivates heterogeneous domain adaptation methods that can fully exploit knowledge from an auxiliary heterogeneous domain for EEG classification. Approach. We propose a novel model named Informative Representation Fusion (IRF) to tackle unsupervised heterogeneous domain adaptation for EEG data. In IRF, we consider different perspectives on the data, i.e., independent and identically distributed (iid) and non-iid, to learn different representations. From the non-iid perspective, IRF models high-order correlations among data with hypergraphs and develops hypergraph encoders to obtain data representations for each domain. From the iid perspective, multi-layer perceptron networks applied to the source and target domain data yield another type of representation for both domains. An attention mechanism then fuses these two types of representations into informative features. To learn transferable representations, the Maximum Mean Discrepancy is used to align the distributions of the source and target domains based on the fused features. Main results. Experimental results on several real-world datasets demonstrate the effectiveness of the proposed model. Significance. This article handles an EEG classification setting in which the source and target EEG data lie in different spaces and, moreover, under an unsupervised learning setting. This situation is practical in the real world but barely studied in the literature. The proposed model achieves high classification accuracy, and this study is important for commercial applications of EEG-based BCIs.
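A hedged PyTorch sketch of a Gaussian-kernel Maximum Mean Discrepancy term of the kind used above to align the fused source and target representations; the kernel and bandwidth handling are illustrative choices.

```python
# Hedged sketch: Gaussian-kernel MMD between source and target feature
# batches, as an illustration of the distribution alignment step above.
# A single fixed bandwidth is an illustrative simplification.
import torch

def gaussian_kernel(a, b, sigma=1.0):
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd(source_feats, target_feats, sigma=1.0):
    k_ss = gaussian_kernel(source_feats, source_feats, sigma).mean()
    k_tt = gaussian_kernel(target_feats, target_feats, sigma).mean()
    k_st = gaussian_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2 * k_st
```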

12.
Water Res ; 261: 121933, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38972234

ABSTRACT

Data-driven metamodels reproduce the input-output mapping of physics-based models while significantly reducing simulation times, and such techniques are widely used in the design, control, and optimization of water distribution systems. Recent research highlights the potential of metamodels based on Graph Neural Networks (GNNs), as they efficiently leverage the graph-structured characteristics of water distribution systems and possess inductive biases that facilitate generalization to unseen topologies. Transferable metamodels are particularly advantageous for problems that require efficient evaluation of many alternative layouts or when training data are scarce. However, the transferability of GNN-based metamodels remains limited because physical processes that occur at the edge level, i.e., in pipes, are not represented. To address this limitation, our work introduces Edge-Based Graph Neural Networks, which extend the set of inductive biases and represent link-level processes in more detail than traditional GNNs; this architecture is theoretically related to the mass conservation constraints at junctions. To verify our approach, we test the suitability of the edge-based network for estimating pipe flowrates and nodal pressures, emulating steady-state EPANET simulations. We first compare the effectiveness of the metamodels against Graph Neural Networks on several benchmark water distribution systems, and then explore transferability by evaluating performance on unseen systems. For each configuration, we calculate model performance metrics, such as the coefficient of determination and the speed-up with respect to the original numerical model. Our results show that the proposed method captures pipe-level physical processes more accurately than node-based models. When tested on unseen water networks with a similar distribution of demands, our model retains good generalization performance, with a coefficient of determination of up to 0.98 for flowrates and up to 0.95 for predicted heads. Further developments could include the simultaneous derivation of pressures and flowrates.

13.
Sensors (Basel) ; 24(11)2024 May 30.
Article in English | MEDLINE | ID: mdl-38894313

ABSTRACT

The purpose of this paper is to propose a novel transfer learning regularization method based on knowledge distillation. Transfer learning methods have recently been used in various fields; however, problems such as knowledge loss still occur when transferring to a new target dataset, and various regularization methods based on knowledge distillation have been proposed to address them. In this paper, we propose a transfer learning regularization method based on feature map alignment, a technique used in knowledge distillation. The proposed method is composed of two attention-based submodules: self-pixel attention (SPA) and global channel attention (GCA). The self-pixel attention submodule uses the feature maps of both the source and target models, providing an opportunity to jointly consider the features of the target and the knowledge of the source. The global channel attention submodule determines the importance of channels across all layers, unlike existing methods that calculate this only within a single layer. Transfer learning regularization is therefore performed by considering both the interior of each layer and the depth of the entire network. In classification experiments on commonly used datasets, the proposed method using both submodules achieved higher overall classification accuracy than existing methods.
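A hedged PyTorch sketch of a generic feature-map alignment regularizer in the spirit described above: the target (student) model's feature maps are kept close to the frozen source (teacher) model's maps during fine-tuning. The SPA and GCA attention submodules are not reproduced here.

```python
# Hedged sketch: a plain feature-map alignment regularizer for transfer
# learning -- the target model's feature maps are pulled toward the frozen
# source model's maps. The paper's SPA/GCA attention weighting is omitted.
import torch
import torch.nn.functional as F

def feature_alignment_loss(target_feats, source_feats):
    """Both arguments: lists of feature maps (B, C, H, W), one per layer."""
    loss = 0.0
    for t, s in zip(target_feats, source_feats):
        loss = loss + F.mse_loss(t, s.detach())   # do not update the source model
    return loss / len(target_feats)
```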

14.
Sensors (Basel) ; 24(11)2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38894364

ABSTRACT

Transfer learning (TL) techniques have proven useful in a wide variety of applications traditionally dominated by machine learning (ML), such as natural language processing, computer vision, and computer-aided design. Recent extensions of TL to the radio frequency (RF) domain are being used to increase the applicability of RF machine learning (RFML) algorithms, seeking to improve the portability of models for spectrum situational awareness and transmission source identification. Unlike most computer vision and natural language processing applications of TL, applications in the RF modality must contend with inherent hardware distortions and channel condition variations. This paper evaluates the feasibility and performance trade-offs of transferring learned behaviors from functional RFML classification algorithms, specifically those designed for automatic modulation classification (AMC) and specific emitter identification (SEI), between homogeneous radios of similar construction and quality and heterogeneous radios of different construction and quality. Results derived from both synthetic data and over-the-air experimental collection show promising performance benefits from applying TL to the RFML tasks of SEI and AMC.

15.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894421

ABSTRACT

Steel structures are susceptible to corrosion due to their exposure to the environment. Currently used non-destructive techniques require inspector involvement, and inaccessibility of the defective part may leave corrosion unnoticed, allowing it to propagate and cause catastrophic structural failure over time. Autonomous corrosion detection is essential for mitigating these problems. This study investigated which type of encoder-decoder neural network and which training strategy work best for automating the segmentation of corroded pixels in visual images. Models using pre-trained DenseNet121 and EfficientNetB7 backbones yielded 96.78% and 98.5% average pixel-level accuracy, respectively. The deeper EfficientNetB7 performed the worst, with only 33% true positives, 58% less than ResNet34 and the original UNet. ResNet34 successfully classified the corroded pixels with 2.98% false positives, whereas the original UNet predicted 8.24% of the non-corroded pixels as corroded when tested on a set of images excluded from the training dataset. Deeper networks were found to benefit more from transfer learning than from full training, and the small dataset could be one reason for the performance degradation. Both the fully trained conventional UNet and the ResNet34 models were also tested on external images of different steel structures with different colors and types of corrosion, with the ResNet34 backbone outperforming the conventional UNet.
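A hedged sketch of one common way to build such an encoder-decoder with a pre-trained backbone, using the segmentation_models_pytorch library; the authors' exact tooling, loss, and training settings are not stated and are assumed here.

```python
# Hedged sketch: a U-Net-style encoder-decoder with an ImageNet-pretrained
# ResNet34 backbone for binary corroded/non-corroded pixel segmentation,
# built with segmentation_models_pytorch as one common setup; loss and
# optimizer choices are assumptions.
import segmentation_models_pytorch as smp
import torch

model = smp.Unet(
    encoder_name="resnet34",        # pre-trained backbone
    encoder_weights="imagenet",     # transfer learning initialization
    in_channels=3,
    classes=1,                      # corroded vs. non-corroded
)
loss_fn = smp.losses.DiceLoss(mode="binary")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```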

16.
Front Endocrinol (Lausanne) ; 15: 1296047, 2024.
Article in English | MEDLINE | ID: mdl-38894742

ABSTRACT

Purpose: The main objective of this study was to assess the possibility of using radiomics, deep learning, and transfer learning methods for the analysis of chest CT scans, and to combine these techniques with bone turnover markers to identify and screen patients for osteoporosis. Method: A total of 488 patients who had undergone chest CT and bone turnover marker testing, and whose bone mineral density was known, were included in this study. ITK-SNAP software was used to delineate regions of interest, and radiomics features were extracted using Python. Multiple 2D and 3D deep learning models were trained to identify these regions of interest, and the effectiveness of these techniques in screening for osteoporosis was compared. Result: Clinical models based on gender, age, and β-cross achieved an accuracy of 0.698 and an AUC of 0.665. Radiomics models, which used 14 selected radiomics features, achieved a maximum accuracy of 0.750 and an AUC of 0.739. On the test group, the 2D deep learning model achieved an accuracy of 0.812 and an AUC of 0.855, while the 3D deep learning model performed better, with an accuracy of 0.854 and an AUC of 0.906. Similarly, the 2D transfer learning model achieved an accuracy of 0.854 and an AUC of 0.880, whereas the 3D transfer learning model achieved an accuracy of 0.740 and an AUC of 0.737. Overall, 3D deep learning and 2D transfer learning applied to chest CT scans showed excellent osteoporosis screening performance. Conclusion: Bone turnover markers may not be necessary for osteoporosis screening, as 3D deep learning and 2D transfer learning techniques using chest CT scans proved to be equally effective alternatives.


Subjects
Biomarkers, Deep Learning, Osteoporosis, Tomography, X-Ray Computed, Humans, Osteoporosis/diagnostic imaging, Female, Tomography, X-Ray Computed/methods, Male, Middle Aged, Aged, Bone Density, Bone Remodeling/physiology, Adult, Radiomics
17.
Adv Sci (Weinh) ; : e2308881, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38889239

ABSTRACT

With wireless multimodal locomotion capabilities, magnetic soft millirobots have emerged as potential minimally invasive medical robotic platforms. Owing to their diverse shape-programming capability, they can generate various locomotion modes, and their locomotion can be adapted to different environments by controlling the external magnetic field signal. Existing adaptation methods, however, rely on hand-tuned signals. Here, a learning-based adaptive magnetic soft millirobot multimodal locomotion framework empowered by sim-to-real transfer is presented. A data-driven magnetic soft millirobot simulation environment is developed, in which the periodic magnetic actuation signal for a given soft millirobot is learned. The learned locomotion strategy is then deployed to the real world using Bayesian optimization and Gaussian processes. Finally, automated domain recognition and locomotion adaptation to unknown environments using a Kullback-Leibler divergence-based probabilistic method are illustrated. This approach enables soft millirobot locomotion to adapt quickly and continuously to environmental changes and to explore the actuation space for unanticipated solutions with minimal experimental cost.

18.
Bioinformatics ; 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38889274

ABSTRACT

MOTIVATION: Deep learning models have achieved remarkable success in a wide range of natural-world tasks, such as vision, language, and speech recognition. These accomplishments are largely attributable to the availability of open-source large-scale datasets. More importantly, pre-trained foundation models exhibit a surprising degree of transferability to downstream tasks, enabling efficient learning even with limited training examples. However, the application of such natural-domain models to the domain of tiny Cryo-Electron Tomography (Cryo-ET) images has been a relatively unexplored frontier. This research is motivated by the intuition that 3D Cryo-ET voxel data can be conceptually viewed as a sequence of progressively evolving video frames. RESULTS: Leveraging this insight, we propose a novel approach that uses 3D models pre-trained on large-scale video datasets to enhance Cryo-ET subtomogram classification. Our experiments, conducted on both simulated and real Cryo-ET datasets, reveal compelling results: video initialization not only improves classification accuracy but also substantially reduces training costs. Further analyses provide additional evidence of the value of video initialization in enhancing subtomogram feature extraction. We also observe that video initialization yields similar positive effects when applied to medical 3D classification tasks, underscoring the potential of cross-domain knowledge transfer from video-based models to advance the state of the art across a wide range of biological and medical data types. AVAILABILITY AND IMPLEMENTATION: https://github.com/xulabs/aitom.
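A hedged PyTorch sketch of the video-to-Cryo-ET transfer idea above: a 3D CNN initialized from video-pretrained weights with its head replaced for subtomogram classification. The backbone (R3D-18 pre-trained on Kinetics-400), class count, and channel handling are assumptions; the authors' implementation is in the repository linked above.

```python
# Hedged sketch: a 3D CNN initialized from video-pretrained weights and
# re-headed for subtomogram classification. R3D-18/Kinetics-400 and the
# 10-class head are illustrative assumptions; single-channel subtomogram
# volumes would need to be replicated to 3 channels before input.
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)   # video pre-training
model.fc = nn.Linear(model.fc.in_features, 10)           # e.g. 10 macromolecule classes
# Fine-tune on (B, 3, D, H, W) tensors built from subtomogram voxel data.
```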

19.
IEEE Open J Eng Med Biol ; 5: 467-475, 2024.
Article in English | MEDLINE | ID: mdl-38899015

ABSTRACT

Accurate short- and mid-term blood glucose predictions are crucial for patients with diabetes struggling to maintain healthy glucose levels, as well as for individuals at risk of developing the disease. Consequently, numerous efforts from the scientific community have focused on developing predictive models for glucose levels. This study harnesses physiological data collected from wearable sensors to construct a series of data-driven models based on deep learning approaches. We systematically compare these models to offer insights for practitioners and researchers venturing into glucose prediction using deep learning techniques. Key questions addressed in this work encompass the comparison of various deep learning architectures for this task, determining the optimal set of input variables for accurate glucose prediction, comparing population-wide, fine-tuned, and personalized models, and assessing the impact of an individual's data volume on model performance. Additionally, as part of our outcomes, we introduce a meticulously curated dataset inclusive of data from both healthy individuals and those with diabetes, recorded in free-living conditions. This dataset aims to foster research in this domain and facilitate equitable comparisons among researchers.

20.
Cancers (Basel) ; 16(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38893257

ABSTRACT

Artificial intelligence (AI), encompassing machine learning (ML) and deep learning (DL), has revolutionized medical research, facilitating advances in drug discovery and cancer diagnosis. ML identifies patterns in data, while DL employs neural networks for intricate processing. Predictive modeling challenges, such as data labeling, are addressed by transfer learning (TL), which leverages pre-existing models for faster training. TL shows potential in genetic research, improving tasks such as gene expression analysis, mutation detection, genetic syndrome recognition, and genotype-phenotype association. This review explores the role of TL in overcoming challenges in mutation detection, genetic syndrome detection, gene expression analysis, and phenotype-genotype association. TL has shown effectiveness in various aspects of genetic research: it enhances the accuracy and efficiency of mutation detection, aiding the identification of genetic abnormalities; it can improve the diagnostic accuracy of syndrome-related genetic patterns; it plays a crucial role in gene expression analysis, enabling accurate prediction of gene expression levels and their interactions; and it enhances phenotype-genotype association studies by leveraging pre-trained models. In conclusion, TL improves AI efficiency through better mutation prediction, gene expression analysis, and genetic syndrome detection. Future studies should focus on increasing domain similarity, expanding databases, and incorporating clinical data for better predictions.
