Results 1 - 8 of 8
1.
Cell Rep Med ; 3(10): 100777, 2022 10 18.
Article in English | MEDLINE | ID: mdl-36220069

ABSTRACT

Overconsumption of carbohydrate-rich food combined with adverse eating patterns contributes to the increasing incidence of metabolic syndrome (MetS) in China. Therefore, we conducted a randomized trial to determine the effects of a low-carbohydrate diet (LCD), an 8-h time-restricted eating (TRE) schedule, and their combination on body weight and abdominal fat area (the primary outcomes) and on cardiometabolic outcomes in participants with MetS. Compared with baseline, all three 3-month treatments significantly reduce body weight and subcutaneous fat area, but only TRE and the combination treatment reduce visceral fat area (VFA), fasting blood glucose, uric acid (UA), and dyslipidemia. Furthermore, compared with the changes achieved by LCD, TRE and the combination treatment further decrease body weight and VFA, while only the combination treatment yields additional benefits for glycemic control, UA, and dyslipidemia. In conclusion, without any change in physical activity, an 8-h TRE schedule, with or without an LCD, can serve as an effective treatment for MetS (ClinicalTrials.gov: NCT04475822).


Subject(s)
Dyslipidemias , Metabolic Syndrome , Humans , Intra-Abdominal Fat/metabolism , Metabolic Syndrome/metabolism , Blood Glucose/metabolism , Uric Acid/metabolism , Diet, Carbohydrate-Restricted , Body Weight , Dyslipidemias/epidemiology
2.
J Med Internet Res ; 23(9): e26025, 2021 09 21.
Article in English | MEDLINE | ID: mdl-34546174

ABSTRACT

BACKGROUND: Skin and subcutaneous disease is the fourth-leading cause of nonfatal disease burden worldwide and constitutes one of the most common burdens in primary care. However, there is a severe shortage of dermatologists, particularly in rural areas of China. Furthermore, although artificial intelligence (AI) tools can assist in diagnosing skin disorders from images, the available data for the Chinese population are limited. OBJECTIVE: This study aims to establish a database for AI based on the Chinese population and presents an initial study on six common skin diseases. METHODS: Each image was captured with either a digital camera or a smartphone, verified by at least three experienced dermatologists against the corresponding pathology information, and finally added to the Xiangya-Derm database. Based on this database, we conducted AI-assisted classification research on six common skin diseases and proposed a network called Xy-SkinNet. Xy-SkinNet applies a two-step strategy to identify skin diseases. First, given an input image, the regions of the skin lesion are segmented. Second, an information fusion block combines the outputs of all segmented regions. We compared the performance with that of 31 dermatologists of varying experience levels. RESULTS: Xiangya-Derm, a new database consisting of over 150,000 clinical images of 571 different skin diseases in the Chinese population, is the largest and most diverse dermatological dataset of the Chinese population. The AI-based six-category classification achieved a top-3 accuracy of 84.77%, which exceeded the average accuracy of the dermatologists (78.15%). CONCLUSIONS: Xiangya-Derm, the largest database for the Chinese population, was created. The classification of six common skin conditions was conducted based on Xiangya-Derm to lay a foundation for product research.


Subject(s)
Melanoma , Skin Diseases , Skin Neoplasms , Artificial Intelligence , China , Dermoscopy , Humans , Skin Diseases/diagnosis
3.
BMC Med Inform Decis Mak ; 20(Suppl 11): 307, 2020 12 30.
Article in English | MEDLINE | ID: mdl-33380322

ABSTRACT

BACKGROUND: The availability of massive amounts of data enables clinical predictive tasks, and deep learning methods have achieved promising performance on them. However, most existing methods suffer from three limitations: (1) Real-valued events contain many missing values; many methods impute the missing values and then train their models on the imputed data, which may introduce imputation bias and makes the models' performance highly dependent on imputation accuracy. (2) Many existing studies take only Boolean-valued medical events (e.g., diagnosis codes) as inputs and ignore real-valued medical events (e.g., lab tests and vital signs), which are more important for acute diseases (e.g., sepsis) and mortality prediction. (3) Existing interpretable models can indicate which medical events contribute to the output but cannot give the contributions of patterns among medical events. METHODS: In this study, we propose a novel interpretable Pattern Attention model with Value Embedding (PAVE) to predict the risks of certain diseases. PAVE takes the embeddings of various medical events, their values, and their occurrence times as inputs and leverages a self-attention mechanism to attend to meaningful patterns among medical events for risk prediction tasks. Because only the observed values are embedded into vectors, missing values need not be imputed, which avoids imputation bias. Moreover, the self-attention mechanism aids model interpretability: the proposed model can output which patterns cause high risks. RESULTS: We conducted sepsis onset prediction and mortality prediction experiments on the publicly available MIMIC-III dataset and our proprietary EHR dataset. The experimental results show that PAVE outperforms existing models. Moreover, by analyzing the self-attention weights, our model outputs meaningful medical event patterns related to mortality.
CONCLUSIONS: PAVE learns effective medical event representations by incorporating values and occurrence times, which improves risk prediction performance. Moreover, the presented self-attention mechanism not only captures patients' health state information but also outputs the contributions of various medical event patterns, paving the way for interpretable clinical risk predictions. AVAILABILITY: The code for this paper is available at: https://github.com/yinchangchang/PAVE .
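The core idea described above — embedding only the observed (event, value, time) triples and letting self-attention weight patterns among them, so missing values never enter the model — can be sketched roughly as follows. This is an illustrative NumPy toy, not the paper's implementation: the event names, dimensions, and random weights are all invented, and the real model learns its parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 8  # embedding dimension

# Only *observed* events are embedded: (code, measured value, occurrence time).
# Unmeasured lab tests simply produce no row, so no imputation step exists.
events = [("lactate", 2.1, 0.0), ("creatinine", 1.4, 3.5), ("lactate", 4.0, 6.0)]
code_emb = {c: rng.normal(size=d) for c in {e[0] for e in events}}
w_val, w_time = rng.normal(size=d), rng.normal(size=d)

# Each event vector combines its code embedding with value/time projections.
X = np.stack([code_emb[c] + v * w_val + t * w_time for c, v, t in events])

# Single-head self-attention: each observed event attends to every other one,
# so co-occurring patterns (e.g., rising lactate) can be weighted jointly.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv
attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # rows sum to 1: inspectable weights
H = attn @ V

# Pool attended representations into a single risk score.
risk = 1 / (1 + np.exp(-(H.mean(axis=0) @ rng.normal(size=d))))
```

The interpretability claim rests on `attn`: each row is a probability distribution over the other observed events, so high-weight entries can be read off as the patterns driving a prediction.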


Subject(s)
Delivery of Health Care , Humans
4.
J Thorac Dis ; 12(9): 4690-4701, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33145042

ABSTRACT

BACKGROUND: Conventional manual ultrasound scanning and human diagnosis of the breast are operator-dependent, relatively slow, and error-prone. In this study, we used an Automated Breast Ultrasound (ABUS) machine for scanning and deep convolutional neural network (CNN) technology, a kind of deep learning (DL) algorithm, for the detection and classification of breast nodules, aiming to achieve automatic and accurate diagnosis. METHODS: Two hundred and ninety-three lesions from 194 patients with definite pathological diagnoses (117 benign and 176 malignant) were recruited as the case group. Another 70 patients without breast diseases were enrolled as the control group. All breast scans were carried out on an ABUS machine and then randomly divided into training, verification, and test sets in a 7:1:2 proportion. In the training set, we constructed a detection model with a three-dimensional U-shaped convolutional neural network (3D U-Net) architecture to segment the nodules from background breast images. Residual blocks, attention connections, and hard mining were used to optimize the model, while random cropping, flipping, and rotation were used for data augmentation. In the test phase, the current model was compared with those in previously reported studies. In the verification set, the effectiveness of the detection model was evaluated. In the classification phase, multiple convolutional layers and fully connected layers were applied to set up a classification model that identifies whether a nodule is malignant. RESULTS: Our detection model yielded a sensitivity of 91% with 1.92 false positives per automatically scanned image. The classification model achieved a sensitivity of 87.0%, a specificity of 88.0%, and an accuracy of 87.5%.
CONCLUSIONS: Deep CNNs combined with ABUS may be a promising tool for easy detection and accurate diagnosis of breast nodules.
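The reported sensitivity, specificity, and accuracy follow directly from a binary confusion matrix. A small sketch of the definitions; the counts below are illustrative values chosen to land near the reported figures, not the study's actual confusion matrix:

```python
def binary_metrics(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP); accuracy over all cases."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 87 of 100 malignant nodules flagged,
# 88 of 100 benign nodules correctly cleared.
sens, spec, acc = binary_metrics(tp=87, fn=13, tn=88, fp=12)
# sens = 0.87, spec = 0.88, acc = 0.875
```

Note that accuracy equals the mean of sensitivity and specificity only when the two classes are equally sized, as in this toy split.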

5.
J Med Internet Res ; 22(9): e20645, 2020 09 28.
Article in English | MEDLINE | ID: mdl-32985996

ABSTRACT

BACKGROUND: Deep learning models have attracted significant interest from health care researchers during the last few decades. There have been many studies that apply deep learning to medical applications and achieve promising results. However, there are three limitations to the existing models: (1) most clinicians are unable to interpret the results from the existing models, (2) existing models cannot incorporate complicated medical domain knowledge (eg, a disease causes another disease), and (3) most existing models lack visual exploration and interaction. Both the electronic health record (EHR) data set and the deep model results are complex and abstract, which impedes clinicians from exploring and communicating with the model directly. OBJECTIVE: The objective of this study is to develop an interpretable and accurate risk prediction model as well as an interactive clinical prediction system to support EHR data exploration, knowledge graph demonstration, and model interpretation. METHODS: A domain-knowledge-guided recurrent neural network (DG-RNN) model is proposed to predict clinical risks. The model takes medical event sequences as input and incorporates medical domain knowledge by attending to a subgraph of the whole medical knowledge graph. A global pooling operation and a fully connected layer are used to output the clinical outcomes. The middle results and the parameters of the fully connected layer are helpful in identifying which medical events cause clinical risks. DG-Viz is also designed to support EHR data exploration, knowledge graph demonstration, and model interpretation. RESULTS: We conducted both risk prediction experiments and a case study on a real-world data set. A total of 554 patients with heart failure and 1662 control patients without heart failure were selected from the data set. The experimental results show that the proposed DG-RNN outperforms the state-of-the-art approaches by approximately 1.5%. 
The case study demonstrates how our medical physician collaborator can effectively explore the data and interpret the prediction results using DG-Viz. CONCLUSIONS: In this study, we present DG-Viz, an interactive clinical prediction system, which brings together the power of deep learning (ie, a DG-RNN-based model) and visual analytics to predict clinical risks and visually interpret the EHR prediction results. Experimental results and a case study on heart failure risk prediction tasks demonstrate the effectiveness and usefulness of the DG-Viz system. This study will pave the way for interactive, interpretable, and accurate clinical risk predictions.
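One way to read "attending to a subgraph of the whole medical knowledge graph" is that, at each step, the hidden state scores the current event's graph neighbors and mixes their embeddings into the recurrent input. A rough NumPy sketch under that reading — the toy graph, the simplified tanh cell, and all dimensions are invented for illustration and are not the DG-RNN as published:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6  # embedding / hidden size

# Toy medical knowledge graph: directed edges "condition -> possible consequence".
graph = {"hypertension": ["heart_failure", "ckd"], "diabetes": ["ckd"]}
emb = {n: rng.normal(size=d)
       for n in {"hypertension", "diabetes", "heart_failure", "ckd"}}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_subgraph(h, event):
    """Score the event's graph neighbors with the hidden state; return the mixture."""
    neigh = graph.get(event, [])
    if not neigh:
        return np.zeros(d)
    E = np.stack([emb[n] for n in neigh])
    w = softmax(E @ h)  # attention weights over the local subgraph
    return w @ E

# Minimal recurrence: h_t = tanh(W [event_emb ; knowledge]) -- a stand-in cell
# that only shows where the graph attention plugs into the event sequence.
W = rng.normal(size=(d, 2 * d)) * 0.1
h = np.zeros(d)
for event in ["hypertension", "diabetes"]:
    x = np.concatenate([emb[event], attend_subgraph(h, event)])
    h = np.tanh(W @ x)

# Final hidden state pooled into a clinical risk score.
risk = 1 / (1 + np.exp(-(h @ rng.normal(size=d))))
```

Because the attention weights live over named graph nodes, they give a natural hook for the kind of visual interpretation DG-Viz provides.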


Subject(s)
Deep Learning/standards , Electronic Health Records/standards , Humans , Knowledge Bases , Neural Networks, Computer
6.
Big Data ; 8(5): 379-390, 2020 10.
Article in English | MEDLINE | ID: mdl-32783631

ABSTRACT

Diagnosis prediction is an important predictive task in health care that aims to predict a patient's future diagnoses based on their historical medical records. A crucial requirement for this task is to effectively model the high-dimensional, noisy, and temporal electronic health record (EHR) data. Existing studies meet this requirement by applying recurrent neural networks with attention mechanisms but face data insufficiency and noise problems. Recently, more accurate and robust medical knowledge-guided methods have been proposed and have achieved superior performance. These methods inject knowledge from a graph-structured medical ontology into deep models via attention mechanisms to supplement the input data. However, these methods only partially leverage the knowledge graph and neglect its global structure, which is an important feature. To address this problem, we propose an end-to-end robust solution, namely Graph Neural Network-Based Diagnosis Prediction (GNDP). First, we propose to use the medical knowledge graph as internal information about a patient by constructing sequential patient graphs. These graphs not only carry the historical information from the EHR but are also infused with domain knowledge. We then design a robust diagnosis prediction model based on a spatial-temporal graph convolutional network. The proposed model extracts meaningful features from sequential graph EHR data through multiple spatial-temporal graph convolution units to generate robust patient representations for accurate diagnosis prediction. We evaluate GNDP against a set of state-of-the-art methods on two real-world medical datasets; the results demonstrate that our method achieves better utilization of the knowledge graph and improves accuracy on diagnosis prediction tasks.
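A single spatial graph-convolution step on one patient graph is normalized neighborhood aggregation followed by a linear map; stacking such steps per visit and convolving across visits yields spatial-temporal units like those described above. A minimal NumPy sketch of the spatial step only, in the standard Â X W form with self-loops; the graph, features, and weights are illustrative, not the paper's exact layer:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: ReLU(D^-1/2 (A + I) D^-1/2 @ X @ W)."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops so nodes keep their own features
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(deg ** -0.5)     # symmetric degree normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)

rng = np.random.default_rng(2)
# One visit graph: 4 medical codes, edges taken from the ontology linking them.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 5))               # per-code input features
W = rng.normal(size=(5, 3))               # learned projection (random here)

H = gcn_layer(A, X, W)                    # (4, 3) code embeddings for this visit
visit_vec = H.mean(axis=0)                # pooled visit representation
```

In a spatial-temporal stack, the sequence of `visit_vec`-like representations across visits would then be fed through a temporal convolution to capture how the patient's graph evolves.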


Subject(s)
Diagnosis, Computer-Assisted , Neural Networks, Computer , Algorithms , Computer Graphics , Deep Learning , Electronic Health Records , Medical Informatics
7.
IEEE Trans Image Process ; 27(12): 6025-6038, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30106729

ABSTRACT

Deep convolutional neural networks (CNNs) have shown superior performance on the task of single-label image classification. However, the applicability of CNNs to multi-label images still remains an open problem, mainly because of two reasons. First, each image is usually treated as an inseparable entity and represented as one instance, which mixes the visual information corresponding to different labels. Second, the correlations amongst labels are often overlooked. To address these limitations, we propose a deep multi-modal CNN for multi-instance multi-label image classification, called MMCNN-MIML. By combining CNNs with multi-instance multi-label (MIML) learning, our model represents each image as a bag of instances for image classification and inherits the merits of both CNNs and MIML. In particular, MMCNN-MIML has three main appealing properties: 1) it can automatically generate instance representations for MIML by exploiting the architecture of CNNs; 2) it takes advantage of the label correlations by grouping labels in its later layers; and 3) it incorporates the textual context of label groups to generate multi-modal instances, which are effective in discriminating visually similar objects belonging to different groups. Empirical studies on several benchmark multi-label image data sets show that MMCNN-MIML significantly outperforms the state-of-the-art baselines on multi-label image classification tasks.
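The bag-of-instances idea can be sketched independently of the CNN backbone: each instance receives per-label scores, and the bag-level score for a label aggregates over instances, here with a max (a standard MIML aggregation). Everything below is an illustrative stand-in, not MMCNN-MIML's actual layers:

```python
import numpy as np

rng = np.random.default_rng(3)
n_labels = 4

def instance_scores(instances, W):
    """Per-instance, per-label scores; a linear stand-in for the CNN head."""
    return instances @ W

# A bag = one image represented as several instances (e.g., regions or patches).
bag = rng.normal(size=(5, 16))   # 5 instances, 16-dim features each
W = rng.normal(size=(16, n_labels))

# MIML aggregation: a label applies to the bag if *some* instance expresses it,
# so the bag score per label is the max over its instances.
bag_scores = instance_scores(bag, W).max(axis=0)
bag_labels = bag_scores > 0      # independent multi-label decision per label
```

Because different labels can be carried by different instances, this decomposition avoids mixing the visual evidence for all labels into one monolithic image representation, which is the first limitation the abstract identifies.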

8.
IEEE Trans Image Process ; 23(12): 5573-85, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25373081

ABSTRACT

Semantic attributes have been recognized as a more natural way to describe and annotate image content. It is widely accepted that image annotation using semantic attributes is a significant improvement over traditional binary or multiclass annotation because of their naturally continuous and relative properties. Though useful, existing approaches rely on abundant supervision and high-quality training data, which limits their applicability. Two standard methods for overcoming limited guidance and low-quality training data are transfer learning and active learning. In the context of relative attributes, this entails learning multiple relative attributes simultaneously and actively querying a human for additional information. This paper addresses the two main limitations of existing work: 1) it actively adds humans to the learning loop so that minimal additional guidance is needed, and 2) it learns multiple relative attributes simultaneously and thereby leverages the dependence among them. We formulate a joint active learning-to-rank framework with pairwise supervision to achieve these two aims, which also has other benefits, such as the ability to be kernelized. The proposed framework optimizes a set of ranking functions (measuring the strength of the presence of attributes) simultaneously and dependently on one another. The proposed pairwise queries take the form "which of these two pictures is more natural?" and can be easily answered by humans. An extensive empirical study on real image datasets shows that our proposed method achieves superior retrieval performance compared with several state-of-the-art methods while requiring significantly less human input.
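A pairwise-supervised ranking function of this kind can be sketched with a rank-perceptron-style update: whenever an answered pair violates a margin, the ranking direction is nudged toward the preferred image, and the next query is actively chosen as the pair the current model is least sure about. This is a deliberately simplified illustration with invented data and update rule; the paper's actual framework is a joint, kernelizable learning-to-rank formulation over multiple attributes:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(20, 5))     # image features
w_true = rng.normal(size=5)      # hidden "naturalness" direction (plays the human)
w = np.zeros(5)                  # learned ranking function: score(x) = x @ w

def most_uncertain_pair(X, w):
    """Actively pick the pair with the smallest current score gap."""
    s = X @ w
    best, gap = None, np.inf
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            g = abs(s[i] - s[j])
            if g < gap:
                best, gap = (i, j), g
    return best

for _ in range(200):
    i, j = most_uncertain_pair(X, w)
    # Oracle answers the query "which of these two pictures is more natural?"
    hi, lo = (i, j) if X[i] @ w_true > X[j] @ w_true else (j, i)
    if (X[hi] - X[lo]) @ w < 1:          # pairwise margin violated
        w += 0.1 * (X[hi] - X[lo])       # nudge toward the preferred image

scores = X @ w                            # learned attribute strengths
```

Since every accepted update adds a vector that the oracle prefers, the learned direction necessarily gains positive alignment with the oracle's, which is the basic reason pairwise answers suffice as supervision.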


Subject(s)
Algorithms , Artificial Intelligence , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Cognition/physiology , Databases, Factual , Feedback , Humans , Magnetic Resonance Imaging , Models, Theoretical , Semantics