Results 1 - 8 of 8
1.
Eur Heart J Digit Health ; 5(3): 260-269, 2024 May.
Article in English | MEDLINE | ID: mdl-38774376

ABSTRACT

Aims: Augmenting echocardiography with artificial intelligence would allow automated assessment of routine parameters and identification of disease patterns not otherwise easily recognized. View classification is an essential first step before deep learning can be applied to the echocardiogram. Methods and results: We trained two- and three-dimensional convolutional neural networks (CNNs) on transthoracic echocardiographic (TTE) studies from 909 patients to classify nine view categories (10,269 videos). TTE studies from 229 patients were used for internal validation (2582 videos). The CNNs were tested on 100 patients with comprehensive TTE studies (where the two examples chosen by the CNNs as most likely to represent a view were evaluated) and on 408 patients with five view categories obtained via point-of-care ultrasound (POCUS). The overall accuracy of the two-dimensional CNN was 96.8% and the averaged area under the curve (AUC) was 0.997 on the comprehensive TTE testing set; these figures were 98.4% and 0.998, respectively, on the POCUS set. For the three-dimensional CNN, the accuracy and AUC were 96.3% and 0.998 on full TTE studies and 95.0% and 0.996 on POCUS videos, respectively. The positive predictive value, defined as the proportion of predicted views that were correctly identified, was higher with the two-dimensional than the three-dimensional networks, exceeding 93% in the apical, short-axis aortic valve, and parasternal long-axis left ventricle views. Conclusion: An automated view classifier using CNNs classified cardiac views obtained by TTE and POCUS with high accuracy. The view classifier will facilitate the application of deep learning to echocardiography.
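The testing protocol for the comprehensive TTE set (evaluating the two videos the CNN ranks as most likely to represent each view) can be sketched as follows. This is a minimal illustration; the score matrix is a hypothetical softmax output, not the paper's model:

```python
import numpy as np

def top_two_examples(scores, target_view):
    """Return the indices of the two videos with the highest predicted
    probability for `target_view`, given an (n_videos, n_views) score
    matrix, ordered from most to second-most likely."""
    col = scores[:, target_view]
    # argsort ascending, take the last two (highest), highest first
    return list(np.argsort(col)[-2:][::-1])

# Hypothetical per-video softmax scores over 3 view categories
scores = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.6, 0.3, 0.1],
    [0.2, 0.2, 0.6],
])
print(top_two_examples(scores, target_view=0))  # → [0, 2]
```

In the study's setup this selection would run per view category per study, so each of the nine views contributes its two most confident video predictions to the evaluation.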

2.
JACC Adv ; 2(9): 100632, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38938722

ABSTRACT

Background: Cine images during coronary angiography contain a wealth of information beyond the assessment of coronary stenosis. We hypothesized that deep learning (DL) can discern moderate-to-severe left ventricular dysfunction among patients undergoing coronary angiography. Objectives: The purpose of this study was to assess the ability of machine learning models to estimate left ventricular ejection fraction (LVEF) from routine coronary angiographic images. Methods: We developed a combined 3D-convolutional neural network (CNN) and transformer to estimate LVEF from diagnostic coronary angiograms of the left coronary artery (LCA). Two angiograms, left anterior oblique (LAO)-caudal and right anterior oblique (RAO)-cranial projections, were fed into the model simultaneously. The model classified LVEF as significantly reduced (LVEF ≤40%) vs normal or mildly reduced (LVEF >40%). Echocardiography performed within 30 days served as the gold standard for LVEF. Results: A collection of 18,809 angiograms from 17,346 Mayo Clinic patients was included (mean age 67.29 years; 35% women). Each patient appeared only in the training (70%), validation (10%), or testing (20%) set. The model exhibited excellent performance (area under the receiver operating characteristic curve [AUC] 0.87; sensitivity 0.77; specificity 0.80) in the training set. The model's performance (AUC 0.86; sensitivity 0.76; specificity 0.77) exceeded human expert assessment (AUC 0.76-0.77; sensitivity 0.44-0.50; specificity 0.90-0.93). In additional sensitivity analyses, combining the LAO and RAO views yielded a higher AUC, sensitivity, and specificity than using either view individually, and the original model combining CNN and transformer was superior to DL models using either 3D-CNNs or transformers alone. Conclusions: A novel DL algorithm demonstrated rapid and accurate assessment of LVEF from routine coronary angiography. The algorithm can support clinical decision-making and forms the foundation for future models that extract meaningful data from routine angiography studies.
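The two-projection fusion described above (feeding LAO-caudal and RAO-cranial views into one model) can be sketched as a toy logistic head over concatenated per-view embeddings. The feature vectors, weights, and threshold below are hypothetical stand-ins for the paper's 3D-CNN/transformer backbone, shown only to illustrate the late-fusion idea:

```python
import numpy as np

def fuse_and_classify(lao_feat, rao_feat, w, b, threshold=0.5):
    """Late fusion: concatenate per-view feature vectors and apply a
    logistic head. Returns (is_reduced, probability), where is_reduced
    means a predicted LVEF <= 40%."""
    x = np.concatenate([lao_feat, rao_feat])
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return p >= threshold, p

# Hypothetical 3-dim embeddings per projection and a toy weight vector
lao = np.array([0.4, 1.2, -0.3])
rao = np.array([0.9, -0.1, 0.5])
w = np.array([0.5, 0.2, -0.1, 0.3, 0.1, 0.4])
reduced, prob = fuse_and_classify(lao, rao, w, b=-0.5)
print(bool(reduced), round(prob, 3))
```

In the actual study both projections would pass through learned video encoders before fusion; the sensitivity analyses suggest this joint use of LAO and RAO is what lifts performance over single-view models.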

3.
Sci Rep ; 12(1): 20057, 2022 11 21.
Article in English | MEDLINE | ID: mdl-36414660

ABSTRACT

Wound classification is an essential step of wound diagnosis. An efficient classifier can assist wound specialists in classifying wound types at lower financial and time cost and help them decide on an optimal treatment procedure. This study developed a deep neural network-based multi-modal classifier that uses wound images and their corresponding locations to categorize wounds into multiple classes, including diabetic, pressure, surgical, and venous ulcers. A body map was also developed to prepare the location data, which can help wound specialists tag wound locations more efficiently. Three datasets containing images and their corresponding location information were designed with the help of wound specialists. The multi-modal network was developed by concatenating the outputs of the image-based and location-based classifiers, with further modifications. The maximum accuracy on mixed-class classification (including background and normal skin) varied from 82.48% to 100% across experiments, and the maximum accuracy on wound-class classification (diabetic, pressure, surgical, and venous only) varied from 72.95% to 97.12%. The proposed multi-modal network also showed a significant improvement over results reported in the previous literature.


Subject(s)
Neural Networks, Computer
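The fusion step described above (concatenating image-based and location-based classifier outputs before a final classifier) might look like the following minimal sketch. The MLP weights and four-class probability vectors are made up for illustration and are not the authors' trained model:

```python
import numpy as np

def multimodal_logits(img_probs, loc_probs, W1, b1, W2, b2):
    """Concatenate the image-based and location-based classifier outputs
    and pass them through a small MLP head (one hidden ReLU layer)."""
    x = np.concatenate([img_probs, loc_probs])
    h = np.maximum(0.0, W1 @ x + b1)  # hidden ReLU layer
    return W2 @ h + b2                # one logit per wound class

rng = np.random.default_rng(0)
# Hypothetical per-modality probabilities over four wound classes
# (diabetic, pressure, surgical, venous)
img = np.array([0.1, 0.6, 0.2, 0.1])
loc = np.array([0.3, 0.5, 0.2, 0.0])
W1, b1 = rng.normal(size=(8, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(4, 8)), np.zeros(4)
logits = multimodal_logits(img, loc, W1, b1, W2, b2)
print(logits.shape)  # → (4,)
```

The design choice here is late fusion: each modality is summarized by its own classifier first, so the small MLP only has to learn how to weigh the two opinions rather than learn from raw pixels and coordinates jointly.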
4.
Adv Wound Care (New Rochelle) ; 11(12): 687-709, 2022 12.
Article in English | MEDLINE | ID: mdl-34544270

ABSTRACT

Significance: Accurately predicting wound healing trajectories is difficult for wound care clinicians because of the complex and dynamic processes involved in wound healing. Wound care teams capture images of wounds during clinical visits, generating large datasets over time. Novel artificial intelligence (AI) systems can help clinicians diagnose wounds, assess the effectiveness of therapy, and predict healing outcomes. Recent Advances: Rapid developments in computer processing have enabled AI-based systems that improve diagnosis and the effectiveness of therapy in various clinical specializations. In the past decade, AI has been revolutionizing all types of medical imaging, including X-ray, ultrasound, computed tomography, and magnetic resonance imaging, but clinically and computationally mature AI-based systems for high-quality wound care that could result in better patient outcomes have yet to be developed. Critical Issues: In the current standard of care, collecting wound images at every clinical visit and interpreting and archiving the data are cumbersome and time-consuming. Commercial platforms have been developed to capture images, perform wound measurements, and provide clinicians with a diagnostic workflow, but AI-based systems are still in their infancy. This systematic review summarizes the breadth and depth of the most recent and relevant work on intelligent image-based data analysis and system development for wound assessment. Future Directions: With the increasing availability of massive data (wound images, wound-specific electronic health records, etc.) and powerful computing resources, AI-based digital platforms will play a significant role in delivering data-driven care to people suffering from debilitating chronic wounds.


Subject(s)
Artificial Intelligence , Image Processing, Computer-Assisted , Electronic Health Records , Humans , Image Processing, Computer-Assisted/methods , Workflow
5.
COPD ; 18(6): 723-736, 2021 12.
Article in English | MEDLINE | ID: mdl-34865568

ABSTRACT

Cigarette smoking-related inflammation, cellular stresses, and tissue destruction play a key role in lung diseases such as chronic obstructive pulmonary disease (COPD). Notably, augmented apoptosis and impaired clearance of apoptotic cells (efferocytosis) contribute to the chronic inflammatory response and tissue destruction in patients with COPD. Exposure to cigarette smoke can impair the efferocytotic activity of alveolar macrophages, leading to secondary necrosis and tissue inflammation. A better understanding of how cigarette smoke affects efferocytosis in lung disorders can help in designing more efficient treatment approaches and in delaying the development of lung diseases such as COPD. To this end, we sought to identify the mechanisms underlying the impairing effect of cigarette smoke on macrophage-mediated efferocytosis in COPD. We also discuss available therapeutic opportunities for restoring efferocytosis and ameliorating respiratory tract inflammation in smokers with COPD.


Subject(s)
Cigarette Smoking , Pulmonary Disease, Chronic Obstructive , Cigarette Smoking/adverse effects , Humans , Inflammation , Macrophages, Alveolar , Phagocytosis , Pulmonary Disease, Chronic Obstructive/drug therapy
6.
Comput Biol Med ; 134: 104536, 2021 07.
Article in English | MEDLINE | ID: mdl-34126281

ABSTRACT

Acute and chronic wounds are a challenge to healthcare systems around the world and affect many people's lives annually. Wound classification is a key step in wound diagnosis that helps clinicians identify an optimal treatment procedure; a high-performance classifier therefore assists wound specialists in classifying wound types at lower financial and time cost. Different wound classification methods based on machine learning and deep learning have been proposed in the literature. In this study, we developed an ensemble deep convolutional neural network-based classifier to categorize wound images into multiple classes, including surgical, diabetic, and venous ulcers. The output classification scores of two classifiers (patch-wise and image-wise) are fed into a multilayer perceptron to provide superior classification performance. A 5-fold cross-validation approach was used to evaluate the proposed method. We obtained maximum and average classification accuracies of 96.4% and 94.28% for the binary and 91.9% and 87.7% for the 3-class classification problems. The proposed classifier was compared with common deep classifiers and showed significantly higher accuracy. We also tested the proposed method on the Medetec wound image dataset, obtaining accuracies of 91.2% and 82.9% for the binary and 3-class classifications. The results show that the proposed method can be used effectively as a decision support system for the classification of wound images and in other related clinical applications.


Subject(s)
Machine Learning , Neural Networks, Computer , Humans
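The 5-fold cross-validation used to evaluate the ensemble above can be sketched in plain Python. This is a generic fold-splitting routine, not the authors' code:

```python
def k_fold_indices(n_samples, k=5):
    """Split sample indices into k roughly equal folds and yield
    (train_indices, val_indices) pairs, as in k-fold cross-validation:
    each fold serves as the validation set exactly once."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield sorted(train), sorted(val)

splits = list(k_fold_indices(10, k=5))
print(len(splits))      # → 5
print(splits[0][1])     # → [0, 5]
```

Reported "maximum and average" accuracies then correspond to the best and mean validation scores over the five (train, val) splits; in practice one would also stratify folds by class to keep wound-type proportions balanced.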
7.
Sci Rep ; 10(1): 21897, 2020 12 14.
Article in English | MEDLINE | ID: mdl-33318503

ABSTRACT

Acute and chronic wounds have varying etiologies and are an economic burden to healthcare systems around the world; the advanced wound care market is expected to exceed $22 billion by 2024. Wound care professionals rely heavily on images and image documentation for proper diagnosis and treatment. Unfortunately, a lack of expertise can lead to improper diagnosis of wound etiology and inaccurate wound management and documentation. Fully automatic segmentation of wound areas in natural images is an important part of the diagnosis and care protocol, since it is crucial to measure the wound area and provide quantitative parameters for treatment. Various deep learning models have achieved success in image analysis, including semantic segmentation. This manuscript proposes a novel convolutional framework based on MobileNetV2 and connected component labelling to segment wound regions from natural images. The advantage of this model is its lightweight, less compute-intensive architecture; performance is not compromised and is comparable to deeper neural networks. We built an annotated wound image dataset of 1109 foot ulcer images from 889 patients to train and test the deep learning models. We demonstrate the effectiveness and mobility of our method through comprehensive experiments and analyses on various segmentation neural networks. The full implementation is available at https://github.com/uwm-bigdata/wound-segmentation.


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Neural Networks, Computer , Wound Healing , Wounds and Injuries/diagnostic imaging , Humans
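The connected-component post-processing mentioned above can be illustrated with a minimal 4-connected flood fill that keeps only the largest foreground region of a binary mask, suppressing small spurious detections. This is a generic sketch, not the repository's implementation:

```python
from collections import deque

import numpy as np

def largest_component(mask):
    """Keep only the largest 4-connected foreground component of a
    binary mask -- a simple stand-in for the connected-component
    labelling step used to clean up segmentation outputs."""
    mask = np.asarray(mask, dtype=bool)
    labels = np.zeros(mask.shape, dtype=int)
    sizes, current = {}, 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue  # pixel already belongs to a labelled component
        current += 1
        labels[i, j] = current
        q, size = deque([(i, j)]), 0
        while q:  # breadth-first flood fill of this component
            y, x = q.popleft()
            size += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
        sizes[current] = size
    if not sizes:
        return np.zeros_like(mask)
    best = max(sizes, key=sizes.get)
    return labels == best

# Toy mask: a 3-pixel wound region, a 2-pixel blob, and a lone pixel
mask = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 0, 1],
        [0, 1, 0, 0]]
print(largest_component(mask).astype(int))
```

On real predictions this kind of filter runs on the thresholded network output; a production pipeline would more likely call an optimized routine such as `scipy.ndimage.label` than hand-rolled BFS.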
8.
Int J Reprod Biomed ; 16(5): 315-322, 2018 May.
Article in English | MEDLINE | ID: mdl-30027147

ABSTRACT

BACKGROUND: Troxerutin is a flavonoid antioxidant that protects various organs against damage caused by ischemia-reperfusion. OBJECTIVE: The aim of this study was to evaluate the effect of troxerutin in reducing ischemia-reperfusion damage in rat testes. MATERIALS AND METHODS: Forty male Wistar rats (2 months old) were divided into four groups (n=10): group 1 (sham), group 2 (control, ischemia-reperfusion (I/R) without treatment), group 3 (I/R + 150 mg/kg troxerutin), and group 4 (I/R + 20 mg/kg vitamin C). Groups 3 and 4 were treated during torsion (720° counterclockwise twist for 90 min) followed by 50 days of detorsion. After 50 days, blood samples were collected, the rats in all study groups were killed, and their testes were removed and fixed in Bouin's solution. Testes were stained with hematoxylin and eosin, and testosterone, luteinizing hormone (LH), and follicle-stimulating hormone (FSH) levels were measured by ELISA. TUNEL staining was used to detect apoptosis. The caudal part of the epididymis was removed and the total sperm count was determined. The Johnson technique was used to assess seminiferous tubule quality. RESULTS: The troxerutin-treated group had higher Johnson scores (p≤0.001), stronger antiapoptotic effects (p≤0.001), a higher sperm count (p=0.065), and higher LH (p≤0.001), FSH (p≤0.001), and testosterone (p=0.002) levels than the control group. The vitamin C-treated group showed an increased testosterone level but no significant differences in the number of apoptotic cells, Johnson scores, LH, FSH, or sperm count compared with the control group. CONCLUSION: Troxerutin has protective effects against testicular torsion-induced injury and can ameliorate spermatogenesis in torsion-detorsion models.
