1.
IEEE Open J Eng Med Biol ; 5: 404-420, 2024.
Article in English | MEDLINE | ID: mdl-38899014

ABSTRACT

Goal: Augment a small, imbalanced wound dataset using semi-supervised learning with a secondary dataset, then use the augmented dataset for deep learning-based wound assessment. Methods: The clinically validated Photographic Wound Assessment Tool (PWAT) scores eight wound attributes: Size, Depth, Necrotic Tissue Type, Necrotic Tissue Amount, Granulation Tissue Type, Granulation Tissue Amount, Edges, and Periulcer Skin Viability, to comprehensively assess chronic wound images. A small corpus of 1639 wound images labeled with ground-truth PWAT scores was used as reference. A semi-supervised learning and Progressive Multi-Granularity (PMG) training mechanism was used to leverage a secondary corpus of 9870 unlabeled wound images. Wound scoring utilized the EfficientNet Convolutional Neural Network on the augmented wound corpus. Results: Our proposed Semi-Supervised PMG EfficientNet (SS-PMG-EfficientNet) approach estimated all eight PWAT sub-scores with classification accuracies and F1 scores of about 90% on average, outperformed a comprehensive list of baseline models, and achieved a 7% improvement over the prior state of the art (without data augmentation). We also demonstrate that synthetic wound image generation using Generative Adversarial Networks (GANs) did not improve wound assessment. Conclusions: Semi-supervised learning on unlabeled wound images in a secondary dataset achieved impressive performance for deep learning-based wound grading.
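The pseudo-labeling step at the heart of such semi-supervised training can be sketched as follows (a generic, minimal illustration, not the paper's SS-PMG-EfficientNet code; the confidence threshold and toy scorer are illustrative assumptions):

```python
def pseudo_label(unlabeled, predict_proba, threshold=0.9):
    """Return (x, label) pairs for unlabeled items the model is confident on.

    predict_proba(x) -> dict mapping class label to probability.
    Only predictions whose top probability meets `threshold` are kept,
    so low-confidence unlabeled images stay out of the augmented set.
    """
    accepted = []
    for x in unlabeled:
        probs = predict_proba(x)
        label, p = max(probs.items(), key=lambda kv: kv[1])
        if p >= threshold:
            accepted.append((x, label))
    return accepted

# Toy scorer standing in for a trained wound classifier (illustrative only):
# inputs are numbers in [0, 1]; high values score high for class 1.
toy_scorer = lambda x: {1: x, 0: 1 - x}

augmented = pseudo_label([0.95, 0.55, 0.05], toy_scorer)
# keeps (0.95, 1) and (0.05, 0); drops 0.55 as low-confidence
```

In the actual pipeline, the scorer would be the model trained on the 1639 labeled images, and the accepted pairs would be folded back into the training set.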

2.
IEEE Trans Eng Manag ; 70(3): 912-926, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37009627

ABSTRACT

This research employs design ethnography to study the design process of a design science research (DSR) project conducted over eight years. The DSR project focuses on chronic wounds and how Information Technology (IT) might support the management of those wounds. Since this is a new and complex problem not previously addressed by IT, it requires an exploration and discovery process. As such, we found that traditional DSR methodologies were not well suited to guiding the design process. Instead, we discovered that focusing on search, and in particular on the co-evolution of the problem and solution spaces, provides a much better focus for managing the DSR design process. The presentation of our findings from the ethnographic study includes a new representation for capturing the co-evolving problem/solution spaces, an illustration of the search process and co-evolving problem/solution spaces using the DSR project we studied, the need for changes in the purpose of DSR evaluation activities when using a search-focused design process, and how our proposed process extends and augments current DSR methodologies. Studying the DSR design process generates the knowledge that research project managers need for managing and guiding a DSR project, and contributes to our knowledge of the design process for research-oriented projects. Managerial Relevance Statement: From a managerial perspective, studying the design process provides the knowledge that research project managers need for managing and guiding DSR projects. In particular, research project managers can guide the search process by understanding when and why to explore different search spaces, expand the set of solutions investigated, and focus on and evaluate promising solutions. Overall, this research contributes to our knowledge of design and the design process, especially for highly research-oriented problems and solutions.

3.
IEEE Open J Eng Med Biol ; 2: 224-234, 2021.
Article in English | MEDLINE | ID: mdl-34532712

ABSTRACT

GOAL: Chronic wounds affect 6.5 million Americans. Wound assessment via algorithmic analysis of smartphone images has emerged as a viable option for remote assessment. METHODS: We comprehensively score wounds based on the clinically validated Photographic Wound Assessment Tool (PWAT), which assesses clinically important ranges of eight wound attributes: Size, Depth, Necrotic Tissue Type, Necrotic Tissue Amount, Granulation Tissue Type, Granulation Tissue Amount, Edges, and Periulcer Skin Viability. We propose a DenseNet Convolutional Neural Network (CNN) framework with patch-based, context-preserving attention to assess the eight PWAT attributes of four wound types: diabetic ulcers, pressure ulcers, vascular ulcers, and surgical wounds. RESULTS: In an evaluation on our dataset of 1639 wound images, our model estimated all eight PWAT sub-scores with classification accuracies and F1 scores of over 80%. CONCLUSIONS: Our work is the first intelligent system that autonomously grades wounds comprehensively based on criteria in the PWAT rubric, alleviating the significant burden that manual wound grading imposes on wound care nurses.
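The F1 metric reported for each PWAT sub-score follows the standard definition; a minimal per-class sketch (an illustration of the metric, not the paper's evaluation code):

```python
def f1_score(y_true, y_pred, positive):
    """Binary F1 for one class: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 3 true positives labeled, 2 predicted correctly, none spurious.
# precision = 1.0, recall = 2/3, F1 = 0.8
score = f1_score([1, 1, 0, 1], [1, 0, 0, 1], positive=1)
```

For a multi-class sub-score such as Necrotic Tissue Amount, the per-class F1 values would typically be macro-averaged.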

4.
Smart Health (Amst) ; 18: 2020 Nov.
Article in English | MEDLINE | ID: mdl-33299924

ABSTRACT

Lower extremity chronic wounds affect 4.5 million Americans annually. Due to inadequate access to wound experts in underserved areas, many patients receive non-uniform, non-standard wound care, resulting in increased costs and lower quality of life. We explored machine learning classifiers to generate actionable wound care decisions about four chronic wound types (diabetic foot, pressure, venous, and arterial ulcers). These decisions (target classes) were: (1) Continue current treatment, (2) Request non-urgent change in treatment from a wound specialist, (3) Refer patient to a wound specialist. We compare classification methods (single classifiers, bagged & boosted ensembles, and a deep learning network) to investigate (1) whether visual wound features are sufficient for generating a decision and (2) whether adding unstructured text from wound experts increases classifier accuracy. Using 205 wound images, the Gradient Boosted Machine (XGBoost) outperformed other methods when using both visual and textual wound features, achieving 81% accuracy. Using only visual features decreased the accuracy to 76%, achieved by a Support Vector Machine classifier. We conclude that machine learning classifiers can generate accurate wound care decisions on lower extremity chronic wounds, an important step toward objective, standardized wound care. Higher decision-making accuracy was achieved by leveraging clinical comments from wound experts.
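The bagged ensembles compared here reduce to majority voting over base classifiers, which can be sketched minimally (a generic illustration with stand-in classifiers, not the study's XGBoost or SVM models; the feature encoding is an assumption):

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Bagged-ensemble prediction: every base classifier votes on x and
    the most common label wins (Counter breaks ties by insertion order)."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Stand-in base classifiers (illustrative): each maps a wound-severity
# feature in [0, 1] to decision 1 ("continue treatment") or 2
# ("request non-urgent change").
ensemble = [lambda x: 1 if x < 0.5 else 2,
            lambda x: 1 if x < 0.6 else 2,
            lambda x: 2]
```

Boosting differs in that base learners are trained sequentially, each weighting the previous one's errors, but the final prediction is still an aggregation of base-learner outputs.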

5.
IEEE Access ; 8: 181590-181604, 2020.
Article in English | MEDLINE | ID: mdl-33251080

ABSTRACT

Smartphone wound image analysis has recently emerged as a viable way to assess healing progress and provide actionable feedback to patients and caregivers between hospital appointments. Segmentation is a key image analysis step, after which attributes of the wound segment (e.g., wound area and tissue composition) can be analyzed. The Associative Hierarchical Random Field (AHRF) formulates image segmentation as a graph optimization problem: handcrafted features are extracted and then classified using machine learning classifiers. More recently, deep learning approaches have emerged and demonstrated superior performance for a wide range of image analysis tasks; FCN, U-Net, and DeepLabV3 are Convolutional Neural Networks used for semantic segmentation. While each of these methods has shown promising results in separate experiments, no prior work has comprehensively and systematically compared the approaches on the same large wound image dataset, or more generally compared deep learning vs. non-deep learning wound image segmentation approaches. In this paper, we compare the segmentation performance of AHRF and CNN approaches (FCN, U-Net, DeepLabV3) using various metrics, including segmentation accuracy (Dice score), inference time, amount of training data required, and performance on diverse wound sizes and tissue types. Improvements possible using various image pre- and post-processing techniques are also explored. As access to adequate medical images/data is a common constraint, we explore the sensitivity of the approaches to the size of the wound dataset. We found that for small datasets (< 300 images), AHRF is more accurate than U-Net but not as accurate as FCN and DeepLabV3; AHRF is also over 1000x slower. For larger datasets (> 300 images), AHRF saturates quickly, and all CNN approaches (FCN, U-Net, and DeepLabV3) are significantly more accurate than AHRF.
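The headline segmentation metric here, the Dice score, is simple to state and compute (a minimal sketch over pixel-coordinate sets, not the paper's evaluation code):

```python
def dice_score(pred_mask, true_mask):
    """Dice coefficient between two binary masks given as sets of pixel
    coordinates: 2|A ∩ B| / (|A| + |B|). 1.0 means perfect overlap."""
    a, b = set(pred_mask), set(true_mask)
    if not a and not b:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2 * len(a & b) / (len(a) + len(b))

# Two 3-pixel masks sharing 2 pixels: Dice = 2*2 / (3+3) = 2/3
d = dice_score({(0, 0), (0, 1), (1, 0)}, {(0, 0), (0, 1), (1, 1)})
```

In practice the masks are dense 2-D arrays and the same formula is applied to pixel counts, but the set form makes the definition explicit.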

6.
Proc Am Conf Inf Syst ; 2020: 2020 Aug.
Article in English | MEDLINE | ID: mdl-34713278

ABSTRACT

A key requirement for the successful adoption of clinical decision support systems (CDSS) is their ability to provide users with reliable explanations for any given recommendation, which can be challenging for tasks such as wound management decisions. Despite the abundance of decision guidelines, wound non-expert (hereafter, novice) clinicians, who usually provide most of the treatments, still face decision uncertainties. Our goal is to evaluate the use of a wound CDSS smartphone app that provides explanations for the recommendations it produces. The app utilizes wound images taken by the novice clinician using a smartphone camera. This study experiments with two proposed variations of rule-tracing explanations, called verbose-based and gist-based. Drawing upon theories of decision making, and unlike prior literature holding that rule-tracing explanations are preferred only by novices, we hypothesize that rule-tracing explanations are preferred by both novice and expert clinicians, but in different forms: novices prefer verbose-based rule tracing and experts prefer gist-based rule tracing.

7.
J Med Imaging (Bellingham) ; 6(2): 024002, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31037245

ABSTRACT

As traditional visual-examination-based methods provide neither reliable nor consistent wound assessment, several computer-based approaches for quantitative wound image analysis have been proposed in recent years. However, these methods require either some level of human interaction for proper image processing or that images be captured under controlled conditions. To become a practical wound management tool for diabetic patients, a wound image algorithm needs to correctly locate and detect the wound boundary in images acquired under less-constrained conditions, where illumination and camera angle can vary within reasonable bounds. We present a wound boundary determination method that is robust to lighting and camera orientation perturbations by applying the associative hierarchical random field (AHRF) framework, an improved conditional random field (CRF) model originally applied to multiscale analysis of natural images. To validate the robustness of the AHRF framework for wound boundary recognition tasks, we tested the method on two image datasets: (1) foot and leg ulcer images (from patients we tracked for 2 years), of which 70% were captured with an image capture box to ensure consistent lighting and range and the remaining 30% were captured with a handheld camera under varied conditions of lighting, incident angle, and range; and (2) moulage wound images captured under similarly varied conditions. Compared to other CRF-based machine learning strategies, our new method provides the best global performance rates (specificity: >95% and sensitivity: >77%).
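The reported global performance rates follow the standard pixel-level definitions, sketched here (assuming label 1 = wound, 0 = background; an illustration of the metrics, not the paper's evaluation code):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), the fraction of wound pixels found;
    specificity = TN/(TN+FP), the fraction of background pixels
    correctly left out of the wound mask."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# 3 wound pixels, 2 found; 2 background pixels, both correct:
# sensitivity = 2/3, specificity = 1.0
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0], [1, 1, 0, 0, 0])
```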

8.
IEEE Access ; 7: 179151-179162, 2019.
Article in English | MEDLINE | ID: mdl-33777590

ABSTRACT

Diabetes mellitus is a serious chronic disease that affects millions of people worldwide. In patients with diabetes, ulcers occur frequently and heal slowly. Grading and staging of diabetic ulcers is the first step of effective treatment, and wound depth and granulation tissue amount are two important indicators of wound healing progress. However, wound depths and granulation tissue amounts of different severities can appear visually quite similar, making accurate machine learning classification challenging. In this paper, we adopt a fine-grained classification approach to diabetic wound grading, using a Bilinear CNN (Bi-CNN) architecture to deal with highly similar images across five grades. Wound area extraction, sharpening, resizing, and augmentation were used to pre-process images before input to the Bi-CNN, and modifications of the generic Bi-CNN network architecture were explored to improve its performance. Our research also generated a valuable wound dataset: in collaboration with wound experts from the University of Massachusetts Medical School, we collected 1639 diabetic wound images and annotated them with wound depth and granulation tissue grades as classification labels. Deep learning experiments were conducted using holdout validation on this dataset. Comparisons with widely used CNN classification architectures demonstrated that our Bi-CNN fine-grained classification approach outperformed prior work on the task of grading diabetic wounds.
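The core operation of a Bi-CNN is bilinear pooling: at each spatial location, the outer product of two CNN feature vectors captures pairwise feature interactions, which helps separate visually similar fine-grained classes such as adjacent wound grades. A minimal single-location sketch with plain lists (not the paper's network):

```python
def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling at one spatial location: the outer product of two
    feature vectors, flattened into a single interaction vector."""
    return [x * y for x in feat_a for y in feat_b]

# Two 2-dim feature vectors produce a 4-dim interaction vector.
pooled = bilinear_pool([1, 2], [3, 4])  # -> [3, 4, 6, 8]
```

In the full architecture, this product is summed over all spatial locations of the two feature maps before classification.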

9.
IEEE Trans Biomed Eng ; 64(9): 2098-2109, 2017 Sep.
Article in English | MEDLINE | ID: mdl-27893380

ABSTRACT

The standard chronic wound assessment method based on visual examination is potentially inaccurate and also represents a significant clinical workload. Hence, computer-based systems providing quantitative wound assessment may be valuable for accurately monitoring wound healing status, with the wound area being best suited for automated analysis. Here, we present a novel approach using support vector machines (SVMs) to determine wound boundaries on foot ulcer images captured with an image capture box, which provides controlled lighting and range. After superpixel segmentation, a cascaded two-stage classifier operates as follows: in the first stage, a set of k binary SVM classifiers is trained on and applied to different subsets of the entire training image dataset, and incorrectly classified instances are collected; in the second stage, another binary SVM classifier is trained on the incorrectly classified set. We extracted various color and texture descriptors from superpixels as input for each stage of classifier training. Specifically, color and bag-of-words representations of local dense scale-invariant feature transform features are the descriptors for ruling out irrelevant regions, and color and wavelet-based features are the descriptors for distinguishing healthy tissue from wound regions. Finally, the detected wound boundary is refined by applying the conditional random field method. We implemented the wound classification on a Nexus 5 smartphone platform, except for training, which was done offline. Results are compared with other classifiers and show that our approach provides high global performance rates (average sensitivity = 73.3%, specificity = 94.6%) and is sufficiently efficient for smartphone-based image analysis.
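The two-stage cascade can be sketched generically (a simplified illustration: `fit` and the toy threshold learner stand in for the paper's SVM training, and interleaved subsets stand in for its exact data split):

```python
def train_cascade(train_set, fit, k=3):
    """Two-stage cascade (simplified): stage 1 fits k classifiers on k
    interleaved subsets of the training data; stage 2 fits one classifier
    on every instance that some stage-1 classifier got wrong.

    `fit(examples)` -> predict function; examples are (x, label) pairs.
    """
    subsets = [train_set[i::k] for i in range(k)]
    stage1 = [fit(s) for s in subsets]
    hard = [(x, y) for x, y in train_set
            if any(clf(x) != y for clf in stage1)]
    stage2 = fit(hard) if hard else None
    return stage1, stage2

def toy_fit(examples):
    # Hypothetical stand-in for an SVM: threshold midway between class means.
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    t = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda x: 1 if x >= t else 0

# Easily separable toy data: stage 1 classifies everything correctly,
# so no hard instances remain for stage 2.
data = [(0.1, 0), (0.2, 0), (0.9, 1), (0.8, 1), (0.15, 0), (0.85, 1)]
stage1, stage2 = train_cascade(data, toy_fit, k=2)
```

On real superpixel features the stage-1 models would disagree on ambiguous regions, and stage 2 would specialize in exactly those.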


Subject(s)
Diabetic Foot/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Mobile Applications , Pattern Recognition, Automated/methods , Photography/methods , Support Vector Machine , Algorithms , Colorimetry/methods , Diabetic Foot/pathology , Humans , Reproducibility of Results , Sensitivity and Specificity , Severity of Illness Index , Smartphone
10.
Chest ; 149(1): 272-7, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26066707

ABSTRACT

The potential of patient portals to improve patient engagement and health outcomes has been discussed for more than a decade. The slow growth in patient portal adoption rates among patients and providers in the United States, despite external incentives, indicates that this is a complex issue. We examined evidence of patient portal use and effects with a focus on the pulmonary domain. We found a paucity of studies of patient portal use in pulmonary practice, and highlight gaps for future research. We also report on the experience of a pulmonary department using a patient portal to highlight the potential of these systems.


Subject(s)
Electronic Health Records , Patient Participation , Humans
11.
J Diabetes Sci Technol ; 10(2): 421-8, 2015 Aug 07.
Article in English | MEDLINE | ID: mdl-26253144

ABSTRACT

BACKGROUND: For individuals with type 2 diabetes, foot ulcers represent a significant health issue. The aim of this study is to design and evaluate a wound assessment system to help wound clinics assess patients with foot ulcers in a way that complements their current visual examination and manual measurements. METHODS: The physical components of the system consist of an image capture box, a smartphone for wound image capture, and a laptop for analyzing the wound image. The wound image assessment algorithms calculate the overall wound area, color-segmented wound areas, and a healing score, providing a quantitative assessment of wound healing status both for a single wound image and for comparisons of subsequent images to an initial wound image. RESULTS: The system was evaluated by assessing foot ulcers of 12 patients in the Wound Clinic at the University of Massachusetts Medical School. As a performance measure, the Matthews correlation coefficient (MCC) for the wound area determination algorithm, tested on 32 foot ulcer images, was 0.68. The clinical validity of our healing score algorithm relative to experienced clinicians, measured by Krippendorff's alpha coefficient (KAC), ranged from 0.42 to 0.81. CONCLUSION: Our system provides a promising real-time method for wound assessment based on image analysis. Clinical comparisons indicate that the optimized mean-shift-based algorithm is well suited for wound area determination. Clinical evaluation of our healing score algorithm shows its potential to provide clinicians with a quantitative method for evaluating wound healing status.
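The MCC reported here is computed from confusion-matrix counts by the standard formula (a minimal sketch of the metric, not the study's evaluation code):

```python
from math import sqrt

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts.
    Ranges from -1 (total disagreement) through 0 (chance level)
    to +1 (perfect prediction); robust to class imbalance."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Perfect prediction on 10 pixels -> MCC = 1.0
perfect = mcc(tp=5, tn=5, fp=0, fn=0)
```

Unlike plain accuracy, MCC stays near zero for a segmenter that labels almost every pixel as background, which matters when wound pixels are a small minority of each image.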


Subject(s)
Diabetes Mellitus, Type 2 , Diabetic Foot/pathology , Image Processing, Computer-Assisted/methods , Telemedicine/methods , Adult , Algorithms , Automation , Color , Diabetes Mellitus, Type 2/complications , Female , Humans , Male , Severity of Illness Index , Smartphone , Telemedicine/instrumentation , Wound Healing
12.
IEEE Trans Biomed Eng ; 62(2): 477-88, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25248175

ABSTRACT

Diabetic foot ulcers represent a significant health issue. Currently, clinicians and nurses mainly base their wound assessment on visual examination of wound size and healing status, while the patients themselves seldom have an opportunity to play an active role. Hence, a more quantitative and cost-effective examination method that enables patients and their caregivers to take a more active role in daily wound care can potentially accelerate wound healing, save travel costs, and reduce healthcare expenses. Given the prevalence of smartphones with high-resolution digital cameras, assessing wounds by analyzing images of chronic foot ulcers is an attractive option. In this paper, we propose a novel wound image analysis system implemented solely on an Android smartphone. The wound image is captured by the smartphone camera with the assistance of an image capture box. The smartphone then performs wound segmentation by applying an accelerated mean-shift algorithm: the outline of the foot is determined based on skin color, and the wound boundary is found using a simple connected-region detection method. Within the wound boundary, the healing status is assessed based on the red-yellow-black color evaluation model and further quantified through trend analysis of time records for a given patient. Experimental results on wound images collected in the UMASS-Memorial Health Center Wound Clinic (Worcester, MA) under an Institutional Review Board-approved protocol show that our system can be efficiently used to analyze wound healing status with promising accuracy.
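The red-yellow-black evaluation step amounts to assigning each wound pixel to the nearest reference color. A minimal nearest-color sketch (the reference RGB values below are illustrative assumptions, not the paper's calibrated ones):

```python
# Illustrative RGB anchors for the red-yellow-black wound tissue model:
# red = granulation, yellow = slough, black = necrotic tissue.
RYB = {"red": (200, 40, 40), "yellow": (210, 200, 60), "black": (30, 30, 30)}

def classify_pixel(rgb):
    """Assign a pixel to the nearest RYB reference color by squared
    Euclidean distance in RGB space."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(RYB, key=lambda name: d2(rgb, RYB[name]))

# A dark-red pixel classifies as granulation ("red") tissue.
tissue = classify_pixel((190, 50, 50))
```

The per-class pixel fractions inside the wound boundary then give the tissue composition from which a healing trend can be tracked over time.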


Subject(s)
Algorithms , Cell Phone , Diabetic Foot/pathology , Image Interpretation, Computer-Assisted/methods , Photography/methods , Wound Healing , Equipment Design , Equipment Failure Analysis , Humans , Image Interpretation, Computer-Assisted/instrumentation , Lighting/instrumentation , Lighting/methods , Mobile Applications , Photography/instrumentation , Reproducibility of Results , Sensitivity and Specificity