Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-38083624

ABSTRACT

Crackles are explosive breathing patterns caused by lung air sacs filling with fluid, and they act as an indicator for a plethora of pulmonary diseases. Clinical studies suggest a strong correlation between the presence of these adventitious auscultations and mortality rate, especially in pediatric patients, underscoring the importance of their pathological indication. While clinically important, crackles occur rarely in breathing signals relative to other phases and abnormalities of lung sounds, imposing a considerable class imbalance on the development of learning methodologies for automated tracking and diagnosis of lung pathologies. The scarcity and clinical relevance of crackle sounds compel a need for data augmentation techniques that enrich the space of crackle signals. Given their unique nature, the current study proposes a crackle-specific constrained synthetic sampling (CSS) augmentation that captures the geometric properties of crackles across different projected object spaces. We also outline a task-agnostic validation methodology that evaluates different augmentation techniques based on their goodness of fit relative to the space of original crackles. This evaluation considers both the separability of the manifold space generated by the augmented data samples and a statistical distance of the synthesized data relative to the original. Compared to a range of augmentation techniques, the proposed constrained synthetic sampling of crackle sounds is shown to generate the samples most analogous to the original crackle sounds, highlighting the importance of carefully considering the statistical constraints of the class under study.
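
As an illustration only (none of this comes from the paper), the Python sketch below shows one way such a validation could be set up: a simple classifier measures how separable original and augmented crackle features are, and a per-dimension Wasserstein distance serves as the statistical distance. The feature arrays, the classifier choice, and the CSS augmentation itself are placeholders.

# Hypothetical illustration of the task-agnostic validation idea: augmented
# crackle features are compared to originals by (i) how separable the two sets
# are for a simple classifier and (ii) a per-dimension statistical distance.
# Feature extraction and the CSS augmentation itself are not shown; the arrays
# below are random placeholders.
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def separability_score(original, augmented, folds=5):
    """Cross-validated accuracy of telling original from augmented features.

    Values near 0.5 suggest the augmented samples are hard to distinguish
    from real crackles; values near 1.0 suggest a poor fit.
    """
    X = np.vstack([original, augmented])
    y = np.concatenate([np.zeros(len(original)), np.ones(len(augmented))])
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=folds).mean()


def mean_feature_distance(original, augmented):
    """Average 1-D Wasserstein distance across feature dimensions."""
    return np.mean([
        wasserstein_distance(original[:, d], augmented[:, d])
        for d in range(original.shape[1])
    ])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.normal(size=(200, 16))                      # placeholder crackle features
    synth = real + rng.normal(scale=0.1, size=real.shape)  # placeholder augmentation
    print("separability:", separability_score(real, synth))
    print("mean statistical distance:", mean_feature_distance(real, synth))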


Subject(s)
Lung Diseases, Respiratory Sounds, Humans, Child, Respiratory Sounds/diagnosis, Lung, Auscultation, Sound
2.
Article in English | MEDLINE | ID: mdl-38274002

ABSTRACT

Stethoscopes are used ubiquitously in clinical settings to 'listen' to lung sounds. The use of these systems in a variety of healthcare environments (hospitals, urgent care rooms, private offices, community sites, mobile clinics, etc.) presents a range of challenges in terms of ambient noise and distortions that prevent lung signals from being heard clearly or processed accurately by auscultation devices. With advances in technology, computerized techniques have been developed to automate analysis or to provide access to a digital rendering of lung sounds. However, most approaches are developed and tested in controlled environments and do not reflect the real-world conditions under which auscultation signals are typically acquired. Without a priori access to a recording of the ambient noise (for signal-to-noise estimation) or a reference signal that reflects the true undistorted lung sound, it is difficult to evaluate the quality of the lung signal and its potential clinical interpretability. The current study proposes an objective, reference-free Auscultation Quality Metric (AQM) which combines low-level signal attributes with high-level representational embeddings mapped to a nonlinear quality space, providing an independent evaluation of auscultation quality. This metric is carefully designed to judge the signal solely on its integrity relative to external distortions and masking effects, and not to mistake an adventitious breathing pattern for a low-quality auscultation. The current study explores the robustness of the proposed AQM method across multiple clinical categorizations and different distortion types. It also evaluates the temporal sensitivity of this approach and its translational impact for deployment in digital auscultation devices.
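
As a rough, hypothetical sketch of the reference-free quality-metric idea (not the AQM implementation itself), the snippet below concatenates a few low-level spectral and temporal descriptors with a placeholder learned embedding and maps them through a small nonlinear regressor to a scalar quality score. The use of librosa, MLPRegressor, the dummy encoder, and the random training data are all assumptions made for illustration.

# A minimal, hypothetical sketch of a reference-free quality score: low-level
# signal attributes are concatenated with a (placeholder) learned embedding and
# mapped through a nonlinear regressor. The real AQM design is not reproduced.
import numpy as np
import librosa
from sklearn.neural_network import MLPRegressor


def low_level_features(y, sr):
    """Simple spectral and temporal descriptors of an auscultation excerpt."""
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr).mean()
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    rms = librosa.feature.rms(y=y).mean()
    return np.array([centroid, rolloff, zcr, rms])


def quality_score(y, sr, encoder, regressor):
    """Map combined low-level and embedding features to a scalar quality estimate."""
    feats = np.concatenate([low_level_features(y, sr), encoder(y)])
    return regressor.predict(feats.reshape(1, -1))[0]


if __name__ == "__main__":
    sr = 4000
    rng = np.random.default_rng(0)

    # Stand-in for an auto-encoder embedding: two crude signal statistics.
    dummy_encoder = lambda y: np.array([y.mean(), y.std()])

    # Dummy training set: random excerpts with made-up quality labels.
    X, labels = [], rng.uniform(0, 1, size=20)
    for _ in range(20):
        y = rng.normal(size=sr * 2)
        X.append(np.concatenate([low_level_features(y, sr), dummy_encoder(y)]))

    reg = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(np.array(X), labels)
    print("quality:", quality_score(rng.normal(size=sr * 2), sr, dummy_encoder, reg))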

3.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 4421-4425, 2022 07.
Article in English | MEDLINE | ID: mdl-36086501

ABSTRACT

Thanks to recent advances in digital stethoscopes and the rapid adoption of deep learning techniques, there has been tremendous progress in the field of Computerized Auscultation Analysis (CAA). Despite these promising leaps, the deployment of these technologies in real-world applications remains limited due to inherent challenges in properly interpreting clinical data, particularly auscultations. One of the limiting factors is the inherent ambiguity that comes with variability in clinical opinion, even among highly trained experts. This lack of unanimity in expert opinion is often ignored when developing machine learning techniques to automatically screen normal from abnormal lung signals, with most algorithms being developed and tested on highly curated datasets. To better understand the potential pitfalls this selective analysis could cause at deployment, the current work explores the impact of clinical opinion variability on algorithms that detect adventitious patterns in lung sounds when trained on gold-standard data. The study shows that uncertainty in clinical opinion introduces far more variability and a larger performance drop than outright disagreement in expert judgments. The study also explores the feasibility of automatically flagging auscultation signals based on their estimated uncertainty, thereby recommending them for further reassessment and improving computer-aided analysis.
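
Purely as a hypothetical illustration of uncertainty-based flagging (not the paper's pipeline), the sketch below uses disagreement across a small ensemble of classifiers as a stand-in for the uncertainty estimate and flags recordings whose estimate exceeds a threshold for reassessment. The features, labels, model family, and threshold are placeholders.

# Hypothetical illustration of flagging recordings by estimated uncertainty:
# disagreement across a small ensemble of classifiers stands in for the
# uncertainty estimate. Features, labels, model family, and the threshold are
# placeholders, not the paper's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def ensemble_uncertainty(models, X):
    """Std. dev. of the predicted abnormal-class probability across the ensemble."""
    probs = np.stack([m.predict_proba(X)[:, 1] for m in models])
    return probs.std(axis=0)


def flag_for_review(models, X, threshold=0.15):
    """Indices of recordings whose uncertainty estimate exceeds the threshold."""
    return np.where(ensemble_uncertainty(models, X) > threshold)[0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(300, 20))          # placeholder lung-sound features
    y_train = (X_train[:, 0] > 0).astype(int)     # placeholder normal/abnormal labels
    models = [
        RandomForestClassifier(n_estimators=50, random_state=s).fit(X_train, y_train)
        for s in range(5)
    ]
    X_new = rng.normal(size=(10, 20))             # placeholder unseen recordings
    print("flagged for reassessment:", flag_for_review(models, X_new))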


Subject(s)
Auscultation, Stethoscopes, Computers, Humans, Lung, Respiratory Sounds/diagnosis
4.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 772-775, 2020 07.
Article in English | MEDLINE | ID: mdl-33018100

ABSTRACT

A stethoscope is a ubiquitous tool used to 'listen' to sounds from the chest in order to assess lung and heart conditions. With advances in health technologies, including digital devices and new wearable sensors, access to these sounds is becoming easier and more abundant; yet proper measures of signal quality do not exist. In this work, we develop an objective quality metric of lung sounds based on low-level and high-level features in order to independently assess the integrity of the signal in the presence of interference from ambient sounds and other distortions. The proposed metric maps auscultation signals onto rich low-level features, extracted directly from the signal, that capture its spectral and temporal characteristics. Complementing these signal-derived attributes, we propose high-level learnt embedding features extracted from a generative auto-encoder trained to map auscultation signals onto a representative space that best captures the inherent statistics of lung sounds. Integrating both low-level (signal-derived) and high-level (embedding) features yields a robust correlation of 0.85 when inferring the signal-to-noise ratio of recordings with varying quality levels. The method is validated on a large dataset of lung auscultations recorded in various clinical settings with controlled, varying degrees of noise interference. The proposed metric is also validated against the opinions of expert physicians in a blind listening test to further corroborate its efficacy for quality assessment.
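
As a hedged, illustrative sketch of the validation step described above (not the paper's actual experiment), the snippet below mixes a clean signal with noise at controlled SNRs and reports the Pearson correlation between an estimated quality score and the true SNR. The stand-in quality estimator and synthetic signals are assumptions; the paper's combined low-/high-level metric is not reproduced.

# A small, hypothetical validation sketch: mix a clean recording with noise at
# controlled SNRs and check how well an estimated quality score tracks the true
# SNR via Pearson correlation. The stand-in estimator and synthetic signals are
# assumptions; the paper's combined low-/high-level metric is not reproduced.
import numpy as np
from scipy.stats import pearsonr


def mix_at_snr(clean, noise, snr_db):
    """Scale the noise so that the mixture has the requested SNR in dB."""
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise


def correlate_with_snr(quality_estimator, clean, noise, snr_grid):
    """Pearson correlation between estimated quality and the known SNR."""
    scores = [quality_estimator(mix_at_snr(clean, noise, s)) for s in snr_grid]
    r, _ = pearsonr(scores, snr_grid)
    return r


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.sin(2 * np.pi * 5 * np.linspace(0, 2, 8000))   # placeholder "lung sound"
    noise = rng.normal(size=clean.shape)
    snr_grid = np.arange(-10, 21, 5)
    # Trivial stand-in estimator: the negative of a simple noisiness proxy.
    estimate_quality = lambda y: -np.mean(np.abs(np.diff(y)))
    print("correlation with SNR:", correlate_with_snr(estimate_quality, clean, noise, snr_grid))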


Subject(s)
Auscultation, Stethoscopes, Child, Humans, Lung, Noise, Respiratory Sounds