Results 1 - 6 of 6
1.
PLOS Digit Health ; 2(10): e0000353, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37788239

ABSTRACT

In 2021, the National Guideline Alliance for the Royal College of Obstetricians and Gynaecologists reviewed the body of evidence, including two meta-analyses, implicating supine sleeping position as a risk factor for growth restriction and stillbirth. While they concluded that pregnant people should be advised to avoid going to sleep on their back after 28 weeks' gestation, their main critique of the evidence was that, to date, all studies were retrospective and sleeping position was not objectively measured. As such, the Alliance noted that it would not be possible to prospectively study the associations between sleeping position and adverse pregnancy outcomes. Our aim was to demonstrate the feasibility of building a vision-based model for automated and accurate detection and quantification of sleeping position throughout the third trimester: a model intended to be developed further and used by researchers as a tool to either confirm or disprove the aforementioned associations. We completed a Canada-wide, cross-sectional study of 24 participants in the third trimester. Infrared videos of eleven simulated sleeping positions unique to pregnancy and a sitting position, both with and without bed sheets covering the body, were prospectively collected. We extracted 152,618 images from 48 videos, semi-randomly down-sampled and annotated 5,970 of them, and fed them into a deep learning algorithm, which trained and validated six models via six-fold cross-validation. The performance of the models was evaluated using an unseen testing set. The models detected the twelve positions, with and without bed sheets covering the body, achieving an average precision of 0.72 and 0.83, respectively, and an average recall ("sensitivity") of 0.67 and 0.76, respectively. For the supine class with and without bed sheets covering the body, the models achieved an average precision of 0.61 and 0.75, respectively, and an average recall of 0.74 and 0.81, respectively.
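The average precision and recall reported above are standard per-class detection metrics. As a minimal sketch of how such figures are computed from reference and predicted labels (the class names below are illustrative, not the study's label set):

```python
def per_class_precision_recall(y_true, y_pred, classes):
    """Per-class precision and recall for a multi-class detector."""
    metrics = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        metrics[c] = (precision, recall)
    return metrics
```

Averaging these per-class values over the twelve positions yields the summary numbers quoted in the abstract.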

2.
IEEE J Transl Eng Health Med ; 10: 4900308, 2022.
Article in English | MEDLINE | ID: mdl-35492508

ABSTRACT

Background: Lyme disease (caused by Borrelia burgdorferi) is an infectious disease transmitted to humans by the bite of an infected blacklegged tick (Ixodes scapularis) in eastern North America. Lyme disease can be prevented if antibiotic prophylaxis is given to a patient within 72 hours of a blacklegged tick bite. Therefore, recognizing a blacklegged tick could facilitate the management of Lyme disease. Methods: In this work, we build an automated detection tool that can differentiate blacklegged ticks from other tick species using advanced computer vision approaches in real time. Specifically, we use convolutional neural network models, trained end-to-end, to classify tick species. Also, advanced knowledge transfer techniques are adopted to improve the performance of the convolutional neural network models. Results: Our best convolutional neural network model achieves 92% accuracy on unseen tick species. Conclusion: Our proposed vision-based approach simplifies tick identification and contributes to the emerging work on public health surveillance of ticks and tick-borne diseases. In addition, it can be integrated with the geography of exposure and potentially be leveraged to inform the risk of Lyme disease infection. This is the first report of using deep learning technologies to classify ticks, providing the basis for automation of tick surveillance, and advancing tick-borne disease ecology and risk management.
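The knowledge-transfer step described above typically freezes a pretrained backbone and fits only a new classification head on the extracted features. A minimal stand-in for that final step, assuming backbone features are already available (the two-dimensional features and labels below are synthetic, not tick data):

```python
import numpy as np

def train_linear_head(features, labels, n_classes, lr=0.1, epochs=500):
    """Fit a softmax classification head on frozen backbone features --
    a minimal stand-in for the transfer-learning step, where only the
    final layer is trained on the new labels."""
    W = np.zeros((features.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / len(labels)  # cross-entropy gradient
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict(features, W, b):
    return np.argmax(features @ W + b, axis=1)
```

In practice the backbone would be a pretrained CNN and the head would be trained with a deep learning framework; the gradient-descent loop above just makes the idea concrete.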


Subjects
Borrelia burgdorferi, Ixodes, Lyme Disease, Tick-Borne Diseases, Animals, Computers, Humans, Lyme Disease/diagnosis, Tick-Borne Diseases/diagnosis
3.
J Med Internet Res ; 23(11): e26524, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34723817

ABSTRACT

BACKGROUND: Sleep apnea is a respiratory disorder characterized by frequent breathing cessation during sleep. Sleep apnea severity is determined by the apnea-hypopnea index (AHI), the hourly rate of respiratory events. In positional sleep apnea, the AHI is higher in the supine sleeping position than in other sleeping positions. Positional therapy is a behavioral strategy (eg, wearing an item that encourages sleeping in the lateral position) to treat positional apnea. The gold standard for diagnosing sleep apnea, and for determining whether it is positional, is polysomnography; however, this test is inconvenient, expensive, and has a long waiting list. OBJECTIVE: The objective of this study was to develop and evaluate a noncontact method to estimate sleep apnea severity and to distinguish positional versus nonpositional sleep apnea. METHODS: A noncontact deep-learning algorithm was developed to analyze infrared video of sleep for estimating AHI and to distinguish patients with positional vs nonpositional sleep apnea. Specifically, a 3D convolutional neural network (CNN) architecture was used to process movements extracted by optical flow to detect respiratory events. Patients with positional sleep apnea were subsequently identified by combining the AHI information provided by the 3D-CNN model with the sleeping position (supine vs lateral) detected via a previously developed CNN model. RESULTS: The algorithm was validated on data of 41 participants, including 26 men and 15 women with a mean age of 53 (SD 13) years, BMI of 30 (SD 7) kg/m2, AHI of 27 (SD 31) events/hour, and sleep duration of 5 (SD 1) hours; 20 participants had positional sleep apnea, 15 had nonpositional sleep apnea, and the positional status could not be determined for the remaining 6. AHI values estimated by the 3D-CNN model correlated strongly and significantly with the gold standard (Spearman correlation coefficient 0.79, P<.001).
Individuals with positional sleep apnea (based on an AHI threshold of 15) were identified with 83% accuracy and an F1-score of 86%. CONCLUSIONS: This study demonstrates the possibility of using a camera-based method for developing an accessible and easy-to-use device for screening sleep apnea at home, which can be provided in the form of a tablet or smartphone app.
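The AHI and the positional decision above can be made concrete with a short sketch. The ratio rule below is one common definition of positional apnea (supine AHI at least twice the non-supine AHI); it is illustrative and not necessarily the exact criterion used in the study:

```python
def apnea_hypopnea_index(n_events, sleep_hours):
    """AHI: respiratory events per hour of sleep."""
    return n_events / sleep_hours

def is_positional(supine_events, supine_hours,
                  nonsupine_events, nonsupine_hours, ratio=2.0):
    """Illustrative rule: positional if the supine AHI is at least
    `ratio` times the non-supine AHI. The clinical criterion may differ."""
    supine_ahi = apnea_hypopnea_index(supine_events, supine_hours)
    nonsupine_ahi = apnea_hypopnea_index(nonsupine_events, nonsupine_hours)
    return supine_ahi >= ratio * nonsupine_ahi
```

The study's pipeline supplies both inputs automatically: the 3D-CNN estimates the event counts and the position-detection CNN splits sleep time into supine and lateral portions.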


Subjects
Sleep Apnea Syndromes, Sleep Apnea, Obstructive, Female, Humans, Male, Middle Aged, Movement, Polysomnography, Sleep, Sleep Apnea Syndromes/diagnosis, Sleep Apnea, Obstructive/diagnosis
4.
Nat Sci Sleep ; 12: 1009-1021, 2020.
Article in English | MEDLINE | ID: mdl-33235534

ABSTRACT

PURPOSE: The current gold standard for detecting sleep/wakefulness is based on the electroencephalogram, which is inconvenient to include in portable sleep screening devices. Therefore, a challenge for portable devices is estimating sleeping time. Without sleeping time, sleep parameters such as the apnea/hypopnea index (AHI), an index for quantifying sleep apnea severity, can be underestimated. Recent studies have used tracheal sounds and movements for sleep screening and calculating AHI without considering sleeping time. In this study, we investigated the detection of sleep/wakefulness states and the estimation of sleep parameters using tracheal sounds and movements. MATERIALS AND METHODS: Participants with suspected sleep apnea who were referred for sleep screening were included in this study. Simultaneously with polysomnography, tracheal sounds and movements were recorded with a small wearable device, called the Patch, attached over the trachea. Each 30-second epoch of tracheal data was scored as sleep or wakefulness using an automatic classification algorithm. The performance of the algorithm was compared to the sleep/wakefulness scored blindly based on the polysomnography. RESULTS: Eighty-eight subjects were included in this study. The accuracy of sleep/wakefulness detection was 82.3±8.66% with a sensitivity of 87.8±10.8% (sleep), specificity of 71.4±18.5% (awake), F1 score of 88.1±9.3%, and Cohen's kappa of 0.54. The correlations between the estimated and polysomnography-based measures for total sleep time and sleep efficiency were 0.78 (p<0.001) and 0.70 (p<0.001), respectively. CONCLUSION: Sleep/wakefulness periods can be detected using tracheal sounds and movements. The results of this study, combined with our previous studies on screening sleep apnea with tracheal sounds, provide strong evidence that respiratory sound analysis can be used to develop robust, convenient, and cost-effective portable devices for sleep apnea monitoring.
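Cohen's kappa, reported above as 0.54, corrects the epoch-by-epoch agreement between the device and polysomnography for agreement expected by chance. A minimal sketch of the computation on sleep/wake epoch labels (the labels below are toy data):

```python
def cohens_kappa(ref, est):
    """Chance-corrected agreement between two scorers, e.g. PSG-based
    and device-based sleep/wake labels per 30-second epoch."""
    n = len(ref)
    p_obs = sum(r == e for r, e in zip(ref, est)) / n
    labels = set(ref) | set(est)
    p_exp = sum((ref.count(l) / n) * (est.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)
```

A kappa of 0.54 with 82% raw accuracy reflects the class imbalance between sleep and wake epochs: raw accuracy alone would overstate the agreement.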

5.
J Med Internet Res ; 22(5): e17252, 2020 05 22.
Article in English | MEDLINE | ID: mdl-32441656

ABSTRACT

BACKGROUND: Sleep apnea is a respiratory disorder characterized by an intermittent reduction (hypopnea) or cessation (apnea) of breathing during sleep. Depending on the presence of breathing effort, sleep apnea is divided into obstructive sleep apnea (OSA) and central sleep apnea (CSA) based on the different pathologies involved. If the majority of apneas in a person are obstructive, they will be diagnosed as OSA, and otherwise as CSA. In addition, as it is challenging and highly controversial to divide hypopneas into central or obstructive, the decision about sleep apnea type (OSA vs CSA) is made based on apneas only. Choosing the appropriate treatment relies on distinguishing between obstructive apnea (OA) and central apnea (CA). OBJECTIVE: The objective of this study was to develop a noncontact method to distinguish between OAs and CAs. METHODS: Five different computer vision-based algorithms were used to process infrared (IR) video data to track and analyze body movements to differentiate the two types of apnea (OA vs CA). In the first two methods, supervised classifiers were trained to process optical flow information. In the remaining three methods, a convolutional neural network (CNN) was designed to extract distinctive features from optical flow and to distinguish OA from CA. RESULTS: Overnight sleeping data of 42 participants (mean age 53, SD 15 years; mean BMI 30, SD 7 kg/m2; 27 men and 15 women; mean number of OAs 16, SD 30; mean number of CAs 3, SD 7; mean apnea-hypopnea index 27, SD 31 events/hour; mean sleep duration 5 hours, SD 1 hour) were collected for this study. The test and train data were recorded in two separate laboratory rooms. The best-performing model (3D-CNN) obtained 95% accuracy and an F1 score of 89% in differentiating OA vs CA. CONCLUSIONS: In this study, the first vision-based method that differentiates apnea types (OA vs CA) was developed.
The developed algorithm tracks and analyzes chest and abdominal movements captured via an IR video camera. Unlike previously developed approaches, this method does not require any attachment to the user that could potentially alter the sleeping condition.
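A much-simplified sketch of the idea behind the movement analysis: during an obstructive apnea the chest and abdomen keep moving against the blocked airway, while during a central apnea movement largely stops. The frame-differencing motion measure and the threshold below are illustrative stand-ins for the paper's optical-flow and CNN pipeline, not its actual method:

```python
import numpy as np

def motion_energy(frames):
    """Mean absolute inter-frame difference per transition -- a crude
    stand-in for the optical-flow magnitude used in the paper."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

def classify_apnea(frames, threshold=1.0):
    """Toy rule (threshold is illustrative): obstructive apneas show
    continued chest/abdomen movement; central apneas show little."""
    return "OA" if motion_energy(frames).mean() > threshold else "CA"
```

The study's CNNs learn far subtler spatiotemporal patterns than this single scalar, but the underlying physiological signal they exploit is the same.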


Subjects
Deep Learning/standards, Polysomnography/methods, Sleep Apnea, Central/diagnosis, Sleep Apnea, Obstructive/diagnosis, Spectrophotometry, Infrared/methods, Female, Humans, Male, Middle Aged, Sleep Apnea, Central/physiopathology, Sleep Apnea, Obstructive/physiopathology
6.
IEEE J Transl Eng Health Med ; 7: 1900708, 2019.
Article in English | MEDLINE | ID: mdl-32166048

ABSTRACT

A reliable, accessible, and non-intrusive method for tracking respiratory and heart rate is important for improving the monitoring and detection of sleep apnea. In this study, an algorithm based on motion analysis of infrared video recordings was validated in 50 adults referred for clinical overnight polysomnography (PSG). The algorithm tracks the displacements of selected feature points on each sleeping participant and extracts respiratory rate using principal component analysis and heart rate using independent component analysis. For respiratory rate estimation (mean ± standard deviation), 89.89% ± 10.95% of the overnight estimation was accurate within 1 breath per minute compared to the PSG-derived respiratory rate from the respiratory inductive plethysmography signal, with an average root mean square error (RMSE) of 2.10 ± 1.64 breaths per minute. For heart rate estimation, 77.97% ± 18.91% of the overnight estimation was within 5 beats per minute of the heart rate derived from the pulse oximetry signal from PSG, with a mean RMSE of 7.47 ± 4.79 beats per minute. No significant difference in RMSE was found for either signal across body positions, sleep stages, or the amount of the body covered by blankets. This vision-based method may prove suitable for overnight, non-contact monitoring of respiratory rate. However, at present, heart rate monitoring is less reliable and will require further work to improve accuracy.
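The principal component analysis step above combines the many tracked feature-point displacements into one dominant motion signal before a rate is estimated. A simplified sketch using synthetic displacement signals (the FFT peak-picking step is an assumption for illustration; the paper's exact rate-extraction method may differ):

```python
import numpy as np

def dominant_rate(displacements, fs):
    """Project multi-point displacement signals (samples x points) onto
    their first principal component and return the dominant periodic
    rate in cycles per minute."""
    X = displacements - displacements.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
    signal = X @ vt[0]                 # first principal component score
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    peak = freqs[1:][np.argmax(spectrum[1:])]  # skip the DC bin
    return peak * 60.0
```

Because all chest-region points move together with breathing, the first principal component concentrates the respiratory motion while averaging out point-tracking noise; the independent component analysis used for heart rate follows the same projection idea with a different separation criterion.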
