Results 1 - 6 of 6
1.
J Health Psychol ; : 13591053241241840, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38618999

ABSTRACT

This study aimed to assess the moderating effect of social support on the effectiveness of a web-based, computer-tailored physical activity intervention for older adults. In the Active for Life trial, 243 inactive adults aged 65+ years were randomised into: (1) tailoring + Fitbit (n = 78), (2) tailoring-only (n = 96) or (3) control (n = 69). For the current study, participants were categorised as having higher (n = 146) or lower (n = 97) social support based on the Duke Social Support Index (DSSI_10). Moderate-to-vigorous physical activity (MVPA) was measured with accelerometers at baseline and post-intervention. A linear mixed model analysis demonstrated that among participants with lower social support, the tailoring + Fitbit participants, but not the tailoring-only participants, increased their MVPA more than the control group. Among participants with higher social support, no differences in MVPA changes were observed between groups. Web-based, computer-tailored interventions with Fitbit integration may be more effective in older adults with lower levels of social support.
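The moderation analysis above compares baseline-to-post MVPA change across trial arms within social-support strata. A minimal pure-Python sketch of that subgroup bookkeeping (the trial itself used a linear mixed model; the field names and toy records below are hypothetical, not trial data):

```python
from collections import defaultdict

def mean_mvpa_change(records):
    """Mean baseline-to-post MVPA change per (support, arm) subgroup."""
    totals = defaultdict(lambda: [0.0, 0])  # key -> [sum of changes, count]
    for r in records:
        key = (r["support"], r["arm"])
        totals[key][0] += r["mvpa_post"] - r["mvpa_base"]
        totals[key][1] += 1
    return {k: t / n for k, (t, n) in totals.items()}

# Illustrative toy records only (minutes/week of MVPA):
toy = [
    {"support": "lower", "arm": "tailoring+fitbit", "mvpa_base": 10, "mvpa_post": 25},
    {"support": "lower", "arm": "tailoring+fitbit", "mvpa_base": 20, "mvpa_post": 31},
    {"support": "lower", "arm": "control", "mvpa_base": 12, "mvpa_post": 13},
]
```

A mixed model would additionally adjust for covariates and within-subject correlation, which this sketch omits.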

3.
Lancet Digit Health ; 4(10): e738-e747, 2022 10.
Article in English | MEDLINE | ID: mdl-36150782

ABSTRACT

Infectious disease modelling can serve as a powerful tool for situational awareness and decision support for policy makers. However, COVID-19 modelling efforts faced many challenges, from poor data quality to changing policy and human behaviour. To extract practical insight from the large body of COVID-19 modelling literature, we provide a narrative review with a systematic approach that quantitatively assessed prospective, data-driven modelling studies of COVID-19 in the USA. We analysed 136 papers, focusing on the aspects of models that are essential for decision makers. We documented the forecasting window, methodology, prediction target, datasets used, and geographical resolution of each study. We also found that a large fraction of papers did not evaluate performance (25%), express uncertainty (50%), or state limitations (36%). To remedy some of these gaps, we recommend adopting the EPIFORGE 2020 model reporting guidelines and creating an information-sharing system suitable for fast-paced infectious disease outbreak science.


Subjects
COVID-19 , COVID-19/epidemiology , Forecasting , Humans , United States/epidemiology
4.
JAMA Ophthalmol ; 139(2): 206-213, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33377944

ABSTRACT

Importance: Adherence to screening for vision-threatening proliferative sickle cell retinopathy is limited among patients with sickle cell hemoglobinopathy despite guidelines recommending dilated fundus examinations beginning in childhood. An automated algorithm for detecting sea fan neovascularization from ultra-widefield color fundus photographs could expand access to rapid retinal evaluations to identify patients at risk of vision loss from proliferative sickle cell retinopathy. Objective: To develop a deep learning system for detecting sea fan neovascularization from ultra-widefield color fundus photographs from patients with sickle cell hemoglobinopathy. Design, Setting, and Participants: In a cross-sectional study conducted at a single-institution, tertiary academic referral center, deidentified, retrospectively collected, ultra-widefield color fundus photographs from 190 adults with sickle cell hemoglobinopathy were independently graded by 2 masked retinal specialists for presence or absence of sea fan neovascularization. A third masked retinal specialist regraded images with discordant or indeterminate grades. Consensus retinal specialist reference standard grades were used to train a convolutional neural network to classify images for presence or absence of sea fan neovascularization. Participants included nondiabetic adults with sickle cell hemoglobinopathy receiving care from a Wilmer Eye Institute retinal specialist; the patients had received no previous laser or surgical treatment for sickle cell retinopathy and underwent imaging with ultra-widefield color fundus photographs between January 1, 2012, and January 30, 2019. Interventions: Deidentified ultra-widefield color fundus photographs were retrospectively collected. Main Outcomes and Measures: Sensitivity, specificity, and area under the receiver operating characteristic curve of the convolutional neural network for sea fan detection. Results: A total of 1182 images from 190 patients were included. 
Of the 190 patients, 101 were women (53.2%), and the mean (SD) age at baseline was 36.2 (12.3) years; 119 patients (62.6%) had hemoglobin SS disease and 46 (24.2%) had hemoglobin SC disease. One hundred seventy-nine patients (94.2%) were of Black or African descent. Images with sea fan neovascularization were obtained in 57 patients (30.0%). The convolutional neural network had an area under the curve of 0.988 (95% CI, 0.969-0.999), with sensitivity of 97.4% (95% CI, 86.5%-99.9%) and specificity of 97.0% (95% CI, 93.5%-98.9%) for detecting sea fan neovascularization from ultra-widefield color fundus photographs. Conclusions and Relevance: This study reports an automated system with high sensitivity and specificity for detecting sea fan neovascularization from ultra-widefield color fundus photographs from patients with sickle cell hemoglobinopathy, with potential applications for improving screening for vision-threatening proliferative sickle cell retinopathy.
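The headline metrics in this abstract (sensitivity, specificity, and area under the ROC curve) can be computed from binary reference grades and model scores. A small self-contained sketch using the Mann-Whitney formulation of AUC (illustrative only; in the study the scores come from a convolutional neural network):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity for binary labels (1 = lesion present)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC as the probability that a random positive outscores a random negative."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Confidence intervals like those reported (e.g. via bootstrapping over patients) are a separate step not shown here.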


Subjects
Anemia, Sickle Cell/complications , Deep Learning , Fluorescein Angiography , Image Interpretation, Computer-Assisted , Photography , Retinal Neovascularization/diagnostic imaging , Retinal Vessels/diagnostic imaging , Adult , Anemia, Sickle Cell/diagnosis , Cross-Sectional Studies , Female , Humans , Male , Middle Aged , Observer Variation , Pattern Recognition, Automated , Predictive Value of Tests , Reproducibility of Results , Retinal Neovascularization/etiology , Retrospective Studies , Young Adult
5.
Ann Otol Rhinol Laryngol ; 130(3): 286-291, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32795159

ABSTRACT

OBJECTIVE: Computer-aided analysis of laryngoscopy images has the potential to add objectivity to subjective evaluations. Automated classification of biomedical images is extremely challenging due to the precision required and the limited amount of annotated data available for training. Convolutional neural networks (CNNs) have the potential to improve image analysis and have demonstrated good performance in many settings. This study applied machine-learning technologies to laryngoscopy to determine the accuracy of computer recognition of known laryngeal lesions found in patients post-extubation. METHODS: This proof-of-concept study used a convenience sample of transnasal, flexible, distal-chip laryngoscopy images from patients post-extubation in the intensive care unit. After manually annotating images at the pixel level, we applied a CNN-based method for analysis of granulomas and ulcerations to test potential machine-learning approaches for laryngoscopy analysis. RESULTS: A total of 127 images from 25 patients were manually annotated for the presence and shape of these lesions: 100 for training and 27 for evaluating the system. There were 193 ulcerations (148 in the training set; 45 in the evaluation set) and 272 granulomas (208 in the training set; 64 in the evaluation set) identified. Annotation took approximately 3 minutes per image. Machine-based analysis demonstrated per-pixel sensitivity of 82.0% for granulomas and 62.8% for ulcerations; specificity was 99.0% and 99.6%, respectively. CONCLUSION: This work demonstrates the feasibility of machine learning via CNN-based methods to add objectivity to laryngoscopy analysis, suggesting that CNNs may aid laryngoscopy analysis for other conditions in the future.
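Per-pixel sensitivity and specificity, as reported above, come from comparing a predicted segmentation mask against the annotated reference mask pixel by pixel. A minimal sketch assuming binary masks stored as nested lists (a hypothetical representation; the study's CNN operates on full laryngoscopy frames):

```python
def per_pixel_metrics(true_mask, pred_mask):
    """Per-pixel sensitivity and specificity for binary 2-D masks (1 = lesion)."""
    tp = fp = tn = fn = 0
    for t_row, p_row in zip(true_mask, pred_mask):
        for t, p in zip(t_row, p_row):
            if t and p:
                tp += 1        # lesion pixel correctly detected
            elif t and not p:
                fn += 1        # lesion pixel missed
            elif not t and p:
                fp += 1        # background pixel flagged as lesion
            else:
                tn += 1        # background pixel correctly ignored
    return tp / (tp + fn), tn / (tn + fp)
```

Because lesion pixels are rare relative to background, specificity is typically near 1 even when sensitivity is modest, as in the figures above.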


Subjects
Granuloma, Laryngeal/pathology , Image Processing, Computer-Assisted/methods , Laryngoscopy , Larynx/pathology , Neural Networks, Computer , Ulcer/pathology , Airway Extubation , Humans , Intensive Care Units , Intubation, Intratracheal , Larynx/injuries , Machine Learning , Proof of Concept Study , Respiration, Artificial
6.
JAMA Netw Open ; 2(4): e191860, 2019 04 05.
Article in English | MEDLINE | ID: mdl-30951163

ABSTRACT

Importance: Competence in cataract surgery is a public health necessity, and videos of cataract surgery are routinely available to educators and trainees but currently are of limited use in training. Machine learning and deep learning techniques can yield tools that efficiently segment videos of cataract surgery into constituent phases for subsequent automated skill assessment and feedback. Objective: To evaluate machine learning and deep learning algorithms for automated phase classification of manually presegmented phases in videos of cataract surgery. Design, Setting, and Participants: This was a cross-sectional study using a data set of videos from a convenience sample of 100 cataract procedures performed by faculty and trainee surgeons in an ophthalmology residency program from July 2011 to December 2017. Demographic characteristics for surgeons and patients were not captured. Ten standard phase labels in the procedure and 14 instruments used during surgery were manually annotated, which served as the ground truth. Exposures: Five algorithms with different input data: (1) a support vector machine input with cross-sectional instrument label data; (2) a recurrent neural network (RNN) input with a time series of instrument labels; (3) a convolutional neural network (CNN) input with cross-sectional image data; (4) a CNN-RNN input with a time series of images; and (5) a CNN-RNN input with a time series of images and instrument labels. Each algorithm was evaluated with 5-fold cross-validation. Main Outcomes and Measures: Accuracy, area under the receiver operating characteristic curve, sensitivity, specificity, and precision. Results: Unweighted accuracy for the 5 algorithms ranged between 0.915 and 0.959. Area under the receiver operating characteristic curve for the 5 algorithms ranged between 0.712 and 0.773, with small differences among them.
The area under the receiver operating characteristic curve for the image-only CNN-RNN (0.752) was significantly greater than that of the CNN with cross-sectional image data (0.712) (difference, -0.040; 95% CI, -0.049 to -0.033) and the CNN-RNN with images and instrument labels (0.737) (difference, 0.016; 95% CI, 0.014 to 0.018). While specificity was uniformly high for all phases with all 5 algorithms (range, 0.877 to 0.999), sensitivity ranged between 0.005 (95% CI, 0.000 to 0.015) for the support vector machine for wound closure (corneal hydration) and 0.974 (95% CI, 0.957 to 0.991) for the RNN for main incision. Precision ranged between 0.283 and 0.963. Conclusions and Relevance: Time series modeling of instrument labels and video images using deep learning techniques may yield potentially useful tools for the automated detection of phases in cataract surgery procedures.
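Each of the five algorithms was evaluated with 5-fold cross-validation. A sketch of the index bookkeeping behind such a split (in a surgical-video setting, frames should additionally be grouped by procedure so a single video never spans train and test folds; this simple version does not enforce that):

```python
def k_fold_splits(n, k=5):
    """Return k (train_indices, test_indices) pairs over n samples."""
    base, extra = divmod(n, k)   # distribute any remainder over the first folds
    splits, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        splits.append((train, test))
        start += size
    return splits
```

Metrics are then averaged over the k held-out folds, so every sample is tested exactly once.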


Subjects
Cataract Extraction/instrumentation , Image Processing, Computer-Assisted/methods , Video Recording/methods , Algorithms , Cataract/epidemiology , Cross-Sectional Studies , Deep Learning , Humans , Machine Learning , Neural Networks, Computer , Observational Studies as Topic , Retrospective Studies , Sensitivity and Specificity