Results 1 - 3 of 3
1.
PLoS One ; 19(3): e0298977, 2024.
Article in English | MEDLINE | ID: mdl-38437233

ABSTRACT

OBJECTIVE: To analyse the relationship between health app quality and two user-level proxies: user ratings and the number of downloads. MATERIALS AND METHODS: Utilising a dataset of 881 Android-based health apps, assessed via the 300-point objective Organisation for the Review of Care and Health Applications (ORCHA) assessment tool, we explored whether subjective user-level indicators of quality (user ratings and downloads) correlate with objective quality scores in the domains of user experience, data privacy, and professional/clinical assurance. For this purpose, we applied Spearman correlation and multiple linear regression models. RESULTS: For user experience, professional/clinical assurance, and data privacy scores, all models had very low adjusted R-squared values (< .02), suggesting that there is no meaningful link between subjective user ratings or the number of health app downloads and objective quality measures. Spearman correlations indicated that prior downloads had only a very weak positive correlation with user experience scores (Spearman = .084, p = .012) and data privacy scores (Spearman = .088, p = .009), and a very weak negative correlation with the professional/clinical assurance score (Spearman = -.081, p = .016). No statistically significant correlations were observed between user ratings and any of the objective scores (all p > .05). For overall ORCHA scores, the multiple linear regression had an adjusted R-squared of -.002. CONCLUSION: This study highlights that widely available proxies which users may perceive to signify the quality of health apps, namely user ratings and downloads, are inaccurate predictors of quality. This indicates the need for wider use of quality assurance methodologies that can accurately determine the quality, safety, and compliance of health apps.
The findings suggest more should be done to enable users to recognise high-quality health apps, including digital health literacy training and the provision of nationally endorsed "libraries".
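The analysis described above pairs a Spearman rank correlation with a linear regression and its adjusted R-squared. A minimal sketch of that workflow, using synthetic data (the variable names and distributions are illustrative assumptions, not the paper's dataset):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 881  # number of apps in the study

# Synthetic stand-ins: an objective quality score and two user-level proxies.
ux_score = rng.uniform(0, 100, n)      # ORCHA-style user-experience score
downloads = rng.lognormal(8, 2, n)     # downloads are typically heavy-tailed
user_rating = rng.uniform(1, 5, n)     # star rating

# Spearman rank correlation between downloads and the objective score.
rho, p_spearman = stats.spearmanr(downloads, ux_score)

# Simple linear regression of the objective score on the user rating,
# with adjusted R-squared for a single predictor.
slope, intercept, r, p_reg, se = stats.linregress(user_rating, ux_score)
r2 = r ** 2
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)

print(f"Spearman rho={rho:.3f} (p={p_spearman:.3f}), adjusted R^2={adj_r2:.4f}")
```

Because the synthetic proxies are generated independently of the score, the sketch reproduces the paper's qualitative pattern: near-zero correlations and a near-zero adjusted R-squared.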


Subjects
Health Literacy, Libraries, Mobile Applications, Digital Health, Linear Models
2.
JMIR Mhealth Uhealth ; 11: e47043, 2023 11 23.
Article in English | MEDLINE | ID: mdl-37995121

ABSTRACT

BACKGROUND: There are more than 350,000 digital health interventions (DHIs) in the app stores. To ensure that they are effective and safe to use, they should be assessed for compliance with best practice standards. OBJECTIVE: The objective of this paper was to examine and compare the compliance of DHIs with best practice standards for user experience (UX), professional and clinical assurance (PCA), and data privacy (DP). METHODS: We collected assessment data from 1574 DHIs using the Organisation for the Review of Care and Health Apps Baseline Review (OBR) assessment tool. As part of the assessment, each DHI received a score out of 100 for each of the abovementioned areas (ie, UX, PCA, and DP). These 3 OBR scores are combined to make up the overall ORCHA score (a proxy for quality). Inferential statistics, probability distributions, and the Kruskal-Wallis, Wilcoxon rank sum, Cliff's delta, and Dunn tests were used to conduct the data analysis. RESULTS: We found that 57.3% (902/1574) of the DHIs had an Organisation for the Review of Care and Health Apps (ORCHA) score below the threshold of 65. The overall median OBR score (ORCHA score) for all DHIs was 61.5 (IQR 51.0-73.0) out of 100. In 12 of the 26 health care domains (46.2%), the median score was equal to or above the ORCHA threshold of 65. Across the 3 assessment areas (UX, DP, and PCA), DHIs scored highest on the UX assessment (median 75.2, IQR 70.0-79.6), followed by DP (65.1, IQR 55.0-73.4) and PCA (49.6, IQR 31.9-76.1). UX scores had the least variance (SD 13.9), while PCA scores had the most (SD 24.8). Respiratory and urology DHIs were consistently highly ranked in National Institute for Health and Care Excellence Evidence Standards Framework tiers B and C based on their ORCHA scores. CONCLUSIONS: There is a high level of variability in the ORCHA scores of DHIs across different health care domains.
This suggests that there is an urgent need to improve compliance with best practices in some health care areas. Possible explanations for the observed differences might include varied market maturity and commercial interests within the different health care domains. More investment to support the development of higher-quality DHIs in areas such as ophthalmology, allergy, women's health, sexual health, and dental care may be needed.
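The between-domain comparison above combines an omnibus Kruskal-Wallis test with pairwise rank sum follow-ups and an effect size (Cliff's delta). A minimal sketch under synthetic data (the domain names, means, and sample sizes are illustrative assumptions, not the paper's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic ORCHA-style scores (0-100) for three illustrative domains.
respiratory = rng.normal(70, 10, 60).clip(0, 100)
urology = rng.normal(68, 12, 45).clip(0, 100)
dental = rng.normal(55, 15, 50).clip(0, 100)

# Omnibus test: do the domains share the same score distribution?
h_stat, p_kw = stats.kruskal(respiratory, urology, dental)

# Pairwise follow-up for one pair (Wilcoxon rank sum / Mann-Whitney U).
u_stat, p_mw = stats.mannwhitneyu(respiratory, dental, alternative="two-sided")

def cliffs_delta(a, b):
    """Cliff's delta: P(a > b) - P(a < b) over all cross-group pairs."""
    diffs = np.asarray(a)[:, None] - np.asarray(b)[None, :]
    return (np.sum(diffs > 0) - np.sum(diffs < 0)) / diffs.size

delta = cliffs_delta(respiratory, dental)
print(f"Kruskal-Wallis p={p_kw:.4f}, Mann-Whitney p={p_mw:.4f}, delta={delta:.2f}")
```

The Dunn test mentioned in the abstract is the standard post hoc companion to Kruskal-Wallis; it is not in SciPy, so the sketch substitutes a single pairwise Mann-Whitney comparison.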


Subjects
Ophthalmology, Secondary Data Analysis, Humans, Female, Data Analysis, Health Facilities, Privacy
3.
JMIR Mhealth Uhealth ; 10(8): e37290, 2022 08 18.
Article in English | MEDLINE | ID: mdl-35980732

ABSTRACT

BACKGROUND: The System Usability Scale (SUS) is a widely used scale that has been used to quantify the usability of many software and hardware products. However, the SUS was not specifically designed to evaluate mobile apps, or digital health apps (DHAs) in particular. OBJECTIVE: The aim of this study was to examine whether the widely used SUS distribution for benchmarking (mean 68, SD 12.5) can be used to reliably assess the usability of DHAs. METHODS: A search of the literature was performed using the ACM Digital Library, IEEE Xplore, CORE, PubMed, and Google Scholar databases to identify SUS scores related to the usability of DHAs for meta-analysis. This study included papers that published the SUS scores of evaluated DHAs from 2011 to 2021 to obtain a 10-year representation. In total, 117 SUS scores for 114 DHAs were identified. RStudio and the R programming language were used to model the DHA SUS distribution, with a 1-sample, 2-tailed t test used to compare it with the standard SUS distribution. RESULTS: The mean SUS score when all the collected apps were included was 76.64 (SD 15.12); however, this distribution was negatively skewed (-0.52) and was not normally distributed according to the Shapiro-Wilk test (P=.002). The mean SUS score for "physical activity" apps was 83.28 (SD 12.39) and drove the skewness. Hence, the mean SUS score for all collected apps excluding "physical activity" apps was 68.05 (SD 14.05). A 1-sample, 2-tailed t test indicated that this health app SUS distribution was not statistically significantly different from the standard SUS distribution (P=.98). CONCLUSIONS: This study concludes that the SUS and the widely accepted benchmark of a mean SUS score of 68 (SD 12.5) are suitable for evaluating the usability of DHAs. We speculate as to why physical activity apps received higher SUS scores than expected.
A template for reporting mean SUS scores to facilitate meta-analysis is proposed, together with future work that could be done to further examine the SUS benchmark scores for DHAs.
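The benchmark comparison above is a normality check followed by a one-sample, two-tailed t test against the standard SUS benchmark mean of 68. A minimal sketch (the scores below are synthetic illustrations, not the 117 published scores; the study itself used R):

```python
import numpy as np
from scipy import stats

SUS_BENCHMARK_MEAN = 68.0

rng = np.random.default_rng(42)
# Synthetic SUS scores for 117 digital health apps, bounded to the 0-100 scale.
sus_scores = rng.normal(68, 14, 117).clip(0, 100)

# Normality check, as applied in the study before interpreting the mean.
w_stat, p_normal = stats.shapiro(sus_scores)

# One-sample, two-tailed t test against the SUS benchmark mean.
t_stat, p_value = stats.ttest_1samp(sus_scores, SUS_BENCHMARK_MEAN)

print(f"mean={sus_scores.mean():.2f}, t={t_stat:.2f}, p={p_value:.3f}")
```

A large p-value here, as in the study's subset excluding "physical activity" apps, means the sample mean is statistically indistinguishable from the benchmark of 68.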


Subjects
Mobile Applications, Telemedicine, Benchmarking, Humans