1.
J Med Internet Res; 23(10): e29406, 2021 Oct 08.
Article in English | MEDLINE | ID: mdl-34623316

ABSTRACT

BACKGROUND: Providers of on-demand care, such as those in urgent care centers, may prescribe antibiotics unnecessarily because they fear receiving negative reviews on web-based platforms from unsatisfied patients (the so-called Yelp effect). This effect is hypothesized to be a significant driver of inappropriate antibiotic prescribing, which exacerbates antibiotic resistance. OBJECTIVE: In this study, we aimed to determine the frequency with which patients left negative reviews on web-based platforms after they expected to receive antibiotics in an urgent care setting but did not receive them. METHODS: We obtained a list of 8662 urgent care facilities from the Yelp application programming interface. Using this list, we automatically collected 481,825 web-based reviews from Google Maps between January 21 and February 10, 2019. We used machine learning algorithms to summarize the contents of these reviews. Additionally, 200 randomly sampled reviews were analyzed by 4 annotators to verify the types of messages present and whether they were consistent with the Yelp effect. RESULTS: Of the 481,825 reviews collected, 1696 (95% CI 1240-2152) exhibited the Yelp effect. Negative reviews primarily identified operational issues regarding wait times, rude staff, billing, and communication. CONCLUSIONS: Urgent care patients rarely express expectations for antibiotics in negative web-based reviews. Thus, our findings do not support an association between a lack of antibiotic prescriptions and negative web-based reviews. Rather, patients' dissatisfaction with urgent care was most strongly linked to operational issues unrelated to the clinical management plan.


Subjects
Ambulatory Care Facilities, Patient Satisfaction, Ambulatory Care, Anti-Bacterial Agents/therapeutic use, Communication, Humans, Internet
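The collection step described in the METHODS above is straightforward to reproduce in outline. Below is a minimal Python sketch of querying the Yelp Fusion business search endpoint for urgent care facilities; the category alias, the paging strategy, and the environment variable holding the API key are assumptions, since the abstract does not specify the exact query parameters, and the subsequent review collection from Google Maps is not shown.

import os
import requests

# Sketch: pull urgent care facilities from the Yelp Fusion business search
# endpoint. The "urgent_care" category alias and the paging depth are
# assumptions for illustration, not the authors' exact query.
API_KEY = os.environ["YELP_API_KEY"]  # hypothetical environment variable
SEARCH_URL = "https://api.yelp.com/v3/businesses/search"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def fetch_urgent_care(location: str, pages: int = 5, page_size: int = 50):
    """Yield business records for urgent care facilities near `location`."""
    for offset in range(0, pages * page_size, page_size):
        params = {
            "categories": "urgent_care",  # assumed category alias
            "location": location,
            "limit": page_size,
            "offset": offset,
        }
        resp = requests.get(SEARCH_URL, headers=HEADERS, params=params, timeout=30)
        resp.raise_for_status()
        businesses = resp.json().get("businesses", [])
        if not businesses:
            break
        yield from businesses

if __name__ == "__main__":
    for biz in fetch_urgent_care("Baltimore, MD"):
        print(biz["name"], biz.get("location", {}).get("address1"))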
2.
Article in English | MEDLINE | ID: mdl-27014744

ABSTRACT

BACKGROUND: Public health officials and policy makers in the United States expend significant resources at the national, state, county, and city levels to measure the rate of influenza infection. These individuals rely on influenza infection rate information to make important decisions during the course of an influenza season that drive vaccination campaigns, clinical guidelines, and medical staffing. Web and social media data sources have emerged as attractive alternatives to supplement existing practices. While traditional surveillance methods require 1-2 weeks, and significant labor, to produce an infection estimate in each locale, web and social media data are available in near real time for a broad range of locations. OBJECTIVE: The objective of this study was to analyze the efficacy of influenza surveillance that combines data from Google Flu Trends and HealthTweets at the local level. We considered both emergency department influenza-like illness cases and laboratory-confirmed influenza cases for a single hospital in the City of Baltimore. METHODS: This was a retrospective observational study comparing estimates of influenza activity from Google Flu Trends and Twitter to actual counts of individuals with laboratory-confirmed influenza and counts of individuals presenting to the emergency department with influenza-like illness. Data were collected from November 20, 2011 through March 16, 2014. Each data source was evaluated at the municipal, regional, and national scale. We examined the utility of social media data for tracking actual influenza infection at the municipal, state, and national levels, and specifically compared the efficacy of Twitter and Google Flu Trends data. RESULTS: We found that municipal-level Twitter data were more effective than regional and national data when tracking actual influenza infection rates in a Baltimore inner-city hospital. When combined, national-level Twitter and Google Flu Trends data outperformed each data source individually. In addition, influenza-like illness data at all levels of geographic granularity were best predicted by national Google Flu Trends data. CONCLUSIONS: The best-fitting Google Flu Trends model relies on a 4-week moving average to overcome sensitivity to transient events, such as the news cycle, suggesting that it may also sacrifice sensitivity to transient fluctuations in influenza infection to achieve predictive power. Implications for influenza forecasting are discussed in this report.
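As a rough illustration of the smoothing described in the CONCLUSIONS, the following Python sketch applies a 4-week moving average to a weekly Google Flu Trends-style series and checks its correlation with laboratory-confirmed counts. The data here are synthetic stand-ins and the correlation is only an illustrative fit measure; the study itself evaluates more elaborate models across geographic scales.

import numpy as np
import pandas as pd

# Illustrative sketch (not the authors' code): smooth weekly Google Flu
# Trends estimates with a 4-week moving average and measure how well the
# smoothed series tracks laboratory-confirmed influenza counts.
rng = np.random.default_rng(0)
weeks = pd.date_range("2011-11-20", "2014-03-16", freq="W")
flu = pd.DataFrame({
    "week": weeks,
    "gft_estimate": rng.poisson(100, len(weeks)),   # stand-in GFT signal
    "lab_confirmed": rng.poisson(20, len(weeks)),    # stand-in case counts
})

# 4-week trailing moving average, as described for the best-fitting model.
flu["gft_smoothed"] = flu["gft_estimate"].rolling(window=4, min_periods=1).mean()

# Simple fit measure: Pearson correlation between the smoothed signal and
# the confirmed counts.
r = flu["gft_smoothed"].corr(flu["lab_confirmed"])
print(f"correlation between smoothed GFT and confirmed cases: {r:.3f}")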
