1.
Acad Emerg Med; 20(10): 1004-12, 2013 Oct.
Article in English | MEDLINE | ID: mdl-24127703

ABSTRACT

BACKGROUND: The "BEEM" (best evidence in emergency medicine) rater scale was created for emergency physicians (EPs) to evaluate the physician-derived clinical relevance score of recently published, emergency medicine (EM)-related studies. BEEM therefore is designed to help make EPs aware of studies most likely to confirm or change current clinical practice. OBJECTIVES: The objective was to validate the BEEM rater score as a predictor of literature citation, using a bibliometric construct of clinical relevance to EM based on author-, document-, and journal-level measures (first and last author h-indices, number of authors including corporate and group authors, citations from date of publication to 2011, and journal impact factor scores) and study characteristics (design, category, and sample size). METHODS: Each month from 2007 through 2012, approximately 200 EPs from around the world voluntarily reviewed the titles and conclusions of recently published EM-related studies identified by BEEM faculty via the McMaster Health Information Research Unit. Using the BEEM rater scale, a reliable seven-item instrument that evaluates the clinical relevance of studies, raters independently assigned BEEM scores to approximately 10 to 20 articles each month. Two investigators independently abstracted the bibliometric indices for these articles. A citation rate for each article was calculated by dividing the Thomson Reuters Web of Science (WoS) total citation count by the number of years in publication. BEEM rater scores were correlated with the citation rate using Spearman's rho. The performance of the BEEM rater score was assessed for each article using negative binomial regression with composite citation count as the criterion standard, while controlling for other independent bibliometric variables in three models. RESULTS: The BEEM raters evaluated 605 articles with a mean (±SD) BEEM score of 3.84 (±0.7) and a median BEEM score of 3.85 (interquartile range = 3.38 to 4.30). Articles were primarily therapeutic (59%) and diagnostic (27%), with various designs, including 37% systematic reviews, 32% randomized controlled trials (RCTs), and 30% observational designs. The citation rate and BEEM rater score correlated positively (0.144), while the BEEM rater score and the Journal Citation Report (JCR) impact factor score were minimally correlated (0.053). In the first model, the BEEM rater score significantly predicted WoS citation rate (p < 0.0001) with an odds ratio (OR) of 1.24 (95% confidence interval [CI] = 1.106 to 1.402). In subsequent models adjusting for the JCR impact factor score, the h-indices of the first and last authors, number of authors, and study design, the BEEM rater score was not significant (p = 0.08). CONCLUSIONS: To the best of our knowledge, the BEEM rater score is the only known measure of clinical relevance. It has a high interrater reliability and face validity and correlates with future citations. Future research should assess this instrument against alternative constructs of clinical relevance.


Subjects
Bibliometrics, Emergency Medicine/methods, Evidence-Based Medicine/methods, Publications/statistics & numerical data, Publishing, Australia, Canada, Humans, United Kingdom, United States
2.
Acad Emerg Med; 18(11): 1193-200, 2011 Nov.
Article in English | MEDLINE | ID: mdl-22092904

ABSTRACT

BACKGROUND: Studies published in general and specialty medical journals have the potential to improve emergency medicine (EM) practice, but there can be delayed awareness of this evidence because emergency physicians (EPs) are unlikely to read most of these journals. Also, not all published studies are intended for or ready for clinical practice application. The authors developed "Best Evidence in Emergency Medicine" (BEEM) to ameliorate these problems by searching for, identifying, appraising, and translating potentially practice-changing studies for EPs. An initial step in the BEEM process is the BEEM rater scale, a novel tool for EPs to collectively evaluate the relative clinical relevance of EM-related studies found in more than 120 journals. The BEEM rater process was designed to serve as a clinical relevance filter to identify those studies with the greatest potential to affect EM practice. Therefore, only those studies identified by BEEM raters as having the highest clinical relevance are selected for the subsequent critical appraisal process and, if found methodologically sound, are promoted as the best evidence in EM.

OBJECTIVES: The primary objective was to measure the inter-rater reliability (IRR) of the BEEM rater scale. Secondary objectives were to determine the minimum number of EP raters needed for the BEEM rater scale to achieve acceptable reliability and to compare performance of the scale against a previously published evidence rating system, the McMaster Online Rating of Evidence (MORE), in an EP population.

METHODS: The authors electronically distributed the title, conclusion, and a PubMed link for 23 recently published studies related to EM to a volunteer group of 134 EPs. The volunteers answered two demographic questions and rated the articles using one of two randomly assigned seven-point Likert scales, the BEEM rater scale (n = 68) or the MORE scale (n = 66), over two separate administrations. The IRR of each scale was measured using generalizability theory.

RESULTS: The IRR of the BEEM rater scale ranged from 0.90 (95% confidence interval [CI] = 0.86 to 0.93) to 0.92 (95% CI = 0.89 to 0.94) across administrations. Decision studies showed that a minimum of 12 raters is required for acceptable reliability of the BEEM rater scale. The IRR of the MORE scale was 0.82 to 0.84.

CONCLUSIONS: The BEEM rater scale is a highly reliable, single-question tool with which a small number of EPs can collectively rate, within the specialty of EM, the relative clinical relevance of recently published studies from a variety of medical journals. It compares favorably with the MORE system because it achieves a high IRR despite simply requiring raters to read each article's title and conclusion.
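The IRR and decision-study figures above come from generalizability theory. The sketch below shows one standard way such numbers can be produced for a fully crossed articles-by-raters design: ANOVA-based variance components for the G-study, then a D-study projection of reliability for the mean of k raters. It is not the authors' analysis; the rating matrix and the design assumptions (fully crossed, single rater facet) are hypothetical.

```python
# Minimal sketch (not the authors' code) of a generalizability-theory style
# reliability estimate for a fully crossed articles-x-raters design, plus a
# decision-study projection of reliability for different numbers of raters.
# The rating matrix below is hypothetical.
import numpy as np

# Rows = articles, columns = raters; 7-point Likert ratings (hypothetical).
ratings = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [6, 6, 5, 6],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
], dtype=float)

n_articles, n_raters = ratings.shape
grand_mean = ratings.mean()
article_means = ratings.mean(axis=1)
rater_means = ratings.mean(axis=0)

# Classic two-way ANOVA mean squares (one observation per cell).
ss_articles = n_raters * ((article_means - grand_mean) ** 2).sum()
ss_raters = n_articles * ((rater_means - grand_mean) ** 2).sum()
ss_total = ((ratings - grand_mean) ** 2).sum()
ss_resid = ss_total - ss_articles - ss_raters

ms_articles = ss_articles / (n_articles - 1)
ms_resid = ss_resid / ((n_articles - 1) * (n_raters - 1))

# Variance-component estimates for the G-study.
var_resid = ms_resid
var_articles = max((ms_articles - ms_resid) / n_raters, 0.0)

# Decision study: projected reliability (relative G coefficient) for the
# mean of k raters, used to ask how many raters are "enough".
def g_coefficient(k):
    return var_articles / (var_articles + var_resid / k)

for k in (1, 5, 12, 20):
    print(f"{k:>2} raters: G = {g_coefficient(k):.2f}")
```

A finding such as "a minimum of 12 raters" corresponds to the smallest k at which the projected coefficient clears a preset reliability threshold.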


Subjects
Benchmarking/standards, Consensus Development Conferences as Topic, Emergency Medicine/standards, Evidence-Based Emergency Medicine/standards, Benchmarking/organization & administration, Evidence-Based Emergency Medicine/organization & administration, Humans, Reproducibility of Results