Results 1 - 20 of 512
6.
JAMA ; 330(14): 1329-1330, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37738250

ABSTRACT

This Viewpoint examines the demands that the American Board of Internal Medicine (ABIM) maintenance of certification (MOC) requirements place on physicians, weighed against the projected benefits to the quality of patient care.


Subject(s)
Clinical Competence , Specialty Boards , Certification/standards , Clinical Competence/standards , Education, Medical, Continuing/standards , Specialty Boards/standards , United States
9.
Anesth Analg ; 133(5): 1331-1341, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34517394

ABSTRACT

In 2020, the coronavirus disease 2019 (COVID-19) pandemic interrupted the administration of the APPLIED Examination, the final part of the American Board of Anesthesiology (ABA) staged examination system for initial certification. In response, the ABA developed, piloted, and implemented an Internet-based "virtual" form of the examination to allow administration of both components of the APPLIED Exam (Standardized Oral Examination and Objective Structured Clinical Examination) when it was impractical and unsafe for candidates and examiners to travel and have in-person interactions. This article describes the development of the ABA virtual APPLIED Examination, including its rationale, examination format, technology infrastructure, candidate communication, and examiner training. Although the logistics are formidable, we report a methodology for successfully introducing a large-scale, high-stakes, 2-element, remote examination that replicates previously validated assessments.


Subject(s)
Anesthesiology/education , COVID-19/epidemiology , Certification/methods , Computer-Assisted Instruction/methods , Educational Measurement/methods , Specialty Boards , Anesthesiology/standards , COVID-19/prevention & control , Certification/standards , Clinical Competence/standards , Computer-Assisted Instruction/standards , Educational Measurement/standards , Humans , Internship and Residency/methods , Internship and Residency/standards , Specialty Boards/standards , United States/epidemiology
10.
World Neurosurg ; 155: e236-e239, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34419657

ABSTRACT

OBJECTIVE: There are few objective measures for evaluating individual performance throughout surgical residency. Two commonly used objective measures are case log numbers and written board examination scores. The objective of this study was to investigate possible correlations between these measures. METHODS: We conducted a retrospective review of the American Board of Neurological Surgery (ABNS) written board scores and the Accreditation Council for Graduate Medical Education case logs of 27 recent alumni of the neurologic surgery residency training programs at The Ohio State University Wexner Medical Center and the University of Nebraska Medical Center. RESULTS: The number of spine cases logged was significantly correlated with ABNS written examination performance in univariate linear regression (r² = 0.182, P = 0.0265). However, case numbers from all other neurosurgical subspecialties did not significantly correlate with ABNS written board performance (P > 0.1). CONCLUSIONS: Identifying which objective measures correlate most closely with resident education could help optimize the structure of residency training programs. We believe that early exposure to focused aspects of neurosurgery helps the young resident learn quickly and efficiently and ultimately score highly on standardized examinations. Therefore, program directors may want to ensure focused exposure during the early years of residency, with particular attention to worthwhile rotations in spine neurosurgery.
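
For readers who want to reproduce this kind of analysis, below is an illustrative sketch of the study's univariate approach: regress written-board performance on logged spine case counts and report r² and P. The data are synthetic stand-ins; the case counts, scores, and effect size are assumptions, not the authors' data.

```python
# Synthetic univariate linear regression in the spirit of the study:
# spine case volume vs. written examination performance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_residents = 27  # cohort size reported in the abstract

# Hypothetical per-resident spine case counts and scaled exam scores.
spine_cases = rng.integers(50, 400, size=n_residents)
exam_scores = 400 + 0.3 * spine_cases + rng.normal(0, 40, size=n_residents)

# Univariate linear regression of exam score on spine case volume.
res = stats.linregress(spine_cases, exam_scores)
print(f"r^2 = {res.rvalue**2:.3f}, P = {res.pvalue:.4f}")
```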


Subject(s)
Accreditation/standards , Internship and Residency , Neurosurgery/education , Clinical Competence/standards , Humans , Retrospective Studies , Specialty Boards/standards
11.
Surg Clin North Am ; 101(4): 703-715, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34242611

ABSTRACT

Continuing medical education is an ongoing process to educate clinicians and provide patients with up-to-date, evidence-based care. Since its inception, the maintenance of certification (MOC) program has changed dramatically. This article reviews the development of MOC and its integration with the 6 core competencies, including the practice-based learning and improvement cycle. The concept of lifelong learning is discussed, with specific focus on different methods for surgeons to engage in learning, including simulation, coaching, and communities of practice. In addition, the future of MOC in continuous professional development is reviewed.


Subject(s)
Clinical Competence/standards , Education, Medical, Continuing/standards , General Surgery/education , Learning , Surgeons/education , Surgeons/standards , Certification/standards , Education, Medical, Continuing/methods , General Surgery/standards , Humans , Specialty Boards/standards , Surgeons/psychology , United States
13.
Arch Pathol Lab Med ; 145(9): 1089-1094, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-33406235

ABSTRACT

CONTEXT: Certification by the American Board of Pathology (ABPath) is a valued credential that serves patients, families, and the public and improves patient care. The ABPath establishes professional and educational standards and assesses the knowledge of candidates for initial certification in pathology. Diplomates certified in 2006 and thereafter are required to participate in Continuing Certification (CC; formerly Maintenance of Certification) in order to maintain certification. OBJECTIVE: To inform and update the pathology community on the history of board certification, the requirements for CC, ABPath CertLink, changes to the CC program, and ABPath compliance with recommendations from the American Board of Medical Specialties Vision Commission; and to demonstrate the value of CC participation for diplomates with non-time-limited certification. DATA SOURCES: This review uses ABPath archived minutes of the CC Committee and the Board of Trustees, the ABPath CC Booklet of Information, the collective knowledge of the ABPath staff and trustees, and the American Board of Medical Specialties 2018-2019 Board Certification Report. CONCLUSIONS: The ABPath continues to update the CC program to make it more relevant and meaningful and less burdensome for diplomates. Adding ABPath CertLink to the program has been a significant enhancement for the assessment of medical knowledge and has been well received by diplomates.


Subject(s)
Clinical Competence/standards , Education, Medical, Continuing/methods , Pathology/education , Specialty Boards/standards , Certification/methods , Certification/standards , Education, Medical, Continuing/standards , Humans , United States
14.
Anesth Analg ; 133(1): 226-232, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-33481404

ABSTRACT

BACKGROUND: The American Board of Anesthesiology administers the APPLIED Examination as a part of initial certification, which as of 2018 includes 2 components-the Standardized Oral Examination (SOE) and the Objective Structured Clinical Examination (OSCE). The goal of this study is to investigate the measurement construct(s) of the APPLIED Examination to assess whether the SOE and the OSCE measure distinct constructs (ie, factors). METHODS: Exploratory item factor analysis of candidates' performance ratings was used to determine the number of constructs, and confirmatory item factor analysis to estimate factor loadings within each construct and correlation(s) between the constructs. RESULTS: In exploratory item factor analysis, the log-likelihood ratio test and Akaike information criterion index favored the 3-factor model, with factors reflecting the SOE, OSCE Communication and Professionalism, and OSCE Technical Skills. The Bayesian information criterion index favored the 2-factor model, with factors reflecting the SOE and the OSCE. In confirmatory item factor analysis, both models suggest moderate correlation between the SOE factor and the OSCE factor; the correlation was 0.49 (95% confidence interval [CI], 0.42-0.55) for the 3-factor model and 0.61 (95% CI, 0.54-0.64) for the 2-factor model. The factor loadings were lower for Technical Skills stations of the OSCE (ranging from 0.11 to 0.25) compared with those of the SOE and Communication and Professionalism stations of the OSCE (ranging from 0.36 to 0.50). CONCLUSIONS: The analyses provide evidence that the SOE and the OSCE measure distinct constructs, supporting the rationale for administering both components of the APPLIED Examination for initial certification in anesthesiology.
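
As a companion to the model-selection reasoning above (likelihood-ratio test, AIC, and BIC applied to the 2-factor versus 3-factor models), here is a minimal sketch of how those three criteria compare two nested fits. The log-likelihoods, parameter counts, and sample size are hypothetical placeholders, not values from the ABA study.

```python
# Compare a 2-factor and a 3-factor item factor model by LR test,
# AIC, and BIC. All fit statistics below are hypothetical.
import math
from scipy import stats

def aic(loglik: float, k: int) -> float:
    # Akaike information criterion: 2k - 2*ln(L); lower is better.
    return 2 * k - 2 * loglik

def bic(loglik: float, k: int, n: int) -> float:
    # Bayesian information criterion: k*ln(n) - 2*ln(L); lower is better.
    return k * math.log(n) - 2 * loglik

ll_2f, k_2f = -15200.0, 60   # hypothetical 2-factor log-likelihood, params
ll_3f, k_3f = -15150.0, 75   # hypothetical 3-factor log-likelihood, params
n = 1500                     # hypothetical number of candidates

# Likelihood-ratio test for the nested models.
lr = 2 * (ll_3f - ll_2f)
p = stats.chi2.sf(lr, df=k_3f - k_2f)

print(f"LR = {lr:.1f}, p = {p:.2e}")
print(f"AIC: 2-factor {aic(ll_2f, k_2f):.0f} vs 3-factor {aic(ll_3f, k_3f):.0f}")
print(f"BIC: 2-factor {bic(ll_2f, k_2f, n):.0f} vs 3-factor {bic(ll_3f, k_3f, n):.0f}")
```

With these placeholder numbers, the LR test and AIC favor the 3-factor model while BIC's heavier penalty favors the 2-factor model, mirroring the split the abstract reports.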


Subject(s)
Anesthesiology/education , Anesthesiology/standards , Certification/standards , Independent Medical Evaluation , Specialty Boards/standards , Humans
15.
Am J Phys Med Rehabil ; 100(2S Suppl 1): S34-S39, 2021 Feb 01.
Article in English | MEDLINE | ID: mdl-33048889

ABSTRACT

The Accreditation Council for Graduate Medical Education (ACGME) developed the Milestones to assist training programs in assessing resident physicians in the context of their participation in ACGME-accredited training programs. Twice-yearly assessments are made over a resident's entire training period to define the trajectory in achieving specialty-specific competencies. As part of its process of initial certification, the American Board of Physical Medicine and Rehabilitation requires successful completion of two examinations administered approximately 9 months apart. The Part I Examination measures a single, unidimensional construct, physical medicine and rehabilitation medical knowledge, whereas Part II assesses the application of medical and physiatric knowledge to multiple domains, including data acquisition, problem solving, patient management, systems-based practice, and interpersonal and communication skills, through specific patient case scenarios. This study aimed to investigate the validity of the Milestones by demonstrating their association with performance in the American Board of Physical Medicine and Rehabilitation certifying examinations. A cohort of 233 physical medicine and rehabilitation trainees in 3-yr residency programs (postgraduate year 2 entry) in the United States from academic years 2014-2016, who also took the American Board of Physical Medicine and Rehabilitation Parts I and II certifying examinations between 2016 and 2018, was included in the study. Milestones ratings in four distinct observation periods were correlated with scores on the American Board of Physical Medicine and Rehabilitation Parts I and II Examinations. Milestones ratings of medical knowledge (but not patient care, professionalism, problem-based learning, interpersonal and communication skills, or systems-based practice) predicted performance on the subsequent Part I American Board of Physical Medicine and Rehabilitation Examination, but none of the Milestones ratings correlated with Part II Examination scaled scores.
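
To make the final result concrete, the snippet below correlates simulated Milestone ratings in several competency domains with simulated Part I scores, in the spirit of the analysis described. All data are synthetic assumptions; only the medical-knowledge ratings are constructed to track the scores.

```python
# Synthetic domain-by-domain correlation against Part I scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 233  # cohort size reported in the abstract

part1 = rng.normal(500, 100, n)  # hypothetical Part I scaled scores
milestones = {
    "medical_knowledge": 0.5 * (part1 - 500) / 100 + rng.normal(0, 1, n),
    "patient_care":      rng.normal(0, 1, n),  # unrelated by construction
    "professionalism":   rng.normal(0, 1, n),  # unrelated by construction
}

for domain, ratings in milestones.items():
    r, p = stats.pearsonr(ratings, part1)
    print(f"{domain:18s} r = {r:+.2f}, p = {p:.3g}")
```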


Subject(s)
Clinical Competence/standards , Internship and Residency/standards , Physical and Rehabilitation Medicine/education , Practice Patterns, Physicians'/standards , Specialty Boards/standards , Certification/standards , Cohort Studies , Education, Medical, Graduate/standards , Educational Measurement/methods , Humans , United States
16.
Rofo ; 193(2): 160-167, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32698235

ABSTRACT

OBJECTIVE: To estimate the human resources required for a retrospective quality review of different percentages of all routine diagnostic procedures in the Department of Radiology at Bern University Hospital, Switzerland. MATERIALS AND METHODS: Three board-certified radiologists retrospectively evaluated the quality of the radiological reports of a total of 150 examinations (5 different examination types: abdominal CT, chest CT, mammography, conventional X-ray images, and abdominal MRI). Each report was assigned a RADPEER score of 1 to 3 (score 1: concur with previous interpretation; score 2: discrepancy in interpretation/not ordinarily expected to be made; score 3: discrepancy in interpretation/should be made most of the time). The time required for each review was documented in seconds (s) and compared. A sensitivity analysis was conducted to calculate the total workload for reviewing different percentages of the total annual reporting volume of the clinic. RESULTS: Among the total of 450 reviews analyzed, 91.1 % (410/450) were assigned a score of 1 and 8.9 % (40/450) were assigned scores of 2 or 3. The average time required for a peer review was 60.4 s (minimum 5 s, maximum 245 s). The reviewer with the greatest clinical experience needed significantly less time to review the reports than the two reviewers with less clinical expertise (p < 0.05). Average review times were longer for discrepant ratings with a score of 2 or 3 (p < 0.05). The total time calculated for reviewing all 5 types of examination for one year would be more than 1200 working hours. CONCLUSION: A retrospective peer review of reports of radiological examinations using the RADPEER system requires considerable human resources. However, to improve quality, it seems feasible to peer review at least a portion of the total yearly reporting volume. KEY POINTS: · A systematic retrospective assessment of the content of radiological reports using the RADPEER system involves high personnel costs. · The retrospective assessment of all reports of a clinic or practice seems unrealistic due to the lack of highly specialized personnel. · At least part of all reports should be reviewed with the aim of improving the quality of reports. CITATION FORMAT: · Maurer MH, Brönnimann M, Schroeder C et al. Time Requirement and Feasibility of a Systematic Quality Peer Review of Reporting in Radiology. Fortschr Röntgenstr 2021; 193: 160-167.
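
The workload arithmetic behind the sensitivity analysis is easy to reproduce. The sketch below recomputes total reviewer hours for different review shares from the 60.4 s mean review time reported above; the annual report volume is a hypothetical placeholder, not a figure from the paper.

```python
# Back-of-the-envelope workload sensitivity analysis.
MEAN_REVIEW_S = 60.4      # average time per RADPEER review (abstract)
ANNUAL_REPORTS = 70_000   # hypothetical annual reporting volume

for share in (0.02, 0.05, 0.10, 1.00):
    hours = ANNUAL_REPORTS * share * MEAN_REVIEW_S / 3600
    print(f"review {share:5.0%} of reports -> {hours:6.0f} h/year")
```

With this assumed volume, reviewing the full year's output lands near the "more than 1200 working hours" figure the abstract reports, while a few-percent sample stays within a feasible budget.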


Subject(s)
Peer Review/methods , Quality Assurance, Health Care/methods , Radiologists/statistics & numerical data , Radiology/statistics & numerical data , Abdominal Cavity/diagnostic imaging , Feasibility Studies , Humans , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/statistics & numerical data , Mammography/methods , Mammography/statistics & numerical data , Radiography/methods , Radiography/statistics & numerical data , Radiology/standards , Research Report , Retrospective Studies , Specialty Boards/standards , Switzerland , Thorax/diagnostic imaging , Time Factors , Tomography, X-Ray Computed/methods , Tomography, X-Ray Computed/statistics & numerical data , Workload
17.
Teach Learn Med ; 33(1): 21-27, 2021.
Article in English | MEDLINE | ID: mdl-32928000

ABSTRACT

Phenomenon: Internal medicine physicians in the United States must pass the American Board of Internal Medicine Internal Medicine Maintenance of Certification (ABIM IM-MOC) examination as part of their ABIM IM-MOC requirements. Many of these physicians prepare with e-learning products such as the ACP's MKSAP, UpToDate, and NEJM Knowledge+, yet the effectiveness of these products remains largely unstudied. Approach: We compared ABIM IM-MOC examination performance between 177 physicians who attempted an ABIM IM-MOC examination between 2014 and 2017 and completed at least 75% of the NEJM Knowledge+ product before the examination and 177 closely matched control physicians who did not use NEJM Knowledge+. Our measures of ABIM IM-MOC examination performance were based, for NEJM Knowledge+ users, on the first attempt immediately following NEJM Knowledge+ use and, for nonusers, on the applicable matched examination performance. The three dichotomous examination performance outcomes measured on the first attempt at the ABIM IM-MOC examination were pass rate, scoring in the upper quartile, and scoring in the lower quartile. Findings: Use of NEJM Knowledge+ was associated with a regression-adjusted 10.6% (5.37% to 15.8%) greater likelihood of passing the MOC examination (p < .001), a 10.7% (2.61% to 18.7%) greater likelihood of an examination score in the top quartile (p = .009), and a 10.8% (4.86% to 16.8%) lower likelihood of a score in the bottom quartile (p < .001), compared with similar physicians who did not use NEJM Knowledge+. Insight: Physicians who used NEJM Knowledge+ had better ABIM IM-MOC examination performance. Further research is needed to determine what aspects of e-learning products best prepare physicians for MOC examinations.
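
Below is a minimal sketch, on synthetic data, of estimating a regression-adjusted difference in pass rates between product users and matched controls, in the spirit of the comparison described above. The covariate, effect sizes, and sample values are assumptions, not the study's data or its exact model.

```python
# Adjusted pass-rate difference on a simulated matched cohort.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 354  # 177 users + 177 matched controls
used_product = np.repeat([1, 0], n // 2)
prior_score = rng.normal(0, 1, n)  # hypothetical baseline covariate

# Simulate pass/fail with a modest benefit for product users.
p_pass = np.clip(0.80 + 0.10 * used_product + 0.05 * prior_score, 0, 1)
passed = rng.binomial(1, p_pass)

# Linear probability model: the coefficient on used_product reads
# directly as the adjusted difference in pass rates.
X = sm.add_constant(np.column_stack([used_product, prior_score]))
fit = sm.OLS(passed, X).fit(cov_type="HC1")  # robust standard errors
print(f"adjusted pass-rate difference: {fit.params[1]:+.3f}")
print("95% CI:", fit.conf_int()[1])
```

A logistic model would be the more conventional choice for a binary outcome; the linear probability model is used here only so the coefficient reads directly as a difference in pass rates.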


Subject(s)
Certification/standards , Clinical Competence/standards , Educational Measurement/statistics & numerical data , Internal Medicine/education , Licensure, Medical/standards , Specialty Boards/standards , Academic Performance , Attitude of Health Personnel , Humans , United States
19.
Plast Reconstr Surg ; 147(1): 231-238, 2021 Jan 01.
Article in English | MEDLINE | ID: mdl-33370071

ABSTRACT

BACKGROUND: Practitioners who are not board certified in plastic surgery but perform cosmetic procedures and advertise as plastic surgeons may adversely affect patients' understanding of their practitioner's medical training, as well as patient safety. The authors aimed to assess (1) the impact of city size and location and (2) the impact of health care transparency acts on the proportion of practitioners certified by the American Board of Plastic Surgery. METHODS: The authors performed a systematic Google search for the term "plastic surgeon [city name]" to simulate a patient's search for online providers. Board certification status was compared across the top hits for each city. Data gathered included city population, regional location, practice setting, and passage of truth-in-advertising laws by state. RESULTS: One thousand six hundred seventy-seven unique practitioners were extracted. Of these, 1289 practitioners (76.9 percent) were American Board of Plastic Surgery-certified plastic surgeons. When comparing states with truth-in-advertising laws and states without such laws, the authors found no significant difference in board-certification rates among "plastic surgery" practitioners (88.9 percent versus 92.0 percent; p = 0.170). There was a significant difference in the share of board-certified "plastic surgeons" versus out-of-scope practitioners on Google search between large, medium, and small cities (100 percent versus 92.9 percent versus 86.5 percent; p < 0.001). CONCLUSIONS: Non-board-certified providers tend to localize to smaller cities. Truth-in-advertising laws have not yet had an impact on the way a number of non-American Board of Plastic Surgery-certified practitioners market themselves. There may be room to expand the scope of truth-in-advertising laws to the online world and to smaller cities.
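
A small sketch of the proportion comparison reported above (88.9% vs. 92.0% board certification in states with vs. without truth-in-advertising laws). The raw counts are reconstructed to match those percentages for illustration; they are not the paper's exact tallies.

```python
# Two-proportion comparison via a chi-square test of independence.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: states with laws / without laws.
# Columns: board-certified / not certified (hypothetical counts).
table = np.array([
    [400, 50],   # 400/450 = 88.9% certified
    [575, 50],   # 575/625 = 92.0% certified
])

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```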


Subject(s)
Advertising/statistics & numerical data , Marketing of Health Services/statistics & numerical data , Specialty Boards/standards , Surgeons/statistics & numerical data , Surgery, Plastic/standards , Advertising/legislation & jurisprudence , Certification/statistics & numerical data , Cities/statistics & numerical data , Computer Simulation , Cosmetic Techniques/statistics & numerical data , Cross-Sectional Studies , Humans , Internet/legislation & jurisprudence , Internet/statistics & numerical data , Marketing of Health Services/legislation & jurisprudence , Patient Safety , Surgeons/legislation & jurisprudence , Surgeons/standards , Surgery, Plastic/statistics & numerical data , United States