Results 1 - 14 of 14
1.
Vital Health Stat 1 ; (203): 1-16, 2023 06.
Article in English | MEDLINE | ID: mdl-37367198

ABSTRACT

As part of modernization efforts, in 2021 the National Ambulatory Medical Care Survey (NAMCS) began collecting electronic health records (EHRs) for ambulatory care visits in its Health Center (HC) Component. As a result, the National Center for Health Statistics (NCHS) needed to adjust the approaches used in the sampling design for the HC Component. This report provides details on these changes to the 2021-2022 NAMCS.


Subject(s)
Electronic Health Records , Health Facilities , Humans , Ambulatory Care , Data Collection/methods , Health Care Surveys , Office Visits , United States
2.
Vital Health Stat 2 ; (175): 1-22, 2017 Aug.
Article in English | MEDLINE | ID: mdl-30248016

ABSTRACT

The National Center for Health Statistics (NCHS) disseminates information on a broad range of health topics through diverse publications. These publications must rely on clear and transparent presentation standards that can be broadly and efficiently applied. Standards are particularly important for large, cross-cutting reports where estimates cannot be individually evaluated and indicators of precision cannot be included alongside the estimates. This report describes the NCHS Data Presentation Standards for Proportions. The multistep NCHS Data Presentation Standards for Proportions are based on a minimum denominator sample size and on the absolute and relative widths of a confidence interval calculated using the Clopper-Pearson method. Proportions (usually multiplied by 100 and expressed as percentages) are the most commonly reported estimates in NCHS reports.
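The Clopper-Pearson interval named in this abstract is the "exact" binomial interval, obtained by inverting the binomial tail probabilities. As a minimal pure-Python sketch (bisection on the exact tails; illustrative only, not NCHS's production code, which would typically use beta quantiles):

```python
from math import comb

def _tail_ge(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def _tail_le(n, k, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

def _bisect(f, target, increasing, tol=1e-10):
    """Solve f(p) = target on (0, 1) for a monotone tail function f."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (f(mid) < target) == increasing:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a proportion k/n."""
    lower = 0.0 if k == 0 else _bisect(lambda p: _tail_ge(n, k, p), alpha / 2, True)
    upper = 1.0 if k == n else _bisect(lambda p: _tail_le(n, k, p), alpha / 2, False)
    return lower, upper
```

The absolute and relative widths of this interval (together with a minimum denominator sample size) are what the multistep NCHS standards evaluate.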


Subject(s)
Health Surveys/standards , Research Design/standards , Statistics as Topic/standards , Confidence Intervals , Data Interpretation, Statistical , Female , Humans , Male , National Center for Health Statistics, U.S. , Reference Standards , Sample Size , United States
3.
Vital Health Stat 2 ; (171): 1-42, 2016 Feb.
Article in English | MEDLINE | ID: mdl-27301078

ABSTRACT

BACKGROUND: The National Ambulatory Medical Care Survey (NAMCS) is an annual, nationally representative sample survey of physicians and of visits to physicians. Two major changes were made to the 2012 NAMCS to support reliable state estimates. The sampling design changed from an area sample to a fivefold-larger list sample of physicians stratified by the nine U.S. Census Bureau divisions and 34 states. At the same time, the data collection mode changed from paper forms to laptop-assisted data collection and from physician or office staff abstraction of medical records to predominantly Census interviewer abstraction using automated Patient Record Forms (PRFs). OBJECTIVES: This report presents an analysis of potential nonresponse bias in 2012 NAMCS estimates of physicians and visits to physicians. This analysis used two sets of physician-based estimates: one measuring the completion of the physician induction interview and another based on completing any PRF. Evaluation of visit response was measured by the percentage of expected PRFs completed. For each type of physician estimate, response was evaluated by (a) comparing percent distributions of respondents and nonrespondents by physician characteristics available for all in-scope sample physicians, (b) comparing response rates by physician characteristics with the national response rate, and (c) analyzing nonresponse bias after adjustments for nonresponse were applied in survey weights. For visit estimates, response was evaluated by (a) comparing the percent distributions of expected visits and completed visits, (b) comparing visit response rates by physician characteristics with the national visit response rate, and (c) analyzing visit-level nonresponse bias after adjustments for nonresponse were applied in visit survey weights. Finally, potential bias in the two physician-level estimates was computed by comparing them with those from an external survey.
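The "adjustments for nonresponse ... applied in survey weights" mentioned above are typically class-based weighting adjustments: within each adjustment cell, respondent weights are inflated so they carry the weight of the cell's nonrespondents. A generic sketch, assuming a simple cell structure (not NAMCS's actual weighting specification):

```python
from collections import defaultdict

def nonresponse_adjust(cases):
    """Class-based nonresponse weighting adjustment.

    cases: list of dicts with keys 'weight', 'cell', 'responded'.
    Returns adjusted weights: respondents in a cell absorb the weight
    of that cell's nonrespondents; nonrespondents get weight 0.
    """
    eligible = defaultdict(float)   # total eligible weight per cell
    resp = defaultdict(float)       # respondent weight per cell
    for c in cases:
        eligible[c["cell"]] += c["weight"]
        if c["responded"]:
            resp[c["cell"]] += c["weight"]
    return [
        c["weight"] * eligible[c["cell"]] / resp[c["cell"]] if c["responded"] else 0.0
        for c in cases
    ]
```

By construction, the adjusted weights preserve each cell's total eligible weight, which is why residual nonresponse bias analysis focuses on what the cells fail to capture.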


Subject(s)
Ambulatory Care/organization & administration , Data Collection/methods , Health Care Surveys/standards , Physicians/statistics & numerical data , Professional Practice/statistics & numerical data , Adult , Electronic Health Records/organization & administration , Electronic Health Records/statistics & numerical data , Female , Humans , Interviews as Topic , Male , Middle Aged , Research Design , Residence Characteristics/statistics & numerical data , Selection Bias , Surveys and Questionnaires , United States
4.
J Off Stat ; 32(1): 147-164, 2016.
Article in English | MEDLINE | ID: mdl-30948863

ABSTRACT

Multiple imputation is a popular approach to handling missing data. Although it was originally motivated by survey nonresponse problems, it has been readily applied to other data settings. However, its general behavior remains unclear when applied to survey data with complex sample designs, including clustering. Recently, Lewis et al. (2014) compared single- and multiple-imputation analyses for certain incomplete variables in the 2008 National Ambulatory Medical Care Survey, which has a nationally representative, multistage, clustered sampling design. Their results suggested that the increase in the variance estimate due to multiple imputation, compared with single imputation, largely disappears for estimates with large design effects. We complement their empirical research by providing some theoretical reasoning. We consider data sampled from an equally weighted, single-stage cluster design and characterize the process using a balanced, one-way normal random-effects model. Assuming that the missingness is completely at random, we derive analytic expressions for the within- and between-multiple-imputation variance estimators for the mean estimator, and thus conveniently reveal the impact of design effects on these variance estimators. We propose approximations for the fraction of missing information in clustered samples, extending previous results for simple random samples. We discuss some generalizations of this research and its practical implications for data release by statistical agencies.
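The within- and between-imputation variance estimators discussed here are combined by Rubin's rules, which underlie both this paper and the Lewis et al. comparison. A compact sketch of the standard combining step:

```python
def rubin_combine(estimates, variances):
    """Combine m completed-data analyses via Rubin's rules.

    estimates: point estimates from each of the m imputed data sets.
    variances: the corresponding completed-data variance estimates.
    Returns (combined estimate, total variance, between, within).
    """
    m = len(estimates)
    qbar = sum(estimates) / m                               # combined estimate
    wbar = sum(variances) / m                               # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)   # between-imputation variance
    t = wbar + (1 + 1 / m) * b                              # total variance
    return qbar, t, b, wbar
```

When design effects are large, the within term dominates, so the (1 + 1/m)·B penalty relative to single imputation becomes negligible, which is the phenomenon the paper explains theoretically.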

5.
Article in English | MEDLINE | ID: mdl-32336961

ABSTRACT

Wait time is the difference between the time a patient arrives in the emergency department (ED) and the time an ED provider examines that patient. This study focuses on the development of a negative binomial model to examine factors associated with ED wait time using the National Hospital Ambulatory Medical Care Survey (NHAMCS). Conducted by the National Center for Health Statistics (NCHS), NHAMCS has been gathering, analyzing, and disseminating information annually about visits made for medical care to hospital outpatient departments and EDs since 1992. To analyze ED wait times, a negative binomial model was fit to the ED visit data using publicly released microdata from the 2009 NHAMCS. In this model, wait time is the dependent variable, while hospital, patient, and visit characteristics are the independent variables. Wait time was collapsed into discrete values representing 15-minute intervals. The findings are presented.
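The dependent variable here is a count (completed 15-minute intervals), modeled with a negative binomial distribution to accommodate overdispersion. A sketch of the interval collapsing and the NB2 probability mass function (Var(Y) = μ + αμ²) that such a model assumes; the parameter values below are illustrative, not estimates from the paper:

```python
from math import exp, lgamma, log

def collapse_wait_time(minutes, width=15):
    """Collapse a wait time in minutes into a count of completed
    15-minute intervals, as the dependent variable is constructed."""
    return int(minutes // width)

def nb2_pmf(y, mu, alpha):
    """NB2 negative binomial mass function with mean mu and
    variance mu + alpha * mu**2 (alpha > 0 captures overdispersion)."""
    r = 1.0 / alpha
    p = r / (r + mu)  # success probability in the classic parameterization
    return exp(lgamma(y + r) - lgamma(r) - lgamma(y + 1)
               + r * log(p) + y * log(1 - p))
```

In a regression setting, μ would be linked to hospital, patient, and visit covariates through a log link; the sketch only shows the distributional assumption.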

6.
Appl Math (Irvine) ; 5: 3421-3430, 2014 Dec.
Article in English | MEDLINE | ID: mdl-27398258

ABSTRACT

How many imputations are sufficient in multiple imputation? The answers given by different researchers vary from as few as 2-3 to as many as hundreds. Perhaps no single number of imputations would fit all situations. In this study, η, the minimally sufficient number of imputations, was determined based on the relationship between m, the number of imputations, and ω, the standard error of imputation variances, using the 2012 National Ambulatory Medical Care Survey (NAMCS) Physician Workflow mail survey. Five variables with various value ranges, variances, and missing-data percentages were tested. For all variables tested, ω decreased as m increased. The m value above which the cost of a further increase in m would outweigh the benefit of reducing ω was recognized as η. This method can potentially be used to determine the η that fits a given data situation.
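The ω-versus-m relationship described above can be illustrated with a small Monte Carlo sketch: the standard error of the total-variance estimate shrinks as m grows. The normal model and the values of w, b, and the seed are illustrative assumptions, not the NAMCS Physician Workflow data:

```python
import random
import statistics

def omega(m, reps=2000, w=1.0, b=0.5, seed=7):
    """Monte Carlo standard error of Rubin's total variance
    T = W + (1 + 1/m) * B, when each imputed estimate is drawn
    N(0, b) and the within-imputation variance is fixed at w."""
    rng = random.Random(seed)
    ts = []
    for _ in range(reps):
        qs = [rng.gauss(0.0, b ** 0.5) for _ in range(m)]
        between = statistics.variance(qs)     # between-imputation variance
        ts.append(w + (1 + 1 / m) * between)  # total variance estimate
    return statistics.stdev(ts)
```

The marginal reduction in ω flattens quickly, which is why the study's η is chosen where further increases in m stop paying for themselves.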

7.
Disasters ; 36(2): 270-90, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21992191

ABSTRACT

The 2005 hurricane season caused extensive damage and induced a mass migration of approximately 1.1 million people from southern Louisiana in the United States. Current and accurate estimates of population size and demographics and an assessment of the critical needs for public services were required to guide recovery efforts. Since forecasts using pre-hurricane data may produce inaccurate estimates of the post-hurricane population, a household survey in 18 hurricane-affected parishes was conducted to provide timely and credible information on the size of these populations, their demographics, and their condition. This paper describes the methods used, the challenges encountered, and the key factors for successful implementation. This post-disaster survey was unique because it identified the needs of the people in the affected parishes and quantified the number of people with these needs. Consequently, this survey established new population and health indicator baselines that otherwise would not have been available to guide the relief and recovery efforts in southern Louisiana.


Subject(s)
Cyclonic Storms , Disaster Planning/methods , Health Surveys , Needs Assessment , Population Dynamics , Humans , Louisiana
8.
Natl Health Stat Report ; (37): 1-14, 2011 Mar 24.
Article in English | MEDLINE | ID: mdl-21476489

ABSTRACT

OBJECTIVE: This report is a summary of hospital preparedness for responding to public health emergencies, including mass casualties and epidemics of naturally occurring diseases such as influenza. METHODS: Data are from an emergency response preparedness supplement to the 2008 National Hospital Ambulatory Medical Care Survey, which uses a national probability sample of nonfederal general and short-stay hospitals in the United States. Sample data were weighted to produce national estimates.


Subject(s)
Disaster Planning , Emergencies , Emergency Service, Hospital/standards , Emergency Treatment/standards , Disease Outbreaks , Health Care Surveys , Humans , Mass Casualty Incidents , Outpatient Clinics, Hospital/standards , Standard of Care , United States
9.
Vital Health Stat 1 ; (53): 1-192, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20737836

ABSTRACT

OBJECTIVES: This methods report provides an overview of the redesigned National Home and Hospice Care Survey (NHHCS) conducted in 2007. NHHCS is a national probability sample survey that collects data on U.S. home health and hospice care agencies, their staffs and services, and the people they serve. The redesigned survey included computerized data collection, expanded survey content, increased sample sizes for current home health care patients and hospice care discharges, and a first-ever supplemental survey, the National Home Health Aide Survey. METHODS: The 2007 NHHCS was conducted between August 2007 and February 2008. NHHCS used a two-stage probability sampling design in which agencies providing home health and/or hospice care were sampled. Then, up to 10 current patients were sampled from each home health care agency, up to 10 discharges from each hospice care agency, and a combination of up to 10 patients/discharges from each agency that provided both home health and hospice care services. In-person interviews were conducted with agency directors and their designated staff; no interviews were conducted directly with patients. The survey instrument contained agency- and person-level modules, sampling modules, and a self-administered staffing questionnaire. RESULTS: Data were collected on 1,036 agencies, 4,683 current home health care patients, and 4,733 hospice care discharges. The first-stage agency response rate, weighted for differential selection probabilities, was 59%. The second-stage patient/discharge weighted response rate was 96%. Three public-use files were released: an agency-level file, a patient/discharge-level file, and a medication file. The files include sampling weights, which are necessary to generate national estimates, and design variables that enable users to calculate accurate standard errors.
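Under the two-stage design described here, a sampled patient's base weight is the inverse of the product of the stage probabilities: the agency's selection probability times the within-agency probability of picking up to 10 patients from the agency's roster. A minimal sketch (illustrative function and parameter names, not the actual NHHCS weighting program):

```python
def two_stage_weight(p_agency, n_sampled, n_on_roster):
    """Base sampling weight under a two-stage design:
    stage 1 selects the agency with probability p_agency;
    stage 2 selects min(n_sampled, n_on_roster) of the agency's
    n_on_roster patients by simple random sampling."""
    p_patient = min(n_sampled, n_on_roster) / n_on_roster
    return 1.0 / (p_agency * p_patient)
```

Final public-use weights would further include nonresponse and other adjustments on top of this base weight.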


Subject(s)
Health Care Surveys/instrumentation , Home Care Agencies/statistics & numerical data , Hospice Care/statistics & numerical data , Research Design , United States
10.
Stat Med ; 26(8): 1788-801, 2007 Apr 15.
Article in English | MEDLINE | ID: mdl-17221832

ABSTRACT

The linked population/establishment survey (LS) of health services utilization is a two-phase sample survey that links the sample designs of the population sample survey (PS) and the health-care provider establishment sample survey (ES) of health services utilization. In Phase I, household respondents in the PS identify their health-care providers during a specified calendar period. In Phase II, health-care providers identified in Phase I report the variables of interest for all or a sample of their transactions with all households during the same calendar period. The LS has been proposed as a potential design alternative to the PS whenever the health-care transactions of interest are hard to find or enumerate in household surveys, and as a potential design alternative to the ES whenever it is infeasible or expensive to construct or maintain complete provider sampling frames that list all health-care providers with good measures of provider size. If non-sampling errors are ignorable, how do the LS, PS, and ES sampling errors compare? This paper addresses that question by summarizing and extending recent research findings that compare expressions of the sampling variance of (1) the LS and PS of equivalent household sample size and (2) the LS and the ES of equivalent expected health-care provider and transaction sample sizes. The paper identifies the parameters contributing to the precision differences and assesses the conditions that favour the LS over one or the other of the component surveys. Published in 2007 by John Wiley & Sons, Ltd.


Subject(s)
Data Interpretation, Statistical , Health Care Surveys/methods , Office Visits/statistics & numerical data , Health Personnel , Humans , Patients
11.
Suicide Life Threat Behav ; 36(2): 192-212, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16704324

ABSTRACT

The absence of validated U.S. rates of nonfatal suicidal behavior leaves risk management and injury prevention programs at risk of being poorly informed and inadequately conceptualized. In this study we compare estimated rates of intentional self-harm from two ongoing surveys (the National Electronic Injury Surveillance System-All Injury Program, NEISS-AIP, and the National Hospital Ambulatory Medical Care Survey, NHAMCS) with data from the Toxic Exposure Surveillance System. Results suggest that, for every suicide in 2002-2003, there were 12 (NEISS-AIP) or 15 (NHAMCS) self-harm-related emergency department visits, and for every intentional self-poisoning death there were 33 intentional overdoses reported to poison control centers, of which two ultimately went untreated.


Subject(s)
Databases as Topic , Self-Injurious Behavior/epidemiology , Self-Injurious Behavior/prevention & control , Drug Overdose , Humans , Intention , Poisoning/epidemiology , United States/epidemiology
12.
Vital Health Stat 2 ; (139): 1-32, 2005 Jun.
Article in English | MEDLINE | ID: mdl-15984725

ABSTRACT

OBJECTIVES: This report describes the effects of form length and item format on respondent cooperation and survey estimates. METHODS: Two formats were used for the Patient Record form for the 2001 NAMCS and the outpatient department (OPD) component of the NHAMCS: a short form with 70 subitems and a long form with 140 subitems. The short form contained many write-in items and fit on a one-sided page. The long form contained more check boxes and other unique items and required a two-sided page. The NAMCS sample of physicians and the NHAMCS sample of hospitals were each randomly divided into two half samples and randomly assigned to either the short or the long form. Unit and item nonresponse rates, as well as survey estimates from the two forms, were compared using SUDAAN software, which takes into account the complex sample design of the surveys. RESULTS: Physician unit response was lower for the long form overall and in certain geographic regions. Overall OPD unit response was not affected by form length, although there were some differences in favor of the long form for some types of hospitals. Although the long form had twice as many check boxes as the short form, there was no difference in the percentage of visits with any diagnostic or screening services ordered or provided. However, visit estimates were usually higher for services collected with long-form check boxes than with (recoded) short-form write-in entries. Finally, the study confirmed the feasibility of collecting certain items found only on the long form. CONCLUSION: Overall, physician cooperation was more sensitive to form length than was OPD cooperation. The quality of the data was not affected by form length. Visit estimates were influenced by both content and item format.


Subject(s)
Ambulatory Care/statistics & numerical data , Forms and Records Control/classification , Health Care Surveys/methods , Office Visits/statistics & numerical data , Outpatient Clinics, Hospital/statistics & numerical data , Surveys and Questionnaires/classification , Attitude of Health Personnel , Cooperative Behavior , Data Collection/methods , Diagnosis-Related Groups , Feasibility Studies , Health Surveys , Humans , Software , United States
13.
Inquiry ; 40(4): 401-15, 2003.
Article in English | MEDLINE | ID: mdl-15055838

ABSTRACT

Until recently, sample design information needed to correctly estimate standard errors from the National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS) public use files was not released for confidentiality reasons. In 2002, masked sample design variables were released for the first time with the 1995-2000 NAMCS and NHAMCS public use files. This paper shows how to use masked design variables to compute standard errors in three software applications. It also discusses when masking overstates or understates "in-house" standard errors, and how masking affects the significance levels of point estimates and logistic regression parameters.
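The masked design variables released with these files (stratum and PSU codes) feed a standard with-replacement Taylor-series ("ultimate cluster") variance formula: PSU-level weighted totals are contrasted within strata. A hedged sketch for the standard error of a weighted total, under assumed record layout, not any particular package's implementation:

```python
from collections import defaultdict

def stratified_total_se(records):
    """Ultimate-cluster (with-replacement Taylor) standard error of a
    weighted total. records: iterable of (stratum, psu, weight, y)."""
    # Aggregate weighted totals within each (stratum, PSU) cell.
    psu_tot = defaultdict(float)
    for stratum, psu, weight, y in records:
        psu_tot[(stratum, psu)] += weight * y
    # Group PSU totals by stratum.
    strata = defaultdict(list)
    for (stratum, _), total in psu_tot.items():
        strata[stratum].append(total)
    # Sum the between-PSU variation within each stratum.
    var = 0.0
    for totals in strata.values():
        n = len(totals)
        mean = sum(totals) / n
        var += n / (n - 1) * sum((t - mean) ** 2 for t in totals)
    return var ** 0.5
```

Because the masked codes collapse or swap some true design cells, the resulting standard errors can be larger or smaller than "in-house" values computed from the unmasked design, which is the over/understatement the paper discusses.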


Subject(s)
Ambulatory Care/statistics & numerical data , Bias , Emergency Service, Hospital/statistics & numerical data , Health Care Surveys/methods , Health Care Surveys/statistics & numerical data , Office Visits/statistics & numerical data , Outpatient Clinics, Hospital/statistics & numerical data , Adolescent , Adult , Aged , Child , Child, Preschool , Cluster Analysis , Data Collection/methods , Database Management Systems , Female , Humans , Infant , Infant, Newborn , Logistic Models , Male , Middle Aged , Models, Statistical , National Center for Health Statistics, U.S. , Probability , United States
14.
Public Health Rep ; 117(4): 393-407, 2002.
Article in English | MEDLINE | ID: mdl-12477922

ABSTRACT

OBJECTIVES: When a single survey does not cover a domain of interest, estimates from two or more complementary surveys can be combined to extend coverage. The purposes of this article are to discuss and demonstrate the benefits of combining estimates from complementary surveys and to provide a catalog of the analytic issues involved. METHODS: The authors present a case study in which data from the National Health Interview Survey and the National Nursing Home Survey were combined to obtain prevalence estimates for several chronic health conditions for the years 1985, 1995, and 1997. The combined prevalences were estimated by ratio estimation, and the associated variances were estimated by Taylor linearization. The survey weights, stratification, and clustering were reflected in the estimation procedures. RESULTS: In the case study, for the age group of 65 and older, the combined prevalence estimates for households and nursing homes are close to those for households alone. For the age group of 85 and older, however, the combined estimates are sometimes substantially different from the household estimates. Such differences are seen both for estimates within a single year and for estimates of trends across years. CONCLUSIONS: Several general issues regarding comparability arise when there is a goal of combining complementary survey data. As illustrated by this case study, combining estimates can be very useful for improving coverage and avoiding misleading conclusions.
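The ratio estimation used in the case study amounts to pooling weighted case totals and weighted population totals across the two frames before dividing. A toy sketch with made-up numbers (the actual estimates also require Taylor-linearized variances and careful weight alignment):

```python
def combined_prevalence(cases_hh, pop_hh, cases_nh, pop_nh):
    """Ratio-estimated prevalence combining a household survey and a
    nursing home survey: total weighted cases over total weighted
    population across both frames."""
    return (cases_hh + cases_nh) / (pop_hh + pop_nh)
```

For the oldest age groups, where the institutionalized share of the population is large, the combined estimate can differ substantially from the household-only ratio, which is the pattern the case study reports for ages 85 and over.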


Subject(s)
Chronic Disease/epidemiology , Health Surveys , Nursing Homes/statistics & numerical data , Public Health Informatics , Aged , Aged, 80 and over , Arthritis/epidemiology , Breast Neoplasms/epidemiology , Cerebrovascular Disorders/epidemiology , Chronic Disease/classification , Cross-Sectional Studies , Diabetes Mellitus/epidemiology , Family Characteristics , Female , Humans , Hypertension/epidemiology , Male , Myocardial Ischemia/epidemiology , National Center for Health Statistics, U.S. , Organizational Case Studies , Prevalence , Systems Integration , United States/epidemiology