Results 1 - 20 of 44
1.
J Occup Environ Hyg ; 21(1): 47-57, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37874933

ABSTRACT

The NIOSH Pocket Guide to Chemical Hazards is a trusted resource that displays key information for a collection of chemicals commonly encountered in the workplace. Entries contain chemical structures and occupational exposure limit information ranging from limits based on full-shift time-weighted averages to acute limits such as short-term exposure limits and immediately dangerous to life or health values, as well as a variety of other data such as chemical-physical properties and symptoms of exposure. The NIOSH Pocket Guide (NPG) is available as a printed, hardcopy book, a PDF version, an electronic database, and a downloadable application for mobile phones. All formats of the NPG allow users to access the data for each chemical separately; however, the guide does not support data analytics or visualization across chemicals. This project reformatted existing data in the NPG to make it searchable and amenable to exploration and analysis in a web application. The resulting application allows users to investigate relationships between occupational exposure limits, examine their range and distribution, sort chemicals by health endpoint, and summarize information of particular interest; these tasks would previously have required manual extraction and analysis of the data. The usability of the application was evaluated among industrial hygienists and researchers; while the existing application seems most relevant to researchers, the open-source code and data are amenable to modification by users for further customization.


Subject(s)
Occupational Exposure , United States , National Institute for Occupational Safety and Health, U.S. , Occupational Exposure/analysis , Threshold Limit Values , Workplace
2.
Environ Toxicol Chem ; 42(7): 1614-1623, 2023 07.
Article in English | MEDLINE | ID: mdl-37014189

ABSTRACT

In aquatic toxicology experiments, organisms are randomly assigned to an exposure group that receives a particular concentration of a toxicant (including a control group with no exposure), and their survival, growth, or reproduction outcomes are recorded. Standard experiments use equal numbers of organisms in each exposure group. In the present study, we explored the potential benefits of modifying the current design of aquatic toxicology experiments when it is of interest to estimate the concentration associated with a specific level of decrease from control reproduction responses. A function of the parameter estimates from a generalized linear regression model describing the relationship between individual responses and toxicant concentration provides an estimate of the potency of the toxicant. After comparing different allocations of organisms to concentration groups, we observed that reallocating organisms among these groups could provide more precise estimates of toxicity endpoints than the standard design that uses equal numbers of organisms in each group, with no added cost of conducting the experiment. More specifically, assigning more observations to the zero-concentration control condition may result in more precise interval estimates of potency. Environ Toxicol Chem 2023;42:1614-1623. © 2023 SETAC.
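As a sketch of the kind of design comparison described above (not the authors' code), candidate allocations can be ranked by the asymptotic variance of a coefficient in a Poisson log-linear model, computed from the inverse Fisher information; the concentrations, coefficients, and allocations below are all illustrative assumptions:

```python
import math

def var_slope(alloc, concs, b0=3.0, b1=-0.5):
    """Asymptotic variance of the slope estimate in a Poisson
    log-linear model mu = exp(b0 + b1*c), taken from the inverse
    Fisher information at the assumed true coefficients."""
    i00 = i01 = i11 = 0.0
    for n, c in zip(alloc, concs):
        mu = math.exp(b0 + b1 * c)
        i00 += n * mu          # Fisher info entries: sum n*mu*[1, c; c, c^2]
        i01 += n * mu * c
        i11 += n * mu * c * c
    det = i00 * i11 - i01 * i01
    return i00 / det           # (2,2) element of the 2x2 inverse

concs = [0.0, 1.0, 2.0, 4.0, 8.0]
equal = [12, 12, 12, 12, 12]   # standard design: same count in every group
heavy = [28, 8, 8, 8, 8]       # same total organisms, control-heavy
v_equal = var_slope(equal, concs)
v_heavy = var_slope(heavy, concs)
```

Note that which allocation wins depends on the assumed model and on the estimand: the paper's reported benefit concerns interval estimates of potency, not the raw slope variance computed in this sketch.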


Subject(s)
Cladocera , Water Pollutants, Chemical , Animals , Cladocera/physiology , Reproduction , Linear Models , Water Pollutants, Chemical/toxicity
3.
Comput Toxicol ; 25, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36909352

ABSTRACT

The need to analyze the complex relationships observed in high-throughput toxicogenomic and other omic platforms has resulted in an explosion of methodological advances in computational toxicology. However, advancements in the literature often outpace the development of software researchers can implement in their pipelines, and existing software is frequently based on pre-specified workflows built from well-vetted assumptions that may not be optimal for novel research questions. Accordingly, there is a need for a stable platform and open-source codebase attached to a programming language that allows users to program new algorithms. To fill this gap, the Biostatistics and Computational Biology Branch of the National Institute of Environmental Health Sciences, in cooperation with the National Toxicology Program (NTP) and US Environmental Protection Agency (EPA), developed ToxicR, an open-source R programming package. The ToxicR platform implements many of the standard analyses used by the NTP and EPA, including dose-response analyses for continuous and dichotomous data that employ Bayesian, maximum likelihood, and model averaging methods, as well as many standard tests the NTP uses in rodent toxicology and carcinogenicity studies, such as the poly-K and Jonckheere trend tests. ToxicR is built on the same codebase as current versions of the EPA's Benchmark Dose software and NTP's BMDExpress software but has increased flexibility because it directly accesses this software. To demonstrate ToxicR, we developed a custom workflow to illustrate its capabilities for analyzing toxicogenomic data. The unique features of ToxicR will allow researchers in other fields to add modules, increasing its functionality in the future.
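The Jonckheere trend test mentioned above can be sketched in a few lines; this is a generic illustration of the statistic with its normal approximation (ties counted as 1/2), not ToxicR's implementation, and the dose-group data are invented:

```python
import math

def jonckheere(groups):
    """Jonckheere-Terpstra test for a monotone trend across ordered
    groups: count concordant pairs between every pair of groups,
    then standardize with the no-ties normal approximation."""
    J = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for x in groups[i]:
                for y in groups[j]:
                    J += 1.0 if y > x else (0.5 if y == x else 0.0)
    n = [len(g) for g in groups]
    N = sum(n)
    mean = (N * N - sum(k * k for k in n)) / 4.0
    var = (N * N * (2 * N + 3) - sum(k * k * (2 * k + 3) for k in n)) / 72.0
    return J, (J - mean) / math.sqrt(var)

# ordered dose groups showing an increasing response
groups = [[10, 12, 11], [13, 14, 12], [16, 15, 17]]
J, z = jonckheere(groups)
```

A large positive z supports a monotone increase in response with dose.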

4.
Environ Epidemiol ; 5(2): e144, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33870016

ABSTRACT

Despite the precipitous decline of airborne lead concentrations following the removal of lead from gasoline, lead is still detectable in ambient air in most urban areas. Few studies, however, have examined the health effects of contemporary airborne lead concentrations in children. METHODS: We estimated monthly air lead exposure among 263 children (Cincinnati Childhood Allergy and Air Pollution Study; Cincinnati, OH; 2001-2005) using temporally scaled predictions from a validated land use model and assessed neurobehavioral outcomes at age 12 years using the parent-completed Behavioral Assessment System for Children, 2nd edition. We used distributed lag models to estimate the effect of airborne lead exposure on behavioral outcomes while adjusting for potential confounding by maternal education, community-level deprivation, blood lead concentrations, greenspace, and traffic-related air pollution. RESULTS: We identified sensitive windows during mid- and late childhood for increased anxiety and atypicality scores, whereas sensitive windows for increased aggression and attention problems were identified immediately following birth. The strongest effect was at age 12 years, when a 1 ng/m3 increase in airborne lead exposure was associated with a 3.1-point (95% confidence interval: 0.4, 5.7) increase in anxiety scores. No sensitive windows were identified for depression, somatization, conduct problems, hyperactivity, or withdrawal behaviors. CONCLUSIONS: We observed associations between exposure to airborne lead and poorer behavioral outcomes at concentrations 10 times lower than the National Ambient Air Quality Standards set by the US Environmental Protection Agency.

5.
Environ Toxicol Chem ; 37(6): 1565-1578, 2018 06.
Article in English | MEDLINE | ID: mdl-29350430

ABSTRACT

The fish acute toxicity test method is foundational to aquatic toxicity testing strategies, yet the literature lacks a concise sample size assessment. Although various sources address sample size, historical precedent seems to play a larger role than objective measures. We present a novel and comprehensive quantification of the effect of sample size on estimation of the median lethal concentration (LC50), covering a wide range of scenarios. The results put into perspective the practical differences across a range of sample sizes, from n = 5/concentration up to n = 23/concentration. We also provide a framework for setting sample size guidance illustrating ways to quantify the performance of LC50 estimation, which can be used to set sample size guidance given reasonably difficult (or worst-case) scenarios. There is a clear benefit to larger sample size studies: they reduce error in the determination of LC50s, and lead to more robust safe environmental concentration determinations, particularly in cases likely to be called worst-case (shallow slope and true LC50 near the edges of the concentration range). Given that the use of well-justified sample sizes is crucial to reducing uncertainty in toxicity estimates, these results lead us to recommend a reconsideration of the current de minimis 7/concentration sample size for critical studies (e.g., studies needed for a chemical registration, which are being tested for the first time, or involving difficult test substances). Environ Toxicol Chem 2018;37:1565-1578. © 2018 SETAC.


Subject(s)
Fishes , Toxicity Tests, Acute/methods , Animals , Sample Size , Water Pollutants, Chemical/toxicity
6.
Risk Anal ; 37(11): 2107-2118, 2017 11.
Article in English | MEDLINE | ID: mdl-28555874

ABSTRACT

Quantitative risk assessment often begins with an estimate of the exposure or dose associated with a particular risk level, from which exposure levels posing low risk to populations can be extrapolated. For continuous exposures, this value, the benchmark dose, is often defined by a specified increase (or decrease) from the median or mean response at no exposure. This method of calculating the benchmark dose does not take the response distribution into account and, consequently, cannot be interpreted in terms of probability statements about the target population. We investigate quantile regression as an alternative to mean or median regression. By defining the dose-response quantile relationship and an impairment threshold, we specify a benchmark dose as the dose associated with a specified probability that the population will have a response equal to or more extreme than the specified impairment threshold. In addition, in an effort to minimize model uncertainty, we use Bayesian monotonic semiparametric regression to define the exposure-response quantile relationship, which gives the model flexibility in estimating the quantile dose-response function. We describe this methodology and apply it to both epidemiology and toxicology data.
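A worked special case of this benchmark-dose definition: if the response at dose d is Normal with a linearly decreasing mean (a fully parametric stand-in for the paper's semiparametric quantile fit), the dose at which P(response <= threshold) reaches a target probability has a closed form. All numbers here are illustrative:

```python
from statistics import NormalDist

def bmd_exceedance(m0, slope, sigma, threshold, target_prob):
    """Dose d at which P(Y <= threshold) = target_prob when
    Y ~ Normal(m0 - slope*d, sigma): solve
    Phi((threshold - (m0 - slope*d)) / sigma) = target_prob for d."""
    z = NormalDist().inv_cdf(target_prob)
    return (m0 - threshold + sigma * z) / slope

# 10% of the population at or below an impairment threshold of 80
bmd = bmd_exceedance(m0=100.0, slope=2.0, sigma=10.0,
                     threshold=80.0, target_prob=0.10)
```

Unlike a shift-from-the-control-mean definition, this benchmark dose is a direct probability statement about the exposed population.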

7.
Environ Toxicol Chem ; 33(10): 2399-406, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24943385

ABSTRACT

Although many experiments in environmental toxicology use standard statistical experimental designs, there are situations that arise where no such standard design is natural or applicable because of logistical constraints. For example, the layout of a laboratory may suggest that each shelf serve as a block, with the number of experimental units per shelf either greater than or less than the number of treatments in a way that precludes the use of a typical block design. In such cases, an effective and powerful alternative is to employ optimal experimental design principles, a strategy that produces designs with precise statistical estimates. Here, a D-optimal design was generated for an experiment in environmental toxicology that has 2 factors, 16 treatments, and constraints similar to those described above. After initial consideration of a randomized complete block design and an intuitive cyclic design, it was decided to compare a D-optimal design and a slightly more complicated version of the cyclic design. Simulations were conducted generating random responses under a variety of scenarios that reflect conditions motivated by a similar toxicology study, and the designs were evaluated via D-efficiency as well as by a power analysis. The cyclic design performed well compared to the D-optimal design.
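For readers unfamiliar with D-efficiency, a minimal sketch: two candidate designs for the same model are compared via (det M1 / det M2)^(1/p), where M = X'X is the information matrix and p the number of model parameters. The two-factor main-effects designs below are illustrative, not the paper's 16-treatment layout:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def info_matrix(design):
    """X'X for the main-effects model y = b0 + b1*x1 + b2*x2."""
    rows = [[1.0, x1, x2] for x1, x2 in design]
    return [[sum(r[i] * r[j] for r in rows) for j in range(3)]
            for i in range(3)]

full = [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # 2^2 factorial (D-optimal here)
poor = [(-1, -1), (-1, 1), (1, 1), (1, 1)]   # unbalanced alternative
d_eff = (det3(info_matrix(poor)) / det3(info_matrix(full))) ** (1 / 3)
```

Here `d_eff` is about 0.79, i.e. the unbalanced design carries roughly 79% of the per-run information of the factorial.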


Subject(s)
Ecotoxicology/methods , Research Design , Toxicity Tests/methods , Animals , Computer Simulation , Ecotoxicology/statistics & numerical data , Environmental Pollutants/toxicity , Laboratories , Models, Biological , Models, Statistical , Pesticides/toxicity , Research Design/statistics & numerical data , Toxicity Tests/statistics & numerical data
8.
Regul Toxicol Pharmacol ; 67(1): 75-82, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23831127

ABSTRACT

Experiments with relatively high doses are often used to predict risks at appreciably lower doses. A point of departure (PoD) can be calculated as the dose associated with a specified moderate response level that is often within the range of experimental doses considered. A linear extrapolation to lower doses often follows. An alternative to the PoD method is to develop a model that accounts for the model uncertainty in the dose-response relationship and to use this model to estimate the risk at low doses. Two such approaches that account for model uncertainty are model averaging (MA) and semi-parametric methods. We use these methods, along with the PoD approach, in the context of a large animal bioassay (40,000+ animals) that exhibited sub-linearity. When models are fit to high-dose data and risks at low doses are predicted, the methods that account for model uncertainty produce dose estimates associated with an excess risk that are closer to the observed risk than the PoD linearization. This comparison provides empirical support to accompany previous simulation studies suggesting that methods that incorporate model uncertainty provide viable, and arguably preferred, alternatives to linear extrapolation from a PoD.
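To make the PoD linearization concrete, a sketch with an assumed sublinear (pure quadratic) multistage-type model: the straight line drawn from the origin to the dose at 10% extra risk (BMD10) overstates the model's own risk at doses far below the PoD. The coefficient and dose are invented for illustration:

```python
import math

def bmd_quadratic(b, bmr=0.10):
    """Benchmark dose at extra risk `bmr` for the sublinear model
    R(d) = 1 - exp(-b * d^2): solve 1 - exp(-b*d^2) = bmr."""
    return math.sqrt(-math.log(1.0 - bmr) / b)

b = 0.02                           # illustrative model coefficient
pod = bmd_quadratic(b)             # point of departure at 10% extra risk
low = pod / 100.0                  # a dose 100-fold below the PoD

risk_model = 1.0 - math.exp(-b * low * low)   # the model's own low-dose risk
risk_linear = 0.10 * low / pod                # line from (0, 0) to (pod, 0.10)
```

Under this sublinear model the linear extrapolation is roughly 95-fold higher than the model's predicted risk at `low`, which is the gap that model-uncertainty methods aim to quantify rather than assume away.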


Subject(s)
Models, Biological , Uncertainty , Animals , Benchmarking , Dose-Response Relationship, Drug , Risk Assessment
9.
Cyberpsychol Behav Soc Netw ; 16(5): 370-7, 2013 May.
Article in English | MEDLINE | ID: mdl-23530546

ABSTRACT

The impact of exchanges and the client-therapist alliance in online text therapy were compared to previously published results for face-to-face therapy, and the moderating effects of four participant factors found significant in previously published face-to-face studies were investigated using mixed-effects modeling. Therapists (N=30) and clients (N=30) engaged in online therapy were recruited from private practitioner sites, e-clinics, online counseling centers, and mental-health-related discussion boards. In a naturalistic design, they each visited an online site weekly and completed the standard impact and alliance questionnaires for at least 6 weeks. Results indicated that the impact of exchanges and the client-therapist alliance in text therapy were similar to, but in some respects more positive than, previous evaluations of face-to-face therapy. The significance of participant factors previously found to influence impact and alliance in face-to-face therapy (client symptom severity, social support, therapist theoretical orientation, and therapist experience) was not replicated, except that therapists with more symptomatic clients rated their text exchanges as less smooth and comfortable. Although its small size and naturalistic design impose limitations on sensitivity and generalizability, this study provides some insights into treatment impact and the alliance in online therapy.


Subject(s)
Professional-Patient Relations , Psychotherapy/methods , Text Messaging , Therapy, Computer-Assisted/methods , Adult , Cooperative Behavior , Counseling , Female , Humans , Male , Middle Aged , Research Design , Social Support , Surveys and Questionnaires , Young Adult
10.
Environ Toxicol Chem ; 31(8): 1920-30, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22605507

ABSTRACT

Chemicals in aquatic systems may impact a variety of endpoints including mortality, growth, or reproduction. Clearly, growth or reproduction will only be observed in organisms that survive. Because it is common to observe mortality in studies focusing on the reproduction of organisms, especially at higher concentrations, the resulting observed numbers of young become a mixture of zeroes and positive counts. Zeroes are recorded for organisms that die before having any young and for living organisms with no offspring. Positive counts are recorded for living organisms with offspring. Thus, responses reflect both the fecundity and the mortality of the organisms used in such tests. In the present study, the authors propose estimating the concentration associated with a specified level of reproductive inhibition (RIp) using a Bayesian zero-inflated Poisson (ZIP) regression model. This approach allows prior information and expert knowledge about the model parameters to be incorporated into the regression coefficients or the RIp estimate. Simulation studies were conducted to compare the Bayesian ZIP regression model with classical methods. The Bayesian estimator outperforms the frequentist alternative, producing more precise point estimates with smaller mean square differences between RIp estimates and true values, and narrower interval estimates with better coverage probabilities. The authors also applied the proposed model to a study of Ceriodaphnia dubia exposed to a test toxicant.
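The ZIP mixture described above, a point mass at zero for organisms with no young mixed with a Poisson count for the rest, can be sketched as follows (parameter values are illustrative, and this is the likelihood only, not the Bayesian regression fit):

```python
import math

def zip_pmf(k, p_zero, mu):
    """P(Y = k) for a zero-inflated Poisson: extra mass at zero with
    probability p_zero (e.g. death before reproducing), otherwise
    Poisson(mu) counts of young."""
    pois = math.exp(-mu) * mu ** k / math.factorial(k)
    return p_zero + (1 - p_zero) * pois if k == 0 else (1 - p_zero) * pois

p_zero, mu = 0.2, 5.0
mean = (1 - p_zero) * mu                         # E[Y] = (1 - p_zero) * mu
total = sum(zip_pmf(k, p_zero, mu) for k in range(60))
```

The zero inflation is visible in the pmf: P(Y = 0) far exceeds the plain Poisson zero probability, which is exactly the pattern the abstract describes in reproduction counts.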


Subject(s)
Bayes Theorem , Cladocera/drug effects , Cladocera/physiology , Models, Statistical , Water Pollutants, Chemical/toxicity , Animals , Fertility/drug effects , Models, Biological , Poisson Distribution , Regression Analysis , Reproduction/drug effects , Survival Rate
11.
Gerontologist ; 52(6): 822-32, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22437329

ABSTRACT

PURPOSE OF THE STUDY: A novel logistic regression tree-based method was applied to identify fall risk factors and possible interaction effects of those risk factors. DESIGN AND METHODS: A nationally representative sample of American older adults aged 65 years and older (N = 9,592) in the Health and Retirement Study 2004 and 2006 modules was used. Logistic Tree with Unbiased Selection, a computer algorithm for tree-based modeling, recursively split the entire group in the data set into mutually exclusive subgroups and fit a logistic regression model in each subgroup to generate an easily interpreted tree diagram. RESULTS: A subgroup of older adults with a fall history and either no activities of daily living (ADL) limitation and at least one instrumental activity of daily living or at least one ADL limitation was classified as at high risk of falling. Additionally, within each identified subgroup, the best predictor of falls varied over subgroups and was also evaluated. IMPLICATIONS: Application of tree-based methods may provide useful information for intervention program design and resource allocation planning targeting subpopulations of older adults at risk of falls.


Subject(s)
Accidental Falls/statistics & numerical data , Geriatric Assessment/methods , Logistic Models , Risk Assessment/methods , Accidental Falls/prevention & control , Activities of Daily Living , Aged , Aged, 80 and over , Decision Trees , Female , Health Surveys , Humans , Male , Predictive Value of Tests , Residence Characteristics , Socioeconomic Factors , United States
12.
Risk Anal ; 32(7): 1207-18, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22385024

ABSTRACT

Quantitative risk assessment proceeds by first estimating a dose-response model and then inverting this model to estimate the dose that corresponds to some prespecified level of response. The parametric form of the dose-response model often plays a large role in determining this dose. Consequently, the choice of the proper model is a major source of uncertainty when estimating such endpoints. While methods exist that attempt to incorporate the uncertainty by forming an estimate based upon all models considered, such methods may fail when the true model is on the edge of the space of models considered and cannot be formed from a weighted sum of constituent models. We propose a semiparametric model for dose-response data as well as deriving a dose estimate associated with a particular response. In this model formulation, the only restriction on the model form is that it is monotonic. We use this model to estimate the dose-response curve from a long-term cancer bioassay, as well as compare this to methods currently used to account for model uncertainty. A small simulation study is conducted showing that the method is superior to model averaging when estimating exposure that arises from a quantal-linear dose-response mechanism, and is similar to these methods when investigating nonlinear dose-response patterns.
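A much simpler cousin of the monotonicity-only idea is isotonic regression via the pool-adjacent-violators algorithm (PAVA); this sketch is not the paper's semiparametric method, but it shows what a fit constrained only to be monotone looks like, with invented dose-ordered means:

```python
def pava(y):
    """Pool-adjacent-violators: the least-squares fit to y that is
    nondecreasing in the index (equal weights)."""
    out = []                               # blocks of [mean, weight]
    for v in y:
        out.append([v, 1])
        # merge backwards while adjacent block means violate monotonicity
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, w2 = out.pop()
            m1, w1 = out.pop()
            out.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    fit = []
    for m, w in out:
        fit.extend([m] * w)
    return fit

# mean response by increasing dose, with two non-monotone dips
fit = pava([0.05, 0.04, 0.11, 0.10, 0.24])
```

Violating neighbors are pooled into flat segments, so the fitted curve is a step function that never decreases with dose.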


Subject(s)
Bayes Theorem , Dose-Response Relationship, Drug , Models, Statistical , Risk Assessment/methods , Animals , Computer Simulation , Hydrocarbons, Brominated/toxicity , Lung Neoplasms/chemically induced , Rats
13.
Environ Toxicol Chem ; 31(4): 916-27, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22431139

ABSTRACT

Effectively and accurately assessing the toxicity of chemicals and their impact on the environment continues to be an important concern in ecotoxicology. Single experiments conducted by a particular laboratory commonly serve as the basis of toxicity risk assessment. These laboratories often have a long history of conducting experiments using particular protocols. In the present study, a Bayesian analysis for estimating potency based on a single experiment was formulated, which then served as the basis for incorporating the experimental information from historical controls. A Bayesian hierarchical model was developed to estimate the relative inhibition concentrations (RIp) of a toxicant and flexible ways of using historical control information were suggested. The methods were illustrated using a data set produced by the test for reproduction in Ceriodaphnia dubia in which the number of young produced over three broods was recorded. In addition, simulation studies were included to compare the Bayesian methods with previously proposed estimators of potency. The Bayesian methods gave more precise RIp estimates with smaller variation and nominal coverage probability offsetting a small negative bias in the point estimate. Incorporating historical control information in the Bayesian hierarchical model effectively uses the useful information from past similar experiments when estimating the RIp, and results in potency estimates that are more precise compared to frequentist methods.
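One simple (non-hierarchical) way to borrow strength from historical controls, shown here only to illustrate the idea rather than the paper's hierarchical model, is a conjugate gamma-Poisson update in which down-weighted historical control counts form the prior for the current control mean; every count and the weight below are invented:

```python
# Historical control data: total young over all broods, number of organisms
hist_young, hist_n = 580, 40      # historical mean: 14.5 young per organism
weight = 0.5                      # down-weight history (power-prior idea)

# Gamma(a0, b0) prior for the Poisson mean young per control organism
a0, b0 = weight * hist_young, weight * hist_n

# Current experiment's control group (mean 14.8 young per organism)
cur_young, cur_n = 148, 10

# Conjugate Gamma posterior for a Poisson mean
a_post, b_post = a0 + cur_young, b0 + cur_n
post_mean = a_post / b_post       # pulled between history and current data
```

The posterior mean lands between the historical and current control means, with the weight controlling how strongly past experiments inform the estimate.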


Subject(s)
Bayes Theorem , Cladocera/drug effects , Models, Theoretical , Toxicity Tests/methods , Animals , Computer Simulation , Ecotoxicology/methods , Reproduction/drug effects , Risk Assessment , Software
14.
Environ Toxicol Chem ; 31(2): 370-6, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22095530

ABSTRACT

The fish toxicity assay most commonly used to establish chronic effects is the Organisation for Economic Co-operation and Development (OECD) 210, fish early-life stage test. However, the authors are not aware of any systematic analysis of the experimental design or statistical characteristics of the test since the test guideline was adopted nearly 20 years ago. Here, the authors report the results of an analysis of data compiled from OECD 210 tests conducted by industry labs. The distribution of responses observed in control treatments was analyzed, with the goal of understanding the implication of this variability on the sensitivity of the OECD 210 test guideline and providing recommendations on revised experimental design requirements of the test. Studies were confined to fathead minnows, rainbow trout, and zebrafish. Dichotomous endpoints (hatching success and posthatch survival) were examined for indications of overdispersion to evaluate whether significant chamber-to-chamber variability was present. Dichotomous and continuous (length, wet wt, dry wt) measurement endpoints were analyzed to determine minimum sample size requirements to detect differences from control responses with specified power. Results of the analysis indicated that sensitivity of the test could be improved by maximizing the number of replicate chambers per treatment concentration, increasing the acceptable level of control hatching success and larval survival compared to current levels, using wet weight measurements rather than dry weight, and focusing test efforts on species that demonstrate less variability in outcome measures. From these analyses, the authors provide evidence of the impact of expected levels of variability on the sensitivity of traditional OECD 210 studies and the implications for defining a target for future animal alternative methods for chronic toxicity testing in fish.
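Minimum-sample-size calculations of the kind referenced above often use the standard two-proportion power formula; a sketch with illustrative survival rates (not the paper's compiled industry data):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate n per group to detect proportions p1 vs p2 with a
    two-sided normal-approximation test of two proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# e.g. detecting a drop in posthatch survival from 95% to 80%
n = n_per_group(0.95, 0.80)
```

Shrinking the detectable difference (say 95% vs 90%) inflates the required n sharply, which is why control variability drives the sensitivity of the test design.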


Subject(s)
Fishes/physiology , Toxicity Tests, Chronic/standards , Water Pollutants, Chemical/toxicity , Animals , Larva/drug effects , Larva/physiology , Sample Size , Toxicity Tests, Chronic/methods , Water Pollutants, Chemical/standards
15.
J Aging Health ; 23(4): 682-703, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21183726

ABSTRACT

OBJECTIVE: This study identifies fall risk factors in an understudied population of older people who receive community-based care services. METHOD: Data were collected from enrollees of Ohio's Medicaid home- and community-based waiver program (preadmission screening system providing options and resources today [PASSPORT]). A total of 23,182 participants receiving PASSPORT services in 2005/2006 were classified as fallers and nonfallers, and a variety of risk factors for falling were analyzed using logistic regression. RESULTS: The following were identified as risk factors for falling: previous fall history, older age, White race, incontinence, higher number of medications, fewer activity of daily living limitations, unsteady gait, tremor, grasping strength, and absence of supervision. DISCUSSION: Identifying risk factors for participants in a Medicaid home- and community-based waiver program is useful for fall risk assessment, but it would be most helpful if community-based care service programs incorporated measurements of known fall risk factors into their regular data collection, if not already included.


Subject(s)
Accidental Falls/statistics & numerical data , Community Health Services/statistics & numerical data , Home Care Services/statistics & numerical data , Medicaid/statistics & numerical data , Activities of Daily Living , Age Factors , Aged , Aged, 80 and over , Confidence Intervals , Female , Gait , Humans , Logistic Models , Male , Middle Aged , Ohio , ROC Curve , Risk Factors , Sensitivity and Specificity , United States
16.
Environ Toxicol Chem ; 29(1): 212-9, 2010 Jan.
Article in English | MEDLINE | ID: mdl-20821437

ABSTRACT

Endpoints in aquatic toxicity tests can be measured on a variety of scales, including dichotomous (survival), continuous (growth), and count (number of young). A distribution is assumed for an endpoint and analyses proceed accordingly. In certain situations the assumed distribution may be incorrect, and this may lead to incorrect statistical inference. The present study considers the analysis of count effects, here motivated by the Ceriodaphnia dubia reproduction study. While the Poisson probability model is a common starting point, this distribution assumes that the mean and variance are the same. This will not be the case if there is some extraneous source of variability in the system; in that case the variability may exceed the mean. A computer simulation study was used to examine the impact of overdispersion or outliers on the analysis of count data. Methods that assumed Poisson or negative binomially distributed outcomes were compared to methods that accommodate this potential overdispersion using quasi-likelihood (QL) or generalized linear mixed models (GLMM). If the data were truly Poisson, the adjusted methods still performed at nominal type I error rates. With overdispersed counts, the Poisson-based methods produced rejection rates that exceeded nominal levels and standard errors for regression coefficients that were too small. The negative binomial methods worked best when the data were, in fact, negative binomial but did not maintain nominal characteristics in other situations. In general, the QL and GLMM methods performed reasonably well in the present study, although all procedures suffered some impact in the presence of potential outliers. In particular, QL is arguably preferred because it makes fewer assumptions than the GLMM and performed well over the range of conditions considered.
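The QL adjustment amounts to estimating a dispersion factor and inflating Poisson standard errors by its square root; a minimal sketch for the intercept-only case, with invented brood counts:

```python
counts = [31, 9, 22, 4, 28, 17, 3, 25]   # overdispersed brood counts
n = len(counts)
ybar = sum(counts) / n

# Pearson dispersion estimate for an intercept-only Poisson model:
# phi = (Pearson chi-square) / (residual degrees of freedom)
phi = sum((y - ybar) ** 2 / ybar for y in counts) / (n - 1)

# Poisson SE of the mean vs the quasi-likelihood (QL) inflated SE
se_pois = (ybar / n) ** 0.5
se_ql = se_pois * phi ** 0.5
```

Here phi is well above 1, so the plain Poisson standard error understates the uncertainty by a factor of sqrt(phi), which is the mechanism behind the inflated type I error rates the study reports.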


Subject(s)
Cladocera/drug effects , Data Interpretation, Statistical , Water Pollutants, Chemical/toxicity , Animals , Computer Simulation , Confidence Intervals , Poisson Distribution , Reproduction/drug effects
17.
Environ Toxicol Chem ; 29(5): 1168-71, 2010 May.
Article in English | MEDLINE | ID: mdl-20821554

ABSTRACT

Smaller organisms may have too little tissue to allow assaying as individuals. To obtain a sufficient sample for assaying, a collection of smaller individual organisms is pooled to produce a single observation for modeling and analysis. When a dataset contains a mix of pooled and individual organisms, the variances of the observations are not equal. An unweighted regression method is no longer appropriate because it assumes equal precision among the observations. A weighted regression method is more appropriate and yields more precise estimates because it assigns weights to the pooled observations. To demonstrate the benefits of using a weighted analysis when some observations are pooled, bias and confidence interval (CI) properties were compared for ordinary least squares and weighted least squares t-based confidence intervals. The slope and intercept estimates were unbiased for both weighted and unweighted analyses. While CIs for the slope and intercept achieved nominal coverage in both cases, the CI lengths were smaller for the weighted analysis, implying that a weighted analysis yields greater precision.
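The weighting idea is simply that a pooled observation, being the mean of m organisms, carries weight m; a closed-form sketch for a straight-line fit, with invented data:

```python
def wls(x, y, w):
    """Weighted least squares for y = a + b*x; weights are pool sizes,
    so a pooled mean of m organisms counts m times."""
    W = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / W
    yb = sum(wi * yi for wi, yi in zip(w, y)) / W
    b = (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
    return yb - b * xb, b            # intercept, slope

# concentration vs tissue response; the last two points are pools of 5
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 4.0, 6.2, 7.9, 10.1]
w = [1.0, 1.0, 1.0, 5.0, 5.0]
a, b = wls(x, y, w)
```

Setting all weights to 1 recovers ordinary least squares, so the same function exposes exactly the contrast the study evaluates.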


Subject(s)
Environmental Monitoring/methods , Least-Squares Analysis , Models, Biological , Models, Statistical , Bias , Sample Size
18.
Am J Ind Med ; 52(9): 683-97, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19670260

ABSTRACT

BACKGROUND: The rate of lost-time sprains and strains in private nursing homes is over three times the national average, and for back injuries, almost four times the national average. The Ohio Bureau of Workers' Compensation (BWC) has sponsored interventions that were preferentially promoted to nursing homes in 2000-2001, including training, consultation, and grants up to $40,000 for equipment purchases. METHODS: This study evaluated the impact of BWC interventions on back injury claim rates with Poisson regression, using BWC data on claims, interventions, and employer payroll for all Ohio nursing homes during 1995-2004. A subset of nursing homes was analyzed with more detailed data that allowed estimation of the impact of staffing levels and resident acuity on claim rates. Costs of interventions were compared to the associated savings in claim costs. RESULTS: A $500 equipment purchase per nursing home worker was associated with a 21% reduction in back injury rate. Assuming an equipment life of 10 years, this translates to an estimated $768 reduction in claim costs per worker, a present value of $495 with a 5% discount rate applied. Results for training courses were equivocal. Only those receiving below-median hours had a significant 19% reduction in claim rates. Injury rates did not generally decline with consultation independent of equipment purchases, although possible confounding, misclassification, and bias due to non-random management participation clouds interpretation. In nursing homes with available data, resident acuity was modestly associated with back injury risk, and the injury rate increased with resident-to-staff ratio (acting through three terms: RR = 1.50 for each additional resident per staff member; for the ratio alone, RR = 1.32, 95% CI = 1.18-1.48). In these NHs, an expenditure of $908 per resident care worker (equivalent to $500 per employee in the other model) was also associated with a 21% reduction in injury rate. However, with a resident-to-staff ratio greater than 2.0, the same expenditure was associated with a $1,643 reduction in back claim costs over 10 years per employee, a present value of $1,062 with 5% discount rate. CONCLUSIONS: Expenditures for ergonomic equipment in nursing homes by the Ohio BWC were associated with fewer worker injuries and reductions in claim costs that were similar in magnitude to expenditures. Un-estimated benefits and costs also need to be considered in assessing full health and financial impacts.
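The present-value arithmetic in the abstract above can be sketched as follows. This is a minimal illustration only: it assumes the $768 claim-cost reduction accrues as equal end-of-year savings over the 10-year equipment life, discounted at 5% per year. The abstract reports a present value of $495, which implies a different, unstated timing convention for when the savings accrue; the end-of-year annuity shown here is just one common choice.

```python
def pv_annuity(annual_saving, years, rate):
    """Present value of a stream of equal end-of-year savings discounted at `rate`."""
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

# Figures from the abstract: $768 total claim-cost reduction per worker
# over a 10-year equipment life, 5% annual discount rate.
total_saving, years, rate = 768.0, 10, 0.05
pv = pv_annuity(total_saving / years, years, rate)
```

Under this convention the present value works out to roughly $593 rather than $495, which shows how sensitive such comparisons are to the assumed timing of savings.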


Subject(s)
Back Injuries/prevention & control , Inservice Training , Moving and Lifting Patients/instrumentation , Nursing Homes , Occupational Diseases/prevention & control , Back Injuries/economics , Humans , Moving and Lifting Patients/adverse effects , Moving and Lifting Patients/methods , Nursing Assistants/education , Occupational Diseases/economics , Ohio , Workers' Compensation , Workload
19.
Am J Public Health ; 99(8): 1400-8, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19542025

ABSTRACT

OBJECTIVES: We investigated the extent to which the political economy of US states, including the relative power of organized labor, predicts rates of fatal occupational injury. METHODS: We described states' political economies with 6 contextual variables measuring social and political conditions: "right-to-work" laws, union membership density, labor grievance rates, state government debt, unemployment rates, and social wage payments. We obtained data on fatal occupational injuries from the National Traumatic Occupational Fatality surveillance system and population data from the US national census. We used Poisson regression methods to analyze relationships for the years 1980 and 1995. RESULTS: States differed notably with respect to political-economic characteristics and occupational fatality rates, although these characteristics were more homogeneous within than between regions. Industry and workforce composition contributed significantly to differences in state injury rates, but political-economic characteristics of states were also significantly associated with injury rates after adjustment for those factors. CONCLUSIONS: Higher rates of fatal occupational injury were associated with a state policy climate favoring business over labor, with distinct regional clustering of such state policies in the South and Northeast.
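The core of the Poisson rate modeling described above can be sketched in a toy form. In the special case of a single binary covariate, the maximum-likelihood rate ratio from a Poisson model with a person-time offset reduces to the ratio of crude rates between the two groups, which is easy to compute directly. The state data below are invented for illustration; the paper's actual model adjusts for industry and workforce composition as well.

```python
# Hypothetical state-level rows: (fatalities, worker-years, right_to_work flag).
# These numbers are made up for illustration only.
states = [
    (120, 2_000_000, 1),
    (80,  2_500_000, 0),
    (95,  1_800_000, 1),
    (60,  2_200_000, 0),
]

def crude_rate(rows):
    """Pooled fatality rate: total deaths divided by total worker-years."""
    deaths = sum(d for d, _, _ in rows)
    person_time = sum(py for _, py, _ in rows)
    return deaths / person_time

rtw = [s for s in states if s[2] == 1]
non_rtw = [s for s in states if s[2] == 0]

# With one binary covariate, this ratio equals exp(beta_1) from a Poisson
# regression of deaths on the flag with log(worker-years) as offset.
rate_ratio = crude_rate(rtw) / crude_rate(non_rtw)
```

In practice one would fit the full adjusted model with a GLM routine; the point of the sketch is only that the Poisson coefficient on a lone binary covariate has this simple rate-ratio interpretation.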


Subject(s)
Occupational Diseases/mortality , Politics , State Government , Wounds and Injuries/mortality , Economics , Employment/statistics & numerical data , Humans , Models, Statistical , Occupational Health , United States/epidemiology
20.
Risk Anal ; 29(4): 558-64, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19144062

ABSTRACT

Worker populations often provide data on adverse responses associated with exposure to potential hazards. The relationship between hazard exposure levels and adverse response can be modeled and then inverted to estimate the exposure associated with some specified response level. One concern is that this endpoint may be sensitive to the concentration metric and other variables included in the model. Further, it may be that the models yielding different risk endpoints are all providing relatively similar fits. We focus on evaluating the impact of exposure on a continuous response by constructing a model-averaged benchmark concentration (BMC) from a weighted average of model-specific benchmark concentrations. A method for combining the estimates based on different models is applied to lung function in a cohort of miners exposed to coal dust. In this analysis, we see that a small number of the thousands of models considered survive a filtering criterion for use in averaging. Even after filtering, the models considered yield benchmark concentrations that differ by a factor of 2 to 9 depending on the concentration metric and covariates. The model-averaged BMC captures this uncertainty and provides a useful strategy for addressing model uncertainty.
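The weighted-average construction described in the abstract can be sketched as follows. The abstract does not specify the weighting scheme or the filtering criterion, so this sketch uses information-criterion (AIC) weights, a common choice for model averaging; the per-model benchmark concentrations and AIC values are invented for illustration.

```python
import math

# Hypothetical surviving models after filtering: (benchmark concentration, AIC).
models = [(1.2, 310.4), (2.6, 311.1), (9.8, 315.9)]

# AIC weights: proportional to exp(-0.5 * (AIC_i - AIC_best)), so
# better-fitting models (lower AIC) get more weight.
best_aic = min(aic for _, aic in models)
raw = [math.exp(-0.5 * (aic - best_aic)) for _, aic in models]
weights = [w / sum(raw) for w in raw]

# Model-averaged BMC: weighted average of the model-specific BMCs.
bmc_avg = sum(w * bmc for w, (bmc, _) in zip(weights, models))
```

Note how the average is pulled toward the well-fitting models even though the candidate BMCs span nearly an order of magnitude, which is the uncertainty-capturing behavior the abstract describes.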


Subject(s)
Benchmarking , Epidemiologic Studies , Anthracosis/epidemiology , Bayes Theorem , Humans