1.
Can J Psychiatry ; 69(7): 524-535, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38571478

ABSTRACT

OBJECTIVES: Cannabis use is common in people with early-phase psychosis (EP) and is associated with worse treatment outcomes. Few targeted interventions for cannabis use behaviour in this population exist, most focusing on abstinence, none focusing on harm reduction. Many people with EP will not seek treatment for their cannabis use with current therapeutic options. Understanding preferences for cannabis-focused harm reduction interventions may be key to improving outcomes. This study aimed to determine preferences of young adults with EP who use cannabis for cannabis-focused harm reduction interventions. METHODS: Eighty-nine young adults across Canada with EP interested in reducing cannabis-related harms were recruited. An online questionnaire combining conventional survey methodology and two unique discrete choice experiments (DCEs) was administered. One DCE focused on attributes of core harm reduction interventions (DCE 1) and the second on attributes of boosters (DCE 2). We analysed these using mixed ranked-ordered logistic regression models. Preference questions using conventional survey methodology were analysed using summary statistics. RESULTS: Preferred characteristics for cannabis-focused harm reduction interventions (DCE 1) were: shorter sessions (60 min vs. 10 min, odds ratio (OR): 0.72; P < 0.001); less frequent sessions (daily vs. monthly, OR: 0.68; P < 0.001); shorter interventions (3 months vs. 1 month, OR: 0.80; P < 0.01); technology-based interventions (vs. in-person, OR: 1.17; P < 0.05). Preferences for post-intervention boosters (DCE 2) included opting into boosters (vs. opting out, OR: 3.53; P < 0.001) and having shorter boosters (3 months vs. 1 month, OR: 0.79; P < 0.01). Nearly half of the participants preferred to reduce cannabis use as a principal intervention goal (vs. using in less harmful ways or avoiding risky situations). 
CONCLUSIONS: Further research is required to see if technology-based harm reduction interventions for cannabis featuring these preferences translate into greater engagement and improved outcomes in EP patients.


Subject(s)
Harm Reduction , Patient Preference , Psychotic Disorders , Humans , Male , Female , Young Adult , Cross-Sectional Studies , Adult , Psychotic Disorders/therapy , Canada , Adolescent , Marijuana Use
2.
Psychiatry Res ; 326: 115276, 2023 08.
Article in English | MEDLINE | ID: mdl-37301021

ABSTRACT

Innovative technology-based solutions have the potential to improve access to clinically proven interventions for cannabis use disorder (CUD) in individuals with first episode psychosis (FEP). High patient engagement with app-based interventions is critical for achieving optimal outcomes. A total of 104 individuals aged 18 to 35 years with FEP and CUD from three Canadian provinces completed an electronic survey to evaluate preferences for online psychological intervention intensity, participation autonomy, feedback related to cannabis use, and technology platforms and app functionalities. The development of the questionnaire was informed by a qualitative study that included patients and clinicians. We used Best-Worst Scaling (BWS) and item-ranking methodologies to measure preferences. Conditional logistic regression models for BWS data revealed high preferences for moderate intervention intensity (e.g., modules 15 min in length) and treatment autonomy, including preferences for using technology-based interventions and receiving feedback related to cannabis use once a week. Luce regression models for ranked items revealed high preferences for smartphone-based apps, video intervention components, and access to synchronous communication with clinicians and gamification elements. Results informed the development of iCanChange (iCC), a smartphone-based intervention for the treatment of CUD in individuals with FEP that is undergoing clinical testing.


Subject(s)
Cannabis , Hallucinogens , Mobile Applications , Psychotic Disorders , Humans , Young Adult , Adolescent , Adult , Psychosocial Intervention , Canada , Psychotic Disorders/therapy , Psychotic Disorders/psychology
3.
Int J Ment Health Nurs ; 32(1): 290-313, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36300644

ABSTRACT

Several challenges have been identified for patients with concurrent disorders (CDs) to access adequate services and for nurses to care for them. These challenges contribute to a pressing need for continuing educational interventions, particularly within the mental health nursing workforce. To address this issue, an innovative interprofessional videoconferencing programme based on the ECHO® model (Extension for Community Healthcare Outcomes) was implemented in Quebec, Canada, to support and build capacity among healthcare professionals for CD management. The aim of this prospective cohort study was to examine nurses' self-efficacy, knowledge, and attitude scores over a 12-month period. All nurses who registered in the programme between 2018 and 2020 were invited to participate in the study (N = 65). The data were collected online using a self-administered survey at baseline, after 6 months, and then 12 months following entry-to-programme. Twenty-eight nurses participated in the study (96.4% women), with a mean age of 39.1 years (SD = 6.2). Compared with other professions (n = 146/174), the nurses showed significant improvements in their knowledge and attitude scores, with respective effect sizes of 0.72 and -0.44 at 6 months, and 0.94 and -0.59 at 12 months. However, significant changes in self-efficacy were found only at the 12-month follow-up (P = 0.0213), among the nurses who attended more than 25% of the 20-session curriculum. ECHO is a promising intervention to improve the accessibility of evidence-based practice and to support nurses in suitably managing concurrent disorders. Further research is needed to establish the effectiveness of this educational intervention on clinical nursing practice and patient outcomes.


Subject(s)
Nurses , Self Efficacy , Humans , Female , Adult , Male , Prospective Studies , Health Personnel/education , Surveys and Questionnaires , Health Knowledge, Attitudes, Practice
4.
J Interprof Care ; 37(3): 400-409, 2023.
Article in English | MEDLINE | ID: mdl-35880772

ABSTRACT

Health-care systems around the world are striving to be patient-centered, and there is growing evidence that engaging patients and families in their care, as well as in efforts to redesign services, contributes to improved outcomes and experiences for patients and providers. This patient-oriented care movement includes efforts to improve the quality of information and communication between health-care professionals and patients as well as families and caregivers. Whiteboards have emerged as a best practice in hospitals to promote engagement and improve information and communication, yet with limited empirical evidence regarding their value to patients, families, or interprofessional teams. We introduced whiteboards on an acute medical unit at a community hospital and conducted an evaluation using a pre-post design collecting both qualitative and quantitative data. Baseline and post-implementation data were collected via qualitative interviews with patients/family and providers and using the Canadian Patient Experience Survey; focus groups were held with staff and members of the care team. Qualitative results highlighted improvements in communication between the care team and patients as well as family members. Implications for practice include attention to patient/family empowerment and safety, adherence to guidance for good communication, and support for regular training and education in the use of communication tools for members of the interprofessional team.


Subject(s)
Family , Interprofessional Relations , Humans , Canada , Patients , Caregivers , Patient Care Team , Communication
5.
J Pain ; 23(7): 1151-1166, 2022 07.
Article in English | MEDLINE | ID: mdl-35074499

ABSTRACT

This ecological momentary assessment (EMA) study examined the extent of pain intensity variability among 140 individuals with chronic low back pain and explored predictors of such variability and psychosocial and health care utilization outcomes. Individuals completed momentary pain intensity reports (0-10 numeric rating scale) several times daily for two periods of seven consecutive days, one month apart. Participants also completed online questionnaires at baseline which tapped into pain characteristics, pain-related catastrophization, kinesiophobia, activity patterns, and depression and anxiety symptoms. Questionnaires assessing quality of life and health care utilization were administered online one month after completion of the last EMA report. Data were analyzed using linear hierarchical location-scale models. Results showed that pain intensity fluctuated over the course of a week as shown by an average standard deviation of 1.2. The extent of variability in pain intensity scores was heterogeneous across participants but stable over assessment periods. Patients' baseline characteristics along with psychosocial and health care utilization outcomes were not significantly associated with pain intensity variability. We conclude that pain intensity variability differs across patients yet correlates remain elusive. There is an important gap in our knowledge of what affects this variability. Future EMA studies should replicate and extend current findings. PERSPECTIVE: This study provides evidence indicating that there is substantial variability in momentary reports of pain intensity among individuals living with chronic low back pain. However, risk and protective factors for greater lability of pain are elusive as is evidence that greater pain intensity variability results in differential health care utilization.


Subject(s)
Low Back Pain , Ecological Momentary Assessment , Humans , Low Back Pain/therapy , Pain Measurement/methods , Patient Acceptance of Health Care , Quality of Life , Surveys and Questionnaires
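The core quantity in this abstract, within-person pain intensity variability, can be approximated with a simple numeric sketch: the standard deviation of each participant's momentary ratings across an EMA week. The simulated ratings and numpy tooling below are illustrative assumptions; the study itself used hierarchical location-scale models, which estimate each person's mean (location) and variability (scale) jointly.

```python
# Simulate momentary 0-10 pain ratings for 140 participants and
# compute each person's within-week standard deviation -- a crude
# stand-in for the "scale" part of a location-scale model.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_reports = 140, 28  # roughly 4 reports/day over 7 days
person_mean = rng.uniform(2, 8, n_people)
person_sd = rng.uniform(0.5, 2.0, n_people)  # heterogeneous variability

# draw each person's ratings around their own mean, clip to the 0-10
# numeric rating scale, and round to whole-number ratings
ratings = np.clip(
    rng.normal(person_mean[:, None], person_sd[:, None],
               (n_people, n_reports)),
    0, 10).round()

within_person_sd = ratings.std(axis=1, ddof=1)
print(f"average within-person SD: {within_person_sd.mean():.2f}")
```

Unlike this per-person summary, a hierarchical location-scale model also borrows strength across participants and can regress the scale itself on covariates, which is what allows testing whether baseline characteristics predict variability.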
6.
Sci Adv ; 7(48): eabj5629, 2021 Nov 26.
Article in English | MEDLINE | ID: mdl-34826237

ABSTRACT

Despite advances in COVID-19 management, identifying patients evolving toward death remains challenging. To identify early predictors of mortality within 60 days of symptom onset (DSO), we performed immunovirological assessments on plasma from 279 individuals. On samples collected at DSO11 in a discovery cohort, high severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) viral RNA (vRNA), low receptor binding domain-specific immunoglobulin G and antibody-dependent cellular cytotoxicity, and elevated cytokines and tissue injury markers were strongly associated with mortality, including in patients on mechanical ventilation. A three-variable model of vRNA, with predefined adjustment by age and sex, robustly identified patients with fatal outcome (adjusted hazard ratio for log-transformed vRNA = 3.5). This model remained robust in independent validation and confirmation cohorts. Since plasma vRNA's predictive accuracy was maintained at earlier time points, its quantitation can help us understand disease heterogeneity and identify patients who may benefit from new therapies.

7.
Drug Alcohol Depend ; 226: 108857, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34225223

ABSTRACT

BACKGROUND: Up to 74% of people with an opioid use disorder (OUD) will experience depression in their lifetime. Understanding and addressing preferences for depression treatments and clinical trial designs may serve as an important milestone in enhancing treatment and research outcomes. Our goal was to evaluate preferences for depression treatments and clinical trial designs among individuals with an OUD and comorbid depression. METHODS: We evaluated preferences for depression treatments and clinical trial designs using an online cross-sectional survey that included a best-best discrete choice experiment. We recruited 165 participants from opioid agonist treatment clinics and community-based services in Calgary, Charlottetown, Edmonton, Halifax, Montreal, Ottawa, Quebec City, St. John's and Trois-Rivières, Canada. RESULTS: Psychotherapy was the most accepted (80.0%; CI: 73.9-86.1%) and preferred (31.5%; CI: 24.4-38.6%) treatment. However, there was high variability in the acceptability of and preferences for depression treatments. Significant predictors of choice for depression treatments were administration mode depending on session duration (p < 0.001), access mode (p < 0.001) and treatment duration (p < 0.001). Significant predictors of choice for clinical trial designs were allocation type (p = 0.008) and monetary compensation (p = 0.033). Participants preferred participating in research over non-participation (p < 0.001). CONCLUSIONS: The accessibility and diversity of depression interventions, including psychotherapy, need to be enhanced in addiction services to ensure that all patients can receive their preferred treatment. Ensuring proper monetary compensation and comparing an intervention of interest with an active treatment might increase the participation of depressed OUD patients in future clinical research initiatives.


Subject(s)
Depression , Opioid-Related Disorders , Cross-Sectional Studies , Humans , Opioid-Related Disorders/drug therapy , Opioid-Related Disorders/epidemiology , Patient Preference , Psychotherapy , Research Design
8.
J Hand Surg Glob Online ; 2(1): 1-6, 2020 Jan.
Article in English | MEDLINE | ID: mdl-35415474

ABSTRACT

Purpose: Wide-awake local anesthesia hand surgery has many advantages over other forms of anesthesia, including faster recovery, lower cost, and improved patient safety; however, few studies compare postoperative pain and analgesic consumption after long- and short-acting anesthetics. This is important because surgeons seek to minimize opioid consumption during the opioid epidemic. Methods: This was a double-blinded, prospective, randomized, parallel-design pilot study. We randomized 61 patients to receive carpal tunnel surgery with a short-acting regional anesthetic (lidocaine, 29 patients) or a long-acting one (bupivacaine, 32 patients). Primary outcomes were pain levels over the first and second 24 hours. Secondary outcomes were postoperative consumption of acetaminophen and opioids over the first and second 12 hours after surgery. Results: Pain intensity and acetaminophen consumption were significantly lower in the bupivacaine group over the first 24 and 12 hours after surgery, respectively. The bupivacaine group consumed fewer opioids in the first 12 hours and delayed consumption of the first analgesic after surgery, but these results were not statistically significant. There was no difference in pain intensity or analgesic consumption between 24 and 48 hours after surgery. Conclusions: The use of a long-acting anesthetic (bupivacaine) rather than a short-acting one (lidocaine) in awake carpal tunnel release surgery decreases postoperative pain over the initial 12 hours after surgery and delays the initiation of analgesic consumption; however, this difference is small. Opioid consumption was not significantly different between groups, but both groups consumed less than 10% of the prescribed opioids. It is important to reevaluate the need for opioids in minor hand surgery and favor the use of alternatives such as nonsteroidal anti-inflammatory drugs and acetaminophen. Type of study/level of evidence: Therapeutic I.

9.
J Matern Fetal Neonatal Med ; 33(1): 73-80, 2020 Jan.
Article in English | MEDLINE | ID: mdl-29886760

ABSTRACT

Background: A large recent study analyzed the relationship between multiple factors and neonatal outcome in preterm births. Study variables included the reason for admission, indication for delivery, optimal steroid use, gestational age, and other potential prognostic factors. Using stepwise multivariable analysis, the only two variables independently associated with serious neonatal morbidity were gestational age and the presence of suspected intrauterine growth restriction as a reason for admission. This finding was surprising given the beneficial effects of antenatal steroids and the hazards associated with some causes of preterm birth. Multivariable logistic regression techniques have limitations. Without testing for multiple interactions, logistic regression will identify only the individual factors with the strongest independent relationship to the outcome for the entire study group. There may not be a single "best set" of risk factors or one set that applies equally well to all subgroups. In contrast, machine learning techniques find the most predictive groupings of factors based on their frequency and strength of association, with no attempt to identify independence and no assumptions about linear relationships. Objective: To determine whether machine learning techniques would identify specific clusters of conditions with different probability estimates for severe neonatal morbidity, and to compare these findings to those based on the original multivariable analysis. Materials and methods: This was a secondary analysis of data collected in a multicenter, prospective study of all admissions to the neonatal intensive care unit between 2013 and 2015 in 10 hospitals. We included all patients with a singleton stillborn or live newborn with a gestational age between 23 0/7 and 31 6/7 weeks.
The composite endpoint, severe neonatal morbidity, was defined by the presence of any of five outcomes: death, grade 3 or 4 intraventricular hemorrhage (IVH), ≥28 days on a ventilator, periventricular leukomalacia (PVL), or stage III necrotizing enterocolitis (NEC); it was present in 238 of the 1039 study patients. We studied five explanatory variables: maternal age, parity, gestational age, admission reason, and status with respect to antenatal steroid administration. We concentrated on Classification and Regression Trees because the resulting structure defines clusters of risk factors that often bear resemblance to clinical reasoning. Model performance was measured using the area under the receiver operating characteristic curve (AUC) based on 10 repetitions of 10-fold cross-validation. Results: A hybrid technique using a combination of logistic regression and Classification and Regression Trees had a mean cross-validated AUC of 0.853. A selected point on its receiver operating characteristic (ROC) curve corresponding to a sensitivity of 81% was associated with a specificity of 76%. Rather than a single curve representing the general relationship between gestational age and severe morbidity, this technique found seven clusters with distinct curves. Abnormal fetal testing as a reason for admission, with or without growth restriction, and incomplete steroid administration would place a 20-year-old patient on the highest-risk curve. Conclusions: Using a relatively small database and a few simple factors known before birth, it is possible to produce a more tailored estimate of the risk of severe neonatal morbidity on which clinicians can superimpose their medical judgment, experience, and intuition.


Subject(s)
Diagnostic Techniques, Obstetrical and Gynecological , Infant, Premature, Diseases/diagnosis , Machine Learning , Premature Birth/diagnosis , Adult , Female , Gestational Age , Humans , Infant , Infant Mortality , Infant, Newborn , Infant, Premature, Diseases/epidemiology , Infant, Premature, Diseases/pathology , Infant, Small for Gestational Age , Male , Morbidity , Predictive Value of Tests , Pregnancy , Premature Birth/epidemiology , Premature Birth/mortality , Probability , Prognosis , Prospective Studies , Risk Assessment , Risk Factors , Severity of Illness Index , Stillbirth/epidemiology
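The validation scheme this abstract describes, a tree-based classifier scored by AUC over 10 repetitions of 10-fold cross-validation, can be sketched as follows. The synthetic data and scikit-learn tooling are assumptions for illustration, not the study's data or code.

```python
# Sketch: a CART classifier evaluated by mean AUC across 10 x 10-fold
# cross-validation, mirroring the validation design in the abstract.
# The five predictors are synthetic stand-ins for maternal age, parity,
# gestational age, admission reason, and steroid status.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n = 1039  # cohort size from the study
X = np.column_stack([
    rng.normal(29, 6, n),      # maternal age (years)
    rng.integers(0, 4, n),     # parity
    rng.uniform(23, 32, n),    # gestational age (weeks)
    rng.integers(0, 5, n),     # admission-reason category
    rng.integers(0, 2, n),     # complete antenatal steroids (0/1)
])
# make the simulated outcome depend on gestational age so the AUC
# reflects a real (synthetic) signal
p = 1 / (1 + np.exp(0.8 * (X[:, 2] - 27)))
y = (rng.uniform(size=n) < p).astype(int)

cart = DecisionTreeClassifier(max_depth=4, min_samples_leaf=30,
                              random_state=0)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
aucs = cross_val_score(cart, X, y, cv=cv, scoring="roc_auc")
print(f"mean cross-validated AUC over {aucs.size} folds: {aucs.mean():.3f}")
```

Repeating the 10-fold split 10 times, as the study did, reduces the variance of the AUC estimate that comes from any single random partition of a modest-sized cohort.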
10.
Int Psychogeriatr ; 31(1): 59-72, 2019 01.
Article in English | MEDLINE | ID: mdl-29720281

ABSTRACT

BACKGROUND: Few studies have examined the time evolution of delirium in long-term care (LTC) settings. In this work, we analyze the multivariate Delirium Index (DI) time evolution in LTC settings. METHODS: The multivariate DI was measured weekly for six months in seven LTC facilities located in Montreal and Quebec City. Data were analyzed using a hidden Markov chain/latent class model (HMC/LC). RESULTS: The analysis sample included 276 LTC residents. Four ordered latent classes were identified: fairly healthy (low "disorientation" and "memory impairment," negligible other DI symptoms), moderately ill (low "inattention" and "disorientation," medium "memory impairment"), clearly sick (low "disorganized thinking" and "altered level of consciousness," medium "inattention," "disorientation," "memory impairment" and "hypoactivity"), and very sick (low "hypoactivity," medium "altered level of consciousness," high "inattention," "disorganized thinking," "disorientation" and "memory impairment"). Four course types were also identified: stable, improvement, worsening, and non-monotone. Class order was associated with increasing cognitive impairment, frequency of both prevalent and incident delirium and dementia, and mortality rate, and with decreasing performance in ADL. CONCLUSION: Four ordered latent classes and four course types were found in LTC residents. These results are similar to those reported previously in acute care (AC); however, the proportion of very sick residents at enrolment was larger among LTC residents than among AC patients. In clinical settings, these findings could help identify participants with a chronic clinical disorder. Our HMC/LC approach may help in understanding coexistent disorders, e.g., delirium and dementia.


Subject(s)
Delirium/diagnosis , Delirium/epidemiology , Latent Class Analysis , Long-Term Care , Aged , Aged, 80 and over , Female , Humans , Male , Prospective Studies , Psychiatric Status Rating Scales , Quebec/epidemiology , Severity of Illness Index
11.
J Ultrasound Med ; 38(5): 1249-1257, 2019 May.
Article in English | MEDLINE | ID: mdl-30208243

ABSTRACT

OBJECTIVES: This study evaluated the use of ultrasound simulators for retaining and improving ultrasound skills acquired in undergraduate ultrasound training. METHODS: Fourth-year medical students (n = 19) with prior training in point-of-care sonography for shock assessment were recruited. Students were randomly assigned to a study group (n = 10) that followed an undergraduate ultrasound training curriculum and then used a simulator to complete 2 self-directed practice ultrasound sessions over 4 weeks. The control group (n = 9) followed the same undergraduate ultrasound training curriculum and received no additional access to a simulator or ultrasound training. A blinded assessment of the students was performed before and after the 4-week study period to evaluate their image acquisition skills on standardized patients (practical examination). To evaluate the students' clinical understanding of pathological ultrasound images, students watched short videos of prerecorded ultrasound scans and completed a 22-point questionnaire to identify their findings (visual examination). RESULTS: All results were adjusted for pretest performance. The students in the study group performed better than those in the control group on the visual examination (80.1% versus 58.9%; P = .003) and on the practical examination (77.7% versus 57.0%; P = .105) after the 4-week study period. The score difference on the postintervention practical examination was significantly better for the study group than for the control group (11.6% versus -9.9%; P = .0007). CONCLUSION: Ultrasound simulators may be a useful tool to help previously trained medical students retain and improve point-of-care ultrasound skills and knowledge.


Subject(s)
Clinical Competence/statistics & numerical data , Education, Medical, Undergraduate/methods , Ultrasonics/education , Ultrasonics/instrumentation , Ultrasonography/instrumentation , Adult , Curriculum , Female , Humans , Male , Students, Medical
12.
J Ultrasound Med ; 37(11): 2545-2552, 2018 Nov.
Article in English | MEDLINE | ID: mdl-29574857

ABSTRACT

OBJECTIVES: This study compared the accuracy of medical students in identifying pleural effusion in hospitalized patients using the physical examination versus lung ultrasound (US). METHODS: Fourth-year medical students (n = 14) received 20 hours of general practical US training (including 2 hours of specialized lung US training) plus theoretical and video documentation. The students used the physical examination alone versus the physical examination plus lung US to document the presence or absence of pleural effusion in the right and left hemithoraces of hospitalized patients (n = 11 patients; 22 hemithoraces examined 544 times in total). The reference standard for identification of pleural effusion was a lung US examination by 2 expert point-of-care sonographers. RESULTS: The odds of correctly identifying the presence versus absence of pleural effusion were 5 times greater with lung US as an adjunct to the physical examination than with the physical examination alone (odds ratio [OR], 5.1 from multivariate logistic regression; 95% confidence interval, 3.3-8.0). The addition of lung US to the physical examination increased sensitivity from 48% to 90%, specificity from 73% to 86%, and accuracy from 60% to 88%. The benefits of using US were greater when pleural effusion was present versus absent (OR, 10.8 versus 2.4) and when examining older versus younger patients (OR, 10.2 versus 2.8). CONCLUSIONS: These results demonstrate that medical students' ability to detect the presence or absence of pleural effusion is superior when using lung US as an adjunct to the physical examination than when using the physical examination alone.


Subject(s)
Clinical Competence/statistics & numerical data , Physical Examination/methods , Pleural Effusion/diagnosis , Students, Medical/statistics & numerical data , Adult , Aged, 80 and over , Female , Humans , Lung/diagnostic imaging , Male , Pleural Effusion/diagnostic imaging , Reproducibility of Results , Sensitivity and Specificity , Ultrasonography
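The sensitivity, specificity, and accuracy figures reported above derive from 2 x 2 counts of exam calls against the reference standard. A minimal sketch of those computations follows, with invented counts; note the study's OR of 5.1 was estimated by multivariate logistic regression, not by the crude ratio shown here.

```python
# Diagnostic-accuracy metrics from a 2x2 table (test calls vs. a
# reference standard), plus an unadjusted odds ratio that a call is
# correct under method A vs. method B. All counts are illustrative.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

def odds_ratio_correct(correct_a, total_a, correct_b, total_b):
    """Odds that a call is correct under method A vs. method B."""
    odds_a = correct_a / (total_a - correct_a)
    odds_b = correct_b / (total_b - correct_b)
    return odds_a / odds_b

m = diagnostic_metrics(tp=45, fp=10, fn=5, tn=40)
print(m)  # sensitivity 0.90, specificity 0.80, accuracy 0.85
print(round(odds_ratio_correct(88, 100, 60, 100), 2))
```

The regression-based OR in the study additionally adjusts for clustering (each hemithorax was examined many times) and covariates such as patient age, which a raw 2 x 2 ratio cannot do.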
13.
Ann Plast Surg ; 78(4): 403-411, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28177974

ABSTRACT

BACKGROUND: The high recurrence rate of keloids has led to the use of multiple treatment adjuncts to improve cosmetic outcomes after surgery. To date, no single, standardized modality has been agreed upon to produce the best results. The purpose of this study was to review the radiation-based treatments (brachytherapy, electron beam, and X-ray) used for keloid management and compare their outcomes. METHODS: A literature review was performed from 1942 to October 2014 using the following databases: PubMed (National Center for Biotechnology Information), MEDLINE, BIOSIS, Embase, Google Scholar, and the Cochrane database. Articles were reviewed for case numbers, patient demographics, keloid location, follow-up, radiation modality, dose, keloid recurrence, and complications. RESULTS: A total of 72 studies met the inclusion criteria, representing 9048 keloids. These studies were categorized by treatment: brachytherapy, electron beam, or X-ray. Meta-analysis demonstrated that radiotherapy after surgery had less recurrence than radiotherapy alone (22% and 37%, respectively; P = 0.005). Comparison between modalities revealed that postoperative brachytherapy yielded the lowest recurrence rate (15%) compared with X-ray and electron beam (23% and 23%, respectively; P = 0.04, P = 0.1). Subgroup analysis by location demonstrated that chest keloids have the highest recurrence rate. The most commonly reported side effect of radiotherapy was change in skin pigmentation. CONCLUSIONS: The results of this study reinforce postoperative radiotherapy as effective management for keloids. Specifically, brachytherapy was the most effective of the currently used radiation modalities.


Subject(s)
Brachytherapy/methods , Keloid/radiotherapy , Surgery, Plastic/adverse effects , Wound Healing/physiology , Esthetics , Female , Humans , Keloid/prevention & control , Male , Radiotherapy/methods , Radiotherapy Dosage , Recurrence , Risk Assessment , Surgery, Plastic/methods
14.
Asian J Androl ; 19(1): 80-90, 2017.
Article in English | MEDLINE | ID: mdl-27345006

ABSTRACT

Sperm DNA damage is prevalent among infertile men and is known to influence natural reproduction. However, the impact of sperm DNA damage on assisted reproduction outcomes remains controversial. Here, we conducted a meta-analysis of studies on sperm DNA damage (assessed by SCSA, TUNEL, SCD, or Comet assay) and clinical pregnancy after IVF and/or ICSI treatment, identified through MEDLINE, EMBASE, and PubMed database searches. We identified 41 articles (with a total of 56 studies) including 16 IVF studies, 24 ICSI studies, and 16 mixed (IVF + ICSI) studies. These studies measured DNA damage (by one of four assays: 23 SCSA, 18 TUNEL, 8 SCD, and 7 Comet) and included a total of 8068 treatment cycles (3734 IVF, 2282 ICSI, and 2052 mixed IVF + ICSI). The combined OR of 1.68 (95% CI: 1.49-1.89; P < 0.0001) indicates that sperm DNA damage affects clinical pregnancy following IVF and/or ICSI treatment. In addition, the combined OR estimates of the IVF (16 estimates, OR = 1.65; 95% CI: 1.34-2.04; P < 0.0001), ICSI (24 estimates, OR = 1.31; 95% CI: 1.08-1.59; P = 0.0068), and mixed IVF + ICSI studies (16 estimates, OR = 2.37; 95% CI: 1.89-2.97; P < 0.0001) were also statistically significant. There is sufficient evidence in the existing literature to suggest that sperm DNA damage has a negative effect on clinical pregnancy following IVF and/or ICSI treatment.


Subject(s)
DNA Damage , Fertilization in Vitro , Infertility, Male/therapy , Pregnancy Rate , Sperm Injections, Intracytoplasmic , Spermatozoa/metabolism , Comet Assay , DNA Fragmentation , Female , Humans , In Situ Nick-End Labeling , Male , Pregnancy
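A combined OR with a 95% CI like those above is conventionally produced by inverse-variance pooling of log odds ratios. The sketch below implements the fixed-effect version of that machinery; feeding it the abstract's three subgroup estimates is purely a demonstration, since the paper's combined estimate was computed over all 56 individual study estimates (and meta-analyses often use random-effects models instead).

```python
# Fixed-effect (inverse-variance) pooling of odds ratios reported as
# OR with a 95% CI. The standard error of each log(OR) is recovered
# from the CI width.
import math

def pool_log_odds_ratios(estimates):
    """estimates: list of (odds_ratio, ci_lower, ci_upper) at the 95% level."""
    z = 1.959964  # 97.5th percentile of the standard normal
    total_w, weighted_sum = 0.0, 0.0
    for or_est, lo, hi in estimates:
        log_or = math.log(or_est)
        se = (math.log(hi) - math.log(lo)) / (2 * z)  # SE from CI width
        w = 1.0 / se ** 2                             # inverse-variance weight
        total_w += w
        weighted_sum += w * log_or
    pooled = weighted_sum / total_w
    pooled_se = math.sqrt(1.0 / total_w)
    return (math.exp(pooled),
            math.exp(pooled - z * pooled_se),
            math.exp(pooled + z * pooled_se))

# the abstract's subgroup summaries: IVF, ICSI, mixed IVF + ICSI
or_, lo, hi = pool_log_odds_ratios([(1.65, 1.34, 2.04),
                                    (1.31, 1.08, 1.59),
                                    (2.37, 1.89, 2.97)])
print(f"pooled OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Pooling the three subgroup summaries this way lands close to the reported combined OR of 1.68, which is a reassuring consistency check rather than a reproduction of the paper's analysis.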
15.
Int Psychogeriatr ; 29(1): 11-17, 2017 01.
Article in English | MEDLINE | ID: mdl-27576950

ABSTRACT

BACKGROUND: The delirium index (DI) is a valid measure of delirium severity. We aimed to describe longitudinal patterns of severity scores in older long-term care (LTC) residents. METHODS: A prospective cohort study of 280 residents in seven LTC facilities in Montreal and Quebec City, Canada, was conducted. The DI, Barthel Index, Mini-Mental State Examination, Charlson Comorbidity Index, Cornell Scale for Depression in Dementia, dementia assessment by an MD, and prevalent or incident probable delirium defined according to the Confusion Assessment Method were completed at baseline. The DI was also assessed weekly for 6 months. Demographic characteristics were abstracted from resident charts. Cluster analysis for longitudinal data was used to describe longitudinal patterns of DI scores. RESULTS: During the 24 weeks following enrolment, 28 (10.0%) of 280 residents who had prevalent delirium and 76 (27.1%) who had incident delirium were included in our analysis. The average observation period was 18.3 weeks. Four basic types of time evolution patterns were discovered: Improvement, Worsening, Fluctuating, and Steady, including 22%, 18%, 25%, and 35% of the residents, respectively. With the exception of the Worsening pattern, the average trajectory stabilized at the 4th week or earlier. Poor baseline cognitive and physical function and greater severity of delirium predicted worse trajectories over 24 weeks. CONCLUSIONS: The longitudinal patterns of DI scores found in LTC residents resemble those found in an earlier study of delirium in acute care (AC) settings. However, compared to AC patients, LTC residents have smaller DI variability over time, a less frequent Improvement pattern, and more frequent Worsening and Fluctuating patterns.


Subject(s)
Delirium/diagnosis , Delirium/epidemiology , Long-Term Care , Aged , Aged, 80 and over , Female , Humans , Male , Prospective Studies , Psychiatric Status Rating Scales , Quebec/epidemiology , Severity of Illness Index
16.
Am J Infect Control ; 44(12): 1582-1588, 2016 Dec 1.
Article in English | MEDLINE | ID: mdl-27397907

ABSTRACT

BACKGROUND: Environmental cleaning is a fundamental principle of infection prevention in hospitals, but its role in reducing transmission of health care-acquired pathogens has been difficult to prove experimentally. In this study we analyze the influence of cleaning previously uncleaned patient care items, grey zones (GZ), on health care-acquired transmission rates. METHODS: The intervention consisted of specific GZ cleaning by an extra cleaner (in addition to routine cleaning) on 2 structurally different acute care medical wards for a period of 6 months each, in a crossover design. Data on health care-acquired transmissions of vancomycin-resistant enterococci (VRE), methicillin-resistant Staphylococcus aureus, and Clostridium difficile were collected during both periods. Adjusted incidence rate ratios (IRRs) using Poisson regression were calculated to compare transmission of pathogens between both periods on both wards. RESULTS: During the intervention VRE transmission was significantly decreased (2-fold) on the ward where patients had fewer roommates; cleaning of GZ did not have any effect on the ward with multiple-occupancy rooms. There was no impact on methicillin-resistant S aureus transmission and only a nonsignificant decrease in transmission of C difficile. CONCLUSIONS: Our data provide evidence that targeted cleaning interventions can reduce VRE transmission when rooming conditions are optimized; such interventions can be cost-effective when the burden of VRE is significant. Enhanced cleaning interventions are less beneficial in the context of room sharing where many other factors contribute to transmission of pathogens.
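The incidence rate ratios above came from adjusted Poisson regression. The crude quantity behind an IRR, two event rates over person-time and a Wald confidence interval on the log scale, can be sketched as follows; the input numbers in the test are made up and are not the study's data.

```python
import math

def irr(cases_a, persontime_a, cases_b, persontime_b, z=1.96):
    """Crude incidence rate ratio (period A vs. period B) with an
    approximate 95% Wald CI on the log scale.

    This is the unadjusted analogue of the Poisson-regression IRRs
    reported in the study, shown for illustration only.
    """
    rate_a = cases_a / persontime_a
    rate_b = cases_b / persontime_b
    ratio = rate_a / rate_b
    # Standard error of log(IRR) for Poisson counts
    se = math.sqrt(1 / cases_a + 1 / cases_b)
    lower = math.exp(math.log(ratio) - z * se)
    upper = math.exp(math.log(ratio) + z * se)
    return ratio, lower, upper
```

For example, 5 transmissions over 1,000 patient-days during an intervention versus 10 over 1,000 at baseline gives an IRR of 0.5, i.e., the roughly 2-fold VRE reduction the abstract describes; the regression model additionally adjusts for covariates.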


Subject(s)
Cross Infection/epidemiology , Cross Infection/prevention & control , Disinfection/methods , Housekeeping, Hospital/methods , Adult , Aged , Aged, 80 and over , Clostridioides difficile/isolation & purification , Cross Infection/microbiology , Cross Infection/transmission , Cross-Over Studies , Hospitals , Humans , Male , Methicillin-Resistant Staphylococcus aureus/isolation & purification , Middle Aged , Prospective Studies , Vancomycin-Resistant Enterococci/isolation & purification
17.
Arch Gerontol Geriatr ; 58(3): 332-8, 2014.
Article in English | MEDLINE | ID: mdl-24345307

ABSTRACT

The objectives of this study were: (1) to describe the prevalence and 6-month incidence of observer-rated depression in residents age 65 and over of long-term care (LTC) facilities; (2) to describe risk factors for depression, at baseline and over time. A multisite, prospective observational study was conducted in residents aged 65 and over of 7 LTC facilities. The Cornell Scale for Depression in Dementia (CSDD) was completed by nurses monthly for 6 months. We measured demographic, medical, and functional factors at baseline and monthly intervals, using data from research assessments, nurse interviews, and chart reviews. 274 residents were recruited and completed baseline depression assessments. The prevalence of depression (CSDD score of 6+) was 19.0%. The incidence of depression among those without prevalent depression was 73.3 per 100 person-years. A delirium diagnosis, pain, and diabetes were independently associated with prevalent depression. CSDD score at baseline and development of severe cognitive impairment at follow-up were independent risk factors for incident depression. A diagnosis of delirium and uncorrected visual impairment at follow-up occurred concurrently with incident depression. The results of this study have implications for the detection and prevention of depression in LTC. Delirium diagnosis, pain and diabetes at baseline were associated with prevalent depression; depression symptoms at baseline and development of severe cognitive impairment at follow-up were risk factors for incident depression.
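The incidence figure above is expressed per 100 person-years. Under the simplifying assumption that everyone at risk contributes the same follow-up time, the calculation reduces to a one-liner; the numbers in the test are synthetic, not the study's.

```python
def incidence_per_100py(new_cases, persons_at_risk, avg_followup_years):
    """Incidence rate per 100 person-years, assuming (for illustration)
    equal follow-up for every person at risk. Real cohort analyses sum
    each person's actual observed time instead.
    """
    person_years = persons_at_risk * avg_followup_years
    return 100 * new_cases / person_years
```

For instance, 30 incident cases among 200 residents each followed for half a year (100 person-years) yields 30 cases per 100 person-years.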


Subject(s)
Depression/epidemiology , Depressive Disorder/epidemiology , Long-Term Care , Aged , Aged, 80 and over , Canada , Cognition Disorders/epidemiology , Comorbidity , Delirium/epidemiology , Dementia/epidemiology , Depression/diagnosis , Depression/psychology , Depressive Disorder/diagnosis , Depressive Disorder/psychology , Female , Follow-Up Studies , Humans , Incidence , Male , Multivariate Analysis , Prevalence , Prospective Studies , Psychiatric Status Rating Scales , Quebec/epidemiology , Risk Factors , Socioeconomic Factors
18.
J Am Geriatr Soc ; 61(4): 502-11, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23581909

ABSTRACT

OBJECTIVES: To identify potentially modifiable environmental factors (including number of medications) associated with changes over time in the severity of delirium symptoms and to explore the interactions between these factors and resident baseline vulnerability. DESIGN: Prospective, observational cohort study. SETTING: Seven long-term care (LTC) facilities. PARTICIPANTS: Two hundred seventy-two LTC residents aged 65 and older with and without delirium. MEASUREMENTS: Weekly assessments (for up to 6 months) of the severity of delirium symptoms using the Delirium Index (DI), environmental risk factors, and number of medications. Baseline vulnerability measures included a diagnosis of dementia and a delirium risk score. Associations between environmental factors, medications, and weekly changes in DI were analyzed using a general linear model with correlated errors. RESULTS: Six potentially modifiable environmental factors predicted weekly changes in DI (absence of reading glasses, aids to orientation, family member, and glass of water and presence of bed rails and other restraints) as did the prescription of two or more new medications. Residents with dementia appeared to be more sensitive to the effects of these factors. CONCLUSION: Six environmental factors and prescription of two or more new medications predicted changes in the severity of delirium symptoms. These risk factors are potentially modifiable through improved LTC clinical practices.


Subject(s)
Activities of Daily Living , Confusion/epidemiology , Delirium/epidemiology , Mental Health/statistics & numerical data , Residential Facilities/organization & administration , Aged , Aged, 80 and over , Canada , Cohort Studies , Female , Health Status , Humans , Long-Term Care , Male , Prospective Studies , Risk Factors , Severity of Illness Index , Social Environment
19.
Int Psychogeriatr ; 25(6): 887-94, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23448799

ABSTRACT

BACKGROUND: Detection of long-term care (LTC) residents at risk of delirium may lead to prevention of this disorder. The primary objective of this study was to determine if the presence of one or more Confusion Assessment Method (CAM) core symptoms of delirium at baseline assessment predicts incident delirium. Secondary objectives were to determine if the number or the type of symptoms predict incident delirium. METHODS: The study was a secondary analysis of data collected for a prospective study of delirium among older residents of seven LTC facilities in Montreal and Quebec City, Canada. The Mini-Mental State Exam (MMSE), CAM, Delirium Index (DI), Hierarchic Dementia Scale, Barthel Index, and Cornell Scale for Depression were completed at baseline. The MMSE, CAM, and DI were repeated weekly for six months. Multivariate Cox regression models were used to determine if baseline symptoms predict incident delirium. RESULTS: Of 273 residents, 40 (14.7%) developed incident delirium. Mean (SD) time to onset of delirium was 10.8 (7.4) weeks. When one or more CAM core symptoms were present at baseline, the Hazard Ratio (HR) for incident delirium was 3.5 (95% CI = 1.4, 8.9). The HRs for number of symptoms present ranged from 2.9 (95% CI = 1.0, 8.3) for one symptom to 3.8 (95% CI = 1.3, 11.0) for three symptoms. The HR for one type of symptom, fluctuation, was 2.2 (95% CI = 1.2, 4.2). CONCLUSION: The presence of CAM core symptoms at baseline assessment predicts incident delirium in older LTC residents. These findings have potentially important implications for clinical practice and research in LTC settings.
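The hazard ratios above come from Cox regression on time to delirium onset (mean 10.8 weeks). A full Cox fit is beyond a short sketch, but the underlying time-to-event bookkeeping can be illustrated with a minimal Kaplan-Meier product-limit estimator; the data in the test are synthetic.

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimates.

    times  : follow-up time (e.g., weeks) for each subject
    events : 1 if the event (delirium onset) was observed, 0 if censored
    Returns (time, survival_probability) pairs at each event time.
    Illustrative sketch only; the study fit multivariate Cox models.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    out = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)        # events at time t
        removed = sum(1 for tt, _ in data if tt == t)  # events + censored at t
        if d > 0:
            surv *= (n_at_risk - d) / n_at_risk
            out.append((t, surv))
        n_at_risk -= removed
        i += removed
    return out
```

With four subjects observed at weeks 1, 2, 2, 3 (the second week-2 subject censored), survival drops to 0.75 after week 1 and 0.5 after week 2; a Cox model would then relate such curves to baseline covariates like the CAM core symptoms.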


Subject(s)
Confusion/complications , Delirium/epidemiology , Dementia/complications , Long-Term Care , Aged , Aged, 80 and over , Canada/epidemiology , Confusion/epidemiology , Dementia/diagnosis , Dementia/epidemiology , Female , Follow-Up Studies , Geriatric Assessment/statistics & numerical data , Humans , Incidence , Male , Predictive Value of Tests , Proportional Hazards Models , Prospective Studies , Residential Facilities , Risk Factors
20.
J Am Geriatr Soc ; 60(12): 2302-7, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23194147

ABSTRACT

OBJECTIVE: To describe Confusion Assessment Method (CAM) core symptoms of delirium occurring before and after incident episodes of delirium in older long-term care (LTC) residents. A secondary objective was to describe the mean number of symptoms before and after episodes by dementia status. DESIGN: Secondary analysis of data collected for a prospective cohort study of delirium, with repeated weekly assessments for up to 6 months. SETTING: Seven LTC facilities in Montreal and Quebec City, Canada. PARTICIPANTS: Forty-one older LTC residents who had at least one CAM-defined incident episode of delirium. MEASUREMENTS: The Mini-Mental State Examination (MMSE), CAM, Delirium Index (DI), Hierarchic Dementia Scale, Barthel Index, and Cornell Scale for Depression were completed at baseline. The MMSE, CAM, and DI were repeated weekly for 6 months. The frequency, mean number, type, and duration of CAM core symptoms of delirium occurring before and after incident episodes were examined using descriptive statistics, frequency analysis, and survival analysis. RESULTS: CAM core symptoms of delirium preceded 38 (92.7%) episodes of delirium for many weeks; core symptoms followed 37 (90.2%) episodes for many weeks. Symptoms of inattention and disorganized thinking occurred most commonly. The mean number of symptoms was higher in residents with dementia but not significantly so. CONCLUSION: CAM core symptoms of delirium were frequent and protracted before and after most incident episodes of delirium in LTC residents with and without dementia. If replicated, these findings have potentially important implications for clinical practice and research in LTC settings.


Subject(s)
Delirium/diagnosis , Long-Term Care , Aged, 80 and over , Confusion/diagnosis , Female , Humans , Male , Mental Status Schedule , Recurrence , Residential Facilities