1.
J Phys Condens Matter ; 36(12), 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-38056003

ABSTRACT

We report the properties of an A-site spinel magnet, CoAl2-xGaxO4, and analyze its anomalous, low-temperature magnetic behavior, which is derived from inherent, magnetically frustrated interactions. Rietveld analysis of the x-ray diffraction profile for CoAl2-xGaxO4 revealed that the metallic ions were randomly distributed in the tetrahedral (A-) and octahedral (B-) sites in the cubic spinel structure. The inversion parameter η could be controlled by varying the gallium (Ga) composition in the range 0.055 ⩽ η ⩽ 0.664. The composition-induced Néel-to-spin-glass (NSG) transition occurred between 0.05 ⩽ η ⩽ 0.08 and was verified by measurements of DC-AC susceptibilities χ and thermoremanent magnetization (TRM) below the Néel transition temperature TN. The relaxation rate and the temperature derivative of TRM increased at both TN and the spin glass (SG) transition temperature TSG. The TRM decayed rapidly above and below these transitions. TRM was highly sensitive to macroscopic magnetic transitions that occurred in both the Néel and SG phases of CoAl2-xGaxO4. In the vicinity of the NSG boundary, there was a maximum of the TRM relaxation rate at Tmax.

3.
J Hosp Infect ; 100(3): 280-298, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30369423

ABSTRACT

BACKGROUND: National responses to healthcare-associated infections vary between high-income countries, but, when analysed for contextual comparability, interventions can be assessed for transferability. AIM: To identify learning from country-level approaches to addressing meticillin-resistant Staphylococcus aureus (MRSA) in Japan and England. METHODS: A longitudinal analysis (2000-2017), comparing epidemiological trends and policy interventions. Data from 441 textual sources concerning infection prevention and control (IPC), surveillance, and antimicrobial stewardship interventions were systematically coded for: (a) type: mandatory requirements, recommendations, or national campaigns; (b) method: restrictive, persuasive, or structural in nature; (c) level of implementation: macro (national), meso (organizational), or micro (individual). Healthcare organizational structures and the role of the media were also assessed. FINDINGS: In England, a significant reduction has been achieved in the number of reported MRSA bloodstream infections. In Japan, in spite of reductions, MRSA remains a predominant infection. Both countries face new threats in the emergence of drug-resistant Escherichia coli. England has focused on national mandatory and structural interventions, supported by a combination of outcomes-based incentives and punitive mechanisms, and multi-disciplinary IPC hospital teams. Japan has focused on (non-mandatory) recommendations and primarily persuasive interventions, supported by process-based incentives, with voluntary surveillance. Areas for development in Japan include resourcing of dedicated data management support and implementation of national campaigns for healthcare professionals and the public. CONCLUSION: Policy interventions need to be relevant to local epidemiological trends, while acceptable within the health system, culture, and public expectations. Cross-national learning can help inform the right mix of interventions to create sustainable and resilient systems for future infection and economic challenges.


Subject(s)
Communicable Disease Control/methods , Disease Transmission, Infectious/prevention & control , Health Policy , Methicillin-Resistant Staphylococcus aureus/isolation & purification , Staphylococcal Infections/epidemiology , Staphylococcal Infections/prevention & control , Bacteremia/epidemiology , Bacteremia/microbiology , Bacteremia/prevention & control , Communicable Disease Control/organization & administration , Cross Infection/epidemiology , Cross Infection/microbiology , Cross Infection/prevention & control , England/epidemiology , Japan/epidemiology , Staphylococcal Infections/microbiology
4.
World J Surg ; 42(3): 758-765, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28920145

ABSTRACT

BACKGROUND: Many perforated peptic ulcers (PPUs) require surgical repair due to diffuse peritonitis. However, few studies have examined the clinical effects of postoperative drainage after PPU repair. This study aimed to investigate the drain insertion rates in patients who underwent PPU repair in Japan, and to clarify the impact of drain insertion on the postoperative clinical course. METHODS: A retrospective nationwide cohort study was performed using administrative claims data of patients who had undergone PPU repair between 2010 and 2016. These patients were divided into two groups based on whether or not they had received a postoperative abdominal drain. Using propensity score matching, we compared the incidences of postoperative interventions for abdominal complications between the two groups. RESULTS: A total of 4869 patients from 324 hospitals were analyzed. At the hospital level, drains were placed in all PPU repair patients in 229 (70.7%) hospitals. At the patient level, 4401 patients (90.4%) had drains inserted. The drain group was associated with a higher emergency admission rate, poorer preoperative shock status, longer anesthetic time, and a larger volume of intra-abdominal irrigation. In the propensity score-matched patients, the drain group had a significantly lower incidence of postoperative interventions than the no-drain group (1.9% vs. 5.6%; risk ratio = 0.35; 95% confidence interval 0.16-0.73; P = 0.003). CONCLUSION: Postoperative drainage was performed in the majority of patients who underwent PPU repair in Japan. Drainage following PPU repair may facilitate patient recovery by reducing the need for postoperative interventions.
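
As a rough illustration of the matching step described above, the sketch below pairs each no-drain patient with the nearest drain patient on an estimated propensity score and compares intervention risk in the matched sample. The data, covariate names, and coefficients are simulated assumptions for illustration only, not the study's claims data or model.

```python
# Hypothetical sketch of 1:1 propensity-score matching (with replacement)
# on simulated data; variable names and effect sizes are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 5000
emergency = rng.binomial(1, 0.6, n)          # emergency admission
shock = rng.binomial(1, 0.2, n)              # preoperative shock
anesth_hours = rng.normal(2.5, 0.8, n)       # anaesthetic time (h)
X = np.column_stack([emergency, shock, anesth_hours])

# Treatment assignment (drain insertion) depends on the covariates.
logit_t = -1.0 + 1.2 * emergency + 1.5 * shock + 0.6 * anesth_hours
drain = rng.binomial(1, 1 / (1 + np.exp(-logit_t)))

# Outcome: postoperative intervention, with a lower risk under drainage (assumed).
logit_y = -3.0 + 0.8 * shock + 0.3 * anesth_hours - 0.9 * drain
intervention = rng.binomial(1, 1 / (1 + np.exp(-logit_y)))

# 1) Estimate propensity scores with logistic regression.
ps = LogisticRegression().fit(X, drain).predict_proba(X)[:, 1]

# 2) Match each no-drain patient to the nearest drain patient on the score.
treated, control = np.where(drain == 1)[0], np.where(drain == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[control].reshape(-1, 1))
matched_treated = treated[idx.ravel()]

# 3) Risk ratio of postoperative intervention in the matched sample.
risk_drain = intervention[matched_treated].mean()
risk_no_drain = intervention[control].mean()
print("risk ratio (drain / no drain):", round(risk_drain / risk_no_drain, 2))
```

Matching with replacement keeps the sketch short; a published analysis would more likely match without replacement, apply a caliper, and report balance diagnostics.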


Subject(s)
Drainage , Peptic Ulcer Perforation/surgery , Postoperative Complications/prevention & control , Adult , Aged , Databases, Factual , Drainage/adverse effects , Drainage/statistics & numerical data , Female , Humans , Japan , Male , Middle Aged , Postoperative Care , Postoperative Complications/etiology , Propensity Score , Retrospective Studies
5.
Acta Anaesthesiol Scand ; 60(7): 874-81, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27027576

ABSTRACT

BACKGROUND: Acute kidney injury (AKI) is a common complication after liver transplantation and is associated with significant morbidity and mortality. Although clinical guidelines recommend defining AKI based on serum creatinine increase and oliguria, the validity and utility of the oliguric component of AKI definition remains largely unexplored. This study examined the incidence and the impact on clinical outcomes of oliguria meeting the urine output criterion of AKI in patients undergoing liver transplantation. The authors hypothesised that oliguria was an independent risk factor for adverse post-operative outcomes. METHODS: This study retrospectively examined 320 patients who underwent living donor liver transplantation at our centre. AKI stages were allocated according to recent guidelines based on serum creatinine or urine output within 7 days of surgery. RESULTS: The incidence of oliguria meeting the urine output criterion of AKI was 50.3%. Compared with creatinine criterion alone, incorporating oliguria into the diagnostic criteria dramatically increased the measured incidence of AKI from 39.7% to 62.2%. Compared with patients diagnosed without AKI using either criterion, oliguric patients without serum creatinine increase had significantly longer intensive care unit stays (median: 5 vs. 4 days, P = 0.016), longer hospital stays (median: 60 vs. 49 days, P = 0.014) and lower chronic kidney disease-free survival rate on post-operative day 90 (54.2% vs. 73.3%, P = 0.008). CONCLUSION: Oliguria is common after liver transplantation, and incorporating oliguria into the diagnostic criteria dramatically increases the measured incidence of AKI. Oliguria without serum creatinine increase was significantly associated with adverse post-operative outcomes.
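
The urine-output component of the staging rule can be sketched as follows. This assumes KDIGO-style thresholds (output below 0.5 mL/kg/h for at least 6 or 12 hours, below 0.3 mL/kg/h for 24 hours, or anuria for 12 hours) applied to hourly measurements already normalised per kilogram; it is a generic illustration, not the authors' implementation.

```python
# Hypothetical sketch of AKI staging from the urine-output criterion alone,
# assuming KDIGO-style thresholds and hourly urine output in mL/kg/h.
from typing import Sequence

def longest_run_below(values: Sequence[float], threshold: float) -> int:
    """Longest consecutive run of hours with output below the threshold."""
    best = run = 0
    for v in values:
        run = run + 1 if v < threshold else 0
        best = max(best, run)
    return best

def aki_stage_urine(uo_ml_per_kg_h: Sequence[float]) -> int:
    """Return AKI stage 0-3 from hourly urine output (urine-output criterion)."""
    anuria_12h = longest_run_below(uo_ml_per_kg_h, 1e-6) >= 12
    below_03_24h = longest_run_below(uo_ml_per_kg_h, 0.3) >= 24
    below_05_hours = longest_run_below(uo_ml_per_kg_h, 0.5)
    if anuria_12h or below_03_24h:
        return 3
    if below_05_hours >= 12:
        return 2
    if below_05_hours >= 6:
        return 1
    return 0

# Example: 8 consecutive hours below 0.5 mL/kg/h -> stage 1.
print(aki_stage_urine([0.7] * 4 + [0.4] * 8 + [0.8] * 12))
```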


Subject(s)
Acute Kidney Injury/epidemiology , Creatinine/blood , Liver Transplantation/statistics & numerical data , Oliguria/epidemiology , Postoperative Complications/epidemiology , Acute Kidney Injury/blood , Adolescent , Adult , Aged , Comorbidity , Female , Humans , Incidence , Japan/epidemiology , Length of Stay/statistics & numerical data , Living Donors/statistics & numerical data , Male , Middle Aged , Oliguria/blood , Postoperative Complications/blood , Retrospective Studies , Risk Factors , Survival Rate , Young Adult
6.
Infection ; 39(3): 185-99, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21424853

ABSTRACT

Hospital-acquired infections (HAIs) present a substantial problem for healthcare providers, with a relatively high frequency of occurrence and considerable damage caused. There has been an increase in the number of cost-effectiveness and cost-savings analyses of HAI control measures, and the quantification of the cost of HAI (COHAI) is necessary for such calculations. While recent guidelines allow researchers to utilize COHAI estimates from existing published literature when evaluating the economic impact of HAI control measures, it has been observed that the results of economic evaluations may not be directly applied to other jurisdictions due to differences in the context and circumstances in which the original results were produced. The aims of this study were to conduct a systematic review of published studies that have produced COHAI estimates from 1980 to 2006 and to evaluate the quality of these estimates from the perspective of transferability. From a total of 89 publications, only eight papers (9.0%) had a high level of transferability in which all components of costs were described, data for costs in each component were reported, and unit costs were estimated with actual costing. We also did not observe a higher citation level for studies with high levels of transferability. We feel that, in order to ensure an appropriate contribution to the infection control program decision-making process, it is essential for researchers who estimate COHAI, analysts who use COHAI estimates for decision-making, as well as relevant journal reviewers and editors to recognize the importance of a transferability paradigm.


Subject(s)
Cross Infection/economics , Health Care Costs , Infection Control/economics , Decision Making , Evaluation Studies as Topic , Humans
7.
J Hosp Infect ; 77(4): 316-20, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21277647

ABSTRACT

Despite its potential for use in large-scale analyses, previous attempts to utilise administrative data to identify healthcare-associated infections (HAI) have been shown to be unsuccessful. In this study, we validate the accuracy of a novel method of HAI identification based on antibiotic utilisation patterns derived from administrative data. We contemporaneously and independently identified HAIs using both chart review analysis and our method from four Japanese hospitals (N=584). The accuracy of our method was quantified using sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) relative to chart review analysis. We also analysed the inter-rater agreement between both identification methods using Cohen's kappa coefficient. Our method showed a sensitivity of 0.93 (95% CI: 0.87-0.96), specificity of 0.91 (0.89-0.94), PPV of 0.75 (0.68-0.81) and NPV of 0.98 (0.96-0.99). A kappa coefficient of 0.78 indicated a relatively high level of agreement between the two methods. Our results show that our method has sufficient validity for identification of HAIs in large groups of patients, though the relatively lower PPV may imply limited utilisation in the pinpointing of individual infections. Our method may have applications in large-scale HAI identification, risk-adjusted multicentre studies involving cost of illness, or even as the starting point of future cost-effectiveness analyses of HAI control measures.
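
For reference, the accuracy metrics and kappa reported here can all be computed from a 2x2 validation table, as in the short sketch below. The counts are invented for illustration and do not reproduce the study's data.

```python
# Minimal sketch of validating a binary HAI flag against chart review;
# the 2x2 counts below are made up, not the study's data.

# Rows: chart review (reference standard); columns: administrative-data method.
tp, fn = 45, 5       # chart-review positives: flagged / missed by the method
fp, tn = 15, 185     # chart-review negatives: falsely flagged / correctly cleared
n = tp + fn + fp + tn

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)

# Cohen's kappa: agreement between the two methods beyond chance.
po = (tp + tn) / n
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (po - pe) / (1 - pe)
print(f"Se={sensitivity:.2f} Sp={specificity:.2f} PPV={ppv:.2f} "
      f"NPV={npv:.2f} kappa={kappa:.2f}")
```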


Subject(s)
Anti-Bacterial Agents/therapeutic use , Cross Infection/diagnosis , Drug Utilization/statistics & numerical data , Epidemiologic Methods , Adult , Aged , Aged, 80 and over , Female , Hospitals , Humans , Japan , Male , Middle Aged , Predictive Value of Tests , Sensitivity and Specificity
8.
J Hosp Infect ; 77(2): 93-105, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21145131

ABSTRACT

Quantifying the additional costs of hospital-acquired infections (COHAI) is essential for developing cost-effective infection control measures. The methodological approaches to estimate these costs include case reviews, matched comparisons and regression analyses. The choice of cost estimation methodology can affect the accuracy of the resulting estimates, however, with regression analyses generally able to avoid the bias pitfalls of the other methods. The objective of this study was to elucidate the distributions and trends in cost estimation methodologies in published studies that have produced COHAI estimates. We conducted systematic searches of peer-reviewed publications that produced cost estimates attributable to hospital-acquired infection in MEDLINE from 1980 to 2006. Shifts in methodologies at 10-year intervals were analysed using Fisher's exact test. The most frequent COHAI estimation methodology was multiple matched comparisons (59.6%), followed by regression models (25.8%) and case reviews (7.9%). There were significant increases in studies that used regression models and decreases in matched comparisons through the 1980s, 1990s and post-2000 (P = 0.033). Whereas regression analyses have become more frequently used for COHAI estimation in recent years, matched comparisons are still used in more than half of COHAI estimation studies. Researchers need to be more discerning in the selection of methodologies for their analyses, and comparative analyses are needed to identify more accurate estimation methods. This review provides a resource for analysts to overview the distribution, trends, advantages and pitfalls of the various existing COHAI estimation methodologies.
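
The decade-level shift test described above might look like the following in code, with the methodology counts collapsed to a 2x2 table of invented numbers; the review's actual counts are not reproduced here.

```python
# Hedged sketch of a period-by-method comparison with Fisher's exact test;
# the table entries are invented for illustration.
from scipy.stats import fisher_exact

#         regression  matched comparison
table = [[ 5,         30],   # studies published before 2000
         [18,         23]]   # studies published 2000 or later

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```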


Subject(s)
Cross Infection/economics , Costs and Cost Analysis/methods , Cross Infection/classification , Health Care Costs , Humans
9.
Phys Rev Lett ; 105(1): 017403, 2010 Jul 02.
Article in English | MEDLINE | ID: mdl-20867476

ABSTRACT

Through magnetic linear dichroism spectroscopy, the magnetic susceptibility anisotropy of metallic single-walled carbon nanotubes has been extracted and found to be 2-4 times greater than values for semiconducting nanotubes. This large anisotropy can be understood in terms of large orbital paramagnetism of metallic nanotubes arising from the Aharonov-Bohm-phase-induced gap opening in a parallel field, and our calculations quantitatively reproduce these results. We also compare our values with previous work for semiconducting nanotubes, which confirm that the magnetic susceptibility anisotropy does not increase linearly with the diameter for small-diameter nanotubes.

10.
Qual Saf Health Care ; 19(2): 122-7, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20351160

ABSTRACT

BACKGROUND: Incident reporting is a promising tool to enhance patient safety, but few empirical studies have been conducted to identify factors that increase the number of incident reports. OBJECTIVE: To evaluate how the number of incident reports is related to system-level activities and reporting design. METHODS: A questionnaire survey was administered to all 1039 teaching hospitals in Japan. Items on the survey included the number of reported incidents; the reporting design for incidents; and the status of system-level activities, including assignment of safety managers, conferences, ward rounds by peers, and staff education. Staff education encompasses many aspects of patient safety and is not limited to incident reporting. Poisson regression models were used to determine whether these activities and the design of the reporting method increase the number of incident reports filed by physicians and nurses. RESULTS: Educational activities were significantly associated with reporting by physicians (53% increase, p<0.001) but had no significant effect on nurse-generated reports. More reports were submitted by physicians and nurses in hospitals where the time involved in filing a report was short (p<0.05). The impact of online reporting was limited to a 26% increase in physicians' reports (p<0.05). CONCLUSION: In accordance with suggestions from previous studies that examined staff perceptions and attitudes, this study empirically demonstrated that reducing the burden of reporting and implementing staff education may improve incident reporting.
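
A minimal sketch of this kind of count model is given below, fitted to simulated hospital-level data; the predictor names and effect sizes are assumptions, not the survey's results. The percent-change figures quoted in the abstract correspond to 100 * (exp(coefficient) - 1) in a log-linear Poisson model.

```python
# Illustrative Poisson regression of incident-report counts on hospital-level
# activities, using simulated data and assumed effects.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400  # hospitals
df = pd.DataFrame({
    "staff_education": rng.binomial(1, 0.5, n),
    "online_reporting": rng.binomial(1, 0.4, n),
    "short_filing_time": rng.binomial(1, 0.6, n),
})
# Simulated physician report counts with multiplicative (log-linear) effects.
mu = np.exp(1.0 + 0.43 * df.staff_education + 0.23 * df.online_reporting
            + 0.30 * df.short_filing_time)
df["physician_reports"] = rng.poisson(mu)

X = sm.add_constant(df[["staff_education", "online_reporting", "short_filing_time"]])
fit = sm.GLM(df["physician_reports"], X, family=sm.families.Poisson()).fit()
print("percent change per predictor:")
print(((np.exp(fit.params.iloc[1:]) - 1) * 100).round(1))
```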


Subject(s)
Medical Staff, Hospital , Nursing Staff, Hospital , Risk Management/statistics & numerical data , Attitude of Health Personnel , Hospitals, Teaching , Humans , Inservice Training , Japan , Medical Staff, Hospital/education , Nursing Staff, Hospital/education , Safety , Surveys and Questionnaires , Systems Analysis
11.
Qual Saf Health Care ; 19(6): e10, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20194219

ABSTRACT

BACKGROUND: Delays in reporting of medical errors may signal deficiencies in the performance of hospital-based incident reporting. We sought to understand the characteristics of hospitals, providers and patient injuries that affect such delays. SETTING AND METHODS: All incident reports filed between May 2004 and August 2005 at the Kyoto University Hospital (KUH) in Japan and the Brigham and Women's Hospital (BWH) in the USA were evaluated. The lag time between each event and the submission of an incident report was computed. Multivariable Poisson regression with overdispersion was used to control for previously described confounding factors and to identify independent predictors of delays. RESULTS: Unadjusted lag times were significantly longer for physicians than other reporters (3.6 vs 1.8 days, p < 0.0001), longer for major than minor events (4.1 vs 1.9 days, p = 0.0006) and longer at KUH than at BWH (3.1 vs 1.0 days, p < 0.0001). In multivariable analysis, lag times at KUH remained nearly three times longer than at BWH (incidence-rate ratio 2.95, 95% CI 2.84 to 3.06, p < 0.0001). CONCLUSIONS: Lag time provides a novel and useful metric for evaluating the performance of hospital-based incident reporting systems. Across two very different health systems, physicians reported far fewer events, with significant delays compared with other providers. Even after controlling for important confounding factors, lag times at KUH were nearly triple those at BWH, suggesting significant differences in the performance of their reporting systems, potentially attributable to either the ease of online reporting at BWH or to the greater attention to patient safety reporting in that hospital.
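
The incidence-rate ratio quoted above is the exponentiated coefficient of a site indicator in such a model. The sketch below shows one way to fit an overdispersion-adjusted (quasi-Poisson) version on simulated lag times; the variable names, effects, and data are assumptions, not the study's.

```python
# Sketch of a quasi-Poisson model for reporting lag times on simulated data;
# the exponentiated site coefficient is read as an incidence-rate ratio (IRR).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
hospital_A = rng.binomial(1, 0.5, n)     # 1 = one site, 0 = the other (assumed)
physician = rng.binomial(1, 0.2, n)      # reporter is a physician
major_event = rng.binomial(1, 0.1, n)

# Lag in days, overdispersed relative to a plain Poisson (gamma mixing).
mu = np.exp(0.0 + 1.0 * hospital_A + 0.5 * physician + 0.6 * major_event)
lag_days = rng.poisson(mu * rng.gamma(shape=1.5, scale=1 / 1.5, size=n))

X = sm.add_constant(np.column_stack([hospital_A, physician, major_event]))
# scale="X2" uses the Pearson chi-square estimate of dispersion for the SEs.
fit = sm.GLM(lag_days, X, family=sm.families.Poisson()).fit(scale="X2")
print(f"incidence-rate ratio, site A vs B: {np.exp(fit.params[1]):.2f}")
```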


Subject(s)
Academic Medical Centers , Mandatory Reporting , Medical Errors , Humans , Japan , Poisson Distribution , Time Factors , United States
12.
Vox Sang ; 98(4): 538-46, 2010 May.
Article in English | MEDLINE | ID: mdl-20002605

ABSTRACT

BACKGROUND AND OBJECTIVES: Continuous monitoring of blood use and feedback on transfusions are effective in decreasing inappropriate blood transfusions. However, traditional methods of monitoring have practical challenges, such as the limited availability of experts and funding. Administrative data including a patient classification system may be employed for risk-adjusted assessment of hospital-wide blood use. MATERIALS AND METHODS: We conducted an audit of blood use at two hospitals and determined proportions of appropriate blood use at each hospital. We then used administrative data of 587,045 cases provided by 73 hospitals to develop two mathematical models to calculate risk-adjusted use of blood products. The first model is a logistic regression model to predict the percentage of transfused patients. Patient demographics, surgery and diagnostic groups were utilized as predictors of transfusion. The second model is a case-mix adjusted model which predicts hospital-wide use of units of blood products from the distribution of diagnosis-related groups. For each model, the observed to expected (O/E) ratio of blood use in each hospital was calculated. We compared resultant ratios with proportions of appropriate blood use in two of the hospitals studied. RESULTS: Both models showed good prediction abilities. O/E ratios calculated using the two models were relevant to proportions of appropriate transfusions. CONCLUSIONS: Risk-adjusted assessments of blood product use based on administrative data allow hospital-wide evaluation of transfusion use. Comparing blood use between different hospitals contributes toward establishing appropriate transfusion practices.
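
A risk-adjusted O/E ratio of the kind used in the first model can be sketched as follows on simulated data: a case-mix model predicts each patient's transfusion probability, and each hospital's observed transfusion count is divided by the sum of its predicted probabilities. The covariates and coefficients below are placeholders, not the study's model.

```python
# Minimal sketch of a risk-adjusted observed-to-expected (O/E) transfusion
# ratio per hospital, using simulated data and placeholder case-mix variables.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20000
df = pd.DataFrame({
    "hospital": rng.integers(0, 20, n),
    "age_over_70": rng.binomial(1, 0.3, n),
    "major_surgery": rng.binomial(1, 0.25, n),
})
logit = (-3.0 + 1.2 * df.age_over_70 + 2.0 * df.major_surgery
         + 0.4 * (df.hospital < 3))        # a few liberal-use hospitals (assumed)
df["transfused"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Expected risk from case-mix only; the hospital identity is deliberately excluded.
X = df[["age_over_70", "major_surgery"]]
df["expected"] = LogisticRegression().fit(X, df.transfused).predict_proba(X)[:, 1]

sums = df.groupby("hospital")[["transfused", "expected"]].sum()
oe_ratio = (sums["transfused"] / sums["expected"]).round(2)
print(oe_ratio)
```

An O/E ratio above 1 flags a hospital transfusing more than its case mix predicts, which is the signal compared against the audit-based proportions of appropriate use.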


Subject(s)
Blood Donors/statistics & numerical data , Blood Transfusion/statistics & numerical data , Blood Transfusion/standards , Hospitals , Humans , Japan , Length of Stay , Logistic Models , Medical Audit/methods , Models, Statistical , Outcome Assessment, Health Care , Retrospective Studies , Risk Assessment , Transfusion Reaction
13.
Phys Rev Lett ; 102(15): 156403, 2009 Apr 17.
Article in English | MEDLINE | ID: mdl-19518659

ABSTRACT

To elucidate the underlying nature of the hidden order (HO) state in the heavy-fermion compound URu2Si2, we measure electrical transport properties of ultraclean crystals in a high-field, low-temperature regime. Unlike previous studies, the present system with much less impurity scattering resolves a distinct anomaly of the Hall resistivity at H* = 22.5 T, well below the destruction field of the HO phase, ≃ 36 T. In addition, a novel quantum oscillation appears above a magnetic field slightly below H*. These results indicate an abrupt reconstruction of the Fermi surface, which implies a possible phase transition well within the HO phase caused by a band-dependent destruction of the HO parameter.

14.
Clin Ter ; 159(3): 155-63, 2008.
Article in English | MEDLINE | ID: mdl-18594744

ABSTRACT

AIMS: Several studies have examined outcome variations between patients treated with an open appendectomy (OA) and a laparoscopic appendectomy (LA). However, no studies have assessed differences in cost and outcome while adjusting for age and hospital function or region. This study examines the differences in cost and procedure-related complications between OA and LA procedures. MATERIALS AND METHODS: This study included 1703 appendectomy patients treated for appendicitis in 76 academic hospitals and 80 community hospitals. Demographic variables, clinical variables, length of stay (LOS), total charges (TC; US$) and complication rates were analyzed for both OA and LA procedures. The specific contributions of LA to LOS, TC, and complication rate were identified using multivariate analysis. RESULTS: 1469 (86.3%) patients underwent OA and 234 (13.7%) underwent LA. Complicated appendicitis was diagnosed in 13.1% of OA cases and 15.4% of LA cases. The complication rates were 3.4% in OA and 2.6% in LA (p=0.504). There were significant differences in LOS and TC by severity of appendicitis and by procedure type. After risk adjustment for the other study variables, LA was associated with a higher TC than OA ($1458, p<0.001). However, there were no significant differences in LOS or complication rates between the two treatment groups. CONCLUSIONS: This study suggests that LA increases cost but has no significant impact on LOS or complication rates. However, analyses of other outcomes such as quality of life, and subgroup analyses for obese patients, are needed for a more complete economic analysis of OA and LA.


Subject(s)
Appendectomy/economics , Appendectomy/methods , Appendicitis/surgery , Laparoscopy/economics , Adolescent , Adult , Aged , Costs and Cost Analysis , Female , Humans , Male , Middle Aged , Young Adult
15.
J Int Med Res ; 35(5): 590-6, 2007.
Article in English | MEDLINE | ID: mdl-17900397

ABSTRACT

This study aimed to develop a new risk-adjustment method to assess acute myocardial infarction (AMI) in-hospital mortality. Risk-adjustment was based on variables obtained from administrative data from Japanese hospitals, and included factors such as age, gender, primary diagnosis and co-morbidity. The infarct location was determined using the criteria of the International Classification of Diseases (10th version). Potential comorbidity risk factors for mortality were selected based on previous studies and their critical influence analysed to identify major co-morbidities. The remaining minor co-morbidities were then divided into two groups based on their medical implications. The major co-morbidities included shock, pneumonia, cancer and chronic renal failure. The two minor co-morbidity groups also demonstrated a substantial impact on mortality. The model was then used to assess clinical performance in the participating hospitals. Our model reliably employed the available data for the risk-adjustment of AMI mortality and provides a new approach to evaluating clinical performance.


Subject(s)
Hospital Mortality , Models, Statistical , Myocardial Infarction/mortality , Aged , Aged, 80 and over , Female , Humans , Japan , Male , Middle Aged , Risk Adjustment
16.
Chem Senses ; 26(8): 1023-7, 2001 Oct.
Article in English | MEDLINE | ID: mdl-11595679

ABSTRACT

The whole-cell patch-clamp method was applied to olfactory receptor cells in slice preparations made from bullfrog olfactory epithelium. Under voltage-clamp conditions, olfactory receptor cells showed a transient inward current followed by a steady outward current in response to depolarizing voltage steps, as has been shown in the isolated preparation. The input resistance was 5.4 +/- 3.9 GΩ and the capacitance 21.9 +/- 9.7 pF. Under current-clamp conditions, depolarization of cells by current injection induced action potentials. In 13 out of 20 cells, spike generation was repetitive with a maximum frequency of 24 Hz. The frequency of the repetitive discharges increased as the injected current was increased. The relationship between the size of the injected current and the firing frequency could be well fitted by the Michaelis-Menten equation, indicating that the spike generation site lacks a non-linear boosting system. The slice preparation developed here would provide a powerful tool to study the spike-encoding system of olfactory receptor cells.
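
A fit of the current-frequency relation with a Michaelis-Menten form f(I) = f_max * I / (K + I) can be sketched as below. The data points are invented for illustration and are not the recorded values.

```python
# Hedged sketch of fitting firing frequency vs. injected current with a
# Michaelis-Menten (saturating) curve; the data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(i, f_max, k):
    """Saturating frequency-current relation: f_max * i / (k + i)."""
    return f_max * i / (k + i)

current_pA = np.array([5, 10, 20, 40, 80, 160], dtype=float)
firing_hz = np.array([6.0, 10.5, 15.0, 19.0, 21.5, 23.0])

(f_max, k), _ = curve_fit(michaelis_menten, current_pA, firing_hz, p0=[25, 20])
print(f"f_max = {f_max:.1f} Hz, K = {k:.1f} pA")
```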


Subject(s)
Electrophysiology/methods , Olfactory Receptor Neurons/physiology , Receptors, Odorant/physiology , Smell , Animals , Kinetics , Microscopy, Fluorescence , Patch-Clamp Techniques , Rana catesbeiana
17.
Nihon Geka Gakkai Zasshi ; 101(10): 697-702, 2000 Oct.
Article in Japanese | MEDLINE | ID: mdl-11107593

ABSTRACT

With advances in technology, day surgery has become more efficient and has expanded remarkably due to the policies and economic incentives in some countries. In addition, day surgery could potentially serve as a model of explicit accountability for quality assurance and institutional processes for continuous improvement. It is recommended that Japan adapt its policies and systems to facilitate day surgery after a thorough analysis of the health effects and cost structure. Cost shifts to other services and parties should be considered carefully from a long-term, comprehensive perspective. It could be socially beneficial to subsidize start-up costs for the establishment of day surgery units, since significant capital and human resources are required for quality assurance. The encouragement of day surgery could be a driving force for the improvement of clinical technology and patient quality of life. It would foster collaboration between health service providers, including during preparation and follow-up, and allow patients to participate as partners in clinical processes and decisions. To ensure constant readiness, day surgery environments should be equipped with multisite, standardized databases on clinical and economic performance. An expansion of day surgery facilities could lead to the development of a new mechanism of professional quality improvement and to a new health insurance reimbursement system based on clinical achievements and resources.


Subject(s)
Ambulatory Surgical Procedures/economics , Economics, Medical , Health Policy/economics , Ambulatory Surgical Procedures/trends , Costs and Cost Analysis , Economics, Medical/organization & administration , Humans , Patient Satisfaction/economics , Quality Assurance, Health Care
18.
Int J Qual Health Care ; 12(5): 395-401, 2000 Oct.
Article in English | MEDLINE | ID: mdl-11079219

ABSTRACT

OBJECTIVE: The objective of this study was to detect whether there was any difference among the characteristics of patient satisfaction between two patient emphasis groups: patients demanding technical elements of hospital care and patients demanding interpersonal elements. DESIGN AND SETTING: The sample for this study was drawn from in-patients discharged from 77 voluntarily participating hospitals throughout Japan. The relationship between overall satisfaction with hospital care and patient satisfaction, and the evaluation of a hospital's reputation, was explored by stepwise multiple regression analysis of 33 variables relevant to aspects of hospital care for each patient group. RESULTS: In the interpersonal emphasis (IE) group, 'nurse's kindness and warmth' was associated significantly with overall satisfaction, while 'skill of nursing care' and 'nurse's explanation' were significant predictors of overall satisfaction in the technical emphasis (TE) group. On the other hand, 'doctor's clinical competence', 'recovery from distress and anxiety', and items pertaining to the hospital's reputation were significantly related to overall satisfaction in both emphasis groups. CONCLUSION: For overall patient satisfaction, it is essential to satisfy specific items related to the aspect of hospital care emphasized by the patient. Specific significant predictors of overall satisfaction (e.g. 'doctor's clinical competence') were indispensable measures of professional performance in hospital care, irrespective of the patients' emphasis. A positive perception of hospital reputation items might increase overall patient satisfaction with Japanese hospitals.
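
A forward stepwise selection of satisfaction predictors, in the spirit of the analysis above, might be coded as in the sketch below; it uses AIC rather than the paper's entry/removal criteria, and the predictor names and simulated data are placeholders rather than the survey's 33 items.

```python
# Rough sketch of forward stepwise selection of satisfaction predictors by AIC
# on simulated data; predictor names are placeholders, not the survey items.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 800
df = pd.DataFrame(rng.normal(size=(n, 5)),
                  columns=["nurse_kindness", "nursing_skill", "doctor_competence",
                           "recovery_from_anxiety", "hospital_reputation"])
df["overall_satisfaction"] = (0.5 * df.doctor_competence
                              + 0.4 * df.recovery_from_anxiety
                              + 0.3 * df.hospital_reputation
                              + rng.normal(scale=1.0, size=n))

def forward_stepwise(data: pd.DataFrame, outcome: str) -> list:
    """Greedily add the predictor that most improves AIC; stop when none does."""
    selected = []
    remaining = [c for c in data.columns if c != outcome]
    best_aic = np.inf
    while remaining:
        aics = {}
        for cand in remaining:
            X = sm.add_constant(data[selected + [cand]])
            aics[cand] = sm.OLS(data[outcome], X).fit().aic
        best_cand = min(aics, key=aics.get)
        if aics[best_cand] >= best_aic:
            break
        best_aic = aics[best_cand]
        selected.append(best_cand)
        remaining.remove(best_cand)
    return selected

print(forward_stepwise(df, "overall_satisfaction"))
```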


Subject(s)
Hospital Administration/standards , Hospital-Patient Relations , Outcome Assessment, Health Care/methods , Patient Satisfaction/statistics & numerical data , Chi-Square Distribution , Health Services Needs and Demand , Hospital Administration/statistics & numerical data , Hospitals, Private/standards , Hospitals, Public/standards , Humans , Japan , Professional-Patient Relations , Public Relations , Quality of Health Care , Regression Analysis
19.
Ind Health ; 37(2): 237-42, 1999 Apr.
Article in English | MEDLINE | ID: mdl-10319572

ABSTRACT

We conducted a randomized controlled trial (RCT) to examine the effects of mailed advice on reducing psychological distress, blood pressure, serum lipids, and sick leave of workers employed in a manufacturing plant in Japan. Those who indicated higher psychological distress (defined as having GHQ scores of three or greater) in the baseline questionnaire survey (n = 226) were randomly assigned to an intervention group or a control group. Individualized letters were sent to the subjects of the intervention group, informing them of their stress levels and recommending an improvement in daily habits and other behaviors to reduce stress. Eighty-one and 77 subjects in the intervention and control groups, respectively, responded to the one-year follow-up survey. No significant intervention effect was observed for the GHQ scores, blood pressure, serum lipids, or sick leave (p > 0.05). The intervention effect was marginally significant for changes in regular breakfasts and daily alcohol consumption (p = 0.09). The intervention effect was marginally significant for the GHQ scores among those who initially did not eat breakfast regularly (p = 0.06). The study suggests that sending mailed advice alone is not an effective measure for worksite stress reduction. Mailed advice which focuses on a particular subgroup (e.g., those who do not eat breakfast regularly) may be more effective.


Subject(s)
Burnout, Professional/prevention & control , Correspondence as Topic , Health Education/methods , Absenteeism , Adult , Blood Pressure , Burnout, Professional/blood , Burnout, Professional/diagnosis , Burnout, Professional/physiopathology , Cholesterol/blood , Female , Follow-Up Studies , Health Behavior , Health Knowledge, Attitudes, Practice , Humans , Japan , Male , Middle Aged , Surveys and Questionnaires , Triglycerides/blood
20.
Environ Health Prev Med ; 4(1): 39-48, 1999 Apr.
Article in English | MEDLINE | ID: mdl-21432170

ABSTRACT

A trial investigation of subjects gathered for annual health checkups was performed to identify domains of quality of life in the healthy public, and to explore changes in their demographic characteristics with a view to engaging them in health service activities in the community. A total of 1,096 eligible subjects aged 30-79 years were investigated. The survey period was from September to December 1997. The subjects were questioned about ten preliminarily prepared quality of life domains, assumed to be the most important in their lives, in terms of order of priority, importance, and satisfaction levels. The most important domain in both the male and female subjects' lives was personal health, followed by relationships with family, though the mean importance scores for personal health and relationships with family were almost equivalent. The mean scores for work decreased abruptly in males over 60 years of age. A first, large principal component and a second, relatively small one were extracted through principal components analysis. The proposed ten domains of quality of life are most likely valid and reliable in terms of the results analyzed and a comparison with a reference study. Relationships with family is an effective cue for health service activities in the community, and the significance of work for quality of life in the healthy public will have to be taken into account separately, especially in males.
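
The two-component structure described above can be illustrated with a principal components analysis like the sketch below, run on simulated responses in which most domains load on one broad factor and work-related domains add a smaller second factor. The domain structure, sample, and loadings are assumptions, not the study's data.

```python
# Illustrative PCA of ten quality-of-life domain scores on simulated responses;
# the factor structure below is an assumption for demonstration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 1096
general = rng.normal(0, 1, n)      # one broad satisfaction factor (assumed)
work = rng.normal(0, 1, n)         # a smaller work-related factor (assumed)
domains = np.column_stack(
    [general + 0.5 * rng.normal(0, 1, n) for _ in range(8)]
    + [0.5 * general + work + 0.5 * rng.normal(0, 1, n) for _ in range(2)]
)

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(domains))
print("variance explained by PC1 and PC2:",
      pca.explained_variance_ratio_.round(2))
```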
