Results 1 - 11 of 11
1.
Yearb Med Inform ; 31(1): 33-39, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35654424

ABSTRACT

OBJECTIVES: Patient portals are increasingly implemented to improve patient involvement and engagement. Here we seek to provide an overview of ways to mitigate existing concerns that these technologies increase inequity and bias and do not reach those who could benefit most from them. METHODS: Based on the current literature, we review the limitations of existing evaluations of patient portals in relation to addressing health equity, literacy, and bias; outline challenges evaluators face when conducting such evaluations; and suggest methodological approaches that may address existing shortcomings. RESULTS: Suggested approaches include addressing the needs of various stakeholders before deploying patient portals, involving vulnerable groups in user-centred design, and studying unanticipated consequences and impacts of information systems in use over time. CONCLUSIONS: Formative approaches to evaluation can help to address existing shortcomings and facilitate the development and implementation of patient portals in an equitable way, thereby promoting the creation of resilient health systems.


Subject(s)
Health Equity , Patient Portals , Humans , Patient Participation , Bias
2.
Yearb Med Inform ; 30(1): 56-60, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33882604

ABSTRACT

OBJECTIVES: To highlight the role of technology assessment in the management of the COVID-19 pandemic. METHOD: An overview of existing research and evaluation approaches, along with expert perspectives drawn from the International Medical Informatics Association (IMIA) Working Group on Technology Assessment and Quality Development in Health Informatics and the European Federation for Medical Informatics (EFMI) Working Group for Assessment of Health Information Systems. RESULTS: Evaluation of digital health technologies for COVID-19 should be based on their technical maturity as well as the scale of implementation. For mature technologies like telehealth, whose efficacy has been previously demonstrated, pragmatic, rapid evaluation using the complex systems paradigm, which accounts for multiple sociotechnical factors, may be more suitable for examining their effectiveness and emerging safety concerns in new settings. New technologies, particularly those intended for use on a large scale such as digital contact tracing, will require assessment of their usability as well as performance prior to deployment, after which evaluation should shift to using a complex systems paradigm to examine the value of the information provided. The success of a digital health technology depends on the value of the information it provides relative to the sociotechnical context of the setting where it is implemented. CONCLUSION: Commitment to evaluation using the evidence-based medicine and complex systems paradigms will be critical to ensuring safe and effective use of digital health technologies for COVID-19 and future pandemics. There is an inherent tension, which needs to be negotiated, between evaluation and the imperative to urgently deploy solutions.


Subject(s)
COVID-19 , Medical Informatics , Technology Assessment, Biomedical , Humans
3.
Crit Care Med ; 47(8): e662-e668, 2019 08.
Article in English | MEDLINE | ID: mdl-31135497

ABSTRACT

OBJECTIVES: To compare methods to adjust for confounding by disease severity during multicenter intervention studies in the ICU, when different disease severity measures are collected across centers. DESIGN: In silico simulation study using national registry data. SETTING: Twenty mixed ICUs in The Netherlands. SUBJECTS: 55,655 ICU admissions between January 1, 2011, and January 1, 2016. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: To mimic an intervention study with confounding, a fictitious treatment variable was simulated whose effect on the outcome was confounded by Acute Physiology and Chronic Health Evaluation (APACHE) IV predicted mortality (a common measure of disease severity). Diverse, realistic scenarios were investigated in which the availability of disease severity measures (i.e., APACHE IV, APACHE II, and Simplified Acute Physiology Score (SAPS) II scores) varied across centers. For each scenario, eight different methods to adjust for confounding were used to estimate the (fictitious) treatment effect. These were compared in terms of relative (%) and absolute (odds ratio) bias to a reference scenario where the treatment effect was estimated following correction for the APACHE IV scores from all centers. Complete neglect of differences in disease severity measures across centers resulted in bias ranging from 10.2% to 173.6% across scenarios, and no commonly used methodology, such as two-stage modeling or score standardization, was able to effectively eliminate bias. In scenarios where some of the included centers had only APACHE II or SAPS II available (and not APACHE IV), either restriction of the analysis to APACHE IV centers alone or multiple imputation of APACHE IV scores resulted in the least relative bias (0.0% and 5.1% for APACHE II, respectively, and 0.0% and 4.6% for SAPS II, respectively). In scenarios where some centers used APACHE II, regression calibration also yielded relatively low bias (12.4%); this was not true if these same centers only had SAPS II available (54.8%). CONCLUSIONS: When different disease severity measures are available across centers, methods to control for confounding by disease severity may differ importantly in performance. When planning multicenter studies, researchers should make contingency plans either to limit the use of different disease severity measures across centers or to properly incorporate them in the statistical analysis.
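To make the confounding mechanism in this simulation design concrete, the following is a minimal sketch, not the authors' registry-based code: a fictitious treatment is assigned more often to severely ill patients, the true treatment effect on mortality is null, and a crude logistic-regression estimate is compared with one adjusted for the severity score. All parameter values and model forms are arbitrary assumptions.

```python
# Minimal sketch of confounding by disease severity (illustrative only;
# parameter values and model forms are assumptions, not the study's code).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000

# Severity score on the logit scale (APACHE-IV-like predicted mortality).
severity = rng.normal(loc=-2.0, scale=1.0, size=n)

# Treatment assignment depends on severity, creating confounding.
treated = rng.binomial(1, 1 / (1 + np.exp(-0.8 * severity)))

# Mortality depends on severity only: the true treatment effect is null (OR = 1).
died = rng.binomial(1, 1 / (1 + np.exp(-severity)))

def treatment_or(*covariates):
    """Odds ratio for treatment from a logistic model; treatment is column 1."""
    X = sm.add_constant(np.column_stack(covariates))
    return np.exp(sm.Logit(died, X).fit(disp=False).params[1])

print(f"crude OR:    {treatment_or(treated):.2f}")            # biased away from 1
print(f"adjusted OR: {treatment_or(treated, severity):.2f}")  # approximately 1
```

In this framing, multiple imputation of a missing severity score, one of the better-performing methods in the study, amounts to replacing `severity` with imputed values for centers that did not record it before fitting the adjusted model.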


Subject(s)
Intensive Care Units , Severity of Illness Index , Simplified Acute Physiology Score , APACHE , Databases, Factual , Humans , Netherlands , Outcome Assessment, Health Care , Patient Admission
4.
Yearb Med Inform ; 28(1): 128-134, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31022752

ABSTRACT

OBJECTIVES: This paper draws attention to: i) key considerations for evaluating artificial intelligence (AI) enabled clinical decision support; and ii) challenges and practical implications of AI design, development, selection, use, and ongoing surveillance. METHOD: A narrative review of existing research and evaluation approaches, along with expert perspectives drawn from the International Medical Informatics Association (IMIA) Working Group on Technology Assessment and Quality Development in Health Informatics and the European Federation for Medical Informatics (EFMI) Working Group for Assessment of Health Information Systems. RESULTS: There is a rich history and tradition of evaluating AI in healthcare. While evaluators can learn from past efforts and build on best-practice evaluation frameworks and methodologies, questions remain about how to evaluate the safety and effectiveness of AI systems that dynamically harness vast amounts of genomic, biomarker, phenotype, electronic record, and care delivery data from across health systems. This paper first provides a historical perspective on the evaluation of AI in healthcare. It then examines key challenges of evaluating AI-enabled clinical decision support during design, development, selection, use, and ongoing surveillance. Practical aspects of evaluating AI in healthcare, including approaches to evaluation and indicators to monitor AI, are also discussed. CONCLUSION: Commitment to rigorous initial and ongoing evaluation will be critical to ensuring the safe and effective integration of AI in complex sociotechnical settings. Specific enhancements required for the new generation of AI-enabled clinical decision support will emerge through practical application.
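As one hypothetical example of an ongoing-surveillance indicator for AI-enabled clinical decision support, the sketch below tracks the clinician override rate for AI-generated alerts in a sliding window; a sustained rise can flag model drift or loss of user trust. The class, thresholds, and event stream are illustrative assumptions, not an instrument from the paper.

```python
# Hypothetical surveillance indicator for an AI alerting system: the rolling
# clinician override rate. Window size and threshold are assumed values.
from collections import deque

class OverrideRateMonitor:
    """Track the fraction of AI alerts overridden by clinicians in a sliding window."""
    def __init__(self, window: int = 500, alert_threshold: float = 0.6):
        self.events = deque(maxlen=window)   # True = alert was overridden
        self.alert_threshold = alert_threshold

    def record(self, overridden: bool) -> None:
        self.events.append(overridden)

    def override_rate(self) -> float:
        return sum(self.events) / len(self.events) if self.events else 0.0

    def needs_review(self) -> bool:
        # Require a reasonably full window before raising the flag.
        return len(self.events) >= 100 and self.override_rate() > self.alert_threshold

monitor = OverrideRateMonitor()
for overridden in [True] * 80 + [False] * 40:   # toy event stream
    monitor.record(overridden)
print(monitor.override_rate(), monitor.needs_review())  # ~0.67, True
```

In practice such an indicator would sit alongside others, for example calibration and discrimination tracked against observed outcomes over time.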


Subject(s)
Artificial Intelligence , Decision Support Systems, Clinical , Evaluation Studies as Topic , Machine Learning , Program Evaluation/methods
5.
Stud Health Technol Inform ; 222: 304-11, 2016.
Article in English | MEDLINE | ID: mdl-27198112

ABSTRACT

Progress in science is based on evidence from well-designed studies. However, the publication quality of health IT evaluation studies is often low, making exploitation of published evidence within systematic reviews and meta-analyses a challenging task. Consequently, reporting guidelines have been published and their use recommended. After a short overview of publication guidelines relevant to health IT evaluation studies (such as CONSORT and PRISMA), the STARE-HI guidelines for publishing health IT evaluation studies are presented. Health IT evaluation publications should take published guidelines into account to improve the quality of published evidence. Publication guidelines, by addressing publication bias and low study quality, help strengthen the evidence available in the public domain and enable effective evidence-based health informatics.


Subject(s)
Evaluation Studies as Topic , Guidelines as Topic , Humans , Medical Informatics , Periodicals as Topic , Research Report/standards
6.
Stud Health Technol Inform ; 222: 324-35, 2016.
Article in English | MEDLINE | ID: mdl-27198114

ABSTRACT

Low- and middle-income countries (LMICs) bear a disproportionate burden of major global health challenges. Health IT could be a promising solution in these settings, but LMICs have the weakest evidence for the application of health IT to enhance quality of care. Various systematic reviews show significant challenges in the implementation and evaluation of health IT. Key barriers to implementation include lack of adequate infrastructure, inadequate and poorly trained health workers, lack of appropriate legislation and policies, and inadequate financial resources. Evaluation studies remain scarce, indicating the early state of generation of evidence to demonstrate the effectiveness of health IT in improving health outcomes and processes. The implementation challenges need to be addressed. The introduction of new guidelines such as GEP-HI and STARE-HI, as well as models for evaluation such as SEIPS, and the prioritization of evaluations in the eHealth strategies of LMICs provide an opportunity to focus on strategic concepts that transform the demands of a modern integrated health care system into solutions that are secure, efficient, and sustainable.


Subject(s)
Developing Countries , Evaluation Studies as Topic , Medical Informatics/organization & administration , Guidelines as Topic , Health Personnel/standards , Humans , Medical Informatics/economics , Medical Informatics/legislation & jurisprudence , Medical Informatics/methods , Review Literature as Topic , Telemedicine/methods
9.
Eur J Public Health ; 24(1): 73-8, 2014 Feb.
Article in English | MEDLINE | ID: mdl-23543677

ABSTRACT

RESEARCH OBJECTIVE: Reliable and unambiguously defined performance indicators are fundamental to objective and comparable measurements of hospitals' quality of care. In two separate case studies (intensive care and breast cancer care), we investigated whether differences in the interpretation of performance indicator definitions affected the indicator scores. DESIGN: Information about possible definition interpretations was obtained through a short telephone survey and a Web survey. We quantified the interpretation differences using a patient-level dataset from a national clinical registry (Case I) and a hospital's local database (Case II). In Case II, additional textual information about the patients' status was available, which was reviewed to gain more insight into the origin of the differences. PARTICIPANTS: For Case I, we investigated 15,596 admissions to 33 intensive care units in 2009. Case II consisted of 144 patients admitted with a breast tumour and surgically treated in one hospital in 2009. RESULTS: In both cases, hospitals reported different interpretations of the indicators, which led to significant differences in the indicator values. Case II revealed that these differences could be explained by patient-related factors such as severe comorbidity and patients' individual preference regarding surgery date. CONCLUSIONS: With this article, we hope to increase awareness of pitfalls regarding indicator definitions and the quality of the underlying data. To enable objective and comparable measurements of hospitals' quality of care, organizations that request performance information should formalize the indicators they use, including standardization of all data elements of which the indicator is composed (procedures, diagnoses).
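The effect of definition interpretation can be illustrated with a toy example; the data, the five-week cut-off, and the exclusion rule are hypothetical, not the study's actual indicators. The same "surgery within five weeks of diagnosis" indicator is computed with and without excluding delays caused by patients' individual preferences.

```python
# Toy illustration (hypothetical data): one indicator, two plausible readings
# of its definition, two different scores for the same hospital.
patients = [
    # (days from diagnosis to surgery, delay caused by patient preference?)
    (20, False), (30, False), (33, False), (40, True),
    (45, True), (50, False), (28, False), (60, True),
]

def indicator(records, exclude_preference_delays: bool) -> float:
    """Fraction of eligible patients operated on within 35 days."""
    eligible = [(days, pref) for days, pref in records
                if not (exclude_preference_delays and pref)]
    timely = sum(1 for days, _ in eligible if days <= 35)
    return timely / len(eligible)

print(f"strict reading:  {indicator(patients, False):.0%}")  # all patients count -> 50%
print(f"lenient reading: {indicator(patients, True):.0%}")   # preference delays excluded -> 80%
```

Both readings are defensible under an ambiguous definition, which is why the article argues that every data element of an indicator needs to be formalized before hospital scores can be compared.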


Subject(s)
Hospitals/standards , Quality Indicators, Health Care/standards , Academic Medical Centers/standards , Academic Medical Centers/statistics & numerical data , Breast Neoplasms/surgery , Female , Health Care Surveys , Hospital Bed Capacity , Hospitals, Teaching/standards , Hospitals, Teaching/statistics & numerical data , Humans , Intensive Care Units/standards , Intensive Care Units/statistics & numerical data , Netherlands/epidemiology , Quality Indicators, Health Care/statistics & numerical data , Quality of Health Care/standards , Quality of Health Care/statistics & numerical data , Registries , Research Design/standards , Research Design/statistics & numerical data , Respiration, Artificial/standards , Respiration, Artificial/statistics & numerical data , Time Factors
10.
J Telemed Telecare ; 16(8): 447-53, 2010.
Article in English | MEDLINE | ID: mdl-20921289

ABSTRACT

Tertiary teledermatology (TTD), in which a general dermatologist consults a specialized dermatologist on difficult cases, is a relatively new telemedicine service. We evaluated TTD in a Dutch university hospital, where 13 general dermatologists used TTD to consult 11 specialized dermatologists and two residents at the university medical centre. We measured avoided referrals to the university centre, the usability of the system, and user acceptance. During a three-month study, general dermatologists initiated 28 TTD consultations. In 17 of the consultations (61%), the general dermatologists would have referred their patients to the university centre if teledermatology had not been available. After teledermatology, referral was no longer necessary for 12 of these 17 consultations (71%). The mean usability score (0-100) across all users was 80. All dermatologists were satisfied with TTD (mean satisfaction of 7.6 on a 10-point scale) and acceptance was high. Baseline measurements showed that half of tertiary referrals were suitable for TTD. These results suggest that TTD reduces unnecessary physical referrals and that users are satisfied with it. A large-scale evaluation is now required.


Subject(s)
Attitude of Health Personnel , Dermatology , Remote Consultation , Skin Diseases/diagnosis , Adult , Aged , Aged, 80 and over , Dermatology/methods , Female , Humans , Male , Middle Aged , Netherlands , Pilot Projects , Telemedicine , Young Adult
11.
Telemed J E Health ; 16(1): 56-62, 2010.
Article in English | MEDLINE | ID: mdl-20064068

ABSTRACT

Telemedicine is becoming widely used in healthcare. Dermatology, because of its visual character, is especially suitable for telemedicine applications. Most common is teledermatology between general practitioners and dermatologists (secondary teledermatology). Another form of the teledermatology process is communication among dermatologists (tertiary teledermatology). The objective of this systematic review is to give an overview of studies on tertiary teledermatology, with emphasis on the categories of use. A systematic literature search for tertiary teledermatology studies used all databases of the Cochrane Library, MEDLINE (1966-November 2007), and EMBASE (1980-November 2007). Categories of use were identified for all included articles, and the modalities of tertiary teledermatology were extracted, together with the technology, the setting, the outcome measures, and their results. The search resulted in 1,377 publications, of which 11 were included. Four categories of use were found: obtaining an expert opinion from a specialized, often academic dermatologist (6/11); resident training (2/11); continuing medical education (4/11); and second opinion from a nonspecialized dermatologist (2/11). Three modalities were found: a teledermatology consultation application (7/11), a Web site (2/11), and an e-mail list (1/11). The majority (7/11) used store-and-forward, and 3/11 used both store-and-forward and real-time. Outcome measures mentioned were learning effect (6), costs (5), diagnostic accuracy (1), validity (2), reliability (2), patient and physician satisfaction (1), and efficiency improvement (3). Tertiary teledermatology's main category of use is obtaining an expert opinion from a specialized, often academic dermatologist. Tertiary teledermatology research is still in early development. Future research should focus on identifying the scale of tertiary teledermatology and on which teledermatology modality is most suited for which purpose in communication among dermatologists.


Subject(s)
Dermatology , Interprofessional Relations , Telemedicine/statistics & numerical data , Education, Medical, Continuing/methods , Humans , Remote Consultation/statistics & numerical data , Staff Development/methods