Results 1 - 20 of 111
1.
BMC Med ; 22(1): 212, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38807210

ABSTRACT

BACKGROUND: To examine the effectiveness and safety of a data sharing and comprehensive management platform for institutionalized older patients. METHODS: We applied an information technology-supported integrated health service platform to patients living in long-term care hospitals (LTCHs) and nursing homes (NHs) in a cluster randomized controlled study. We enrolled 555 patients aged 65 or older (461 from 7 LTCHs, 94 from 5 NHs). For the intervention group, a tablet-based platform comprising comprehensive geriatric assessment, disease management, potentially inappropriate medication (PIM) management, a rehabilitation program, and screening for adverse events with warning alarms was provided to physicians and nurses. The control group was managed with usual care. Co-primary outcomes were (1) control rates of hypertension and diabetes, (2) medication adjustment (PIM prescription rate, proportion of polypharmacy), and (3) a composite of potential quality-of-care problems (composite quality indicator) from the interRAI assessment system, assessed after 3 months of intervention. RESULTS: We screened 1119 patients and included 555 patients (control: 289, intervention: 266) in the analysis. Patients allocated to the intervention group had better cognitive function and took fewer medications and PIMs at baseline. The diabetes control rate (OR = 2.61, 95% CI 1.37-4.99, p = 0.0035), discontinuation of PIMs (OR = 4.65, 95% CI 2.41-8.97, p < 0.0001), reduction of medication in patients with polypharmacy (OR = 1.98, 95% CI 1.24-3.16, p = 0.0042), and number of PIMs used (β = -0.27, p < 0.0001) improved significantly in the intervention group. There was no significant difference in the hypertension control rate (OR = 0.54, 95% CI 0.20-1.43, p = 0.2129), proportion of polypharmacy (OR = 1.40, 95% CI 0.75-2.60, p = 0.2863), or improvement of the composite quality indicator (β = 0.03, p = 0.2094).
For secondary outcomes, cognitive and motor function, quality of life, and unplanned hospitalization did not differ significantly between groups. CONCLUSIONS: The information technology-supported integrated health service effectively reduced PIM use and improved diabetes control among older patients in LTCHs or NHs without functional decline or increased healthcare utilization. TRIAL REGISTRATION: Clinical Research Information Service, KCT0004360. Registered on 21 October 2019.
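The odds ratios above come from the trial's adjusted models; the basic quantity can nonetheless be illustrated with an unadjusted Wald odds ratio from a 2x2 table. The counts in the usage example are hypothetical, not taken from the study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald odds ratio and 95% CI from a 2x2 table:
    a/b = intervention events/non-events, c/d = control events/non-events."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# hypothetical counts: 20/10 events in intervention, 10/20 in control
print(odds_ratio_ci(20, 10, 10, 20))
```

This reproduces the familiar "OR (95% CI lower-upper)" triplets reported in the abstract, minus the covariate adjustment.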


Subject(s)
Delivery of Health Care, Integrated , Long-Term Care , Humans , Aged , Male , Female , Aged, 80 and over , Long-Term Care/methods , Information Technology , Nursing Homes , Polypharmacy
3.
J Clin Med ; 13(10)2024 May 15.
Article in English | MEDLINE | ID: mdl-38792452

ABSTRACT

Background/Objectives: There have been widespread reports of persistent symptoms in both children and adults after SARS-CoV-2 infection, giving rise to debates on whether it should be regarded as a separate clinical entity from other postviral syndromes. This study aimed to characterize the clinical presentation of post-acute symptoms and conditions in the Korean pediatric and adult populations. Methods: A retrospective analysis was performed using a national, population-based database, which was encoded using the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM). We compared individuals diagnosed with SARS-CoV-2 to those diagnosed with influenza, focusing on the risk of developing prespecified symptoms and conditions commonly associated with the post-acute sequelae of COVID-19. Results: Propensity score matching yielded 1,656 adult and 343 pediatric SARS-CoV-2 and influenza pairs. Ninety days after diagnosis, no symptoms were found to have elevated risk in either adults or children when compared with influenza controls. Conversely, at 1 day after diagnosis, adults with SARS-CoV-2 exhibited a significantly higher risk of developing abnormal liver function tests, cardiorespiratory symptoms, constipation, cough, thrombophlebitis/thromboembolism, and pneumonia. In contrast, children diagnosed with SARS-CoV-2 did not show an increased risk for any symptoms during either the acute or post-acute phase. Conclusions: In the acute phase after infection, SARS-CoV-2 is associated with an elevated risk of certain symptoms in adults. The risk of developing post-acute COVID-19 sequelae is not significantly different from that of having postviral symptoms in children in both the acute and post-acute phases, and in adults in the post-acute phase. These observations warrant further validation through studies that account for the severity of initial illness, vaccination status, and variant type.
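The propensity score matching used above can be sketched as greedy 1:1 nearest-neighbor matching on precomputed scores. The abstract does not state the exact matching algorithm or caliper, so the caliper value and the ids/scores below are illustrative assumptions:

```python
def greedy_match(treated, controls, caliper=0.05):
    """1:1 greedy nearest-neighbor matching on precomputed propensity
    scores; each control is used at most once, within the caliper.
    treated/controls: lists of (id, score) pairs."""
    available = dict(controls)                     # id -> score
    pairs = []
    for t_id, t_score in sorted(treated, key=lambda p: p[1]):
        best_id, best_dist = None, caliper
        for c_id, c_score in available.items():
            dist = abs(t_score - c_score)
            if dist <= best_dist:
                best_id, best_dist = c_id, dist
        if best_id is not None:
            pairs.append((t_id, best_id))
            del available[best_id]
    return pairs

# hypothetical SARS-CoV-2 (treated) and influenza (control) scores
print(greedy_match([("t1", 0.30), ("t2", 0.50)],
                   [("c1", 0.31), ("c2", 0.49), ("c3", 0.90)]))
```

Unmatched treated subjects (no control within the caliper) are simply dropped, which is why matched counts are smaller than the screened cohorts.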

4.
Sci Rep ; 14(1): 11085, 2024 05 15.
Article in English | MEDLINE | ID: mdl-38750084

ABSTRACT

We developed artificial intelligence models to predict the brain metastasis (BM) treatment response after stereotactic radiosurgery (SRS) using longitudinal magnetic resonance imaging (MRI) data and evaluated how prediction accuracy changed with the number of sequential MRI scans. We included four sequential MRI scans for 194 patients with BM and 369 target lesions in the development dataset. The data were randomly split (8:2 ratio) for training and testing. For external validation, 172 MRI scans from 43 patients with BM and 62 target lesions were additionally enrolled. The maximum axial diameter (Dmax), radiomics, and deep learning (DL) models were generated for comparison. In the DL arm, we evaluated a simple convolutional neural network (CNN) model and a convolutional gated recurrent unit (Conv-GRU)-based CNN model. The Conv-GRU model outperformed the simple CNN models. For both datasets, the area under the curve (AUC) was significantly higher for the two-dimensional (2D) Conv-GRU model than for the 3D Conv-GRU, Dmax, and radiomics models. The accuracy of the 2D Conv-GRU model increased with the number of follow-up studies. In conclusion, using longitudinal MRI data, the 2D Conv-GRU model outperformed all other models in predicting the treatment response after SRS of BM.


Subject(s)
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Radiosurgery , Humans , Brain Neoplasms/secondary , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/surgery , Brain Neoplasms/radiotherapy , Magnetic Resonance Imaging/methods , Radiosurgery/methods , Female , Male , Middle Aged , Aged , Treatment Outcome , Neural Networks, Computer , Longitudinal Studies , Adult , Aged, 80 and over , Radiomics
5.
JMIR Med Inform ; 12: e53079, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38533775

ABSTRACT

Background: Timely and comprehensive collection of a patient's medication history in the emergency department (ED) is crucial for optimizing health care delivery. The implementation of a medication history sharing program, titled "Patient's In-home Medications at a Glance," in a tertiary teaching hospital aimed to efficiently collect and display nationwide medication histories for patients' initial hospital visits. Objective: As an evaluation was necessary to provide a balanced picture of the program, we aimed to evaluate both care process outcomes and humanistic outcomes encompassing the end-user experience of physicians and pharmacists. Methods: We conducted a cohort study and a cross-sectional study to evaluate both outcomes. To evaluate the care process, we measured the time from the first ED assessment to urgent percutaneous coronary intervention (PCI) initiation from electronic health records. To assess end-user experience, we developed a 22-item questionnaire using a 5-point Likert scale, covering 5 domains: information quality, system quality, service quality, user satisfaction, and intention to reuse. This questionnaire was validated and distributed to physicians and pharmacists. The Mann-Whitney U test was used to analyze the PCI initiation time, and structural equation modeling was used to assess factors affecting end-user experience. Results: The time from the first ED assessment to urgent PCI initiation at the ED was significantly decreased using the patient medication history program (mean rank 42.14 min vs 28.72 min; Mann-Whitney U=346; P=.03). A total of 112 physicians and pharmacists participated in the survey. Among the 5 domains, "intention to reuse" received the highest score (mean 4.77, SD 0.37), followed by "user satisfaction" (mean 4.56, SD 0.49), while "service quality" received the lowest score (mean 3.87, SD 0.79). "User satisfaction" was significantly associated with "information quality" and "intention to reuse."
Conclusions: Timely and complete retrieval using a medication history-sharing program led to an improved care process by expediting critical decision-making in the ED, thereby contributing to value-based health care delivery in a real-world setting. The experiences of end users, including physicians and pharmacists, indicated satisfaction with the program regarding information quality and their intention to reuse.
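The Mann-Whitney U statistic reported above can be computed directly from rank sums; a minimal sketch (midranks for ties, no normal approximation or p-value) is:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistics for two samples, using midranks for ties.
    Returns (U for sample x, U for sample y)."""
    nx, ny = len(x), len(y)
    pooled = sorted((v, idx) for idx, v in enumerate(x + y))
    ranks = [0.0] * (nx + ny)
    i = 0
    while i < nx + ny:
        j = i
        # extend j over a run of tied values
        while j + 1 < nx + ny and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1          # 1-based average rank of the tie group
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    r_x = sum(ranks[:nx])                   # rank sum of the first sample
    u_x = r_x - nx * (nx + 1) / 2
    return u_x, nx * ny - u_x

# hypothetical minute values, not the study's data
print(mann_whitney_u([10, 12, 15], [20, 25, 30]))
```

In practice one would use a statistics library (e.g., scipy's mannwhitneyu) to also obtain the p-value.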

6.
Sci Rep ; 14(1): 2081, 2024 01 24.
Article in English | MEDLINE | ID: mdl-38267451

ABSTRACT

Metformin is the primary treatment for type 2 diabetes mellitus (T2DM) due to its effectiveness in improving clinical outcomes in patients with preserved renal function; however, evidence on the effectiveness of metformin across different levels of renal function is lacking. We performed a retrospective, multicenter, observational study using data from patients with T2DM obtained from the databases of three tertiary hospitals. Patients given metformin within the run-in period and with at least one additional prescription formed the metformin cohort. The control cohort comprised patients prescribed oral hypoglycemic agents other than metformin who never subsequently received a metformin prescription within the observation period. For patients without diabetic nephropathy (DN), the outcomes included events of DN, major adverse cardiovascular events (MACE), and major adverse kidney events (MAKE). After 1:1 propensity matching, 1994 individuals each were selected for the metformin and control cohorts among T2DM patients without baseline DN. The incidence rate ratios (IRRs) for DN, MACEs, and MAKEs between cohorts were 1.06 (95% CI 0.96-1.17), 0.76 (0.64-0.92), and 0.45 (0.33-0.62), respectively. In cohorts with renal function of CKD stages 3A, 3B, and 4, the summarized IRRs of MACEs and MAKEs were 0.70 (0.57-0.87) and 0.39 (0.35-0.43) in CKD 3A, 0.83 (0.74-0.93) and 0.44 (0.40-0.48) in CKD 3B, and 0.71 (0.60-0.85) and 0.45 (0.39-0.51) in CKD 4. Our research indicates that metformin use in T2DM patients across various renal functions consistently correlates with a decreased risk of overt DN, MACE, and MAKE.
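An incidence rate ratio like those above compares event rates per unit of person-time between two cohorts; a minimal Wald-interval sketch (the counts in the example are hypothetical, and the study's IRRs come from matched-cohort models):

```python
import math

def irr_ci(events_a, time_a, events_b, time_b, z=1.96):
    """Incidence rate ratio (cohort A vs cohort B) with a Wald 95% CI,
    treating log(IRR) as normal with SE = sqrt(1/events_a + 1/events_b)."""
    irr = (events_a / time_a) / (events_b / time_b)
    se = math.sqrt(1 / events_a + 1 / events_b)
    lower = math.exp(math.log(irr) - z * se)
    upper = math.exp(math.log(irr) + z * se)
    return irr, lower, upper

# hypothetical: 50 events over 1000 person-years vs 100 over 1000
print(irr_ci(50, 1000, 100, 1000))
```

An IRR below 1 with an upper CI bound below 1 (as for MACE and MAKE above) indicates a lower event rate in the metformin cohort.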


Subject(s)
Diabetes Mellitus, Type 2 , Diabetic Nephropathies , Metformin , Myristica , Renal Insufficiency, Chronic , Humans , Retrospective Studies , Diabetes Mellitus, Type 2/complications , Diabetes Mellitus, Type 2/drug therapy , Metformin/therapeutic use , Kidney , Diabetic Nephropathies/drug therapy
7.
JMIR Med Inform ; 11: e53058, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38055320

ABSTRACT

BACKGROUND: Patients with lung cancer are among the most frequent visitors to emergency departments due to cancer-related problems, and the prognosis for those who seek emergency care is dismal. Given that patients with lung cancer frequently visit health care facilities for treatment or follow-up, the ability to predict emergency department visits based on clinical information gleaned from their routine visits would enhance hospital resource utilization and patient outcomes. OBJECTIVE: This study proposed a machine learning-based prediction model to identify risk factors for emergency department visits by patients with lung cancer. METHODS: This was a retrospective observational study of patients with lung cancer diagnosed at Seoul National University Bundang Hospital, a tertiary general hospital in South Korea, between January 2010 and December 2017. The primary outcome was an emergency department visit within 30 days of an outpatient visit. This study developed a machine learning-based prediction model using a common data model. In addition, the importance of features that influenced the decision-making of the model output was analyzed to identify significant clinical factors. RESULTS: The model with the best performance demonstrated an area under the receiver operating characteristic curve of 0.73 in its ability to predict the attendance of patients with lung cancer in emergency departments. The frequency of recent visits to the emergency department and several laboratory test results that are typically collected during cancer treatment follow-up visits were revealed as influencing factors for the model output. CONCLUSIONS: This study developed a machine learning-based risk prediction model using a common data model and identified influencing factors for emergency department visits by patients with lung cancer. 
The predictive model contributes to the efficiency of resource utilization and health care service quality by facilitating the identification and early intervention of high-risk patients. This study demonstrated the possibility of collaborative research among different institutions using the common data model for precision medicine in lung cancer.
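The area under the receiver operating characteristic curve (0.73 above) has a simple pairwise interpretation: the probability that a randomly chosen positive case (an ED visit) receives a higher model score than a randomly chosen negative case. A minimal sketch with made-up scores:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the probability a positive case outscores a negative one
    (pairwise Mann-Whitney formulation; ties count half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# hypothetical model scores for patients with/without an ED visit
print(roc_auc([0.9, 0.7, 0.6], [0.2, 0.65, 0.1]))
```

The O(n*m) pairwise loop is fine for illustration; production code uses the rank-sum formulation instead.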

8.
BMC Med Ethics ; 24(1): 107, 2023 12 01.
Article in English | MEDLINE | ID: mdl-38041034

ABSTRACT

BACKGROUND: Conventional consent practices face ethical challenges in continuously evolving digital health environments due to their static, one-time nature. Dynamic consent offers a promising solution, providing adaptability and flexibility to address these ethical concerns. However, due to the immaturity of the concept and accompanying technology, dynamic consent has not yet been widely used in practice. This study aims to identify the facilitators of and barriers to adopting dynamic consent in real-world scenarios. METHODS: This scoping review, conducted in December 2022, adhered to the PRISMA Extension for Scoping Reviews guidelines, focusing on dynamic consent within the health domain. A comprehensive search across Web of Science, PubMed, and Scopus yielded 22 selected articles based on predefined inclusion and exclusion criteria. RESULTS: The facilitators for the adoption of dynamic consent in digital health ecosystems were the provision of multiple consent modalities, personalized alternatives, continuous communication, and the dissemination of up-to-date information. Nevertheless, several barriers, such as consent fatigue, the digital divide, complexities in system implementation, and privacy and security concerns, needed to be addressed. This study also investigated current technological advancements and suggested considerations for further research aimed at resolving the remaining challenges surrounding dynamic consent. CONCLUSIONS: Dynamic consent emerges as an ethically advantageous method for digital health ecosystems, driven by its adaptability and support for continuous, two-way communication between data subjects and consumers. Ethical implementation in real-world settings requires the development of a robust technical framework capable of accommodating the diverse needs of stakeholders, thereby ensuring ethical integrity and data privacy in the evolving digital health landscape.


Subject(s)
Communication , Ecosystem , Humans , Privacy , Technology , Informed Consent
9.
JMIR Public Health Surveill ; 9: e49852, 2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38064251

ABSTRACT

BACKGROUND: Exudative age-related macular degeneration (AMD), one of the leading causes of blindness, requires expensive drugs such as anti-vascular endothelial growth factor (VEGF) agents. The long-term regular use of effective but expensive drugs causes an economic burden for patients with exudative AMD. However, there are no studies on the long-term patient-centered economic burden of exudative AMD after reimbursement of anti-VEGFs. OBJECTIVE: This study aimed to evaluate the patient-centered economic burden of exudative AMD for 2 years, including nonreimbursement and out-of-pocket costs, compared with nonexudative AMD using the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM). METHODS: This retrospective cohort study was conducted using the OMOP CDM, which included 2,006,478 patients who visited Seoul National University Bundang Hospital from June 2003 to July 2019. We defined the exudative AMD group as patients aged >50 years with a diagnosis of exudative AMD and a prescription for anti-VEGFs or verteporfin. The control group was defined as patients aged >50 years without a diagnosis of exudative AMD or a prescription for anti-VEGFs or verteporfin. To adjust for selection bias, controls were matched by propensity scores using regularized logistic regression with a Laplace prior. We measured any medical cost occurring in the hospital as the economic burden of exudative AMD during a 2-year follow-up period using 4 categories: total medical cost, reimbursement cost, nonreimbursement cost, and out-of-pocket cost. To estimate the average cost by adjusting the confounding variable and overcoming the positive skewness of costs, we used an exponential conditional model with a generalized linear model. RESULTS: We identified 931 patients with exudative AMD and matched 783 (84.1%) with 2918 patients with nonexudative AMD. 
In the exponential conditional model, the total medical, reimbursement, nonreimbursement, and out-of-pocket incremental costs were estimated at US $3426, US $3130, US $366, and US $561, respectively, in the first year and US $1829, US $1461, US $373, and US $507, respectively, in the second year. All incremental costs in the exudative AMD group were 1.89 to 4.25 and 3.50 to 5.09 times higher in the first and second year, respectively, than those in the control group (P<.001 in all cases). CONCLUSIONS: Exudative AMD had a significantly greater economic impact (P<.001) for 2 years on reimbursement, nonreimbursement, and out-of-pocket costs than nonexudative AMD after adjusting for baseline demographic and clinical characteristics using the OMOP CDM. Although economic policies could relieve the economic burden of patients with exudative AMD over time, the out-of-pocket cost of exudative AMD was still higher than that of nonexudative AMD for 2 years. Our findings support the need for expanding reimbursement strategies for patients with exudative AMD given the significant economic burden faced by patients with incurable and fatal diseases both in South Korea and worldwide.
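The "exponential conditional model" above models costs on the log scale, E[cost | X] = exp(Xb), to handle right-skewed spending. The study fits this as a covariate-adjusted generalized linear model; as an unadjusted sketch, with a single binary exposure the log-link solution reduces to the group means (the costs below are hypothetical):

```python
import math

def log_link_group_fit(costs_exposed, costs_control):
    """Fit E[cost] = exp(b0 + b1 * exposed) with one binary covariate.
    In this special case the GLM solution is just the group means:
    exp(b0) is the control mean cost, exp(b1) the cost ratio."""
    mean_exp = sum(costs_exposed) / len(costs_exposed)
    mean_ctl = sum(costs_control) / len(costs_control)
    b0 = math.log(mean_ctl)
    b1 = math.log(mean_exp / mean_ctl)
    return b0, b1

# hypothetical annual costs for exudative vs nonexudative AMD patients
b0, b1 = log_link_group_fit([200, 400], [100, 100])
print(math.exp(b0), math.exp(b1))   # control mean, cost ratio
```

exp(b1) here plays the role of the "X times higher" multipliers (1.89 to 5.09) reported in the abstract.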


Subject(s)
Financial Stress , Macular Degeneration , Humans , Macular Degeneration/epidemiology , Macular Degeneration/diagnosis , Patient-Centered Care , Retrospective Studies , Verteporfin , Middle Aged
10.
J Med Internet Res ; 25: e42259, 2023 11 13.
Article in English | MEDLINE | ID: mdl-37955965

ABSTRACT

BACKGROUND: Older adults are at an increased risk of postoperative morbidity. Numerous risk stratification tools exist, but they require effort and manpower. OBJECTIVE: This study aimed to develop a predictive model of postoperative adverse outcomes in older patients following general surgery, using the open-source patient-level prediction framework from Observational Health Data Sciences and Informatics, with internal and external validation. METHODS: We used the Observational Medical Outcomes Partnership common data model and machine learning algorithms. The primary outcome was a composite of 90-day postoperative all-cause mortality and emergency department visits. Secondary outcomes were postoperative delirium, prolonged postoperative stay (≥75th percentile), and prolonged hospital stay (≥21 days). The Seoul National University Bundang Hospital (SNUBH) common data model was split 80%/20% for model training and testing, and the Seoul National University Hospital (SNUH) common data model was used for external validation. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC) with a 95% CI. RESULTS: Data from 27,197 (SNUBH) and 32,857 (SNUH) patients were analyzed. Compared to the random forest, Adaboost, and decision tree models, the least absolute shrinkage and selection operator (LASSO) logistic regression model showed good internal discriminative accuracy (internal AUC 0.723, 95% CI 0.701-0.744) and transportability (external AUC 0.703, 95% CI 0.692-0.714) for the primary outcome. The model also possessed good internal and external AUCs for postoperative delirium (internal AUC 0.754, 95% CI 0.713-0.794; external AUC 0.750, 95% CI 0.727-0.772), prolonged postoperative stay (internal AUC 0.813, 95% CI 0.800-0.825; external AUC 0.747, 95% CI 0.741-0.753), and prolonged hospital stay (internal AUC 0.770, 95% CI 0.749-0.792; external AUC 0.707, 95% CI 0.696-0.718).
Compared with age or the Charlson comorbidity index, the model showed better prediction performance. CONCLUSIONS: The derived model should assist clinicians and patients in understanding the individualized risks and benefits of surgery.
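The LASSO logistic regression used above adds an L1 penalty that shrinks uninformative coefficients to exactly zero. The study fits it with the OHDSI tooling on large regularization paths; a toy sketch of the same idea via proximal gradient descent (data, penalty, and learning rate here are illustrative assumptions):

```python
import math

def lasso_logistic(X, y, lam=0.1, lr=0.1, epochs=500):
    """L1-penalized (LASSO) logistic regression fitted by proximal
    gradient descent; the intercept is left unpenalized."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = (1.0 / (1.0 + math.exp(-z)) - yi) / n
            grad_b += err
            for j in range(d):
                grad_w[j] += err * xi[j]
        b -= lr * grad_b
        for j in range(d):
            wj = w[j] - lr * grad_w[j]
            # proximal (soft-thresholding) step for the L1 penalty
            w[j] = math.copysign(max(abs(wj) - lr * lam, 0.0), wj)
    return w, b

# toy data: feature 0 predicts the label, feature 1 is pure noise
w, b = lasso_logistic([[0, 1], [0, 0], [1, 1], [1, 0]], [0, 0, 1, 1])
print(w)   # the noise feature's weight is shrunk to zero
```

The built-in variable selection is one reason LASSO models tend to transport better across hospitals than unregularized ones.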


Subject(s)
Emergence Delirium , Humans , Aged , Prognosis , Retrospective Studies , Algorithms , Machine Learning
11.
Int J Med Inform ; 178: 105192, 2023 10.
Article in English | MEDLINE | ID: mdl-37619396

ABSTRACT

Successful early extubation has advantages not only in terms of short-term respiratory morbidities and survival but also in terms of long-term neurodevelopmental outcomes in preterm infants. However, no consensus exists regarding the optimal protocol or guidelines for extubation readiness in preterm infants. Therefore, the decision to extubate preterm infants has been left almost entirely to the attending physician's discretion. We identified robust and quantitative predictors of success or failure of the first planned extubation attempt before 36 weeks of post-menstrual age in preterm infants (<32 weeks gestational age) and developed a prediction model for evaluating extubation readiness using these predictors. Extubation success was defined as the absence of reintubation within 72 h after extubation. This observational cohort study used data from preterm infants admitted to the neonatal intensive care unit of Seoul National University Bundang Hospital in South Korea between July 2003 and June 2019 to identify predictors and to develop and test a predictive model for extubation readiness. Data from preterm infants included in the Medical Information Mart for Intensive Care (MIMIC-III) database between 2001 and 2008 were used for external validation. For a machine learning model using predictors such as demographics, periodic vital signs, ventilator settings, and respiratory indices, the area under the receiver operating characteristic curve and average precision were 0.805 (95% confidence interval [CI], 0.802-0.809) and 0.917, respectively, in the internal validation and 0.715 (95% CI, 0.713-0.717) and 0.838, respectively, in the external validation. Our prediction model (NExt-Predictor) demonstrated high performance in assessing extubation readiness in both internal and external validations.


Subject(s)
Airway Extubation , Infant, Premature , Infant , Infant, Newborn , Humans , Airway Extubation/methods , Cohort Studies , Intensive Care Units, Neonatal , Vital Signs
12.
Sci Rep ; 13(1): 14212, 2023 08 30.
Article in English | MEDLINE | ID: mdl-37648772

ABSTRACT

Whereas lifestyle-related factors are recognized as snoring risk factors, the role of genetics in snoring remains uncertain. One way to measure the impact of genetic risk is through the use of a polygenic risk score (PRS). In this study, we aimed to investigate whether genetics plays a role in snoring after adjusting for lifestyle factors. Since the effect of polygenic risks may differ across ethnic groups, we calculated the PRS for snoring from the UK Biobank and applied it to a Korean cohort. We sought to evaluate the reproducibility of the UK Biobank PRS for snoring in the Korean cohort and to investigate the interaction of lifestyle factors and genetic risk on snoring in the Korean population. In this study, we utilized a Korean cohort obtained from the Korean Genome Epidemiology Study (KoGES). We computed the snoring PRS for the Korean cohort based on the UK Biobank PRS. We investigated the relationship between polygenic risk and snoring while controlling for lifestyle factors, including sex, age, body mass index (BMI), alcohol consumption, smoking, physical activity, and sleep time. Additionally, we analyzed the interaction of each lifestyle factor with the genetic odds of snoring. We included 3526 snorers and 1939 nonsnorers from the KoGES cohort and found that the PRS was an independent risk factor for snoring after adjusting for lifestyle factors. Among the lifestyle factors, higher BMI, male sex, and older age were most strongly associated with snoring. The adjusted odds ratios for snoring were highest for higher BMI (OR 1.98, 95% CI 1.76-2.23), followed by male sex (OR 1.54, 95% CI 1.28-1.86), older age (OR 1.23, 95% CI 1.03-1.35), higher PRS (OR 1.18, 95% CI 1.08-1.29), drinking behavior (OR 1.18, 95% CI 1.03-1.35), late sleep mid-time (OR 1.17, 95% CI 1.02-1.33), smoking behavior (OR 0.99, 95% CI 0.82-1.19), and lower physical activity (OR 0.92, 95% CI 0.85-1.00).
Our study showed that the UK Biobank PRS for snoring was reproducible in the Korean cohort and that genetic risk served as an independent risk factor for snoring in the Korean population. These findings may help to develop personalized approaches to reduce snoring in individuals with high genetic risk.
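At its core, a PRS like the one transferred from the UK Biobank is a weighted sum of risk-allele dosages, usually z-scored within the target cohort before modeling. A minimal sketch (the rs-IDs, dosages, and weights below are hypothetical):

```python
import statistics

def prs(dosages, weights):
    """Raw polygenic risk score: weighted sum of risk-allele dosages
    (0, 1, or 2) over the variants shared with the weight table."""
    shared = dosages.keys() & weights.keys()
    return sum(dosages[snp] * weights[snp] for snp in shared)

def standardized_prs(cohort, weights):
    """Z-score each individual's raw PRS against the cohort distribution,
    as is typical before using the PRS as a risk factor in a model."""
    raw = [prs(d, weights) for d in cohort]
    mu, sd = statistics.mean(raw), statistics.pstdev(raw)
    return [(r - mu) / sd for r in raw]

# hypothetical per-variant effect sizes (e.g., GWAS log odds ratios)
weights = {"rs1": 0.2, "rs2": -0.1}
print(prs({"rs1": 2, "rs2": 1, "rs3": 2}, weights))  # rs3 ignored: no weight
```

Restricting the sum to shared variants is what makes cross-cohort transfer (UK Biobank weights applied to KoGES genotypes) possible.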


Subject(s)
Life Style , Snoring , Male , Humans , Reproducibility of Results , Snoring/epidemiology , Snoring/genetics , Risk Factors , Republic of Korea/epidemiology
13.
Healthc Inform Res ; 29(3): 209-217, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37591676

ABSTRACT

OBJECTIVES: In the era of the Fourth Industrial Revolution, where an ecosystem is being developed to enhance the quality of healthcare services by applying information and communication technologies, systematic and sustainable data management is essential for medical institutions. In this study, we assessed the data management status and emerging concerns of three medical institutions, while also examining future directions for seamless data management. METHODS: To evaluate the data management status, we examined data types, capacities, infrastructure, backup methods, and related organizations. We also discussed challenges, such as resource and infrastructure issues, problems related to government regulations, and considerations for future data management. RESULTS: Hospitals are grappling with the increasing data storage space and a shortage of management personnel due to costs and project termination, which necessitates countermeasures and support. Data management regulations on the destruction or maintenance of medical records are needed, and institutional consideration for secondary utilization such as long-term treatment or research is required. Government-level guidelines for facilitating hospital data sharing and mobile patient services should be developed. Additionally, hospital executives at the organizational level need to make efforts to facilitate the clinical validation of artificial intelligence software. CONCLUSIONS: This analysis of the current status and emerging issues of data management reveals potential solutions and sets the stage for future organizational and policy directions. If medical big data is systematically managed, accumulated over time, and strategically monetized, it has the potential to create new value.

14.
Sci Rep ; 13(1): 12018, 2023 07 25.
Article in English | MEDLINE | ID: mdl-37491504

ABSTRACT

Accurate and reliable detection of intracranial aneurysms is vital for subsequent treatment to prevent bleeding. However, the detection of intracranial aneurysms can be time-consuming and even challenging, and there is great variability among experts, especially in the case of small aneurysms. This study aimed to detect intracranial aneurysms accurately using a convolutional neural network (CNN) with 3D time-of-flight magnetic resonance angiography (TOF-MRA). A total of 154 3D TOF-MRA datasets with intracranial aneurysms were acquired, and the gold standards were manually drawn by neuroradiologists. We also obtained 113 subjects from a public dataset for external validation. These angiograms were pre-processed using skull-stripping, signal intensity normalization, and N4 bias correction. 3D patches along the vessel skeleton were extracted from the MRA. The ratio between aneurysmal and normal patches ranged from 1:1 to 1:5. The semantic segmentation of intracranial aneurysms was trained using a 3D U-Net with an auxiliary classifier to overcome the imbalance in patches. The proposed method achieved an accuracy of 0.910 in internal validation and 0.883 in external validation with a 2:1 ratio of normal to aneurysmal patches. This multi-task learning method showed that the aneurysm segmentation performance was sufficient to be helpful in an actual clinical setting.


Subject(s)
Intracranial Aneurysm , Magnetic Resonance Angiography , Humans , Magnetic Resonance Angiography/methods , Intracranial Aneurysm/diagnostic imaging , Intracranial Aneurysm/therapy , Semantics , Imaging, Three-Dimensional/methods , Sensitivity and Specificity , Brain/diagnostic imaging
15.
BMC Endocr Disord ; 23(1): 143, 2023 Jul 10.
Article in English | MEDLINE | ID: mdl-37430289

ABSTRACT

BACKGROUND: Diabetes mellitus (DM) is a well-established risk factor for the progression of degenerative aortic stenosis (AS). However, no study has investigated the impact of glycemic control on the rate of AS progression. We aimed to assess the association between the degree of glycemic control and the AS progression, using an electronic health record-based common data model (CDM). METHODS: We identified patients with mild AS (aortic valve [AV] maximal velocity [Vpeak] 2.0-3.0 m/sec) or moderate AS (Vpeak 3.0-4.0 m/sec) at baseline, and follow-up echocardiography performed at an interval of ≥ 6 months, using the CDM of a tertiary hospital database. Patients were divided into 3 groups: no DM (n = 1,027), well-controlled DM (mean glycated hemoglobin [HbA1c] < 7.0% during the study period; n = 193), and poorly controlled DM (mean HbA1c ≥ 7.0% during the study period; n = 144). The primary outcome was the AS progression rate, calculated as the annualized change in the Vpeak (△Vpeak/year). RESULTS: Among the total study population (n = 1,364), the median age was 74 (IQR 65-80) years, 47% were male, the median HbA1c was 6.1% (IQR 5.6-6.9), and the median Vpeak was 2.5 m/sec (IQR 2.2-2.9). During follow-up (median 18.4 months), 16.1% of the 1,031 patients with mild AS at baseline progressed to moderate AS, and 1.8% progressed to severe AS. Among the 333 patients with moderate AS, 36.3% progressed to severe AS. The mean HbA1c level during follow-up showed a positive relationship with the AS progression rate (β = 2.620; 95% confidence interval [CI] 0.732-4.507; p = 0.007); a 1%-unit increase in HbA1c was associated with a 27% higher risk of accelerated AS progression defined as △Vpeak/year values > 0.2 m/sec/year (adjusted OR = 1.267 per 1%-unit increase in HbA1c; 95% CI 1.106-1.453; p < 0.001), and HbA1c ≥ 7.0% was significantly associated with an accelerated AS progression (adjusted odds ratio = 1.524; 95% CI 1.010-2.285; p = 0.043).
This association between the degree of glycemic control and AS progression rate was observed regardless of the baseline AS severity. CONCLUSION: In patients with mild to moderate AS, the presence of DM, as well as the degree of glycemic control, is significantly associated with accelerated AS progression.
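The primary outcome above, △Vpeak/year, and the "accelerated progression" cutoff (> 0.2 m/sec/year) are straightforward to compute from two echocardiograms; the velocities and interval in the example are hypothetical:

```python
def annualized_progression(vpeak_baseline, vpeak_followup, months):
    """DeltaVpeak/year: annualized change in AV maximal velocity (m/sec/year)."""
    return (vpeak_followup - vpeak_baseline) / (months / 12.0)

def accelerated_progression(vpeak_baseline, vpeak_followup, months, threshold=0.2):
    """Accelerated progression per the abstract's definition:
    DeltaVpeak/year > 0.2 m/sec/year."""
    return annualized_progression(vpeak_baseline, vpeak_followup, months) > threshold

# hypothetical: Vpeak 2.5 -> 2.9 m/sec over 24 months of follow-up
print(annualized_progression(2.5, 2.9, 24))
```

The annualization is what makes patients with different echocardiography intervals (≥ 6 months here) comparable.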


Subject(s)
Aortic Valve Stenosis , Autoimmune Diseases , Glycemic Control , Aged , Female , Humans , Male , Aortic Valve Stenosis/diagnostic imaging , Cohort Studies , Glycated Hemoglobin
16.
PLoS One ; 18(1): e0279641, 2023.
Article in English | MEDLINE | ID: mdl-36595527

ABSTRACT

BACKGROUND AND OBJECTIVE: Recently, Electronic Health Records (EHRs) are increasingly being converted to Common Data Models (CDMs), a database schema designed to provide standardized vocabularies to facilitate collaborative observational research. To date, however, few attempts have been made to leverage CDM data for healthcare process mining, a technique to derive process-related knowledge (e.g., process models) from event logs. This paper presents a method to extract, construct, and analyze event logs from the Observational Medical Outcomes Partnership (OMOP) CDM for process mining and demonstrates CDM-based healthcare process mining with several real-life study cases, answering frequently posed process mining questions in the CDM environment. METHODS: We propose a method to extract, construct, and analyze event logs from the OMOP CDM for process types including inpatient, outpatient, and emergency room processes, as well as the patient journey. Using the proposed method, we extracted retrospective data for several surgical procedure cases (i.e., Total Laparoscopic Hysterectomy (TLH), Total Hip Replacement (THR), Coronary Bypass (CB), Transcatheter Aortic Valve Implantation (TAVI), and Pancreaticoduodenectomy (PD)) from the CDM of a Korean tertiary hospital. Patient data were extracted for each of the operations and analyzed using several process mining techniques. RESULTS: Using process mining, clinical pathways, outpatient process models, emergency room process models, and patient journeys are demonstrated using the extracted logs. The results show the CDM's usability as a novel and valuable data source for healthcare process analysis, albeit with a few considerations. We found that the CDM should be complemented by different internal and external data sources to address the administrative and operational aspects of healthcare processes, particularly for outpatient and ER process analyses.
CONCLUSION: To the best of our knowledge, we are the first to exploit CDM for healthcare process mining. Specifically, we provide a step-by-step guidance by demonstrating process analysis from locating relevant CDM tables to visualizing results using process mining tools. The proposed method can be widely applicable across different institutions. This work can contribute to bringing a process mining perspective to the existing CDM users in the changing Hospital Information Systems (HIS) environment and also to facilitating CDM-based studies in the process mining research community.
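The core step the abstract describes, turning CDM rows into a process-mining event log of (case, activity, timestamp) triples, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the rows mimic the shape of the OMOP CDM `procedure_occurrence` table, but the concept IDs and sample data are invented:

```python
from datetime import datetime

# Toy rows shaped like the OMOP CDM procedure_occurrence table (sample data invented)
procedure_occurrence = [
    {"person_id": 1, "procedure_concept_id": 111, "procedure_datetime": datetime(2021, 3, 2, 9, 0)},
    {"person_id": 1, "procedure_concept_id": 222, "procedure_datetime": datetime(2021, 3, 1, 14, 0)},
    {"person_id": 2, "procedure_concept_id": 111, "procedure_datetime": datetime(2021, 4, 5, 10, 30)},
]
# Illustrative concept-ID-to-name mapping (in OMOP this comes from the concept table)
concept_names = {111: "Total Hip Replacement", 222: "Pre-op assessment"}

def build_event_log(rows, names):
    """Convert clinical-event rows into a (case_id, activity, timestamp) event log,
    sorted per case by timestamp, as process mining tools expect."""
    log = [(r["person_id"], names[r["procedure_concept_id"]], r["procedure_datetime"])
           for r in rows]
    return sorted(log, key=lambda e: (e[0], e[2]))

for case_id, activity, ts in build_event_log(procedure_occurrence, concept_names):
    print(case_id, activity, ts.isoformat())
```

In practice the same triple would be assembled with a SQL join across the CDM tables rather than in application code; the point here is only the target event-log shape.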


Subject(s)
Data Management , Health Facilities , Female , Humans , Retrospective Studies , Databases, Factual , Delivery of Health Care , Electronic Health Records
17.
Eur J Neurol ; 30(7): 2062-2069, 2023 07.
Article in English | MEDLINE | ID: mdl-36056876

ABSTRACT

BACKGROUND AND PURPOSE: The temporal characteristics of stroke risk were evaluated in emergency department patients who had a diagnosis of peripheral vertigo. An attempt was also made to identify stroke risk factors among those with peripheral vertigo. METHODS: This is a parallel-group cohort study in a tertiary referral hospital. After assigning each of 4367 matched patients to the comparative set of peripheral vertigo and appendicitis-ureterolithiasis groups and each of 4911 matched patients to the comparative set of peripheral vertigo and ischaemic stroke groups, the relative stroke risk was evaluated. In addition, to predict individual stroke risk in patients with peripheral vertigo, associations between demographic factors and stroke events were evaluated in the peripheral vertigo group. RESULTS: The peripheral vertigo group had a higher stroke risk than the appendicitis-ureterolithiasis group (hazard ratio 1.73, 95% confidence interval 1.18-2.55) but a lower risk than the ischaemic stroke group (hazard ratio 0.30, 95% confidence interval 0.24-0.37). The stroke risk of the peripheral vertigo group was just below that of small vessel stroke. The stroke risk of the peripheral vertigo group differed markedly over time: it was highest within 7 days, moderate between 7 days and 1 year, and diminished thereafter. Old age (>65 years), male gender, and diabetes mellitus were risk factors for stroke in the peripheral vertigo group. CONCLUSION: Patients with a diagnosis of peripheral vertigo in the emergency department showed a moderate future stroke risk; therefore, a stroke prevention strategy tailored to the timing of symptom onset and individual risk is required.


Subject(s)
Appendicitis , Brain Ischemia , Ischemic Stroke , Stroke , Humans , Male , Aged , Stroke/complications , Stroke/diagnosis , Stroke/epidemiology , Dizziness/complications , Cohort Studies , Appendicitis/complications , Brain Ischemia/complications , Vertigo/diagnosis , Vertigo/epidemiology , Vertigo/complications , Risk Factors , Ischemic Stroke/complications , Emergency Service, Hospital
18.
JMIR Med Inform ; 10(10): e41503, 2022 Oct 13.
Article in English | MEDLINE | ID: mdl-36227638

ABSTRACT

BACKGROUND: Cardio-cerebrovascular diseases (CVDs) result in 17.5 million deaths annually worldwide, accounting for 46.2% of noncommunicable causes of death, and are the leading cause of death, followed by cancer, respiratory disease, and diabetes mellitus. Coronary artery computed tomography angiography (CCTA), which detects calcification in the coronary arteries, can be used to detect asymptomatic but serious vascular disease. It allows for noninvasive and quick testing, albeit with radiation exposure. OBJECTIVE: The objective of our study was to investigate the effectiveness of CCTA screening on CVD outcomes by using the Observational Health Data Sciences and Informatics' Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM) data and the population-level estimation method. METHODS: Using electronic health record-based OMOP-CDM data, including health questionnaire responses, adults (aged 30-74 years) without a history of CVD were selected, and 5-year CVD outcomes were compared between patients undergoing CCTA (target group) and a comparison group via 1:1 propensity score matching. Participants were stratified into low-risk and high-risk groups based on the American College of Cardiology/American Heart Association atherosclerotic cardiovascular disease (ASCVD) risk score and Framingham risk score (FRS) for subgroup analyses. The 2-year and 5-year risk scores were compared between the two groups as secondary outcomes. RESULTS: In total, 8787 participants were included in each of the target and comparison groups. No significant differences (calibration P=.37) were found between the hazard ratios of the groups at 5 years. The subgroup analysis also revealed no significant differences between the ASCVD risk scores and FRSs of the groups at 5 years (ASCVD risk score: P=.97; FRS: P=.85). However, the CCTA group showed a significantly lower increase in risk scores at 2 years (ASCVD risk score: P=.03; FRS: P=.02).
CONCLUSIONS: Although we could not confirm a significant difference in the preventive effects of CCTA screening for CVDs over a long period of 5 years, it may have a beneficial effect on risk score management over 2 years.
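The 1:1 propensity score matching used above can be illustrated with a greedy nearest-neighbor sketch. This is a generic illustration under stated assumptions, not the study's actual procedure: the propensity scores are precomputed and invented, and the caliper value is arbitrary (real analyses typically estimate scores with a logistic regression model):

```python
def greedy_match_1to1(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on precomputed propensity scores.
    treated / controls: dicts of {unit_id: propensity_score}.
    Each control is used at most once; pairs beyond the caliper are discarded."""
    pairs, available = [], dict(controls)
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1], reverse=True):
        if not available:
            break
        # Nearest remaining control by absolute score distance
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]  # 1:1 matching without replacement
    return pairs

# Invented scores: two "CCTA" patients matched against three candidates
treated = {"T1": 0.62, "T2": 0.35}
controls = {"C1": 0.60, "C2": 0.36, "C3": 0.90}
print(greedy_match_1to1(treated, controls))  # → [('T1', 'C1'), ('T2', 'C2')]
```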

19.
Healthc Inform Res ; 28(3): 240-246, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35982598

ABSTRACT

OBJECTIVES: This study investigated the effectiveness of using standardized vocabularies to generate epilepsy patient cohorts with local medical codes, SNOMED Clinical Terms (SNOMED CT), and the International Classification of Diseases, tenth revision (ICD-10)/Korean Classification of Diseases, seventh revision (KCD-7). METHODS: We compared the granularity of SNOMED CT and ICD-10 for epilepsy by counting the number of SNOMED CT concepts mapped to one ICD-10 code. Next, we created epilepsy patient cohorts by selecting all patients who had at least one code included in the concept sets defined using each vocabulary. We set the patient cohort generated by local codes as the reference to evaluate the patient cohorts generated using SNOMED CT and ICD-10/KCD-7. We compared the number of patients, the prevalence of epilepsy, and the age distribution between patient cohorts by year. RESULTS: In terms of cohort size, the match rate with the reference cohort was approximately 99.2% for SNOMED CT and 94.0% for ICD-10/KCD-7. From 2010 to 2019, the mean prevalence of epilepsy defined using local codes, SNOMED CT, and ICD-10/KCD-7 was 0.889%, 0.891%, and 0.923%, respectively. The age distribution of epilepsy patients showed no significant difference between the cohorts defined using local codes or SNOMED CT, but the ICD-10/KCD-7-generated cohort showed a substantial gap in the age distribution of patients with epilepsy compared to the cohort generated using local codes. CONCLUSIONS: The number and age distribution of patients were substantially different from the reference when we used ICD-10/KCD-7 codes, but not when we used SNOMED CT concepts. Therefore, SNOMED CT is more suitable for representing clinical ideas and conducting clinical studies than ICD-10/KCD-7.
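The cohort rule described above, selecting every patient with at least one code in a vocabulary's concept set, can be sketched as follows. The patients and code assignments are invented; the epilepsy codes shown (ICD-10 category G40) are only an illustrative concept set, not the study's actual definition:

```python
def build_cohort(patient_codes, concept_set):
    """Select patients with at least one code in the concept set."""
    return {pid for pid, codes in patient_codes.items() if codes & concept_set}

# Invented example: three patients with assigned diagnosis codes
patient_codes = {
    "P1": {"G40.3", "R56.8"},   # epilepsy code plus an unrelated code
    "P2": {"I10"},              # no epilepsy code
    "P3": {"G40.9"},
}
epilepsy_icd10 = {"G40.3", "G40.9"}  # illustrative ICD-10 concept set

print(sorted(build_cohort(patient_codes, epilepsy_icd10)))  # → ['P1', 'P3']
```

Running the same selection with a concept set from a different vocabulary (e.g., SNOMED CT concepts mapped from local codes) and comparing the resulting patient sets is exactly the cohort-agreement comparison the study performs.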

20.
BMC Med Inform Decis Mak ; 22(1): 210, 2022 08 08.
Article in English | MEDLINE | ID: mdl-35941636

ABSTRACT

BACKGROUND: While various quantitative studies based on the Unified Theory of Acceptance and Use of Technology (UTAUT) and the Technology Acceptance Model (TAM) exist in the general medical sector, just a few have been conducted in the behavioral sector, and they have all been qualitative interview-based studies. OBJECTIVE: The purpose of this study is to assess the adoption dimensions of a behavioral electronic health record (EHR) system for behavioral clinical professionals using a modified clinical adoption (CA) research model that incorporates a variety of micro-, meso-, and macro-level factors. METHODS: A questionnaire survey with a quantitative analysis approach was used, with participants recruited via a purposive sampling method. We modified the existing CA framework to be suitable for evaluating the adoption of an EHR system by behavioral clinical professionals. We designed and verified questionnaires that fit into the dimensions of the CA framework. The survey was performed in five US behavioral hospitals, and the adoption factors were analyzed using structural equation analysis. RESULTS: We derived a total of seven dimensions, omitting those determined to be unsuitable for behavioral clinical specialists to respond to. We polled 409 behavioral clinical experts from the five hospitals. Ease of use and organizational support had a substantial impact on the use of the behavioral EHR system. Although the findings were not statistically significant, information and service quality did appear to have an effect on the system's ease of use. The primary reported benefit of behavioral EHR system adoption was the capacity to swiftly locate information, work efficiently, and access patient information via a mobile app, resulting in more time for better care. The primary downside, on the other hand, was an unhealthy reliance on the EHR system.
CONCLUSIONS: We demonstrated in this study that the CA framework can be a useful tool for evaluating organizational and social elements in addition to the EHR system's system features. Not only the EHR system's simplicity of use, but also organizational support, should be considered for the effective implementation of the behavioral EHR system. TRIAL REGISTRATION: The study was approved by the Institutional Review Board of Seoul National University Bundang Hospital (IRB No.: B-1904-534-301).


Subject(s)
Electronic Health Records , Physicians , Attitude of Health Personnel , Health Personnel , Hospitals, University , Humans