Results 1 - 20 of 47
1.
Sensors (Basel) ; 24(3)2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38339603

ABSTRACT

At a time when sustainability and CO2 efficiency are of ever-increasing importance, heating systems deserve special consideration. Despite well-functioning hardware, inefficiencies may arise when controller parameters are not well chosen. While monitoring systems could help to identify such issues, they lack improvement suggestions. One possible solution would be the use of digital twins; however, critical values such as the water consumption of the residents can often not be acquired for accurate models. To address this issue, coarse models can be employed to generate quantitative predictions, which can then be interpreted qualitatively to assess "better or worse" system behavior. In this paper, we present a simulation and calibration framework as well as a preprocessing module. These components can be run locally or deployed as containerized microservices and are easy to interface with existing data acquisition infrastructure. We evaluate the two main operating modes, namely automatic model calibration using measured data, and the optimization of controller parameters. Our results show that, using a coarse model of a real heating system and data augmentation through preprocessing, it is possible to achieve an acceptable fit of partially incomplete measured data, and that the calibrated model can subsequently be used to optimize the controller parameters with regard to the simulated boiler gas consumption.
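The two operating modes described, calibrating a coarse model against measurements and then optimizing controller parameters on the calibrated model, can be sketched as follows. The thermal model, the proportional controller, the parameter names, and the grid search are all illustrative stand-ins, not the authors' implementation.

```python
# Hypothetical sketch: calibrate a coarse heating model to a measured gas
# total, then optimize a controller parameter on the calibrated model.

def simulate(gain, setpoint, hours=24):
    """Coarse model: total boiler gas use for a given controller gain/setpoint."""
    temp, gas = 15.0, 0.0
    for _ in range(hours):
        heating = max(0.0, setpoint - temp) * gain   # proportional controller
        temp += 0.3 * heating - 0.1 * (temp - 10.0)  # crude thermal dynamics
        gas += heating
    return gas

def calibrate(measured_gas, setpoint=21.0):
    """Grid-search the model gain that best reproduces measured consumption."""
    return min((g / 100 for g in range(1, 101)),
               key=lambda g: (simulate(g, setpoint) - measured_gas) ** 2)

gain = calibrate(measured_gas=60.0)
# With the calibrated model, search for the setpoint minimizing simulated gas
# use while respecting a minimum comfort setpoint of 19 degrees C.
best_setpoint = min((19.0 + s / 10 for s in range(0, 31)),
                    key=lambda sp: simulate(gain, sp))
```

Qualitative interpretation then means comparing such simulated totals across candidate parameter sets, rather than trusting the absolute numbers of the coarse model.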

2.
Endoscopy ; 56(1): 63-69, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37532115

ABSTRACT

BACKGROUND AND STUDY AIMS: Artificial intelligence (AI)-based systems for computer-aided detection (CADe) of polyps receive regular updates and occasionally offer customizable detection thresholds, both of which impact their performance, but little is known about these effects. This study aimed to compare the performance of different CADe systems on the same benchmark dataset. METHODS: 101 colonoscopy videos were used as benchmark. Each video frame with a visible polyp was manually annotated with bounding boxes, resulting in 129 705 polyp images. The videos were then analyzed by three different CADe systems, representing five conditions: two versions of GI Genius, Endo-AID with detection Types A and B, and EndoMind, a freely available system. Evaluation included an analysis of sensitivity and false-positive rate, among other metrics. RESULTS: Endo-AID detection Type A, the earlier version of GI Genius, and EndoMind detected all 93 polyps. Both the later version of GI Genius and Endo-AID Type B missed 1 polyp. The mean per-frame sensitivities were 50.63 % and 67.85 %, respectively, for the earlier and later versions of GI Genius, 65.60 % and 52.95 %, respectively, for Endo-AID Types A and B, and 60.22 % for EndoMind. CONCLUSIONS: This study compares the performance of different CADe systems, different updates, and different configuration modes. This might help clinicians to select the most appropriate system for their specific needs.
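The per-frame metrics reported in this benchmark can be computed from frame-level annotations along the following lines; the boolean-list data structures are illustrative, not the study's evaluation code.

```python
# Sketch of per-frame evaluation: each video frame is annotated with whether a
# polyp is visible, and each CADe system emits a per-frame detection flag.

def per_frame_metrics(annotated, detected):
    """annotated/detected: lists of booleans, one entry per video frame."""
    tp = sum(a and d for a, d in zip(annotated, detected))
    fp = sum((not a) and d for a, d in zip(annotated, detected))
    polyp_frames = sum(annotated)
    normal_frames = len(annotated) - polyp_frames
    sensitivity = tp / polyp_frames if polyp_frames else 0.0
    false_positive_rate = fp / normal_frames if normal_frames else 0.0
    return sensitivity, false_positive_rate

sens, fpr = per_frame_metrics(
    annotated=[True, True, True, False, False],
    detected=[True, True, False, True, False],
)
```

Polyp-level sensitivity (did the system fire on *any* frame of a polyp) is a separate, more forgiving metric, which is why all systems above detect nearly every polyp while per-frame sensitivities stay in the 50-70 % range.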


Subject(s)
Colonic Polyps , Colorectal Neoplasms , Humans , Colonic Polyps/diagnostic imaging , Artificial Intelligence , Colonoscopy/methods , Colorectal Neoplasms/diagnosis
3.
JMIR Med Inform ; 11: e41808, 2023 May 22.
Article in English | MEDLINE | ID: mdl-37213191

ABSTRACT

BACKGROUND: Due to the importance of radiologic examinations, such as X-rays or computed tomography scans, for many clinical diagnoses, the optimal use of the radiology department is 1 of the primary goals of many hospitals. OBJECTIVE: This study aims to calculate the key metrics of this use by creating a radiology data warehouse solution, where data from radiology information systems (RISs) can be imported and then queried using a query language as well as a graphical user interface (GUI). METHODS: Using a simple configuration file, the developed system allowed for the processing of radiology data exported from any kind of RIS into a Microsoft Excel, comma-separated value (CSV), or JavaScript Object Notation (JSON) file. These data were then imported into a clinical data warehouse. Additional values based on the radiology data were calculated during this import process by implementing 1 of several provided interfaces. Afterward, the query language and GUI of the data warehouse were used to configure and calculate reports on these data. For the most common types of requested reports, a web interface was created to view their numbers as graphics. RESULTS: The tool was successfully tested with the data of 4 different German hospitals from 2018 to 2021, with a total of 1,436,111 examinations. The user feedback was good, since all their queries could be answered if the available data were sufficient. The initial processing of the radiology data for using them with the clinical data warehouse took (depending on the amount of data provided by each hospital) between 7 minutes and 1 hour 11 minutes. Calculating 3 reports of different complexities on the data of each hospital was possible in 1-3 seconds for reports with up to 200 individual calculations and in up to 1.5 minutes for reports with up to 8200 individual calculations. 
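The configurable import step described above, a mapping file that tells the importer which RIS export columns correspond to warehouse fields, plus a plug-in interface for values calculated during import, could look roughly like this. Column names, field names, and the turnaround-time plug-in are hypothetical.

```python
# Sketch of a config-driven RIS import with a derived-value plug-in.
import csv
import io
from datetime import datetime

CONFIG = {"patient_id": "PatID", "requested": "ReqTime", "finalized": "FinTime"}

def derived_turnaround_minutes(row):
    """Example plug-in: minutes from request creation to finalized diagnosis."""
    fmt = "%Y-%m-%d %H:%M"
    delta = (datetime.strptime(row["finalized"], fmt)
             - datetime.strptime(row["requested"], fmt))
    return delta.total_seconds() / 60

def import_ris_csv(text, config=CONFIG):
    rows = []
    for raw in csv.DictReader(io.StringIO(text)):
        row = {field: raw[col] for field, col in config.items()}
        row["turnaround_min"] = derived_turnaround_minutes(row)  # plug-in call
        rows.append(row)
    return rows

rows = import_ris_csv(
    "PatID,ReqTime,FinTime\n42,2021-03-01 08:00,2021-03-01 09:30\n"
)
```

Swapping the `CONFIG` mapping is what makes the importer generic across RIS vendors: the downstream warehouse schema stays fixed while only the column mapping changes.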
CONCLUSIONS: A system was developed with the main advantage of being generic concerning the export of different RISs as well as concerning the configuration of queries for various reports. The queries could be configured easily using the GUI of the data warehouse, and their results could be exported into the standard formats Excel and CSV for further processing.

4.
BMC Med Imaging ; 23(1): 59, 2023 04 20.
Article in English | MEDLINE | ID: mdl-37081495

ABSTRACT

BACKGROUND: Colorectal cancer is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. However, not all colon polyps have the risk of becoming cancerous. Therefore, polyps are classified using different classification systems. After the classification, further treatment and procedures are based on the classification of the polyp. Nevertheless, classification is not easy. Therefore, we suggest two novel automated classification systems assisting gastroenterologists in classifying polyps based on the NICE and Paris classification. METHODS: We build two classification systems. One classifies polyps based on their shape (Paris). The other classifies polyps based on their texture and surface patterns (NICE). A two-step process for the Paris classification is introduced: first, detecting and cropping the polyp on the image, and second, classifying the polyp based on the cropped area with a transformer network. For the NICE classification, we design a few-shot learning algorithm based on the Deep Metric Learning approach. The algorithm creates an embedding space for polyps, which allows classification from a few examples to account for the data scarcity of NICE annotated images in our database. RESULTS: For the Paris classification, we achieve an accuracy of 89.35 %, surpassing all papers in the literature and establishing a new state-of-the-art and baseline accuracy for other publications on a public data set. For the NICE classification, we achieve a competitive accuracy of 81.13 % and thereby demonstrate the viability of the few-shot learning paradigm in polyp classification in data-scarce environments. Additionally, we show different ablations of the algorithms. Finally, we further elaborate on the explainability of the system by showing heat maps of the neural network explaining neural activations. CONCLUSION: Overall, we introduce two polyp classification systems to assist gastroenterologists.
We achieve state-of-the-art performance in the Paris classification and demonstrate the viability of the few-shot learning paradigm in the NICE classification, addressing the prevalent data scarcity issues faced in medical machine learning.


Subject(s)
Colonic Polyps , Deep Learning , Humans , Colonic Polyps/diagnostic imaging , Colonoscopy , Neural Networks, Computer , Algorithms
5.
J Imaging ; 9(2)2023 Jan 24.
Article in English | MEDLINE | ID: mdl-36826945

ABSTRACT

Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. During this procedure, the gastroenterologist searches for polyps. However, there is a potential risk of polyps being missed by the gastroenterologist. Automated detection of polyps helps to assist the gastroenterologist during a colonoscopy. There are already publications examining the problem of polyp detection in the literature. Nevertheless, most of these systems are only used in the research context and are not implemented for clinical application. Therefore, we introduce the first fully open-source automated polyp-detection system, which scores best on current benchmark data and is implemented ready for clinical application. To create the polyp-detection system (ENDOMIND-Advanced), we combined our own collected data from different hospitals and practices in Germany with open-source datasets to create a dataset with over 500,000 annotated images. ENDOMIND-Advanced leverages a post-processing technique based on video detection to work in real time with a stream of images. It is integrated into a prototype ready for application in clinical interventions. We achieve better performance compared to the best system in the literature and score an F1-score of 90.24% on the open-source CVC-VideoClinicDB benchmark.
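One plausible instance of "post-processing based on video detection" is temporal smoothing of per-frame detections over a sliding window; the majority-vote rule below is an assumption for illustration, not the published algorithm.

```python
# Illustrative video-based post-processing: a detection is only reported if a
# polyp was flagged in a majority of the last k frames, which suppresses
# single-frame false positives in a real-time stream.
from collections import deque

def smooth_stream(frame_flags, window=5):
    recent, out = deque(maxlen=window), []
    for flag in frame_flags:
        recent.append(flag)
        out.append(sum(recent) > len(recent) // 2)  # majority vote over window
    return out

smoothed = smooth_stream([False, True, False, False, True, True, True, False])
```

The trade-off of any such filter is a short reporting delay after a polyp first appears, which is why benchmark studies also measure time to first detection.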

6.
J Digit Imaging ; 36(2): 715-724, 2023 04.
Article in English | MEDLINE | ID: mdl-36417023

ABSTRACT

This study aims to show the feasibility and benefit of single queries in a research data warehouse combining data from a hospital's clinical and imaging systems. We used a comprehensive integration of a production picture archiving and communication system (PACS) with a clinical data warehouse (CDW) for research to create a system that allows data from both domains to be queried jointly with a single query. To achieve this, we mapped the DICOM information model to the extended entity-attribute-value (EAV) data model of a CDW, which allows data linkage and query constraints on multiple levels: the patient, the encounter, a document, and a group level. Accordingly, we have integrated DICOM metadata directly into the CDW and linked it to existing clinical data. In this analysis, we included data collected in 2016 and 2017 from the Department of Internal Medicine for two query inquiries from researchers, one targeting research about a disease and one in radiology. We obtained quantitative information about the current availability of combinations of clinical and imaging data using a single multilevel query compiled for each query inquiry. We compared these multilevel query results to results that linked data at a single level, resulting in a quantitative representation of results that was up to 112% and 573% higher. An EAV data model can be extended to store data from clinical systems and PACS on multiple levels to enable combined querying with a single query to quickly display actual frequency data.
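The multilevel EAV linkage described above can be sketched with a toy in-memory table: clinical facts and DICOM metadata share one EAV schema, and a self-join on the encounter level constrains both domains to the same visit. Schema and attribute names are invented, not the production CDW schema.

```python
# Sketch: one EAV table for clinical facts and DICOM metadata, joined on the
# encounter level in a single query.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE eav (
    patient_id TEXT, encounter_id TEXT, document_id TEXT,
    attribute TEXT, value TEXT)""")
db.executemany("INSERT INTO eav VALUES (?,?,?,?,?)", [
    ("p1", "e1", "doc1",   "diagnosis", "I21.0"),  # clinical fact
    ("p1", "e1", "study1", "Modality",  "CT"),     # DICOM metadata
    ("p2", "e2", "doc2",   "diagnosis", "I21.0"),
    ("p2", "e3", "study2", "Modality",  "CT"),     # CT in a *different* encounter
])

# Patients whose diagnosis and CT study belong to the same encounter:
rows = db.execute("""
    SELECT DISTINCT a.patient_id FROM eav a JOIN eav b
      ON a.patient_id = b.patient_id AND a.encounter_id = b.encounter_id
    WHERE a.attribute = 'diagnosis' AND a.value = 'I21.0'
      AND b.attribute = 'Modality'  AND b.value = 'CT'
""").fetchall()
```

Here p2 is excluded because the CT belongs to another encounter, which illustrates why counts from multilevel linkage can differ so strongly from single-level linkage.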


Subject(s)
Radiology Information Systems , Radiology , Humans , Data Warehousing , Information Storage and Retrieval , Diagnostic Imaging
7.
Transl Vis Sci Technol ; 11(6): 22, 2022 06 01.
Article in English | MEDLINE | ID: mdl-35737376

ABSTRACT

Purpose: Nycthemeral (24-hour) intraocular pressure (IOP) monitoring in glaucoma has been used in Europe for more than 100 years to detect peaks missed during regular office hours. Data supporting this practice are lacking, because it is difficult to correlate manually drawn IOP curves to objective glaucoma progression. To address this, we developed an automated IOP data extraction tool, HIOP-Reader. Methods: Machine learning image analysis software extracted IOP data from hand-drawn, nycthemeral IOP curves of 225 retrospectively identified patients with glaucoma. The relationship between demographic parameters, IOP, and mean ocular perfusion pressure (MOPP) data to spectral-domain optical coherence tomography (SDOCT) data was analyzed. Sensitivities and specificities for the historical cutoff values of 15 mm Hg and 22 mm Hg in detecting glaucoma progression were calculated. Results: Machine data extraction was 119 times faster than manual data extraction. The IOP average was 15.2 ± 4.0 mm Hg, nycthemeral IOP variation was 6.9 ± 4.2 mm Hg, and MOPP was 59.1 ± 8.9 mm Hg. Peak IOP occurred at 10 am and trough at 9 pm. Progression occurred mainly in the temporal-superior and temporal-inferior SDOCT sectors. No correlation could be established between demographic, IOP, or MOPP variables and disease progression on OCT. The sensitivity and specificity of both cutoff points (15 and 22 mm Hg) were insufficient to be clinically useful. Outpatient IOPs were noninferior to nycthemeral IOPs. Conclusions: IOP data obtained during a single visit make for a poor diagnostic tool, no matter whether obtained using nycthemeral measurements or during outpatient hours. Translational Relevance: HIOP-Reader rapidly extracts manually recorded IOP data to allow critical analysis of existing databases.
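The cutoff evaluation behind the reported sensitivities and specificities amounts to classifying each eye as "progressing" when its peak IOP exceeds a threshold and scoring that rule against observed progression; the sketch below uses invented data values.

```python
# Sketch: sensitivity/specificity of an IOP cutoff for detecting progression.

def cutoff_performance(peak_iops, progressed, cutoff):
    tp = sum(p > cutoff and prog for p, prog in zip(peak_iops, progressed))
    fn = sum(p <= cutoff and prog for p, prog in zip(peak_iops, progressed))
    tn = sum(p <= cutoff and not prog for p, prog in zip(peak_iops, progressed))
    fp = sum(p > cutoff and not prog for p, prog in zip(peak_iops, progressed))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

sens, spec = cutoff_performance(
    peak_iops=[14, 18, 23, 25, 16],
    progressed=[False, True, False, True, True],
    cutoff=22,
)
```

Sweeping the cutoff over its range and plotting sensitivity against 1 - specificity yields the ROC curve used to judge whether any threshold is clinically useful.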


Subject(s)
Glaucoma, Open-Angle , Glaucoma , Circadian Rhythm , Glaucoma/diagnosis , Glaucoma, Open-Angle/diagnosis , Glaucoma, Open-Angle/etiology , Humans , Intraocular Pressure , Retrospective Studies , Tonometry, Ocular/adverse effects
8.
Scand J Gastroenterol ; 57(11): 1397-1403, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35701020

ABSTRACT

BACKGROUND AND AIMS: Computer-aided polyp detection (CADe) may become a standard for polyp detection during colonoscopy. Several systems are already commercially available. We report on a video-based benchmark technique for the first preclinical assessment of such systems before comparative randomized trials are to be undertaken. Additionally, we compare a commercially available CADe system with our newly developed one. METHODS: ENDOTEST consisted of two datasets. The validation dataset contained 48 video-snippets with 22,856 manually annotated images, of which 53.2% contained polyps. The performance dataset contained 10 full-length screening colonoscopies with 230,898 manually annotated images, of which 15.8% contained a polyp. Assessment parameters were accuracy for polyp detection and time delay to first polyp detection after polyp appearance (FDT). Two CADe systems were assessed: a commercial CADe system (GI-Genius, Medtronic) and a self-developed new system (ENDOMIND). The latter is a convolutional neural network trained on 194,983 manually labeled images extracted from colonoscopy videos recorded mainly in six different gastroenterology practices. RESULTS: On the ENDOTEST, both CADe systems detected all polyps in at least one image. The per-frame sensitivity and specificity in full colonoscopies was 48.1% and 93.7%, respectively, for GI-Genius, and 54% and 92.7%, respectively, for ENDOMIND. Median FDT of ENDOMIND, at 217 ms (interquartile range [IQR] 8-1533), was significantly faster than that of GI-Genius, at 1050 ms (IQR 358-2767; p = 0.003). CONCLUSIONS: Our benchmark ENDOTEST may be helpful for preclinical testing of new CADe devices. There seems to be a correlation between a shorter FDT with a higher sensitivity and a lower specificity for polyp detection.


Subject(s)
Colonic Polyps , Humans , Colonic Polyps/diagnostic imaging , Benchmarking , Colonoscopy/methods , Mass Screening
9.
Int J Colorectal Dis ; 37(6): 1349-1354, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35543874

ABSTRACT

PURPOSE: Computer-aided polyp detection (CADe) systems for colonoscopy have already been shown to increase the adenoma detection rate (ADR) in randomized clinical trials. Those commercially available closed systems often do not allow for data collection and algorithm optimization, for example regarding the usage of different endoscopy processors. Here, we present the first clinical experience with a CADe system that is publicly available for research purposes. METHODS: We developed an end-to-end data acquisition and polyp detection system named EndoMind. Examiners of four centers utilizing four different endoscopy processors used EndoMind during their clinical routine. Detected polyps, ADR, time to first detection of a polyp (TFD), and system usability were evaluated (NCT05006092). RESULTS: During 41 colonoscopies, EndoMind detected 29 of 29 adenomas in 66 of 66 polyps, resulting in an ADR of 41.5%. Median TFD was 130 ms (95%-CI, 80-200 ms) while maintaining a median false positive rate of 2.2% (95%-CI, 1.7-2.8%). The four participating centers rated the system using the System Usability Scale with a median of 96.3 (95%-CI, 70-100). CONCLUSION: EndoMind's ability to acquire data and detect polyps in real time, together with its high usability score, indicates substantial practical value for research and clinical practice. Still, clinical benefit, measured by ADR, has to be determined in a prospective randomized controlled trial.


Subject(s)
Adenoma , Colonic Polyps , Colorectal Neoplasms , Adenoma/diagnosis , Colonic Polyps/diagnosis , Colonoscopy/methods , Colorectal Neoplasms/diagnosis , Computers , Humans , Pilot Projects , Prospective Studies , Randomized Controlled Trials as Topic
10.
Biomed Eng Online ; 21(1): 33, 2022 May 25.
Article in English | MEDLINE | ID: mdl-35614504

ABSTRACT

BACKGROUND: Machine learning, especially deep learning, is becoming more and more relevant in research and development in the medical domain. For all supervised deep learning applications, data is the most critical factor in securing successful implementation and sustaining the progress of the machine learning model. Gastroenterological data in particular, which often involve endoscopic videos, are cumbersome to annotate. Domain experts are needed to interpret and annotate the videos. To support those domain experts, we generated a framework. With this framework, instead of annotating every frame in the video sequence, experts perform only key annotations at the beginning and the end of sequences with pathologies, e.g., visible polyps. Subsequently, non-expert annotators supported by machine learning add the missing annotations for the frames in between. METHODS: In our framework, an expert reviews the video and annotates a few video frames to verify the object's annotations for the non-expert. In a second step, a non-expert has visual confirmation of the given object and can annotate all following and preceding frames with AI assistance. After the expert has finished, relevant frames will be selected and passed on to an AI model. This information allows the AI model to detect and mark the desired object on all following and preceding frames with an annotation. Therefore, the non-expert can adjust and modify the AI predictions and export the results, which can then be used to train the AI model. RESULTS: Using this framework, we were able to reduce the workload of domain experts on average by a factor of 20 on our data. This is primarily due to the structure of the framework, which is designed to minimize the workload of the domain expert. Pairing this framework with a state-of-the-art semi-automated AI model enhances the annotation speed further.
Through a prospective study with 10 participants, we show that semi-automated annotation using our tool doubles the annotation speed of non-expert annotators compared to a well-known state-of-the-art annotation tool. CONCLUSION: In summary, we introduce a framework for fast expert annotation for gastroenterologists, which reduces the workload of the domain expert considerably while maintaining a very high annotation quality. The framework incorporates a semi-automated annotation system utilizing trained object detection models. The software and framework are open-source.
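The in-between annotation step can be sketched as proposing boxes between two expert key frames for the non-expert (or an AI model) to verify and adjust; linear interpolation, the `(x, y, w, h)` box format, and the frame numbering are illustrative assumptions.

```python
# Sketch: propose bounding boxes for frames between two expert key frames.

def interpolate_boxes(frame_a, box_a, frame_b, box_b):
    """Linearly interpolate a box for every frame strictly between key frames."""
    proposals = {}
    span = frame_b - frame_a
    for f in range(frame_a + 1, frame_b):
        t = (f - frame_a) / span
        proposals[f] = tuple(a + t * (b - a) for a, b in zip(box_a, box_b))
    return proposals

# Expert marked the polyp at frames 10 and 14; frames 11-13 get proposals.
proposals = interpolate_boxes(10, (0, 0, 100, 100), 14, (40, 20, 100, 100))
```

Whether the proposals come from interpolation or a tracking/detection model, the key point of the workflow is that the expensive expert only anchors the sequence while cheaper review handles the bulk of the frames.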


Subject(s)
Gastroenterologists , Endoscopy , Humans , Machine Learning , Prospective Studies
11.
Graefes Arch Clin Exp Ophthalmol ; 260(10): 3349-3356, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35501491

ABSTRACT

PURPOSE: To determine whether 24-h IOP monitoring can be a predictor for glaucoma progression and to analyze the inter-eye relationship of IOP, perfusion, and progression parameters. METHODS: We extracted data from manually drawn IOP curves with HIOP-Reader, a software suite we developed. The relationship between measured IOPs and mean ocular perfusion pressures (MOPP) to retinal nerve fiber layer (RNFL) thickness was analyzed. We determined the ROC curves for peak IOP (Tmax), average IOP (Tavg), IOP variation (IOPvar), and historical IOP cut-off levels to detect glaucoma progression (rate of RNFL loss). Bivariate analysis was also conducted to check for various inter-eye relationships. RESULTS: Two hundred seventeen eyes were included. The average IOP was 14.8 ± 3.5 mmHg, with a 24-h variation of 5.2 ± 2.9 mmHg. A total of 52% of eyes with RNFL progression data showed disease progression. There was no significant difference in Tmax, Tavg, and IOPvar between progressors and non-progressors (all p > 0.05). Except for Tavg and the temporal RNFL, there was no correlation between disease progression in any quadrant and Tmax, Tavg, and IOPvar. Twenty-four-hour and outpatient IOP variables had poor sensitivities and specificities in detecting disease progression. The correlation of inter-eye parameters was moderate; correlation with disease progression was weak. CONCLUSION: In line with our previous study, IOP data obtained during a single visit (outpatient or inpatient monitoring) make for a poor diagnostic tool, no matter the method deployed. Glaucoma progression and perfusion pressure in left and right eyes correlated weakly to moderately with each other.


Subject(s)
Glaucoma , Intraocular Pressure , Disease Progression , Glaucoma/diagnosis , Humans , Retina
12.
Health Informatics J ; 28(1): 14604582211058081, 2022.
Article in English | MEDLINE | ID: mdl-34986681

ABSTRACT

A deep integration of routine care and research remains challenging in many respects. We aimed to show the feasibility of an automated transformation and transfer process feeding deeply structured data with a high level of granularity collected for a clinical prospective cohort study from our hospital information system to the study's electronic data capture system, while accounting for study-specific data and visits. We developed a system integrating all necessary software and organizational processes, which was then used in the study. The process and key system components are described together with descriptive statistics to show its feasibility in general and to identify individual challenges in particular. Data of 2051 patients enrolled between 2014 and 2020 was transferred. We were able to automate the transfer of approximately 11 million individual data values, representing 95% of all entered study data. These were recorded in n = 314 variables (28% of all variables), with some variables being used multiple times for follow-up visits. Our validation approach allowed for constant good data quality over the course of the study. In conclusion, the automated transfer of multi-dimensional routine medical data from HIS to study databases using specific study data and visit structures is complex, yet viable.


Subject(s)
Data Warehousing , Electronic Health Records , Databases, Factual , Follow-Up Studies , Humans , Prospective Studies
13.
Gastrointest Endosc ; 95(4): 794-798, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34929183

ABSTRACT

BACKGROUND AND AIMS: Adenoma detection rate is the crucial parameter for colorectal cancer screening. Increasing the field of view with additional side optics has been reported to detect flat adenomas hidden behind folds. Furthermore, artificial intelligence (AI) has also recently been introduced to detect more adenomas. We therefore aimed to combine both technologies in a new prototypic colonoscopy concept. METHODS: A 3-dimensional-printed cap including 2 microcameras was attached to a conventional endoscope. The prototype was applied in 8 gene-targeted pigs with mutations in the adenomatous polyposis coli gene. The first 4 animals were used to train an AI system based on the images generated by microcameras. Thereafter, the conceptual prototype for detecting adenomas was tested in a further series of 4 pigs. RESULTS: Using our prototype, we detected, with side optics, adenomas that might have been missed conventionally. Furthermore, the newly developed AI could detect, mark, and present adenomas visualized with side optics outside of the conventional field of view. CONCLUSIONS: Combining AI with side optics might help detect adenomas that otherwise might have been missed.


Subject(s)
Adenoma , Colonic Polyps , Colorectal Neoplasms , Adenoma/diagnosis , Animals , Artificial Intelligence , Colonic Polyps/diagnostic imaging , Colonoscopy/methods , Colorectal Neoplasms/diagnosis , Humans , Swine
14.
Stud Health Technol Inform ; 283: 69-77, 2021 Sep 21.
Article in English | MEDLINE | ID: mdl-34545821

ABSTRACT

Optimizing the utilization of radiology departments is one of the primary objectives for many hospitals. To support this, a solution has been developed which first transforms the export of different Radiological Information Systems (RIS) into the data format of a clinical data warehouse (CDW). Additional features, such as the time between the creation of a radiologic request and the finalization of the diagnosis for the created images, can then be defined using a simple interface and are calculated and saved in the CDW as well. Finally, the query language of the CDW can be used to create custom reports with all the RIS data including the calculated features and export them into the standard formats Excel and CSV. The solution has been successfully tested with data from two German hospitals.


Subject(s)
Radiology Information Systems , Radiology , Data Warehousing , Humans
15.
Stud Health Technol Inform ; 281: 484-485, 2021 May 27.
Article in English | MEDLINE | ID: mdl-34042612

ABSTRACT

A semi-automatic tool for fast and accurate annotation of endoscopic videos utilizing trained object detection models is presented. A novel workflow is implemented and the preliminary results suggest that the annotation process is nearly twice as fast with our novel tool compared to the current state of the art.


Subject(s)
Algorithms , Gastroenterologists , Endoscopy , Humans , Machine Learning , Workflow
16.
J Digit Imaging ; 33(4): 1016-1025, 2020 08.
Article in English | MEDLINE | ID: mdl-32314069

ABSTRACT

Clinical Data Warehouses (DWHs) are used to provide researchers with simplified access to pseudonymized and homogenized clinical routine data from multiple primary systems. Experience with the integration of imaging and metadata from picture archiving and communication systems (PACS), however, is rare. Our goal was therefore to analyze the viability of integrating a production PACS with a research DWH to enable DWH queries combining clinical and medical imaging metadata and to enable the DWH to display and download images ad hoc. We developed an application interface that enables querying the production PACS of a large hospital from a clinical research DWH containing pseudonymized data. We evaluated the performance of bulk extracting metadata from the PACS to the DWH and the performance of retrieving images ad hoc from the PACS for display and download within the DWH. We integrated the system into the query interface of our DWH and used it successfully in four use cases. The bulk extraction of imaging metadata required a median (quartiles) time of 0.09 (0.03-2.25) to 12.52 (4.11-37.30) seconds for a median (quartiles) number of 10 (3-29) to 103 (8-693) images per patient, depending on the extraction approach. The ad hoc image retrieval from the PACS required a median (quartiles) of 2.57 (2.57-2.79) seconds per image for the download, but 5.55 (4.91-6.06) seconds to display the first and 40.77 (38.60-41.63) seconds to display all images using the pure web-based viewer. A full integration of a production PACS with a research DWH is viable and enables various use cases in research. While the extraction of basic metadata from all images can be done with reasonable effort, the extraction of all metadata seems to be more appropriate for subgroups.


Subject(s)
Data Warehousing , Radiology Information Systems , Diagnostic Imaging , Humans
17.
Eur Heart J ; 41(11): 1203-1211, 2020 03 14.
Article in English | MEDLINE | ID: mdl-30957867

ABSTRACT

AIMS: Anxiety, depression, and reduced quality of life (QoL) are common in patients with implantable cardioverter-defibrillators (ICDs). Treatment options are limited and insufficiently defined. We evaluated the efficacy of a web-based intervention (WBI) vs. usual care (UC) for improving psychosocial well-being in ICD patients with elevated psychosocial distress. METHODS AND RESULTS: This multicentre, randomized controlled trial (RCT) enrolled 118 ICD patients with increased anxiety or depression [≥6 points on either subscale of the Hospital Anxiety and Depression Scale (HADS)] or reduced QoL [≤16 points on the Satisfaction with Life Scale (SWLS)] from seven German sites (mean age 58.8 ± 11.3 years, 22% women). The primary outcome was a composite assessing change in heart-focused fear, depression, and mental QoL 6 weeks after randomization to WBI or UC, stratified for age, gender, and indication for ICD placement. Web-based intervention consisted of 6 weeks' access to a structured interactive web-based programme (group format) including self-help interventions based on cognitive behaviour therapy, a virtual self-help group, and on-demand support from a trained psychologist. Linear mixed-effects model analyses showed that the primary outcome was similar between groups (ηp² = 0.001). Web-based intervention was superior to UC in change from pre-intervention to 6 weeks (overprotective support, P = 0.004, ηp² = 0.036), pre-intervention to 1 year (depression, P = 0.004, ηp² = 0.032; self-management, P = 0.03, ηp² = 0.015; overprotective support, P = 0.02, ηp² = 0.031), and 6 weeks to 1 year (depression, P = 0.02, ηp² = 0.026; anxiety, P = 0.03, ηp² = 0.022; mobilization of social support, P = 0.047, ηp² = 0.018). CONCLUSION: Although the primary outcome was neutral, this is the first RCT showing that WBI can improve psychosocial well-being in ICD patients.


Subject(s)
Cognitive Behavioral Therapy , Defibrillators, Implantable , Internet-Based Intervention , Aged , Anxiety/prevention & control , Depression/therapy , Female , Humans , Male , Middle Aged , Quality of Life
18.
Stud Health Technol Inform ; 267: 46-51, 2019 Sep 03.
Article in English | MEDLINE | ID: mdl-31483253

ABSTRACT

The Clinical Quality Language (CQL) is a useful tool for defining search requests for data stores containing FHIR data. Unfortunately, there are only a few execution engines able to evaluate CQL queries. As FHIR data represents a graph structure, the authors pursue the approach of storing all data contained in a FHIR server in the graph database Neo4J and translating CQL queries into Neo4J's query language Cypher. The query results returned by the graph database are retranslated into their FHIR representation and returned to the querying user. The approach has been positively tested on publicly available FHIR servers with a handcrafted set of example CQL queries.
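The translation idea can be illustrated with a toy rule that maps one simple CQL retrieve pattern to a Cypher MATCH over FHIR resources stored as labeled nodes. The graph schema (`HAS_CODE` relationship, `Coding` node label) and the single supported pattern are assumptions for illustration, not the authors' engine.

```python
# Toy sketch: translate a simple CQL retrieve into a Cypher query string.
import re

def cql_retrieve_to_cypher(cql):
    # Matches only retrieves of the form: [ResourceType: code = '<code>']
    m = re.fullmatch(r"\[(\w+):\s*code\s*=\s*'([^']+)'\]", cql.strip())
    if not m:
        raise ValueError("only simple retrieves are supported in this sketch")
    resource, code = m.groups()
    return (f"MATCH (r:{resource})-[:HAS_CODE]->(c:Coding {{code: '{code}'}}) "
            f"RETURN r")

query = cql_retrieve_to_cypher("[Condition: code = 'E11']")
```

A real translator would parse the full CQL grammar into an AST and emit Cypher per node type; the essential design choice, mapping FHIR resource types to node labels and references to relationships, is the same.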


Subject(s)
Databases, Factual , Language
19.
Stud Health Technol Inform ; 264: 128-132, 2019 Aug 21.
Article in English | MEDLINE | ID: mdl-31437899

ABSTRACT

Secondary use of electronic health records using data aggregation systems (DAS) with standardized access interfaces (e.g., openEHR, i2b2, FHIR) has become an attractive approach to support clinical research. In order to increase the volume of underlying patient data, multiple DASs at different institutions can be connected to research networks. Two obstacles to connecting a DAS to such a network are the syntactical differences between the involved DAS query interfaces and differences in the data models the DASs operate on. The current work presents an approach to tackle both problems by translating queries from a DAS using openEHR's query language AQL (Archetype Query Language) into queries using the query language CQL (Clinical Quality Language) and vice versa. For the subset of queries expressible in both query languages, the presented approach is feasible.


Subject(s)
Electronic Health Records , Humans
20.
J Clin Med ; 8(7)2019 Jul 09.
Article in English | MEDLINE | ID: mdl-31324026

ABSTRACT

BACKGROUND: Natural language processing (NLP) is a powerful tool supporting the generation of Real-World Evidence (RWE). There is no NLP system that enables the extensive querying of parameters specific to multiple myeloma (MM) from unstructured medical reports. We therefore created an MM-specific ontology to accelerate information extraction (IE) from unstructured text. METHODS: Our MM ontology consists of extensive MM-specific and hierarchically structured attributes and values. We implemented "A Rule-based Information Extraction System" (ARIES) that uses this ontology. We evaluated ARIES on 200 randomly selected medical reports of patients diagnosed with MM. RESULTS: Our system achieved a high F1-score of 0.92 on the evaluation dataset, with a precision of 0.87 and recall of 0.98. CONCLUSIONS: Our rule-based IE system enables the comprehensive querying of medical reports. The IE accelerates the extraction of data and enables clinicians to generate RWE on hematological issues faster. RWE helps clinicians to make decisions in an evidence-based manner. Our tool accelerates the integration of research evidence into everyday clinical practice.
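Ontology-driven rule-based extraction of the kind described can be sketched as follows: each ontology attribute carries patterns for its values, and matches in the report text are mapped back to canonical attribute/value pairs. The ontology content, patterns, and matching order below are invented for illustration, not the ARIES rule set.

```python
# Sketch: map pattern matches in free text back to canonical ontology values.
import re

ONTOLOGY = {
    # attribute -> {canonical value: pattern}, most specific value first
    "ISS stage": {"stage III": r"\bISS\s*(?:stage\s*)?III\b",
                  "stage II":  r"\bISS\s*(?:stage\s*)?II\b"},
}

def extract(text):
    findings = []
    for attribute, values in ONTOLOGY.items():
        for value, pattern in values.items():
            if re.search(pattern, text, re.IGNORECASE):
                findings.append((attribute, value))
                break  # first (most specific) matching value wins
    return findings

found = extract("Multiple myeloma, ISS stage III, under lenalidomide.")
```

Ordering values from most to least specific is what keeps "stage III" from being shadowed by the "stage II" pattern; a hierarchical ontology encodes exactly that precedence.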
