ABSTRACT
Information systems such as Electronic Health Record (EHR) systems are susceptible to data quality (DQ) issues. Given the growing importance of EHR data, there is an increasing demand for strategies and tools to help ensure that available data are fit for use. However, developing reliable data quality assessment (DQA) tools necessary for guiding and evaluating improvement efforts has remained a fundamental challenge. This review examines the state of research on operationalising EHR DQA, particularly automated tooling, and highlights necessary considerations for future implementations. We reviewed 1841 articles from PubMed, Web of Science, and Scopus published between 2011 and 2021, and identified 23 DQA programs: 14 deployed in real-world settings to assess EHR data quality and 9 experimental prototypes. Many of these programs investigate the completeness (n = 15) and value conformance (n = 12) quality dimensions and are backed by knowledge items gathered from domain experts (n = 9) or from literature reviews and existing DQ measurements (n = 3). A few DQA programs also explore the feasibility of using data-driven techniques to assess EHR data quality automatically. Overall, the automation of EHR DQA is gaining traction, but current efforts are fragmented and not backed by relevant theory. Existing programs also vary in scope, type of data supported, and how measurements are sourced. There is a need to standardise programs for assessing EHR data quality, as current evidence suggests their quality is largely unknown.
Subject(s)
Data Accuracy , Electronic Health Records , Humans , Software
ABSTRACT
BACKGROUND: Drug-resistance mutations have mostly been detected using capillary electrophoresis sequencing, which does not detect minor variants with a frequency below 20%. Next-Generation Sequencing (NGS) can now detect additional mutations which can be useful for HIV-1 drug-resistance interpretation. The objective of this study was to evaluate the performance of CE-IVD assays for HIV-1 drug-resistance assessment, both for target-specific and whole-genome sequencing, using standardized end-to-end solution platforms. METHODS: A total of 301 clinical samples were prepared, extracted, and amplified for the three HIV-1 genomic targets, Protease (PR), Reverse Transcriptase (RT), and Integrase (INT), using the CE-IVD DeepChek® Assays; then 19 clinical samples, using the CE-IVD DeepChek® HIV Whole Genome Assay, were sequenced on the NGS iSeq100 and MiSeq (Illumina, San Diego, CA, USA). Sequences were compared to those obtained by capillary electrophoresis. Quality Control for Molecular Diagnostics (QCMD) samples were added to validate the clinical accuracy of these in vitro diagnostics (IVDs). Nineteen clinical samples were then tested with the same sample collection, handling, and measurement procedure to evaluate the use of NGS for whole-genome HIV-1 sequencing. Sequencing analyzer outputs were submitted to a downstream CE-IVD standalone software tailored for HIV-1 analysis and interpretation. RESULTS: The detection range was 1000 to 10⁶ cp/mL for the HIV-1 target-specific sequencing. The median coverage per sample for the three amplicons (PR/RT and INT) was 13,237 reads. High analytical reproducibility and repeatability were evidenced by a positive percent agreement of 100%. Duplicated samples in two distinct NGS runs were 100% homologous. NGS detected all the mutations found by capillary electrophoresis and identified additional resistance variants. A perfect accuracy score was obtained for the detection of drug-resistance mutations in the QCMD panel. CONCLUSIONS: This study is the first evaluation of the DeepChek® Assays for target-specific (PR/RT and INT) and whole-genome sequencing. A cutoff of 3% allowed for a better characterization of the viral population by identifying additional resistance mutations and improving the HIV-1 drug-resistance interpretation. Whole-genome sequencing is an additional and complementary tool for detecting mutations in newly infected untreated patients and heavily treatment-experienced patients, both with higher HIV-1 viral-load profiles, to offer new insight and treatment strategies, especially using the new HIV-1 capsid/maturation inhibitors, and to assess the potential clinical impact of mutations in the HIV-1 genome outside of the usual HIV-1 targets (RT/PR and INT).
Subject(s)
HIV Seropositivity , HIV-1 , Humans , Electrophoresis, Capillary , Endopeptidases , High-Throughput Nucleotide Sequencing , HIV-1/genetics , Integrases , Peptide Hydrolases , Reproducibility of Results , Research Design , Software
ABSTRACT
Objective. To provide an open-source software for repeatable and efficient quantification of T1 and T2 relaxation times with the ISMRM/NIST system phantom. Quantitative magnetic resonance imaging (qMRI) biomarkers have the potential to improve disease detection, staging and monitoring of treatment response. Reference objects, such as the system phantom, play a major role in translating qMRI methods into the clinic. The currently available open-source software for ISMRM/NIST system phantom analysis, Phantom Viewer (PV), includes manual steps that are subject to variability. Approach. We developed the Magnetic Resonance BIomarker Assessment Software (MR-BIAS) to automatically extract system phantom relaxation times. The inter-observer variability (IOV) and time efficiency of MR-BIAS and PV were observed in six volunteers analysing three phantom datasets. The IOV was measured with the coefficient of variation (CV) of percent bias (%bias) in T1 and T2 with respect to NMR reference values. The accuracy of MR-BIAS was compared to a custom script from a published study of twelve phantom datasets. This included comparison of overall bias and %bias for variable inversion recovery (T1-VIR), variable flip angle (T1-VFA) and multiple spin-echo (T2-MSE) relaxation models. Main results. MR-BIAS had a lower mean CV with T1-VIR (0.03%) and T2-MSE (0.05%) in comparison to PV with T1-VIR (1.28%) and T2-MSE (4.55%). The mean analysis duration was 9.7 times faster for MR-BIAS (0.8 min) than PV (7.6 min). There was no statistically significant difference in the overall bias, or the %bias for the majority of ROIs, as calculated by MR-BIAS or the custom script for all models. Significance. MR-BIAS has demonstrated repeatable and efficient analysis of the ISMRM/NIST system phantom, with comparable accuracy to previous studies. The software is freely available to the MRI community, providing a framework to automate required analysis tasks, with the flexibility to explore open questions and accelerate biomarker research.
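For readers unfamiliar with the inter-observer variability metric used above, the sketch below (not MR-BIAS code; all values are hypothetical) shows how percent bias against NMR reference values and its coefficient of variation across observers can be computed:

```python
import numpy as np

# Hypothetical T1 measurements (ms) for three phantom ROIs, one row per observer.
t1_measured = np.array([
    [520.0, 1010.0, 1980.0],   # observer 1
    [522.0, 1008.0, 1975.0],   # observer 2
    [518.0, 1012.0, 1990.0],   # observer 3
])
t1_reference = np.array([515.0, 1000.0, 2000.0])  # NMR reference values (ms)

# Percent bias of each observer's measurement relative to the reference value.
percent_bias = 100.0 * (t1_measured - t1_reference) / t1_reference

# Inter-observer variability: coefficient of variation of the percent bias per ROI.
cv = 100.0 * percent_bias.std(axis=0, ddof=1) / np.abs(percent_bias.mean(axis=0))
print("percent bias per observer:\n", percent_bias)
print("inter-observer CV of percent bias (%):", cv)
```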
Subject(s)
Magnetic Resonance Imaging , Software , Humans , Reproducibility of Results , Magnetic Resonance Imaging/methods , Phantoms, Imaging , Biomarkers , Magnetic Resonance Spectroscopy
ABSTRACT
BACKGROUND: Artificial intelligence (AI)-based chatbots can offer personalized, engaging, and on-demand health promotion interventions. OBJECTIVE: The aim of this systematic review was to evaluate the feasibility, efficacy, and intervention characteristics of AI chatbots for promoting health behavior change. METHODS: A comprehensive search was conducted in 7 bibliographic databases (PubMed, IEEE Xplore, ACM Digital Library, PsycINFO, Web of Science, Embase, and JMIR publications) for empirical articles published from 1980 to 2022 that evaluated the feasibility or efficacy of AI chatbots for behavior change. The screening, extraction, and analysis of the identified articles were performed by following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. RESULTS: Of the 15 included studies, several demonstrated the high efficacy of AI chatbots in promoting healthy lifestyles (n=6, 40%), smoking cessation (n=4, 27%), treatment or medication adherence (n=2, 13%), and reduction in substance misuse (n=1, 7%). However, there were mixed results regarding feasibility, acceptability, and usability. Selected behavior change theories and expert consultation were used to develop the behavior change strategies of AI chatbots, including goal setting, monitoring, real-time reinforcement or feedback, and on-demand support. Real-time user-chatbot interaction data, such as user preferences and behavioral performance, were collected on the chatbot platform to identify ways of providing personalized services. The AI chatbots demonstrated potential for scalability by deployment through accessible devices and platforms (eg, smartphones and Facebook Messenger). The participants also reported that AI chatbots offered a nonjudgmental space for communicating sensitive information. However, the reported results need to be interpreted with caution because of the moderate to high risk to internal validity, insufficient description of the AI techniques, and limited generalizability. CONCLUSIONS: AI chatbots have demonstrated efficacy for health behavior change interventions among large and diverse populations; however, future studies need to adopt robust randomized controlled trials to establish definitive conclusions.
Subject(s)
Artificial Intelligence , Health Promotion , Humans , Health Promotion/methods , Health Behavior , Delivery of Health Care , Software
ABSTRACT
MoSDeF-GOMC is a Python interface that connects the Monte Carlo software GOMC to the Molecular Simulation Design Framework (MoSDeF) ecosystem. MoSDeF-GOMC automates the process of generating initial coordinates, assigning force field parameters, and writing coordinate (PDB), connectivity (PSF), force field parameter, and simulation control files. The software lowers entry barriers for novice users while allowing advanced users to create complex workflows that encapsulate simulation setup, execution, and data analysis in a single script. All relevant simulation parameters are encoded within the workflow, ensuring reproducible simulations. MoSDeF-GOMC's capabilities are illustrated through a number of examples, including prediction of the adsorption isotherm for CO2 in IRMOF-1, free energies of hydration for neon and radon over a broad temperature range, and the vapor-liquid coexistence curve of a four-component surrogate for the jet fuel S-8. The MoSDeF-GOMC software is available on GitHub at https://github.com/GOMC-WSU/MoSDeF-GOMC.
Subject(s)
Ecosystem , Software , Workflow , Monte Carlo Method , Computer Simulation
ABSTRACT
BACKGROUND AND PURPOSE: Automated volumetric analysis of structural MR imaging allows quantitative assessment of brain atrophy in neurodegenerative disorders. We compared the brain segmentation performance of the AI-Rad Companion brain MR imaging software against an in-house FreeSurfer 7.1.1/Individual Longitudinal Participant pipeline. MATERIALS AND METHODS: T1-weighted images of 45 participants with de novo memory symptoms were selected from the OASIS-4 database and analyzed through the AI-Rad Companion brain MR imaging tool and the FreeSurfer 7.1.1/Individual Longitudinal Participant pipeline. Correlation, agreement, and consistency between the 2 tools were compared among the absolute, normalized, and standardized volumes. Final reports generated by each tool were used to compare the rates of detection of abnormality and the compatibility of radiologic impressions made using each tool, compared with the clinical diagnoses. RESULTS: We observed strong correlation, moderate consistency, and poor agreement between absolute volumes of the main cortical lobes and subcortical structures measured by the AI-Rad Companion brain MR imaging tool compared with FreeSurfer. The strength of the correlations increased after normalizing the measurements to the total intracranial volume. Standardized measurements differed significantly between the 2 tools, likely owing to differences in the normative data sets used to calibrate each tool. When considering the FreeSurfer 7.1.1/Individual Longitudinal Participant pipeline as a reference standard, the AI-Rad Companion brain MR imaging tool had a specificity of 90.6%-100% and a sensitivity of 64.3%-100% in detecting volumetric abnormalities. There was no difference between the rate of compatibility of radiologic and clinical impressions when using the 2 tools. CONCLUSIONS: The AI-Rad Companion brain MR imaging tool reliably detects atrophy in cortical and subcortical regions implicated in the differential diagnosis of dementia.
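As an illustration of how the reported sensitivity and specificity ranges are obtained when the FreeSurfer-based pipeline is treated as the reference standard, a toy calculation with hypothetical counts (not the study's data) follows:

```python
# Hypothetical counts of brain regions judged abnormal/normal by the reference
# standard (FreeSurfer-based volumetry) versus the tool under evaluation.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    sensitivity = tp / (tp + fn)   # abnormal regions correctly flagged
    specificity = tn / (tn + fp)   # normal regions correctly passed
    return sensitivity, specificity

# Example: 14 regions abnormal per the reference, 9 of them flagged by the tool;
# 31 regions normal per the reference, 29 of them also called normal by the tool.
sens, spec = sensitivity_specificity(tp=9, fn=5, tn=29, fp=2)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```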
Subject(s)
Brain , Magnetic Resonance Imaging , Humans , Brain/diagnostic imaging , Brain/pathology , Magnetic Resonance Imaging/methods , Cerebral Cortex , Software , Atrophy/pathology , Image Processing, Computer-Assisted/methods , Reproducibility of Results
ABSTRACT
A simulation of the effect of metal nano-oxides at various concentrations (25, 50, 100, and 200 milligrams per millilitre) on cell viability (%) in THP-1 cells is proposed, based on data on the molecular structure of the oxide and its concentration. We used the simplified molecular-input line-entry system (SMILES) to represent the molecular structure. So-called quasi-SMILES extends usual SMILES with special codes for experimental conditions (here, concentration). The approach based on building up models using quasi-SMILES is self-consistent, i.e., the predictive potential of the model group obtained by random splits into training and validation sets is stable. The Monte Carlo method was used as a basis for building up the above groups of models. The CORAL software was applied to carry out the Monte Carlo calculations. The average determination coefficient for the five different validation sets was R2 = 0.806 ± 0.061.
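To make the quasi-SMILES idea concrete, the sketch below appends an illustrative concentration code to a plain SMILES string and averages determination coefficients over validation splits; the codes, structures, and R2 values are illustrative assumptions, not CORAL input or output:

```python
import numpy as np

# Illustrative codes for the experimental condition (concentration, mg/mL).
conc_codes = {25: "%10", 50: "%11", 100: "%12", 200: "%13"}

def quasi_smiles(smiles: str, concentration_mg_per_ml: int) -> str:
    """Append an experimental-condition code to a plain SMILES string."""
    return smiles + conc_codes[concentration_mg_per_ml]

records = [("O=[Zn]", 25), ("O=[Zn]", 200), ("O=[Cu]", 100)]
print([quasi_smiles(s, c) for s, c in records])

# Self-consistency check in the spirit of the abstract: average determination
# coefficient over several random training/validation splits (hypothetical values).
r2_validation = np.array([0.74, 0.81, 0.85, 0.79, 0.84])
print(f"R2 = {r2_validation.mean():.3f} +/- {r2_validation.std(ddof=1):.3f}")
```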
Subject(s)
Quantitative Structure-Activity Relationship , Software , Humans , Molecular Structure , THP-1 Cells , Cell Survival , Computer Simulation , Oxides , Monte Carlo Method
ABSTRACT
This study used a multistage process of recruiting participants through Reddit with the intent of increasing data integrity when facing an infiltration of Internet bots. Approaches to increase data integrity centered on preventing the occurrence of Internet bots from the outset and increasing the ability to identify Internet bot responses. We attempted to detect bots in a study focused on understanding social factors related to autism and suicide risk. Four recruitment rounds occurred through Reddit on mental health-related subreddits, with one post made on each subreddit per recruitment round. We found a high presence of bots in the initial rounds; indeed, using location data, one third of the total responses (33.4 percent; 118/353) came from just eight locations (i.e., 4.7 percent of all locations). The proportion of detected bots was significantly different across the rounds of recruitment (χ2 = 150.22, df = 3, p < 0.001). In round 4, language advertising compensation was removed from recruitment posts. This round had significantly lower proportions of detected bots compared with round 1 (χ2 = 33.01, df = 1, p < 0.001), round 2 (χ2 = 129.14, df = 1, p < 0.001), and round 3 (χ2 = 46.6, df = 1, p < 0.001). Through a multistage recruitment process, we were able to increase the integrity of our collected data, as determined by a low percentage of fraudulent responses. Only once we removed the advertisement of compensation from recruitment posts did we see a significant decrease in the quantity and percentage of Internet bot responses. This multistage recruitment study provides valuable information regarding how to adapt when an online survey study is infiltrated with Internet bots.
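The chi-square comparisons of detected-bot proportions across recruitment rounds reported above can be reproduced in outline with SciPy; the counts below are hypothetical placeholders, not the study's data:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts per recruitment round: [detected bots, presumed genuine responses].
observed = [
    [60, 40],   # round 1
    [90, 30],   # round 2
    [70, 45],   # round 3
    [10, 90],   # round 4 (compensation language removed from posts)
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4g}")
```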
Subject(s)
Social Media , Humans , Software , Surveys and Questionnaires , Internet , Health Surveys
ABSTRACT: The present work introduces an open-source graphical user interface (GUI) computer program called DynamicMC. The present program has the ability to generate ORNL phantom input script for the Monte Carlo N-Particle (MCNP) package. The relative dynamic movement of the radiation source with respect to the ORNL phantom can be modeled, which essentially resembles the dynamic movement of source-to-target (i.e., human phantom) distance in a 3-dimensional radiation field. The present program makes the organ-based dosimetry of the human body much easier, as users are not required to write lengthy scripts or deal with any programming that many may find tedious, time consuming, and error prone. In this paper, we have demonstrated that the present program can successfully model simple and complex relative dynamic movements (i.e., those involving rotation of source and human phantom in a 3-dimensional field). The present program would be useful for organ-based dosimetry and could also be used as a tool for teaching nuclear radiation physics and its interaction with the human body.
Subject(s)
Radiometry , Software , Humans , Radiometry/methods , Phantoms, Imaging , Monte Carlo Method , Computer Simulation
ABSTRACT
BACKGROUND: There is a need for gait assessment tools that are reliable and easy to use in the clinical setting. Kinovea® is an open-access video analysis software package that supports the calculation of kinematic and spatio-temporal characteristics of human movement; however, its repeatability as a gait analysis tool has not been well addressed. The purpose of the study was to examine the applicability and reliability of an objective, quantitative, low-cost gait evaluation method that is easy to use in the clinical setting, based on Kinovea® software. METHODS: Data were collected from 44 healthy subjects, whose gait was recorded in the sagittal and frontal planes using two smartphones. The time consumption of the procedure was recorded. Kinovea® software was used to calculate kinematic and spatial parameters. RESULTS: Intra- and inter-rater reliability of the video processing, as well as intra-rater reliability of the measurement procedure, was good to excellent, with few random measurement errors. There was no measurement error due to random variation for most of the calculated parameters, except for pelvis position. CONCLUSIONS: The results suggest that, apart from low accuracy in the calculation of pelvis position, gait evaluation using Kinovea® software is objective, quantitative, low-cost, reliable, and easy to use in the clinical setting.
Subject(s)
Gait , Software , Humans , Reproducibility of Results , Movement , Healthy Volunteers , Biomechanical Phenomena
ABSTRACT
BACKGROUND: Bacteriocins are defined as thermolabile peptides produced by bacteria with biological activity against taxonomically related species. These antimicrobial peptides have a wide range of applications, including disease treatment, food preservation, and probiotics. However, even with a large industrial and biotechnological application potential, these peptides are still poorly studied and explored. BADASS is software with a user-friendly graphical interface for the search and analysis of bacteriocin diversity in whole-metagenome shotgun sequencing (WMS) data. RESULTS: The search for bacteriocin sequences is performed with tools such as BLAST or DIAMOND using the BAGEL4 database as a reference. The putative bacteriocin sequences identified are used to determine the abundance and richness of the three classes of bacteriocins. Abundance is calculated by comparing the reads identified as bacteriocins to the reads identified as the 16S rRNA gene, using the SILVA database as a reference. BADASS has a complete pipeline that starts with the quality assessment of the raw data. At the end of the analysis, BADASS automatically generates several plots of richness and abundance, as well as tabular files containing information about the main bacteriocins detected. The user is able to change the main parameters of the analysis in the graphical interface. To demonstrate how the software works, we used four datasets from WMS studies with default parameters. Lantibiotics were the most abundant bacteriocins in the four datasets. This class of bacteriocin is commonly produced by Streptomyces sp. CONCLUSIONS: With a user-friendly graphical interface and a complete pipeline, BADASS proved to be a powerful tool for prospecting bacteriocin sequences in WMS data. This tool is publicly available at https://sourceforge.net/projects/badass/ .
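A minimal sketch of the abundance and richness measures described above (bacteriocin reads normalised to 16S rRNA gene reads), using hypothetical read counts rather than actual BADASS output:

```python
# Hypothetical read counts from one WMS dataset.
bacteriocin_reads = {"class I (lantibiotics)": 1250, "class II": 430, "class III": 85}
rrna_16s_reads = 52000

for bclass, reads in bacteriocin_reads.items():
    abundance = reads / rrna_16s_reads          # bacteriocin reads per 16S rRNA read
    print(f"{bclass}: abundance = {abundance:.4f}")

richness = len(bacteriocin_reads)               # number of bacteriocin classes detected
print("richness:", richness)
```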
Subject(s)
Bacteriocins , Bacteriocins/pharmacology , Bacteriocins/genetics , RNA, Ribosomal, 16S/genetics , Software , Bacteria/genetics , Metagenome , Anti-Bacterial Agents
ABSTRACT
BACKGROUND: The purpose of this work was to obtain the dosimetric parameters of the new GZP3 60Co high-dose-rate afterloading system launched by the Nuclear Power Institute of China, which comprises two different 60Co sources. METHODS: The Monte Carlo codes Geant4 and EGSnrc were employed to derive accurate calculations of the dosimetric parameters of the new GZP3 60Co brachytherapy source in the range of 0-10 cm, following the formalism proposed by the American Association of Physicists in Medicine reports TG-43 and TG-43U1. Results of the two Monte Carlo codes were compared to verify the accuracy of the data. The source was located in the center of a 30-cm-radius spherical water phantom. RESULTS: For channels 1 and 2 of the new GZP3 60Co afterloading system, the dose-rate constant (Λ) was 1.115 cGy h⁻¹ U⁻¹ and 1.112 cGy h⁻¹ U⁻¹, and for channel 3 it was 1.116 cGy h⁻¹ U⁻¹ and 1.113 cGy h⁻¹ U⁻¹, according to Geant4 and EGSnrc, respectively. The radial dose function in the range of 0.25-10.0 cm in the longitudinal direction was calculated, and the fitting formulas for the function were obtained. The polynomial function for the radial dose function and the anisotropy function (1D and 2D), with θ from 0° to 175° and r from 0.5 to 10.0 cm, were obtained. The radial dose function and anisotropy function curves from the two Monte Carlo codes agreed well. CONCLUSION: These dosimetric data sets can be used as input data for TPS calculations and quality control for the new GZP3 60Co afterloading system.
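For reference, the TG-43/TG-43U1 formalism cited above expresses the dose rate around a line source as

$$\dot{D}(r,\theta) = S_K \,\Lambda\, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),$$

where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, g_L(r) the radial dose function, F(r,θ) the 2D anisotropy function, and the reference point is r₀ = 1 cm, θ₀ = 90°.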
Subject(s)
Brachytherapy , Radiometry , Humans , Radiotherapy Dosage , Radiometry/methods , Software , Cobalt Radioisotopes , Monte Carlo Method , Brachytherapy/methods , Anisotropy
ABSTRACT
The present paper reports the outcomes of activities concerning a real-time structural health monitoring (SHM) system for debonding flaw detection, based on ground testing of an aircraft structural component, as a basis for condition-based maintenance. In this application, a damage detection method unrelated to structural or load models is investigated. In the reported application, the system is applied to the real-time detection of two kissing-bond-type flaws artificially introduced in a full-scale composite spar under the action of external bending loads. The proposed algorithm, local high-edge onset (LHEO), detects damage as an edge onset in both the space and time domains, correlating current strain levels to next strain levels within a sliding inner product proportional to the sensor step and the acquisition time interval, respectively. The real-time implementation can run on a consumer-grade computer. The SHM algorithm was written in Matlab and compiled as a Python module, then called from a multiprocess wrapper code with separate processes for data reception and data processing. The proposed SHM system consists of fibre Bragg grating (FBG) arrays, an interrogator, an in-house SHM code, original decoding software (SW) for real-time execution of multiple SHM algorithms, and a continuous interface with an external operator.
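The abstract does not give the LHEO implementation details, so the sketch below is only an assumed illustration of a sliding-inner-product edge-onset detector applied to a strain profile; the window handling and detection rule are guesses, and the function name is hypothetical:

```python
import numpy as np

def sliding_inner_product(strain: np.ndarray, window: int) -> np.ndarray:
    """Correlate each strain window with the following window along the array.

    Illustrative only: the published LHEO algorithm may differ in windowing,
    normalisation, and thresholding.
    """
    scores = []
    for i in range(len(strain) - 2 * window + 1):
        current = strain[i : i + window]
        nxt = strain[i + window : i + 2 * window]
        scores.append(float(np.dot(current, nxt)))
    return np.array(scores)

# Hypothetical strain profile along an FBG array with a local discontinuity (a "flaw").
strain = np.concatenate([np.full(20, 100.0), np.full(5, 160.0), np.full(20, 100.0)])
scores = sliding_inner_product(strain, window=4)
print("edge-onset candidate index:", int(np.argmax(np.abs(np.diff(scores)))))
```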
Subject(s)
Computers , Software , Monitoring, Physiologic , Aircraft , Algorithms
ABSTRACT
The advancement of complex Internet of Things (IoT) devices in recent years has deepened their dependency on network connectivity, demanding low latency and high throughput. At the same time, expanding operating conditions for these devices have brought challenges that limit the design constraints and accessibility for future hardware or software upgrades. These limitations can result in data loss because of out-of-order packets if the design specification cannot keep up with network demands. In addition, existing network reordering solutions become less applicable due to the drastic changes in the type of network endpoints, as IoT devices typically have less memory and are likely to be power-constrained. One approach to address this problem is reordering packets using reconfigurable hardware to ease computation in other functions. Field Programmable Gate Array (FPGA) devices are ideal candidates for hardware implementations at the network endpoints due to their high performance and flexibility. Moreover, previous research on packet reordering using FPGAs has serious design flaws that can lead to unnecessary packet dropping due to blocking in memory. This research proposes a scalable hardware-focused method for packet reordering that can overcome the flaws from previous work while maintaining minimal resource usage and low time complexity. The design utilizes a pipelined approach to perform sorting in parallel and completes the operation within two clock cycles. FPGA resources are optimized using a two-layer memory management system that consumes minimal on-chip memory and registers. Furthermore, the design is scalable to support multi-flow applications with shared memories in a single FPGA chip.
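As a software analogue of the reordering behaviour described above (the FPGA design performs the sort in parallel within two clock cycles using a two-layer memory system, which this sketch does not model), a toy sequence-number reorder buffer might look like:

```python
class ReorderBuffer:
    """Toy in-order delivery buffer keyed by packet sequence number."""

    def __init__(self):
        self.expected = 0
        self.pending = {}                       # out-of-order packets awaiting delivery

    def push(self, seq: int, payload):
        """Store one packet and release any in-order run that becomes available."""
        delivered = []
        self.pending[seq] = payload
        while self.expected in self.pending:
            delivered.append(self.pending.pop(self.expected))
            self.expected += 1
        return delivered

buf = ReorderBuffer()
for seq, data in [(1, "B"), (0, "A"), (3, "D"), (2, "C")]:
    print(seq, "->", buf.push(seq, data))       # delivers A,B once 0 arrives; C,D once 2 arrives
```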
Subject(s)
Computers , Software , Cost-Benefit Analysis , Internet
ABSTRACT
PURPOSE: Highly accurate personalized dosimetry has drawn great attention in clinical practice. Voxel S-value (VSV) convolution has been proposed to speed up absorbed dose calculations. However, the VSV method is efficient for personalized internal radiation dosimetry only when pre-calculated VSVs exist for the radioisotope. In this work, we propose a new method for VSV calculation based on a mono-energetic particle VSV database of γ, β, α, and X-ray emissions, applicable to any radioisotope. METHODS: The mono-energetic VSV database for γ, β, α, and X-ray emissions was calculated using Monte Carlo methods. Radiation dose was first calculated based on mono-energetic VSVs for [F-18]-FDG in 10 patients. The estimated doses were compared with the values obtained from direct Monte Carlo simulation to validate the proposed method. The number of VSVs used in the calculation was optimized based on the estimated dose accuracy and computation time. RESULTS: The generated VSVs showed great consistency with the results calculated using direct Monte Carlo simulation. For [F-18]-FDG, the proposed VSV method with 9 VSVs achieved the best relative average organ absorbed dose uncertainty of 3.25%, while the calculation time was reduced by 99% and 97% compared with the Monte Carlo simulation and the traditional multiple-VSV method, respectively. CONCLUSIONS: In this work, we provide a method to generate VSV kernels for any radioisotope based on the pre-calculated mono-energetic VSV database, significantly reducing the time cost of the multiple-VSV dosimetry approach. Software was developed to generate VSV kernels for any radioisotope in 19 media.
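Conceptually, the VSV approach reduces the dose calculation to a 3D convolution of the time-integrated activity map with a voxel S-value kernel; the sketch below illustrates that step with a synthetic activity map and a toy kernel, not kernels from the mono-energetic database described above:

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical 3D time-integrated activity map (cumulated activity per voxel).
activity = np.zeros((32, 32, 32))
activity[14:18, 14:18, 14:18] = 5.0

# Toy voxel S-value kernel that falls off with distance from the source voxel.
# Real kernels are built from the mono-energetic VSV database weighted by the
# radionuclide emission spectrum; this one is purely illustrative.
z, y, x = np.indices((9, 9, 9)) - 4
kernel = 1.0 / (1.0 + x**2 + y**2 + z**2)
kernel /= kernel.sum()

# The VSV dosimetry step: absorbed dose = activity map convolved with the kernel.
dose = fftconvolve(activity, kernel, mode="same")
print("peak voxel dose (arbitrary units):", float(dose.max()))
```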
Subject(s)
Fluorodeoxyglucose F18 , Radiometry , Humans , Radiometry/methods , Radioisotopes , Software , Computer Simulation , Monte Carlo Method , Phantoms, Imaging
ABSTRACT
BACKGROUND: The emergence of digital technology in the field of psychological and educational measurement and assessment broadens the traditional concept of pencil and paper tests. New assessment models built on the proliferation of smartphones, social networks and software developments are opening up new horizons in the field. METHOD: This study is divided into four sections, each discussing the benefits and limitations of a specific type of technology-based assessment: ambulatory assessment, social networks, gamification and forced-choice testing. RESULTS: The latest developments are clearly relevant in the field of psychological and educational measurement and assessment. Among other benefits, they bring greater ecological validity to the assessment process and eliminate the bias associated with retrospective assessment. CONCLUSIONS: Some of these new approaches point to a multidisciplinary scenario with a tradition which has yet to be created. Psychometrics must secure a place in this new world by contributing sound expertise in the measurement of psychological variables. The challenges and debates facing the field of psychology as it incorporates these new approaches are also discussed.
Subject(s)
Digital Technology , Software , Humans , Retrospective Studies , Psychometrics , Educational Measurement
ABSTRACT
OBJECTIVE: This study was performed to examine the value of computed tomography-based texture assessment for characterizing different types of ovarian neoplasms. METHODS: This retrospective study involved 225 patients with histopathologically confirmed ovarian tumors after surgical resection. Two different data sets of thick (5-mm) slices (during regular and portal venous phases) were analyzed. Raw data analysis, principal component analysis, linear discriminant analysis, and nonlinear discriminant analysis were performed to classify ovarian tumors. The radiologist's misclassification rate was compared with the MaZda (texture analysis software) findings. The results were validated with the neural network classifier. Receiver operating characteristic curves were analyzed to determine the performances of different parameters. RESULTS: Nonlinear discriminant analysis had a lower misclassification rate than the other analyses. Thirty texture parameters significantly differed between the two groups. In the training set, WavEnLH_s-3 and WavEnHL_s-3 were the optimal texture features during the regular phase, while WavEnHH_s-4 and Kurtosis seemed to be the most discriminative features during the portal venous phase. In the validation test, benign versus malignant tumors and benign versus borderline lesions were well-distinguished. CONCLUSIONS: Computed tomography-based texture features provide a useful imaging signature that may assist in differentiating benign, borderline, and early-stage ovarian cancer.
Subject(s)
Ovarian Neoplasms , Tomography, X-Ray Computed , Humans , Female , Retrospective Studies , Tomography, X-Ray Computed/methods , Ovarian Neoplasms/diagnostic imaging , ROC Curve , Software , Diagnosis, Differential
ABSTRACT
The establishment of grid-connected prosumer communities to bridge the demand-supply gap in developing nations, especially in rural areas, will help minimize the use of carbon-rich fossil fuels and the resulting economic pressure. In the present study, an economical and ecosystem-friendly hybrid energy model is proposed for a grid-connected prosumer community of 147 houses in district Kotli, AJK. The grid-search-algorithm-based HOMER software is used to simulate and analyze the load demand and biomass-resource data collected on site through a survey, in order to obtain an optimal design. The research objectives are to minimize the net present cost (USD) of the design, the per-unit cost of energy (USD/kWh), and the carbon emissions (kg/year). A sensitivity analysis based on photovoltaic module lifetime is also performed. The simulations show that the per-unit cost of energy is reduced from 0.1 USD/kWh to 0.001 USD/kWh for the annual energy demand (kWh/year) of the community. Carbon emissions are also reduced from 122,056 kg/year to 1,628 kg/year through the proposed optimal energy model.
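The cost metrics optimised above follow standard HOMER-style definitions of net present cost and cost of energy; the toy calculation below uses illustrative numbers, not the study's inputs or results:

```python
# Hypothetical annualized cost and energy served for a hybrid prosumer system.
discount_rate = 0.06
project_years = 25
annualized_cost_usd = 12_000.0          # capital + replacement + O&M + fuel, annualized
annual_energy_served_kwh = 250_000.0

# The capital recovery factor converts a net present cost into an equivalent annual
# cost, so its reciprocal converts the annualized cost back into a net present cost.
crf = discount_rate * (1 + discount_rate) ** project_years / ((1 + discount_rate) ** project_years - 1)
net_present_cost = annualized_cost_usd / crf
cost_of_energy = annualized_cost_usd / annual_energy_served_kwh

print(f"NPC = {net_present_cost:,.0f} USD")
print(f"COE = {cost_of_energy:.3f} USD/kWh")
```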
Subject(s)
Ecosystem , Fossil Fuels , Software , Algorithms , Carbon
ABSTRACT
INTRODUCTION: Knee X-rays are a standard examination to diagnose multiple conditions ranging from traumatic injuries and degeneration to cancer. This study explores the differences between adult Anterior-Posterior (AP) and Posterior-Anterior (PA) weight-bearing knee examinations using absorbed radiation dose data and image quality. METHODS: The study modelled and compared AP and PA knee X-ray radiation dose data using Monte-Carlo software, an ion chamber, and thermoluminescence dosemeters (TLDs) on a Rando phantom. Imaging parameters used were 66 kVp, 4 mAs, 100 cm distance, and 13 × 24 cm collimation. The interval data analysis used a two-tailed t-test. The image quality of a sample of the AP and PA knee X-rays was assessed using a 5-point ordinal Likert Image Quality Scoring (IQS) scale and the Wilcoxon matched-pairs test. RESULTS: Monte-Carlo modelling provided limited results; the ion chamber data for absorbed dose showed no variation between AP and PA positions but were similar to the AP TLD dose. The absorbed doses recorded with batches of TLDs demonstrated a 27.4% reduction (46.1 µGy; p=0.01) in entrance skin dose (ESD) and a 9-58% dose reduction (1.6-16.4 µGy; p=0.00-0.2) to the tissues and organs while maintaining diagnostic image quality (p=0.67). CONCLUSION: The study has highlighted the various challenges of using different dosimetry approaches to measure absorbed radiation dose in extremity (knee) X-ray imaging. The Monte-Carlo simulated absorbed knee dose was overestimated, but the simulated body organ/tissue doses were lower than the actual TLD absorbed doses. The ion chamber absorbed doses did not differentiate between the positions. The TLD organ/tissue absorbed doses demonstrated a reduction in dose in the PA position compared to the AP position, without a detrimental effect on image quality. The study findings, obtained under laboratory conditions, raise awareness of the potential to lower radiation dose; replication at a clinical site is recommended.
Subject(s)
Radiometry , Software , Humans , Adult , Radiation Dosage , Radiography , Phantoms, Imaging
ABSTRACT
BACKGROUND: The advantages of meta-analysis depend on the assumptions underlying the statistical procedures used being met. One of the main assumptions that is usually taken for granted is the normality of the distribution of true effects in a random-effects model, even though the available evidence suggests that this assumption is often not met. This paper examines how 21 frequentist and 24 Bayesian methods for computing a point estimate of the heterogeneity parameter (τ²), including several novel procedures, perform when the distribution of random effects departs from normality, compared with normal scenarios, in meta-analysis of standardized mean differences. METHODS: A Monte Carlo simulation was carried out using the R software, generating data for meta-analyses using the standardized mean difference. The simulation factors were the number and average sample size of primary studies, the amount of heterogeneity, and the shape of the random-effects distribution. The point estimators were compared in terms of absolute bias and variance, although results regarding mean squared error are also discussed. RESULTS: Although not all the estimators were affected to the same extent, there was a general tendency to obtain lower and more variable τ² estimates as the random-effects distribution departed from normality. However, the ranking of the estimators in terms of absolute bias and variance did not change: those estimators that obtained lower bias also showed greater variance. Finally, a large number and sample size of primary studies acted as a bias-protective factor against a lack of normality for several procedures, whereas only a high number of studies was a variance-protective factor for most of the estimators analyzed. CONCLUSIONS: Although the estimation and inference of the combined effect have proven to be sufficiently robust, our work highlights, through the simulation results and numerical examples included here, the role that deviation from normality may play in meta-analytic conclusions. To encourage caution in the interpretation of results obtained from random-effects models, the tau2() R function is made available for obtaining the range of τ² values computed by the 45 estimators analyzed in this work, as well as for assessing how the pooled effect and its confidence and prediction intervals vary according to the estimator chosen.
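The abstract does not single out any particular estimator; as one standard frequentist example of the kind being compared, the DerSimonian-Laird point estimate of τ² for k studies with observed effects y_i and within-study variances v_i is

$$\hat{\tau}^2_{\mathrm{DL}} = \max\!\left(0,\; \frac{Q - (k-1)}{\sum_{i} w_i - \sum_{i} w_i^2 \big/ \sum_{i} w_i}\right), \qquad Q = \sum_{i} w_i \,(y_i - \bar{y}_w)^2, \quad w_i = \frac{1}{v_i}, \quad \bar{y}_w = \frac{\sum_i w_i y_i}{\sum_i w_i},$$

with negative values truncated to zero.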