Results 1 - 20 of 41
1.
Sensors (Basel) ; 24(7)2024 Apr 07.
Article in English | MEDLINE | ID: mdl-38610553

ABSTRACT

This paper proposes a novel method to improve the short-term clock bias prediction accuracy of navigation receivers and thereby address the problem of low positioning accuracy when satellite signal quality deteriorates. Considering that the clock bias of a navigation receiver is equivalent to a virtual satellite, the predicted clock bias value is used to assist the receiver in positioning. Accordingly, a combined prediction method for navigation receiver clock bias based on Empirical Mode Decomposition (EMD) and Back Propagation Neural Network (BPNN) analysis is demonstrated. In view of the systematic and random errors in the clock bias data of navigation receivers, the EMD method is first used to decompose the clock bias data; the BPNN prediction method is then used to establish a high-precision clock bias prediction model; finally, based on the predicted clock bias, three-dimensional positioning of the navigation receiver is realized by expanding the observation equation. The experimental results show that the proposed model is suitable for clock bias time-series prediction, and the three-dimensional positioning information it provides meets the requirements of navigation applications in the harsh environment of only three visible satellites.
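The BPNN stage of the pipeline above can be sketched as a minimal one-hidden-layer network trained by backpropagation on lagged clock-bias values. This is only an illustration: the EMD decomposition is omitted, and the window size, hidden-layer width, and learning rate are assumptions, not the authors' settings.

```python
import numpy as np

def train_bpnn_predictor(series, window=5, hidden=8, epochs=2000, lr=0.05, seed=0):
    """Train a one-hidden-layer BP network to predict the next value of a
    time series from its previous `window` values (squared-error loss,
    full-batch gradient descent)."""
    rng = np.random.default_rng(seed)
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:][:, None]
    W1 = rng.normal(0.0, 0.5, (window, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                 # forward pass
        err = (h @ W2 + b2) - y                  # prediction error
        dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)       # backpropagate through tanh
        dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    return lambda w: float((np.tanh(np.asarray(w) @ W1 + b1) @ W2 + b2)[0])
```

In the combined method, each EMD component would be predicted this way and the component forecasts summed before expanding the observation equation.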

2.
Clin Chem Lab Med ; 62(8): 1548-1556, 2024 Jul 26.
Article in English | MEDLINE | ID: mdl-38456711

ABSTRACT

OBJECTIVES: The aim of this study is to develop a practical method for bivariate z-score analysis which can be applied to the surveys of an external quality assessment programme. METHODS: To develop the bivariate z-score analysis, the results of four surveys of the international D-Dimer external quality assessment programme of 2022 of the ECAT Foundation were used. The proposed methodology starts by identifying the bivariate outliers using a Supervised Sequential Hotelling T2 control chart. The outlying data are removed, and all remaining data are used to provide robust estimates of the parameters of the assumed underlying bivariate normal distribution. Based on these estimates, two nested homocentric ellipses are drawn, corresponding to confidence levels of 95 and 99.7 %. The bivariate z-score plot provides the laboratory with an indication of both systematic and random deviations from zero z-score values. The bivariate z-score analysis was examined within survey 2022-D4 across the three most frequently used methods. RESULTS: The number of z-score pairs included varied between 830 and 857, and the number of bivariate outliers varied between 20 and 28. The correlation between the z-score pairs varied between 0.431 and 0.647 overall, and between 0.208 and 0.636 for the three most frequently used methods. CONCLUSIONS: The bivariate z-score analysis is of major importance when multiple samples are distributed in the same survey and dependency of the results is likely. Important lessons can be drawn from the shape of the ellipse with respect to random and systematic deviations, while individual laboratories are informed about their position in the state-of-the-art distribution and whether they have to deal with systematic and/or random deviations.
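The outlier-screening step can be sketched with a plain (non-sequential) Hotelling T² check against a chi-squared cutoff; the function and its default cutoff are illustrative assumptions, not the Supervised Sequential control chart used in the programme. For two degrees of freedom the chi-squared quantile has the closed form -2·ln(α):

```python
import math
import numpy as np

def flag_bivariate_outliers(z_pairs, alpha=0.003):
    """Flag z-score pairs whose Hotelling T^2 statistic exceeds the
    chi-squared critical value with 2 degrees of freedom, which for
    2 df is exactly -2 * ln(alpha)."""
    z = np.asarray(z_pairs, dtype=float)
    diff = z - z.mean(axis=0)                       # deviations from centroid
    inv_cov = np.linalg.inv(np.cov(z, rowvar=False))
    t2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)  # quadratic form per pair
    return t2 > -2.0 * math.log(alpha)
```

Flagged pairs would then be removed before re-estimating the bivariate normal parameters and drawing the 95% and 99.7% ellipses.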


Subject(s)
Fibrin Fibrinogen Degradation Products , Quality Control , Fibrin Fibrinogen Degradation Products/analysis , Humans , Consensus
3.
Biomed Phys Eng Express ; 10(2)2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38359444

ABSTRACT

Purpose. This study aims to establish a robust dose prescription methodology in stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) for brain metastases, considering geometrical uncertainty and minimising dose exposure to the surrounding normal brain tissue. Methods and Materials. Treatment plans employing 40%-90% isodose lines (IDL) at 10% IDL intervals were created for variously sized brain metastases. The plans were constructed to deliver 21 Gy in SRS. Robustness of each plan was analysed using parameters such as the near minimum dose to the tumour, the near maximum dose to the normal brain, and the volume of normal brain irradiated above 14 Gy. Results. Plans prescribed at 60% IDL demonstrated the least variation in the near minimum dose to the tumour and the near maximum dose to the normal brain under conditions of minimal geometrical uncertainty relative to tumour radius. When the IDL-percentage prescription was below 60%, geometrical uncertainties led to increases in these doses. Conversely, they decreased with IDL-percentage prescriptions above 60%. The volume of normal brain irradiated above 14 Gy was lowest at 60% IDL, regardless of geometrical uncertainty. Conclusions. To enhance robustness against geometrical uncertainty and to better spare healthy brain tissue, a 60% IDL prescription is recommended in SRS and SRT for brain metastases using a robotic radiosurgery system.


Subject(s)
Brain Neoplasms , Radiosurgery , Robotic Surgical Procedures , Humans , Radiosurgery/methods , Radiotherapy Dosage , Brain Neoplasms/radiotherapy , Brain Neoplasms/secondary , Brain/pathology
4.
Sensors (Basel) ; 24(4)2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38400371

ABSTRACT

Electrolysis stands as a pivotal method for environmentally sustainable hydrogen production. However, the formation of gas bubbles during the electrolysis process poses significant challenges by impeding the electrochemical reactions, diminishing cell efficiency, and dramatically increasing energy consumption. Furthermore, the inherent difficulty in detecting these bubbles arises from the non-transparency of the wall of electrolysis cells. Additionally, these gas bubbles induce alterations in the conductivity of the electrolyte, leading to corresponding fluctuations in the magnetic flux density outside of the electrolysis cell, which can be measured by externally placed magnetic sensors. By solving the inverse problem of the Biot-Savart Law, we can estimate the conductivity distribution as well as the void fraction within the cell. In this work, we study different approaches to solve the inverse problem including Invertible Neural Networks (INNs) and Tikhonov regularization. Our experiments demonstrate that INNs are much more robust to solving the inverse problem than Tikhonov regularization when the level of noise in the magnetic flux density measurements is not known or changes over space and time.
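The Tikhonov baseline mentioned above has a closed-form solution via the regularized normal equations. This minimal sketch (the function name and the dense solve are assumptions, not the authors' implementation) also shows why the method is noise-sensitive: the regularization weight λ must be tuned to a noise level that is unknown or varying here.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 via the regularized
    normal equations: x = (A^T A + lam I)^{-1} A^T b. The weight `lam`
    trades data fit against solution norm and is usually chosen from an
    estimate of the measurement-noise level."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

In the magnetic-tomography setting, `A` would be a discretized Biot-Savart forward operator mapping conductivity changes to magnetic flux density measurements.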

5.
Med Dosim ; 2024 Jan 08.
Article in English | MEDLINE | ID: mdl-38195371

ABSTRACT

The planning target volume (PTV) margin added around the clinical target volume (CTV) to deliver the desired dose accounts for systematic (Σ) and random (σ) errors during the planning and execution of intensity-modulated radiation therapy (IMRT). As these errors vary between departments, this study was conducted to determine the 3-dimensional PTV (PTV3D) margins for head and neck cancer (HNC) at our center. The same was also estimated from reported studies for a comparative assessment. A total of 77 patients with HNCs undergoing IMRT were included. Of these, 39 patients received radical RT and 38 received postoperative IMRT. An extended no-action-level protocol was implemented using on-board imaging. Shifts in the mediolateral (ML), anteroposterior (AP), and superoinferior (SI) directions of each patient were recorded for every fraction. PTV margins in each direction (ML, AP, SI) and PTV3D were calculated using van Herk's equation. Weighted PTV3D was also computed from the Σ and σ errors in each direction published in the literature for HNC. Our patients were staged T2-4 (66/77) and N0 (39/77). In all, 2280 on-board images were acquired, and daily shifts in each direction were recorded. The PTV margins in the ML, AP, and SI directions were computed as 3.2 mm, 2.9 mm, and 2.6 mm, respectively. The PTV3D margin was estimated to be 6.5 mm. This compared well with the weighted median PTV3D of 7.2 mm (range: 3.2 to 9.9) computed from the 16 studies reported in the literature. To ensure ≥95% CTV dose coverage in 90% of HNC patients, the PTV3D margin for our department was estimated as 6.5 mm. This agrees with the weighted median PTV3D margin of 7.2 mm computed from the 16 published studies in HNCs. Site-specific PTV3D margin estimation should be an integral component of the quality assurance protocol of each department to ensure adequate coverage of dose to the CTV during IMRT.
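The margin recipe used here is the standard van Herk formula, margin ≈ 2.5Σ + 0.7σ per axis. A minimal sketch of the per-axis computation from recorded daily shifts (the array layout and function name are assumptions for illustration):

```python
import numpy as np

def van_herk_margin(shifts_mm):
    """Per-axis PTV margin from daily setup shifts.
    shifts_mm: array of shape (patients, fractions) for one axis.
    Sigma = SD of per-patient mean errors (systematic component);
    sigma = RMS of per-patient SDs (random component);
    margin = 2.5 * Sigma + 0.7 * sigma, in mm."""
    shifts = np.asarray(shifts_mm, dtype=float)
    Sigma = shifts.mean(axis=1).std(ddof=1)
    sigma = np.sqrt(np.mean(shifts.std(axis=1, ddof=1) ** 2))
    return 2.5 * Sigma + 0.7 * sigma
```

This gives the coverage property quoted in the abstract: at least 95% of the prescribed dose to the CTV for 90% of patients.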

6.
Micromachines (Basel) ; 14(2)2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36838119

ABSTRACT

There are various errors in practical applications of micromachined silicon resonant accelerometers (MSRA), among which the composition of random errors is complex and uncertain. In order to improve the output accuracy of MSRA, this paper proposes an MSRA random error suppression method based on an improved grey wolf and particle swarm optimized extreme learning machine (IGWPSO-ELM). A modified wavelet threshold function is firstly used to separate the white noise from the useful signal. The output frequency at the previous sampling point and the sequence value are then added to the current output frequency to form a three-dimensional input. Additional improvements are made on the particle swarm optimized extreme learning machine (PSO-ELM): the grey wolf optimization (GWO) is fused into the algorithm and the three factors (inertia, acceleration and convergence) are non-linearized to improve the convergence efficiency and accuracy of the algorithm. The model trained offline using IGWPSO-ELM is applied to prediction compensation experiments, and the results show that the method is able to reduce velocity random walk from the original 4.3618 µg/√Hz to 2.1807 µg/√Hz, bias instability from the original 2.0248 µg to 1.3815 µg, and acceleration random walk from the original 0.53429 µg·√Hz to 0.43804 µg·√Hz, effectively suppressing the random error in the MSRA output.
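The wavelet-denoising step separates white noise by shrinking detail coefficients. The paper uses a modified threshold function; shown here for illustration is the classic soft-threshold rule that such modifications start from:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Classic soft-threshold rule for wavelet detail coefficients:
    shrink every coefficient toward zero by t, zeroing those below t.
    (The paper modifies this rule; this is the standard baseline.)"""
    c = np.asarray(coeffs, dtype=float)
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
```

In a full pipeline the signal would first be decomposed with a discrete wavelet transform, the detail coefficients thresholded, and the signal reconstructed before forming the three-dimensional IGWPSO-ELM input.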

7.
Eur J Sport Sci ; 23(4): 588-598, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35234572

ABSTRACT

Multiple statistical methods have been proposed to estimate individual responses to exercise training; yet, the evaluation of these methods is lacking. We compared five of these methods: the use of a control group, a control period, repeated testing during an intervention, a reliability trial and a repeated intervention. Apparently healthy males from the Gene SMART study completed a 4-week control period, 4 weeks of High-Intensity Interval Training (HIIT), >1 year of washout, and then subsequently repeated the same 4 weeks of HIIT, followed by an additional 8 weeks of HIIT. Aerobic fitness measurements were taken in duplicate at each time point. We found that the control group and control period were not intended to measure the degree to which individuals responded to training, but rather estimated whether individual responses to training can be detected with the current exercise protocol. After a repeated intervention, individual responses to 4 weeks of HIIT were not consistent, whereas repeated testing during the 12-week-long intervention was able to capture individual responses to HIIT. The reliability trial should not be used to study individual responses, but rather to classify participants as responders with a certain level of confidence. 12 weeks of HIIT with repeated testing during the intervention is sufficient and cost-effective to measure individual responses to exercise training, since it allows for a confident estimate of an individual's true response. Our study has significant implications for how to improve the design of exercise studies to accurately estimate individual responses to exercise training interventions.

Highlights

What are the findings?
- We implemented five statistical methods in a single study to estimate the magnitude of within-subject variability and quantify responses to exercise training at the individual level.
- The various proposed methods used to estimate individual responses to training provide different types of information and rely on different assumptions that are difficult to test.
- Within-subject variability is often large in magnitude, and as such, should be systematically evaluated and carefully considered in future studies to successfully estimate individual responses to training.

How might it impact on clinical practice in the future?
- Within-subject variability in response to exercise training is a key factor that must be considered in order to obtain a reproducible measurement of individual responses to exercise training. This is akin to ensuring data are reproducible for each subject.
- Our findings provide guidelines for future exercise training studies to ensure results are reproducible within participants and to minimise wasting precious research resources.
- By implementing five suggested methods to estimate individual responses to training, we highlight their feasibility, strengths, weaknesses and costs, for researchers to make the best decision on how to accurately measure individual responses to exercise training.
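For context, the within-subject (individual response) SD that several of these methods target is commonly estimated by subtracting control-condition variability from intervention variability in quadrature. This is the textbook formula, not necessarily the exact computation used in the study:

```python
import math

def individual_response_sd(sd_change_exp, sd_change_con):
    """True individual-response SD estimated as
    sqrt(SD_exp^2 - SD_con^2), where SD_exp and SD_con are the SDs of
    change scores under intervention and control conditions. Returns 0
    when control variability exceeds intervention variability, i.e. no
    detectable individual response."""
    diff = sd_change_exp ** 2 - sd_change_con ** 2
    return math.sqrt(diff) if diff > 0 else 0.0
```

A large control-period SD relative to the intervention SD is exactly the situation the authors warn about: apparent "responders" that reflect within-subject noise rather than true training response.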


Subject(s)
Exercise , High-Intensity Interval Training , Male , Humans , Reproducibility of Results , Exercise/physiology , Health Status
8.
Am J Epidemiol ; 192(3): 467-474, 2023 02 24.
Article in English | MEDLINE | ID: mdl-35388406

ABSTRACT

"Fusion" study designs combine data from different sources to answer questions that could not be answered (as well) by subsets of the data. Studies that augment main study data with validation data, as in measurement-error correction studies or generalizability studies, are examples of fusion designs. Fusion estimators, here solutions to stacked estimating functions, produce consistent answers to identified research questions using data from fusion designs. In this paper, we describe a pair of examples of fusion designs and estimators, one where we generalize a proportion to a target population and one where we correct measurement error in a proportion. For each case, we present an example motivated by human immunodeficiency virus research and summarize results from simulation studies. Simulations demonstrate that the fusion estimators provide approximately unbiased results with appropriate 95% confidence interval coverage. Fusion estimators can be used to appropriately combine data in answering important questions that benefit from multiple sources of information.


Subject(s)
Research Design , Humans , Computer Simulation
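As a concrete example of correcting measurement error in a proportion with validation data, the classical Rogan-Gladen estimator can stand in for the fusion estimator described above (the paper's stacked estimating-function machinery additionally propagates uncertainty from the validation sample, which this sketch does not):

```python
def rogan_gladen(p_obs, sensitivity, specificity):
    """Correct an observed proportion for outcome misclassification,
    given sensitivity and specificity estimated from validation data:
    p = (p_obs + sp - 1) / (se + sp - 1), clamped to [0, 1]."""
    p = (p_obs + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(p, 0.0), 1.0)
```

For example, with sensitivity 0.9 and specificity 0.95 from a validation subsample, an observed prevalence of 0.22 corrects to a true prevalence of 0.20.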
9.
Alzheimers Dement ; 2022 Jun 14.
Article in English | MEDLINE | ID: mdl-35699240

ABSTRACT

INTRODUCTION: The effect of random error on the performance of blood-based biomarkers for Alzheimer's disease (AD) must be determined before clinical implementation. METHODS: We measured test-retest variability of plasma amyloid beta (Aβ)42/Aβ40, neurofilament light (NfL), glial fibrillary acidic protein (GFAP), and phosphorylated tau (p-tau)217 and simulated the effects of this variability on biomarker performance when predicting either cerebrospinal fluid (CSF) Aβ status or conversion to AD dementia in 399 non-demented participants with cognitive symptoms. RESULTS: Clinical performance was highest when combining all biomarkers. Among single biomarkers, p-tau217 performed best. Test-retest variability ranged from 4.1% (Aβ42/Aβ40) to 25% (GFAP). This variability reduced the performance of the biomarkers (≈ΔAUC [area under the curve] -1% to -4%), with the smallest effects on models with p-tau217. The percentage of individuals with unstable predicted outcomes was lowest for the multi-biomarker combination (14%). DISCUSSION: Clinical prediction models combining plasma biomarkers, particularly p-tau217, exhibit high performance and are less affected by random error. Individuals with unstable predicted outcomes ("gray zone") should be recommended for further tests.

11.
J Clin Hypertens (Greenwich) ; 24(3): 263-270, 2022 03.
Article in English | MEDLINE | ID: mdl-35137521

ABSTRACT

The authors examined the proportion of US adults who would have their high blood pressure (BP) status changed if systolic BP (SBP) and diastolic BP (DBP) were measured with systematic bias and/or random error rather than following a standardized protocol. Data from the 2017-2018 National Health and Nutrition Examination Survey (NHANES; n = 5176) were analyzed. BP was measured up to three times using a mercury sphygmomanometer by a trained physician following a standardized protocol, and the readings were averaged. High BP was defined as SBP ≥130 mm Hg or DBP ≥80 mm Hg. Among US adults not taking antihypertensive medication, 32.0% (95%CI: 29.6%,34.4%) had high BP. If SBP and DBP were measured with systematic bias of 5 mm Hg (SBP) and 3.5 mm Hg (DBP) higher or lower than in NHANES, the proportion with high BP was estimated to be 44.4% (95%CI: 42.6%,46.2%) and 21.9% (95%CI: 19.5%,24.4%), respectively. Among US adults taking antihypertensive medication, 60.6% (95%CI: 57.2%,63.9%) had high BP. If SBP and DBP were measured 5 and 3.5 mm Hg higher or lower than in NHANES, the proportion with high BP was estimated to be 71.8% (95%CI: 68.3%,75.0%) and 48.4% (95%CI: 44.6%,52.2%), respectively. If BP was measured with random error, with standard deviations of 15 mm Hg for SBP and 7 mm Hg for DBP, 21.4% (95%CI: 19.8%,23.0%) of US adults not taking antihypertensive medication and 20.5% (95%CI: 17.7%,23.3%) of those taking antihypertensive medication had their high BP status re-categorized. In conclusion, measuring BP with systematic or random errors may result in the misclassification of high BP for a substantial proportion of US adults.
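The random-error scenario can be reproduced in miniature: add zero-mean noise with the stated SDs to each person's averaged readings and count status flips. The thresholds and SDs follow the abstract; the input data would be synthetic stand-ins for NHANES, and the function name is an assumption:

```python
import numpy as np

def reclassification_rate(sbp, dbp, sd_sbp=15.0, sd_dbp=7.0, seed=0):
    """Fraction of people whose high-BP status (SBP >= 130 or DBP >= 80
    mm Hg) flips when zero-mean Gaussian random error is added to both
    readings."""
    rng = np.random.default_rng(seed)
    sbp = np.asarray(sbp, dtype=float); dbp = np.asarray(dbp, dtype=float)
    high = (sbp >= 130) | (dbp >= 80)
    noisy_high = (sbp + rng.normal(0.0, sd_sbp, sbp.size) >= 130) | \
                 (dbp + rng.normal(0.0, sd_dbp, dbp.size) >= 80)
    return float((high != noisy_high).mean())
```

People near the 130/80 thresholds dominate the flips, which is why roughly a fifth of adults were re-categorized in the paper's simulation.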


Subject(s)
Hypertension , Adult , Antihypertensive Agents/therapeutic use , Blood Pressure/physiology , Humans , Hypertension/diagnosis , Hypertension/drug therapy , Hypertension/epidemiology , Nutrition Surveys , Prevalence , United States/epidemiology
12.
J Eval Clin Pract ; 28(3): 353-362, 2022 06.
Article in English | MEDLINE | ID: mdl-35089627

ABSTRACT

RATIONALE, AIMS, AND OBJECTIVES: It is generally believed that low certainty of evidence (CoE) generates inaccurate estimates of treatment effects more often than high CoE. As a result, we would expect that (a) estimates of the effects of health interventions initially based on high CoE change less frequently than those based on lower CoE, and (b) the estimated magnitudes of effect size differ between high and low CoE. Empirical assessment of these foundational principles of evidence-based medicine has been lacking. METHODS: We reviewed the Cochrane Database of Systematic Reviews from January 2016 through May 2021 for pairs of original and updated reviews with a change in CoE assessment based on the Grading of Recommendations Assessment, Development and Evaluation (GRADE) method. We assessed the difference in effect sizes between the original and updated reviews as a function of the change in CoE, which we report as a ratio of odds ratios (ROR). We compared RORs generated in the studies in which CoE changed from very low/low (VL/L) to moderate/high (M/H) versus M/H to VL/L. Heterogeneity and inconsistency were assessed using the tau and I2 statistics. We also assessed the change in precision of effect estimates (by calculating the ratio of standard errors, seR) and the absolute deviation in estimates of treatment effects (aROR). RESULTS: Four hundred and nineteen pairs of reviews were included, of which 414 (207 × 2) informed the CoE appraisal and 384 (192 × 2) the assessment of effect size. We found that CoE originally appraised as VL/L had 2.1 [95% confidence interval (CI): 1.19-4.12; p = 0.0091] times higher odds of being changed in future studies than M/H CoE. However, the effect size was not different (p = 1) when CoE changed from VL/L → M/H [ROR = 1.02 (95% CI: 0.74-1.39)] compared with M/H → VL/L [ROR = 1.02 (95% CI: 0.44-2.37)]. Similar overlap in aROR between the VL/L → M/H and M/H → VL/L subgroups was observed [median (IQR): 1.12 (1.07-1.57) vs. 1.21 (1.12-2.43)]. We observed large inconsistency across ROR estimates (I2 = 99%). There was larger imprecision in treatment effects when CoE changed from VL/L → M/H (seR = 1.46) than when it changed from M/H → VL/L (seR = 0.72). CONCLUSIONS: We found that low-quality evidence changes more often than high CoE. However, the effect size did not systematically differ between studies with low versus high CoE. The finding that the effect size did not differ between low and high CoE indicates an urgent need to refine current EBM critical appraisal methods.
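The ROR comparison of effect sizes reduces to a difference of log odds ratios, with standard errors combined in quadrature. A minimal sketch, assuming independent estimates (which the paired-review design only approximates):

```python
import math

def ratio_of_odds_ratios(or1, se_log_or1, or2, se_log_or2):
    """Ratio of odds ratios (ROR) comparing two odds-ratio estimates,
    with a 95% CI from standard errors added on the log scale."""
    log_ror = math.log(or1) - math.log(or2)
    se = math.sqrt(se_log_or1 ** 2 + se_log_or2 ** 2)
    ci = (math.exp(log_ror - 1.96 * se), math.exp(log_ror + 1.96 * se))
    return math.exp(log_ror), ci
```

An ROR near 1 with a CI spanning 1, as reported above, indicates no systematic difference in effect size between the subgroups.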


Subject(s)
Systematic Reviews as Topic , Humans
13.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-934409

ABSTRACT

Objective: To establish and evaluate a new real-time quality control method that can identify random errors by using the backpropagation neural network (BPNN) algorithm, taking the blood glucose test as an example. Methods: A total of 219 000 blood glucose results measured by the Siemens ADVIA 2400 analytical system from January 2019 to July 2020 and derived from the Laboratory Information System of the Beijing Chaoyang Hospital Laboratory Department were regarded as the unbiased data of our study. Six deviations of different sizes were introduced to generate the corresponding biased data. On each biased dataset, the BPNN and MovSD algorithms were applied and tested, and then evaluated by the traceability method and the clinical method. Results: For the BPNN algorithm, the block size was pre-set to 10 and the false-positive rate for all biases was within 0.1%. For MovSD, the optimal block size and exclusion limit were 150 and 10% respectively, and its false-positive rate across all biases was 0.38%, which was 0.28% higher than BPNN. In particular, for the two smallest error factors of 0.5 and 1, none of the random errors were detected by MovSD; for error factors larger than 1, random errors could be detected by MovSD but the MNPed was higher than that of BPNN under all deviations, by a factor of up to 91.67. A total of 460 000 reference data points were produced by the traceability procedure. The uncertainty of the BPNN algorithm evaluated with these reference data was only 0.078%. Conclusion: A real-time quality control method based on the BPNN algorithm was successfully established to identify random errors in the analytical phase. It was more efficient than the MovSD method and provides a new idea and method for the identification of random errors in clinical practice.
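For contrast with the BPNN model, the MovSD comparator can be sketched as a moving-window SD check on the patient-result stream. This is a simplification: the limit here is a fixed SD value, whereas the paper's 10% exclusion limit is defined differently, and the function name is an assumption:

```python
import numpy as np

def moving_sd_flags(results, block_size=150, limit=10.0):
    """MovSD-style check: flag each window of the last `block_size`
    patient results whose sample SD exceeds `limit`, signalling a
    possible random-error condition."""
    x = np.asarray(results, dtype=float)
    flags = []
    for i in range(block_size, x.size + 1):
        flags.append(x[i - block_size:i].std(ddof=1) > limit)
    return np.array(flags)
```

The BPNN approach replaces this single summary statistic with a learned classifier over the result stream, which is how it achieves a lower false-positive rate at the same detection task.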

14.
Front Physiol ; 12: 758015, 2021.
Article in English | MEDLINE | ID: mdl-34867462

ABSTRACT

Purpose: Instrumentation systems are increasingly used in rowing to measure training intensity and performance but have not been validated for measures of power. In this study, the concurrent validity of Peach PowerLine (six units), Nielsen-Kellerman EmPower (five units), Weba OarPowerMeter (three units), Concept2 model D ergometer (one unit), and a custom-built reference instrumentation system (Reference System; one unit) were investigated. Methods: Eight female and seven male rowers [age, 21 ± 2.5 years; rowing experience, 7.1 ± 2.6 years, mean ± standard deviation (SD)] performed a 30-s maximal test and a 7 × 4-min incremental test once per week for 5 weeks. Power per stroke was extracted concurrently from the Reference System (via chain force and velocity), the Concept2 itself, Weba (oar shaft-based), and either Peach or EmPower (oarlock-based). Differences from the Reference System in the mean (representing potential error) and the stroke-to-stroke variability (represented by its SD) of power per stroke for each stage and device, and between-unit differences, were estimated using general linear mixed modeling and interpreted using rejection of non-substantial and substantial hypotheses. Results: Potential error in mean power was decisively substantial for all devices (Concept2, -11 to -15%; Peach, -7.9 to -17%; EmPower, -32 to -48%; and Weba, -7.9 to -16%). Between-unit differences (as SD) in mean power lacked statistical precision but were substantial and consistent across stages (Peach, ∼5%; EmPower, ∼7%; and Weba, ∼2%). Most differences from the Reference System in stroke-to-stroke variability of power were possibly or likely trivial or small for Peach (-3.0 to -16%), and likely or decisively substantial for EmPower (9.7-57%), and mostly decisively substantial for Weba (61-139%) and the Concept2 (-28 to 177%). Conclusion: Potential negative error in mean power was evident for all devices and units, particularly EmPower. 
Stroke-to-stroke variation in power showed a lack of measurement sensitivity (apparent smoothing) that was minor for Peach but larger for the Concept2, whereas EmPower and Weba added random error. Peach is therefore recommended for measurement of mean and stroke power.

15.
Biomed Phys Eng Express ; 7(4)2021 06 30.
Article in English | MEDLINE | ID: mdl-34126605

ABSTRACT

Aim. The aim of the current study was to compare the deep inspiration breath-hold (DIBH) technique and the free-breathing (FB) method with respect to treatment delivery uncertainty in breast cancer radiotherapy using skin dose measurements. Methods. In a prospective manner, eighty patients were randomly selected for skin dose measurements and assigned to two groups: DIBH (40 patients) and FB (40 patients). The systematic inter-fraction dose variation was quantified using the mean percentage error (MPE) between the average measured total dose per session in three consecutive sessions and the corresponding calculated point dose from the treatment planning system. The random inter-fraction dose variation was quantified using the standard deviation (SD) of the dose delivered by the medial or lateral tangential fields, or the total session dose, over the three sessions (SDMT, SDLT, or SDtotal, respectively), while the random intra-fraction dose variation was quantified using the SD of the dose difference between the medial and lateral tangential fields in three consecutive sessions (SDMT-LT). Results. There was no statistically significant difference in MPE between the DIBH and FB groups (p = 0.583). Moreover, the mean SDtotal and SDMT of the DIBH group were significantly lower than those of the FB group (2.75 ± 2.33 cGy versus 4.45 ± 4.33 cGy, p = 0.048; and 1.94 ± 1.63 cGy versus 3.76 ± 3.42 cGy, p = 0.007, respectively). However, there was no significant difference in the mean SDLT and SDMT-LT between the two groups (p > 0.05). Conclusion. In addition to the advantage of reducing cardiopulmonary radiation doses in left breast cancer, the DIBH technique could reduce treatment delivery uncertainty compared to the FB method, owing to the significant reduction in random inter-fraction dose variations.


Subject(s)
Breast Neoplasms , Breast Neoplasms/radiotherapy , Breath Holding , Female , Humans , Prospective Studies , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted
16.
Equine Vet J ; 53(2): 205-216, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33135243

ABSTRACT

The study of free-living populations is important to generate knowledge related to the epidemiology of disease and other health outcomes. These studies are unable to provide the same level of control as is possible in laboratory studies and thus are susceptible to certain errors. The primary categories of study errors are random and systematic. Random errors cause imprecision and can be quantified using statistical methods, including the calculation of confidence intervals. Systematic errors cause bias, which is typically difficult to quantify within the context of an individual study. The three main categories of systematic errors are selection, information, and confounding bias. Selection bias occurs when enrolled animals are not representative of the target population of interest with respect to characteristics important to the primary study objective. Information bias occurs when data collected from enrolled animals deviate from the true value, and it is most damaging when errors vary among comparison groups. Both selection and information bias are prevented through the application of good study design procedures. Researchers should select study animals after careful consideration of the primary study objective and desired target population. Investigators can reduce information bias through standardised data collection procedures and the use of blinding. Confounding bias occurs when the measured association between a predictor and an outcome ignores the influential effect of an additional variable. Confounding is common, and analysts must implement appropriate statistical adjustments to reduce the associated bias. All studies will have some errors, and biased data with high precision are the most damaging to the validity of study conclusions. Authors can facilitate the critical evaluation of their research by providing text related to the limitations and potential sources of bias within the discussion section of their manuscripts.


Subject(s)
Research Design , Animals , Bias
17.
Am J Epidemiol ; 190(2): 191-193, 2021 02 01.
Article in English | MEDLINE | ID: mdl-32648906

ABSTRACT

Measures of information and surprise, such as the Shannon information value (S value), quantify the signal present in a stream of noisy data. We illustrate the use of such information measures in the context of interpreting P values as compatibility indices. S values help communicate the limited information supplied by conventional statistics and cast a critical light on cutoffs used to judge and construct those statistics. Misinterpretations of statistics may be reduced by interpreting P values and interval estimates using compatibility concepts and S values instead of "significance" and "confidence."
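The S-value itself is a one-line transform of the P value:

```python
import math

def s_value(p):
    """Shannon information (S-value) of a P value, in bits:
    s = -log2(p), the information supplied against the test model."""
    return -math.log2(p)
```

For example, p = 0.05 carries about 4.3 bits, roughly as surprising as seeing about four heads in a row from a fair coin; this framing avoids treating the 0.05 cutoff as a bright line.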


Subject(s)
Data Interpretation, Statistical , Epidemiologic Methods , Confidence Intervals , Humans , Uncertainty
18.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-910532

ABSTRACT

Objective: To evaluate the dosimetric effect of multi-leaf collimator (MLC) position errors on dynamic intensity-modulated radiotherapy (dMLC-IMRT), in order to provide guidance for establishing MLC quality control accuracy and operation tolerances. Methods: In the phantom study, a virtual water phantom was established in the treatment planning system (TPS), and three dynamic sliding-window test fields with gap widths of 5 mm, 10 mm and 20 mm were designed. Clinical treatment plans of 7 common tumor types were extracted, including nasopharyngeal carcinoma, glioma, lung cancer, esophageal cancer, cervical cancer, prostate cancer, and breast cancer, with 6 cases of each. MLC errors were introduced into a copy of the original plan to generate the simulation plans. The MLC errors included systematic open/close errors, systematic deviation errors and random errors. The dosimetric differences between the original and simulation plans were compared. Results: The phantom study showed that the sign of the dose deviation was the same as that of the systematic open/close error, and its magnitude increased with the MLC error and decreased with the gap width. The patient study showed that the systematic open/close error had a significant effect on dosimetry; the target volume dose sensitivities of the different plans were 7.258-13.743%/mm and were negatively correlated with the average field width. The dosimetric deviation caused by systematic shift errors below 2 mm was less than 2%, and the dosimetric change caused by random errors below 2 mm could be neglected in clinical treatment. Conclusions: The minimal gap width should be limited in the TPS, and the quality control of the MLC should be strengthened. In addition, for dynamic intensity-modulated treatment, a 2 mm random error is suggested as the operation tolerance during treatment delivery, and 0.2 mm alignment accuracy on each side (or 0.4 mm unilateral) is recommended as the MLC quality control accuracy to ensure the dose accuracy of radiotherapy for different tumors.

19.
Micromachines (Basel) ; 11(11)2020 Nov 21.
Article in English | MEDLINE | ID: mdl-33233457

ABSTRACT

Research and industrial studies indicate that small size, low cost, high precision, and ease of integration are the features that make microelectromechanical systems (MEMS) inertial sensors suitable for mass production and diverse applications. MEMS accelerometers and gyroscopes are now sought for an expanding range of applications, from medical devices for health care to defense and military systems. A frequently documented limitation of MEMS inertial sensors, however, is their susceptibility to environmental noise from random sources, to mechanical and electronic artifacts in the underlying systems, and to other random noise. Random-error processing is therefore essential for eliminating artifact signals and improving the accuracy and reliability of such sensors. In this paper, a systematic review is carried out of the random-error signal-processing models recently developed to improve MEMS inertial sensor precision. An in-depth literature search was performed on several databases: Web of Science, IEEE Xplore, Science Direct, and the Association for Computing Machinery Digital Library. Forty-nine representative papers focused on processing signals from MEMS accelerometers, MEMS gyroscopes, and MEMS inertial measurement units, published as journal or conference articles and indexed in these databases within the last 10 years, were downloaded and carefully reviewed. From this overview, 30 mainstream algorithms were extracted, categorized into seven groups, and analyzed to present the contributions, strengths, and weaknesses of the literature. A summary of the models developed in the studies is also presented, along with their working principles, application domains, and the conclusions drawn. 
Finally, the development trend of MEMS inertial sensor technology and its application prospects are discussed.
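A standard starting point for the random-error characterization this review surveys is the Allan variance, which separates noise terms in a MEMS gyroscope's output by how they average down with cluster time. The sketch below, on synthetic white-noise data (all parameter values are illustrative assumptions, not drawn from any reviewed paper), shows the non-overlapping form of the estimator:

```python
import numpy as np

def allan_variance(rate: np.ndarray, m: int) -> float:
    """Non-overlapping Allan variance of a rate signal at cluster size m."""
    n_clusters = len(rate) // m
    clusters = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return 0.5 * float(np.mean(np.diff(clusters) ** 2))

# Synthetic white-noise gyro output: for angle random walk (white rate
# noise), the Allan variance falls as 1/tau, i.e. as 1/m at a fixed rate.
fs = 100.0                                  # assumed sample rate, Hz
rng = np.random.default_rng(0)
white = rng.normal(0.0, 0.1, 100_000)       # illustrative noise level
for m in (10, 100, 1000):
    print(f"tau={m / fs:6.1f} s  AVAR={allan_variance(white, m):.2e}")
```

On an Allan-deviation log-log plot, the characteristic slopes (e.g. -1/2 for angle random walk, 0 at the bias-instability floor) identify which random-error terms a given processing model should target.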

20.
Quant Imaging Med Surg ; 9(7): 1255-1269, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31448211

ABSTRACT

BACKGROUND: To evaluate the performance of highly accelerated 3D MRI for inter-fractional positional measurement in MR-guided radiotherapy (MRgRT) of the head and neck (HN). METHODS: Fourteen healthy volunteers received 159 scans on a 1.5 T MR-sim to simulate MRgRT fractions. Each acquisition included a high-resolution (HQI-MRI; voxel size = 1.05×1.05×1.05 mm3; duration = 5 min) and a highly accelerated low-resolution (true-LQI-MRI; acceleration factor = 9; voxel size = 1.4×1.4×1.4 mm3; duration = 86 s) T1w spin-echo sequence (TR/TE = 420/7.2 ms). The first-session HQI-MRI was used as the reference to mimic the planning MRI. The remaining HQI-MRI scans were also retrospectively down-sampled in k-space and reconstructed with GRAPPA to generate pseudo-LQI-MRI. Inter-sessional positional shifts, calculated by rigidly registering the HQI-MRI, true-LQI-MRI and pseudo-LQI-MRI to the reference, were analyzed and compared for the overall HN and for the sub-regions of brain, nasopharynx, oropharynx and hypopharynx. RESULTS: The SDs of the systematic errors (Σ) calculated from the HQI-MRI/pseudo-LQI-MRI/true-LQI-MRI images for the overall HN were 1.11/1.14/1.08, 0.28/0.26/0.29, 0.43/0.44/0.60, and 0.77/0.79/0.74 mm for translations in LR, AP, SI and 3D, respectively; the corresponding RMS values of the random errors (σ) were 0.97/0.98/0.96, 0.28/0.27/0.26, 0.77/0.77/0.72, and 0.85/0.87/0.85 mm. Of all sub-regions, the brain showed the smallest Σ and σ in 3D. The other sub-regions showed direction-dependent error patterns, but the positioning results were consistent regardless of the dataset used for registration. CONCLUSIONS: Highly accelerated 3D MRI could be used for MR-guided HN radiotherapy without compromising position-verification accuracy.
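The Σ and σ figures above follow the usual setup-error bookkeeping for fractionated radiotherapy: Σ is the SD across subjects of each subject's mean shift, and σ is the RMS across subjects of each subject's per-fraction SD. A minimal sketch on synthetic shift data (the subject/fraction counts mirror the study design, but the shift values are simulated assumptions):

```python
import numpy as np

def systematic_and_random(shifts: np.ndarray) -> tuple[float, float]:
    """Summarize per-fraction setup shifts along one translation axis.

    shifts : array of shape (n_subjects, n_fractions), in mm.
    Returns (Sigma, sigma): Sigma = SD over subjects of the subject means
    (systematic error); sigma = RMS over subjects of the per-fraction SDs
    (random error).
    """
    subj_means = shifts.mean(axis=1)
    subj_sds = shifts.std(axis=1, ddof=1)
    Sigma = float(subj_means.std(ddof=1))
    sigma = float(np.sqrt(np.mean(subj_sds ** 2)))
    return Sigma, sigma

# 14 subjects with ~11 fractions each: a subject-specific offset
# (systematic component) plus per-fraction noise (random component).
rng = np.random.default_rng(1)
offsets = rng.normal(0.0, 1.0, size=(14, 1))
data = offsets + rng.normal(0.0, 0.9, size=(14, 11))
Sigma, sigma = systematic_and_random(data)
print(f"Sigma={Sigma:.2f} mm  sigma={sigma:.2f} mm")
```

With enough subjects and fractions, the two estimates recover the offset SD and the per-fraction noise SD used to generate the data.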
