Results 1 - 20 of 36,013
1.
An. psicol ; 40(2): 344-354, May-Sep, 2024. ilus, tab, graf
Article in Spanish | IBECS | ID: ibc-232727

ABSTRACT

Several types of intervals are usually employed in meta-analysis, a fact that has generated some confusion when interpreting them. Confidence intervals reflect the uncertainty related to a single number, the parametric mean effect size. Prediction intervals reflect the probable parametric effect size in any study of the same class as those included in a meta-analysis. Their interpretation and applications are different. In this article we explain in detail their different nature and how they can be used to answer specific questions. Numerical examples are included, as well as their computation with the metafor R package.(AU)
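The distinction between the two intervals can be reproduced outside R as well. The sketch below assumes a standard DerSimonian-Laird random-effects model with the Higgins-Thompson-Spiegelhalter prediction interval and uses invented effect sizes and variances purely for illustration; it mirrors the kind of output metafor's rma() and predict() report, but it is not the article's code.

```python
# Sketch: 95% confidence vs. prediction interval in a random-effects meta-analysis.
# Effect sizes and variances below are made-up illustrative values.
import numpy as np
from scipy import stats

yi = np.array([0.30, 0.12, 0.45, 0.26, 0.52, 0.18])        # hypothetical study effects
vi = np.array([0.020, 0.015, 0.030, 0.010, 0.040, 0.025])  # their sampling variances
k = len(yi)

# DerSimonian-Laird estimate of between-study variance tau^2
w_fixed = 1 / vi
mu_fixed = np.sum(w_fixed * yi) / np.sum(w_fixed)
Q = np.sum(w_fixed * (yi - mu_fixed) ** 2)
C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (k - 1)) / C)

# Random-effects pooled estimate and its standard error
w = 1 / (vi + tau2)
mu = np.sum(w * yi) / np.sum(w)
se = np.sqrt(1 / np.sum(w))

z = stats.norm.ppf(0.975)
ci = (mu - z * se, mu + z * se)            # uncertainty about the mean effect
t = stats.t.ppf(0.975, df=k - 2)
pi = (mu - t * np.sqrt(tau2 + se**2),      # plausible effect in a new, similar study
      mu + t * np.sqrt(tau2 + se**2))

print(f"mean effect {mu:.3f}, 95% CI {ci}, 95% PI {pi}")
```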


Subject(s)
Humans , Male , Female , Confidence Intervals , Forecasting , Data Interpretation, Statistical
2.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38837900

ABSTRACT

Randomization-based inference using the Fisher randomization test allows for the computation of Fisher-exact P-values, making it an attractive option for the analysis of small, randomized experiments with non-normal outcomes. Two common test statistics used to perform Fisher randomization tests are the difference-in-means between the treatment and control groups and the covariate-adjusted version of the difference-in-means using analysis of covariance. Modern computing allows for fast computation of the Fisher-exact P-value, but confidence intervals have typically been obtained by inverting the Fisher randomization test over a range of possible effect sizes. The test inversion procedure is computationally expensive, limiting the usage of randomization-based inference in applied work. A recent paper by Zhu and Liu developed a closed form expression for the randomization-based confidence interval using the difference-in-means statistic. We develop an important extension of Zhu and Liu to obtain a closed form expression for the randomization-based covariate-adjusted confidence interval and give practitioners a sufficiency condition that can be checked using observed data and that guarantees that these confidence intervals have correct coverage. Simulations show that our procedure generates randomization-based covariate-adjusted confidence intervals that are robust to non-normality and that can be calculated in nearly the same time as it takes to calculate the Fisher-exact P-value, thus removing the computational barrier to performing randomization-based inference when adjusting for covariates. We also demonstrate our method on a re-analysis of phase I clinical trial data.
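As a rough illustration of the randomization-based machinery discussed here, the sketch below computes a Fisher randomization P-value for the difference-in-means on hypothetical data. It uses Monte Carlo draws over re-randomizations rather than the full enumeration needed for a truly exact P-value, and it stops short of the test inversion or closed-form confidence intervals the article addresses.

```python
# Sketch: Fisher randomization test for the difference-in-means under the sharp null
# of no effect for any unit. Data below are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
y = np.array([3.1, 5.4, 2.2, 6.0, 4.4, 3.8, 5.1, 2.9])  # observed outcomes
z = np.array([1, 1, 0, 1, 0, 0, 1, 0])                   # observed treatment assignment

def diff_in_means(y, z):
    return y[z == 1].mean() - y[z == 0].mean()

t_obs = diff_in_means(y, z)

# Re-randomize treatment labels holding the outcomes fixed (sharp null).
n_perm = 10_000
t_null = np.array([diff_in_means(y, rng.permutation(z)) for _ in range(n_perm)])
p_value = np.mean(np.abs(t_null) >= abs(t_obs))

print(f"observed difference {t_obs:.2f}, randomization p-value {p_value:.3f}")
```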


Subject(s)
Computer Simulation , Confidence Intervals , Humans , Biometry/methods , Models, Statistical , Data Interpretation, Statistical , Random Allocation , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/methods
4.
J Med Syst ; 48(1): 58, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38822876

ABSTRACT

Modern anesthetic drugs ensure the efficacy of general anesthesia. Goals include reducing variability in surgical, tracheal extubation, post-anesthesia care unit, or intraoperative response recovery times. Generalized confidence intervals based on the log-normal distribution compare variability between groups, specifically ratios of standard deviations. The alternative statistical approaches, performing robust variance comparison tests, give P-values, but neither point estimates nor confidence intervals for the ratios of the standard deviations. We performed Monte-Carlo simulations to learn what happens to confidence intervals for ratios of standard deviations of anesthesia-associated times when analyses are based on the log-normal, but the true distributions are Weibull. We used simulation conditions comparable to meta-analyses of most randomized trials in anesthesia, n ≈ 25 and coefficients of variation ≈ 0.30. The estimates of the ratios of standard deviations were positively biased, but only slightly, the ratios being 0.11% to 0.33% greater than nominal. In contrast, the 95% confidence intervals were very wide (i.e., > 95% of P ≥ 0.05). Although substantive inferentially, the differences in the confidence limits were small from a clinical or managerial perspective, with a maximum absolute difference in ratios of 0.016. Thus, P < 0.05 is reliable, but investigators should plan for Type II errors at greater than nominal rates.
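To make the simulation setup concrete, the sketch below runs one replicate in the spirit of the abstract: Weibull times calibrated to a coefficient of variation near 0.30 with n = 25 per group, analyzed with a log-normal-based generalized confidence interval for the ratio of standard deviations built from generalized pivotal quantities. This is a generic GPQ construction under stated assumptions, not necessarily the article's exact implementation, and the scale factors are invented.

```python
# Sketch of one replicate: Weibull data analyzed with a log-normal-based generalized
# confidence interval for the ratio of standard deviations (generalized pivotal quantities).
from math import gamma, sqrt

import numpy as np
from scipy import optimize

rng = np.random.default_rng(7)
n, cv_target = 25, 0.30

# Weibull shape k solving CV(k) = sqrt(Gamma(1+2/k)/Gamma(1+1/k)^2 - 1) = 0.30
shape = optimize.brentq(
    lambda k: sqrt(gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1) - cv_target, 1.5, 20)

x1 = 30 * rng.weibull(shape, n)   # e.g., recovery times in minutes, group 1 (invented scale)
x2 = 36 * rng.weibull(shape, n)   # group 2 (invented scale)

def sd_gpq(x, n_draws=20_000):
    """Generalized pivotal quantity draws for the SD implied by a log-normal fit to x."""
    logs = np.log(x)
    ybar, s2, m = logs.mean(), logs.var(ddof=1), len(logs)
    sig2 = (m - 1) * s2 / rng.chisquare(m - 1, n_draws)           # pivot for sigma^2
    mu = ybar - rng.standard_normal(n_draws) * np.sqrt(sig2 / m)  # pivot for mu
    return np.sqrt(np.exp(2 * mu + sig2) * np.expm1(sig2))        # implied SD

ratio = sd_gpq(x1) / sd_gpq(x2)
lo, hi = np.percentile(ratio, [2.5, 97.5])
print(f"sample SD ratio {x1.std(ddof=1) / x2.std(ddof=1):.2f}, "
      f"95% generalized CI ({lo:.2f}, {hi:.2f})")
```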


Subject(s)
Monte Carlo Method , Humans , Confidence Intervals , Anesthesia, General , Time Factors , Models, Statistical
5.
Braz J Phys Ther ; 28(3): 101079, 2024.
Article in English | MEDLINE | ID: mdl-38865832

ABSTRACT

BACKGROUND: The physical therapy profession has made efforts to increase the use of confidence intervals due to the valuable information they provide for clinical decision-making. Confidence intervals indicate the precision of the results and describe the strength and direction of a treatment effect measure. OBJECTIVES: To determine the prevalence of reporting of confidence intervals, achievement of intended sample size, and adjustment for multiple primary outcomes in randomised trials of physical therapy interventions. METHODS: We randomly selected 100 trials published in 2021 and indexed on the Physiotherapy Evidence Database. Two independent reviewers extracted the number of participants, any sample size calculation, and any adjustments for multiple primary outcomes. We extracted whether at least one between-group comparison was reported with a 95% confidence interval and whether any confidence intervals were interpreted. RESULTS: The prevalence of use of confidence intervals was 47% (95% CI: 38, 57). Only 6% of trials (95% CI: 3, 12) both reported and interpreted a confidence interval. Among the 100 trials, 59 (95% CI: 49, 68) calculated and achieved the required sample size. Among the 100 trials, 19% (95% CI: 13, 28) had a problem with unadjusted multiplicity on the primary outcomes. CONCLUSIONS: Around half of trials of physical therapy interventions published in 2021 reported confidence intervals around between-group differences. This represents an increase of 5% from five years earlier. Very few trials interpreted the confidence intervals. Most trials reported a sample size calculation, and among these most achieved that sample size. There is still a need to increase the use of adjustment for multiple comparisons.
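The prevalence figures quoted above are simply confidence intervals for proportions out of 100 trials. The abstract does not say which interval method the authors used; the Wilson score interval in the sketch below is one common choice and yields limits close to the reported 38 to 57.

```python
# Sketch: 95% confidence interval for a reported prevalence such as 47 of 100 trials.
from statsmodels.stats.proportion import proportion_confint

count, nobs = 47, 100
lo, hi = proportion_confint(count, nobs, alpha=0.05, method="wilson")
print(f"prevalence {count / nobs:.0%}, 95% CI ({lo:.0%}, {hi:.0%})")
```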


Subject(s)
Physical Therapy Modalities , Randomized Controlled Trials as Topic , Humans , Sample Size , Confidence Intervals
6.
Korean J Anesthesiol ; 77(3): 316-325, 2024 06.
Article in English | MEDLINE | ID: mdl-38835136

ABSTRACT

The statistical significance of a clinical trial analysis result is determined by a mathematical calculation and probability based on null hypothesis significance testing. However, statistical significance does not always align with meaningful clinical effects; thus, assigning clinical relevance to statistical significance is unreasonable. A statistical result incorporating a clinically meaningful difference is a better approach to present statistical significance. Thus, the minimal clinically important difference (MCID), which requires integrating minimum clinically relevant changes from the early stages of research design, has been introduced. As a follow-up to the previous statistical round article on P values, confidence intervals, and effect sizes, in this article, we present hands-on examples of MCID and various effect sizes and discuss the terms statistical significance and clinical relevance, including cautions regarding their use.
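One way to see the distinction the abstract draws is to compare a confidence interval for a treatment effect against a minimal clinically important difference. The sketch below uses hypothetical improvement scores and an assumed MCID of 1.0; neither comes from the article.

```python
# Sketch: statistical significance vs. clinical relevance on hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
treat = rng.normal(2.0, 1.5, 60)     # improvement under treatment (hypothetical)
control = rng.normal(1.2, 1.5, 60)   # improvement under control (hypothetical)
mcid = 1.0                           # assumed minimal clinically important difference

diff = treat.mean() - control.mean()
se = np.sqrt(treat.var(ddof=1) / treat.size + control.var(ddof=1) / control.size)
tcrit = stats.t.ppf(0.975, treat.size + control.size - 2)   # pooled-df approximation
lo, hi = diff - tcrit * se, diff + tcrit * se

print(f"difference {diff:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
print("statistically significant (CI excludes 0):", lo > 0 or hi < 0)
print("confidently clinically relevant (CI above MCID):", lo > mcid)
```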


Subject(s)
Minimal Clinically Important Difference , Humans , Probability , Research Design , Clinical Trials as Topic/methods , Data Interpretation, Statistical , Confidence Intervals
7.
HGG Adv ; 5(3): 100304, 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-38720460

ABSTRACT

Genetic correlation refers to the correlation between genetic determinants of a pair of traits. When using individual-level data, it is typically estimated based on a bivariate model specification where the correlation between the two variables is identifiable and can be estimated from a covariance model that incorporates the genetic relationship between individuals, e.g., using a pre-specified kinship matrix. Inference relying on asymptotic normality of the genetic correlation parameter estimates may be inaccurate when the sample size is low, when the genetic correlation is close to the boundary of the parameter space, and when the heritability of at least one of the traits is low. We address this problem by developing a parametric bootstrap procedure to construct confidence intervals for genetic correlation estimates. The procedure simulates paired traits under a range of heritability and genetic correlation parameters, and it uses the population structure encapsulated by the kinship matrix. Heritabilities and genetic correlations are estimated using the closed-form, method-of-moments Haseman-Elston regression estimators. The proposed parametric bootstrap procedure is especially useful when genetic correlations are computed on pairs of thousands of traits measured on the same exact set of individuals. We demonstrate the parametric bootstrap approach on a proteomics dataset from the Jackson Heart Study.
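The recipe behind a parametric bootstrap confidence interval can be shown in a stripped-down form: fit the model, simulate new data from the fitted parameters, re-estimate, and take percentiles of the re-estimates. The sketch below does this for an ordinary bivariate-normal correlation on made-up traits; it omits the kinship structure and the Haseman-Elston estimators the article actually uses.

```python
# Sketch of the parametric-bootstrap recipe for a correlation (percentile CI).
import numpy as np

rng = np.random.default_rng(11)
n = 80

# "Observed" paired traits (hypothetical)
truth = np.array([[1.0, 0.4], [0.4, 1.0]])
traits = rng.multivariate_normal([0, 0], truth, size=n)
r_hat = np.corrcoef(traits.T)[0, 1]

# Parametric bootstrap: simulate from the fitted correlation, re-estimate each time
fitted = np.array([[1.0, r_hat], [r_hat, 1.0]])
boots = []
for _ in range(2000):
    sim = rng.multivariate_normal([0, 0], fitted, size=n)
    boots.append(np.corrcoef(sim.T)[0, 1])

lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"estimate {r_hat:.2f}, 95% parametric bootstrap CI ({lo:.2f}, {hi:.2f})")
```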


Subject(s)
Models, Genetic , Humans , Protein Interaction Maps/genetics , Confidence Intervals , Computer Simulation , Algorithms , Phenotype
8.
Stat Med ; 43(15): 2894-2927, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38738397

ABSTRACT

Estimating causal effects from large experimental and observational data has become increasingly prevalent in both industry and research. The bootstrap is an intuitive and powerful technique used to construct standard errors and confidence intervals of estimators. Its application however can be prohibitively demanding in settings involving large data. In addition, modern causal inference estimators based on machine learning and optimization techniques exacerbate the computational burden of the bootstrap. The bag of little bootstraps has been proposed in non-causal settings for large data but has not yet been applied to evaluate the properties of estimators of causal effects. In this article, we introduce a new bootstrap algorithm called causal bag of little bootstraps for causal inference with large data. The new algorithm significantly improves the computational efficiency of the traditional bootstrap while providing consistent estimates and desirable confidence interval coverage. We describe its properties, provide practical considerations, and evaluate the performance of the proposed algorithm in terms of bias, coverage of the true 95% confidence intervals, and computational time in a simulation study. We apply it in the evaluation of the effect of hormone therapy on the average time to coronary heart disease using a large observational data set from the Women's Health Initiative.
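A minimal sketch of the bag-of-little-bootstraps skeleton is given below for a plain difference-in-means estimator on hypothetical data: each small subset is reweighted up to the full sample size with multinomial weights, a percentile interval is formed per subset, and the interval limits are averaged across subsets. The article's causal version wraps far more elaborate causal-effect estimators, which this sketch does not attempt.

```python
# Sketch of the bag of little bootstraps (BLB) for a weighted difference-in-means.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
z = rng.integers(0, 2, n)                    # hypothetical treatment indicator
y = 1.0 + 0.3 * z + rng.normal(0, 2, n)      # hypothetical outcome, true effect 0.3

def estimator(y, z, w):
    return (np.sum(w * y * z) / np.sum(w * z)
            - np.sum(w * y * (1 - z)) / np.sum(w * (1 - z)))

s, r = 10, 100                               # number of subsets, resamples per subset
b = int(n ** 0.7)                            # subset size, a common BLB choice
idx = rng.permutation(n)
limits = []
for j in range(s):
    sub = idx[j * b:(j + 1) * b]
    ys, zs = y[sub], z[sub]
    stats_j = []
    for _ in range(r):
        w = rng.multinomial(n, np.full(b, 1 / b))   # weights summing to the full n
        stats_j.append(estimator(ys, zs, w))
    limits.append(np.percentile(stats_j, [2.5, 97.5]))

lo, hi = np.mean(limits, axis=0)
print(f"estimate {estimator(y, z, np.ones(n)):.3f}, BLB 95% CI ({lo:.3f}, {hi:.3f})")
```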


Subject(s)
Algorithms , Causality , Computer Simulation , Humans , Female , Confidence Intervals , Coronary Disease/epidemiology , Models, Statistical , Data Interpretation, Statistical , Bias , Observational Studies as Topic/methods , Observational Studies as Topic/statistics & numerical data
9.
Med Decis Making ; 44(4): 365-379, 2024 May.
Article in English | MEDLINE | ID: mdl-38721872

ABSTRACT

BACKGROUND: For time-to-event endpoints, three additional benefit assessment methods have been developed aiming at unbiased knowledge of the magnitude of clinical benefit of newly approved treatments. The American Society of Clinical Oncology (ASCO) defines a continuous score using the hazard ratio point estimate (HR-PE). The European Society for Medical Oncology (ESMO) and the German Institute for Quality and Efficiency in Health Care (IQWiG) developed methods with an ordinal outcome using the lower and upper limits of the 95% HR confidence interval (HR-CI), respectively. We describe all three frameworks for additional benefit assessment, aiming at a fair comparison across different stakeholders. Furthermore, we determine which ASCO score is consistent with which ESMO/IQWiG category. METHODS: In a comprehensive simulation study with different failure time distributions and treatment effects, we compare all methods using Spearman's correlation and descriptive measures. To determine ASCO values consistent with the ESMO/IQWiG categories, a weighted Cohen's kappa maximization approach was used. RESULTS: Our research shows a high positive relationship between ASCO and IQWiG and a low positive relationship between ASCO and ESMO. ASCO scores smaller than 17, from 17 to 20, from 20 to 24, and greater than 24 correspond to the ESMO categories. Using ASCO values of 21 and 38 as cutoffs represents the IQWiG categories. LIMITATIONS: We investigated the statistical aspects of the methods and hence implemented slightly reduced versions of all methods. CONCLUSIONS: IQWiG and ASCO are more conservative than ESMO, which often awards the maximal category independent of the true effect and is at risk of overcompensating with various failure time distributions. ASCO has similar characteristics to IQWiG. Delayed treatment effects and underpowered/overpowered studies influence all methods to some degree. Nevertheless, ESMO is the most liberal one. HIGHLIGHTS: For the additional benefit assessment, the American Society of Clinical Oncology (ASCO) uses the hazard ratio point estimate (HR-PE) for their continuous score. In contrast, the European Society for Medical Oncology (ESMO) and the German Institute for Quality and Efficiency in Health Care (IQWiG) compare the lower and upper limits of the 95% HR confidence interval (HR-CI) to specific thresholds, respectively. ESMO generously assigns maximal scores, while IQWiG is more conservative. This research provides the first comparison between IQWiG and ASCO and describes all three frameworks for additional benefit assessment aiming for a fair comparison across different stakeholders. Furthermore, thresholds for ASCO consistent with ESMO and IQWiG categories are determined, enabling a comparison of the methods in practice in a fair manner. IQWiG and ASCO are the more conservative methods, while ESMO awards high percentages of maximal categories, especially with various failure time distributions. ASCO has similar characteristics to IQWiG. Delayed treatment effects and under- or overpowered studies influence all methods. Nevertheless, ESMO is the most liberal one. ASCO scores smaller than 17, from 17 to 20, from 20 to 24, and greater than 24 correspond to the ESMO categories. Using ASCO values of 21 and 38 as cutoffs represents the IQWiG categories.
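The cutoffs reported above lend themselves to a small lookup. The toy function below maps an ASCO score to ESMO- and IQWiG-consistent categories using the thresholds from the abstract (17/20/24 and 21/38); the numeric category labels and the handling of scores that fall exactly on a boundary are assumptions made here for illustration.

```python
# Sketch: mapping a continuous ASCO score onto ESMO- and IQWiG-consistent categories
# using the cutoffs reported in the abstract. Labels and boundary handling are assumed.
import bisect

def esmo_consistent_category(asco_score: float) -> int:
    # Returns 1-4, higher meaning consistency with a higher ESMO grade
    return 1 + bisect.bisect_left([17, 20, 24], asco_score)

def iqwig_consistent_category(asco_score: float) -> int:
    # Returns 1-3 using the 21 and 38 cutoffs
    return 1 + bisect.bisect_left([21, 38], asco_score)

for score in (12, 19, 30, 40):
    print(score, esmo_consistent_category(score), iqwig_consistent_category(score))
```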


Subject(s)
Proportional Hazards Models , Humans , Computer Simulation , Confidence Intervals , Medical Oncology/methods , Medical Oncology/standards
10.
JNCI Cancer Spectr ; 8(3)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38684185

ABSTRACT

Statistical significance has long relied on the criterion of P less than or equal to .05. Although this threshold has generally functioned well, it has engendered some negative practices to circumvent it and been criticized as too inflexible. We concur with the statisticians and methodologists who are currently arguing for more flexibility to the P value and more reliance on the 95% confidence interval, a shift that is likely to change future practice in data analysis and interpretation for oncology.


Subject(s)
Medical Oncology , Humans , Data Interpretation, Statistical , Confidence Intervals , Research Design
11.
Stat Med ; 43(12): 2359-2367, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38565328

ABSTRACT

A multi-stage randomized trial design can significantly improve efficiency by allowing early termination of the trial when the experimental arm exhibits either low or high efficacy compared to the control arm during the study. However, proper inference methods are necessary because the underlying distribution of the target statistic changes due to the multi-stage structure. This article focuses on multi-stage randomized phase II trials with a dichotomous outcome, such as treatment response, and proposes exact conditional confidence intervals for the odds ratio. The usual single-stage confidence intervals are invalid when used in multi-stage trials. To address this issue, we propose a linear ordering of all possible outcomes. This ordering is conditioned on the total number of responders in each stage and utilizes the exact conditional distribution function of the outcomes. This approach enables the estimation of an exact confidence interval accounting for the multi-stage designs.
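For a single 2x2 table, an exact conditional confidence interval for the odds ratio (conditioning on the table margins) is available off the shelf; the article's contribution is extending this style of conditional-exact construction to multi-stage designs, which the sketch below does not do. Counts are hypothetical, and SciPy 1.10 or later is assumed for scipy.stats.contingency.odds_ratio.

```python
# Sketch: single-stage exact conditional confidence interval for an odds ratio.
from scipy.stats.contingency import odds_ratio

#                 response  no response
table = [[12, 18],          # experimental arm (hypothetical counts)
         [ 6, 24]]          # control arm (hypothetical counts)

res = odds_ratio(table, kind="conditional")   # conditional maximum likelihood estimate
ci = res.confidence_interval(confidence_level=0.95)
print(f"conditional OR {res.statistic:.2f}, exact 95% CI ({ci.low:.2f}, {ci.high:.2f})")
```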


Subject(s)
Clinical Trials, Phase II as Topic , Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Clinical Trials, Phase II as Topic/methods , Clinical Trials, Phase II as Topic/statistics & numerical data , Confidence Intervals , Odds Ratio , Models, Statistical , Computer Simulation , Research Design
12.
Sensors (Basel) ; 24(6)2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38544148

ABSTRACT

Parkinson's disease is one of the major neurodegenerative diseases that affect the postural stability of patients, especially during gait initiation. There is currently an increasing demand for the development of new non-pharmacological tools that can easily classify healthy/affected patients as well as the degree of evolution of the disease. The experimental characterization of gait initiation (GI) is usually done through the simultaneous acquisition of about 20 variables, resulting in very large datasets. Dimension reduction tools are therefore suitable, considering the complexity of the physiological processes involved. Principal Component Analysis (PCA) is very powerful at reducing the dimensionality of large datasets and emphasizing correlations between variables. In this paper, PCA was enhanced with bootstrapping and applied to the study of GI to identify the three major sets of variables influencing the postural control disability of Parkinsonian patients during GI. We show that the combination of these methods can lead to a significant improvement in the unsupervised classification of healthy/affected patients using a Gaussian mixture model, since it leads to a reduced confidence interval on the estimated parameters. The benefits of this method for the identification and study of the efficiency of potential treatments are not addressed in this paper but could be addressed in future works.
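A rough sketch of the pipeline described above, on synthetic data standing in for the roughly 20 gait-initiation variables: PCA for dimension reduction, a bootstrap to put confidence intervals on the variance explained by the leading components, and an unsupervised two-group Gaussian mixture. It is an outline of the approach, not the authors' analysis.

```python
# Sketch: PCA + bootstrap confidence interval + Gaussian mixture classification
# on synthetic stand-in data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
n, p = 60, 20                                   # subjects x gait-initiation variables
X = rng.normal(size=(n, p))
X[: n // 2, :5] += 1.5                          # crude "affected vs. healthy" structure

# Bootstrap the share of variance carried by the first 3 principal components
shares = []
for _ in range(500):
    sample = X[rng.integers(0, n, n)]
    shares.append(PCA(n_components=3).fit(sample).explained_variance_ratio_.sum())
lo, hi = np.percentile(shares, [2.5, 97.5])
print(f"variance explained by 3 PCs: 95% bootstrap CI ({lo:.2f}, {hi:.2f})")

# Unsupervised two-group classification in the reduced space
scores = PCA(n_components=3).fit_transform(X)
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(scores)
print("group sizes:", np.bincount(labels))
```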


Subject(s)
Gait Disorders, Neurologic , Parkinson Disease , Humans , Principal Component Analysis , Confidence Intervals , Parkinson Disease/therapy , Gait/physiology , Postural Balance/physiology
13.
Rev. int. med. cienc. act. fis. deporte ; 24(95): 1-23, mar.-2024. graf, tab
Article in English | IBECS | ID: ibc-ADZ-313

ABSTRACT

The CBA is a sports event that allows fans to enjoy themselves and players to perform at their best, and traditional Chinese cultural values have a profound influence on it. This paper takes 100 sets of historical rating data for the fourteen teams in the CBA league as its basic input. First, we process the 100 sets of historical rating data and use Excel formulas to obtain each team's mean, range, and variance; a SAS normality test then shows that, apart from a few strongly deviating values, the historical rating data follow a normal distribution. Values are screened with an outlier algorithm, confidence intervals are compared, and hypothesis tests are carried out to explore, objectively and scientifically, the probability of each team winning the CBA league championship. The championship probabilities of the fourteen teams are compared and the top four teams in the league are predicted, so that the prediction results are as reasonable as possible. Hierarchical analysis is used to qualitatively assess the level of each team, cluster analysis to compare these data, and, combined with trends in the development of world basketball, multiple regression and SPSS to analyze the factors behind each team's level, providing deeper insight into the league, a more scientific way to raise a team's probability of winning the championship, and support for the further development of basketball. (AU)
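The descriptive steps mentioned above (per-team mean, range, and variance, a normality check, and interval comparison) can be sketched as follows on simulated stand-ins for the historical ratings; team names, the rating scale, and the data themselves are invented.

```python
# Sketch: per-team descriptives, a normality test, and a 95% CI for the mean rating,
# on simulated stand-in data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
teams = {f"Team {i + 1}": rng.normal(100 + i, 8, 100) for i in range(14)}  # 100 ratings each

for name, x in teams.items():
    _, p_normal = stats.shapiro(x)                           # normality test
    ci = stats.t.interval(0.95, len(x) - 1, loc=x.mean(),
                          scale=stats.sem(x))                # CI for the mean rating
    print(f"{name}: mean {x.mean():.1f}, range {np.ptp(x):.1f}, "
          f"var {x.var(ddof=1):.1f}, normality p {p_normal:.2f}, "
          f"95% CI ({ci[0]:.1f}, {ci[1]:.1f})")
```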


Subject(s)
Humans , Confidence Intervals , Hypothesis-Testing , Forecasting , Research Support as Topic , Basketball
14.
Genet Sel Evol ; 56(1): 18, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38459504

ABSTRACT

BACKGROUND: Validation by data truncation is a common practice in genetic evaluations because of the interest in predicting the genetic merit of a set of young selection candidates. Two of the most used validation methods in genetic evaluations use a single data partition: predictivity or predictive ability (correlation between pre-adjusted phenotypes and estimated breeding values (EBV) divided by the square root of the heritability) and the linear regression (LR) method (comparison of "early" and "late" EBV). Both methods compare predictions with the whole dataset and a partial dataset that is obtained by removing the information related to a set of validation individuals. EBV obtained with the partial dataset are compared against adjusted phenotypes for the predictivity or EBV obtained with the whole dataset in the LR method. Confidence intervals for predictivity and the LR method can be obtained by replicating the validation for different samples (or folds), or bootstrapping. Analytical confidence intervals would be beneficial to avoid running several validations and to test the quality of the bootstrap intervals. However, analytical confidence intervals are unavailable for predictivity and the LR method. RESULTS: We derived standard errors and Wald confidence intervals for the predictivity and statistics included in the LR method (bias, dispersion, ratio of accuracies, and reliability). The confidence intervals for the bias, dispersion, and reliability depend on the relationships and prediction error variances and covariances across the individuals in the validation set. We developed approximations for large datasets that only need the reliabilities of the individuals in the validation set. The confidence intervals for the ratio of accuracies and predictivity were obtained through the Fisher transformation. We show the adequacy of both the analytical and approximated analytical confidence intervals and compare them versus bootstrap confidence intervals using two simulated examples. The analytical confidence intervals were closer to the simulated ones for both examples. Bootstrap confidence intervals tend to be narrower than the simulated ones. The approximated analytical confidence intervals were similar to those obtained by bootstrapping. CONCLUSIONS: Estimating the sampling variation of predictivity and the statistics in the LR method without replication or bootstrap is possible for any dataset with the formulas presented in this study.
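The Fisher-transformation interval mentioned in the results is the standard construction for a correlation-type statistic, sketched below for a hypothetical predictivity value and validation-set size; the numbers are not from the article.

```python
# Sketch: Fisher-transformation 95% confidence interval for a correlation-type
# statistic such as predictivity.
import numpy as np
from scipy import stats

r, n_validation = 0.35, 400             # hypothetical predictivity and validation size
z = np.arctanh(r)                       # Fisher z-transform
se = 1 / np.sqrt(n_validation - 3)
zcrit = stats.norm.ppf(0.975)
lo, hi = np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)
print(f"predictivity {r:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```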


Subject(s)
Genomics , Models, Genetic , Humans , Genotype , Reproducibility of Results , Confidence Intervals , Pedigree , Genomics/methods , Phenotype
15.
Stat Med ; 43(11): 2216-2238, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38545940

ABSTRACT

A frequently addressed issue in clinical trials is the comparison of censored paired survival outcomes, for example, when individuals were matched based on their characteristics prior to the analysis. In this regard, a proper incorporation of the dependence structure of the paired censored outcomes is required and, up to now, appropriate methods are only rarely available in the literature. Moreover, existing methods are not motivated by the aim of gaining insights by means of an easy-to-interpret parameter. Hence, we seek to develop a new estimand-driven method to compare the effectiveness of two treatments in the context of right-censored survival data with matched pairs. With the help of competing risks techniques, the so-called relative treatment effect is estimated. This estimand describes the probability that individuals under Treatment 1 have a longer lifetime than comparable individuals under Treatment 2. We derive hypothesis tests and confidence intervals based on a studentized version of the estimator, where resampling-based inference is established by means of a randomization method. In a simulation study, we demonstrate for numerous sample sizes and different amounts of censoring that the developed test exhibits good power. Finally, we apply the methodology to a well-known benchmark data set from a trial with patients suffering from diabetic retinopathy.


Subject(s)
Computer Simulation , Diabetic Retinopathy , Humans , Survival Analysis , Diabetic Retinopathy/mortality , Diabetic Retinopathy/therapy , Randomized Controlled Trials as Topic , Treatment Outcome , Statistics, Nonparametric , Models, Statistical , Confidence Intervals
16.
Stat Methods Med Res ; 33(3): 465-479, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38348637

ABSTRACT

The weighted sum of binomial proportions and the interaction effect are two important cases of the linear combination of binomial proportions. Existing confidence intervals for these two parameters are approximate. We apply the h-function method to a given approximate interval and obtain an exact interval. The process is repeated multiple times until the final-improved interval (exact) cannot be shortened. In particular, for the weighted sum of two proportions, we derive two final-improved intervals based on the (approximate) adjusted score and fiducial intervals. After comparing several currently used intervals, we recommend these two final-improved intervals for practice. For the weighted sum of three proportions and the interaction effect, the final-improved interval based on the adjusted score interval should be used. Three real datasets are used to detail how the approximate intervals are improved.
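For orientation, the sketch below computes the kind of simple approximate (Wald-type) interval for a weighted sum of two binomial proportions that methods like the article's h-function approach start from and then sharpen into exact intervals. Counts and weights are hypothetical, and this is only the approximate starting point, not the exact improvement.

```python
# Sketch: approximate (Wald-type) 95% interval for a weighted sum of two proportions.
import numpy as np
from scipy import stats

x = np.array([18, 35])       # successes in each group (hypothetical)
n = np.array([40, 60])       # sample sizes (hypothetical)
w = np.array([0.4, 0.6])     # weights (hypothetical)

p = x / n
est = np.sum(w * p)
se = np.sqrt(np.sum(w**2 * p * (1 - p) / n))
z = stats.norm.ppf(0.975)
print(f"weighted sum {est:.3f}, approximate 95% CI ({est - z * se:.3f}, {est + z * se:.3f})")
```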


Subject(s)
Models, Statistical , Binomial Distribution , Confidence Intervals
17.
Pharmacoepidemiol Drug Saf ; 33(2): e5750, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38362649

ABSTRACT

PURPOSE: Outcome variables that are assumed to follow a negative binomial distribution are frequently used in both clinical and epidemiological studies. Epidemiological studies, particularly those performed by pharmaceutical companies, often aim to describe a population rather than compare treatments. Such descriptive studies are often analysed using confidence intervals. While precision calculations and sample size calculations are not always performed in these settings, they have the important role of setting expectations of what results the study may generate. Current methods for precision calculations for the negative binomial rate are based on plugging parameter values into the confidence interval formulae. This method has the downside of ignoring the randomness of the confidence interval limits. To enable better practice for precision calculations, methods are needed that address this randomness. METHODS: Using the well-known delta method, we develop a method for calculating the precision probability, that is, the probability of achieving a certain width. We assess the performance of the method in smaller samples through simulations. RESULTS: The method for the precision probability performs well in small to medium sample sizes, and the usefulness of the method is demonstrated through an example. CONCLUSIONS: We have developed a simple method for calculating the precision probability for negative binomial rates. This method can be used when planning epidemiological studies in, for example, asthma, while correctly taking the randomness of confidence intervals into account.
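A brute-force Monte Carlo counterpart of the precision probability can help check intuition: simulate negative binomial counts under planning assumptions and record how often the confidence interval for the event rate comes out narrower than a target width. The article's method obtains this analytically via the delta method; the rate, dispersion, sample size, and target width below are hypothetical planning inputs, and the simple large-sample interval used is an assumption.

```python
# Sketch: Monte Carlo estimate of the precision probability for a negative binomial rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
mu, dispersion, n_subjects = 0.8, 1.5, 300     # hypothetical mean rate, NB dispersion, n
target_width = 0.25
z = stats.norm.ppf(0.975)

# Parameterize numpy's negative binomial by (k, p) so that the mean is mu = k(1-p)/p
p = dispersion / (dispersion + mu)

n_sim, hits = 5000, 0
for _ in range(n_sim):
    y = rng.negative_binomial(dispersion, p, n_subjects)
    se = np.sqrt(y.var(ddof=1) / n_subjects)   # simple large-sample CI for the mean rate
    if 2 * z * se <= target_width:
        hits += 1

print(f"precision probability ~ {hits / n_sim:.2f}")
```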


Subject(s)
Models, Statistical , Humans , Sample Size , Probability , Binomial Distribution , Confidence Intervals
18.
Stat Med ; 43(8): 1577-1603, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38339872

ABSTRACT

Due to the dependency structure in the sampling process, adaptive trial designs create challenges in point and interval estimation and in the calculation of P-values. Optimal adaptive designs, which are designs where the parameters governing the adaptivity are chosen to maximize some performance criterion, suffer from the same problem. Various analysis methods which are able to handle this dependency structure have already been developed. In this work, we aim to give a comprehensive summary of these methods and show how they can be applied to the class of designs with planned adaptivity, of which optimal adaptive designs are an important member. The defining feature of these kinds of designs is that the adaptive elements are completely prespecified. This allows for explicit descriptions of the calculations involved, which makes it possible to evaluate different methods in a fast and accurate manner. We will explain how to do so, and present an extensive comparison of the performance characteristics of various estimators between an optimal adaptive design and its group-sequential counterpart.


Subject(s)
Research Design , Humans , Confidence Intervals , Sample Size
20.
JASA Express Lett ; 4(2)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38299985

ABSTRACT

Confidence intervals of location (CIL) of calling marine mammals, derived from time-differences-of-arrival (TDOA) between receivers, depend on errors of TDOAs, receiver location, clocks, and sound speeds. Simulations demonstrate that a time-difference-of-arrival beamforming locator (TDOA-BL) yields CIL in error by O(10-100) km for experimental scenarios because it is not designed to account for relevant errors. The errors are large and sometimes exceed the distances of detection. Another locator designed for all errors, sequential bound estimation, yields CIL always containing the true location. TDOA-BL has been and is being used to understand potential effects of environmental stress on marine mammals; a use worth reconsidering.


Subject(s)
Caniformia , Animals , Confidence Intervals , Cetacea , Sound