1.
J Biopharm Stat ; : 1-14, 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38335320

ABSTRACT

It is commonly necessary to perform inferences on the difference, ratio, and odds ratio of two proportions p1 and p2 based on two independent samples. For this purpose, the most common asymptotic statistics are based on the score statistic (S-type statistics). As these do not correct the bias of the estimator of the product pi(1 - pi), Miettinen and Nurminen proposed the MN-type statistics, which consist of multiplying the S statistics by (N-1)/N, where N is the sum of the two sample sizes. This paper demonstrates that the factor (N-1)/N is only correct in the case of the test of equality of two proportions, and it provides the estimate of the correct factor (AU-type statistics) and its minimum value (AUM-type statistics). Moreover, this paper assesses the performance of the four types of statistics mentioned (S, MN, AU, and AUM) in one- and two-tailed tests, and for each of the three parameters cited (d, R, and OR). We found that the AUM-type statistics are the best, followed by the MN type (whose performance was most similar to that of the AU type). Finally, this paper also provides the correct factors when the data come from a multinomial distribution, with the novelty that the MN and AU statistics are similar in the case of the test for the odds ratio.
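To make the relationship between the S- and MN-type statistics concrete, the sketch below computes the score chi-squared statistic for the test of equality of two proportions (the one case where, per the abstract, the (N-1)/N factor is exactly right) and applies the Miettinen-Nurminen correction. This is our own minimal illustration; the function names are ours and the AU/AUM factors from the paper are not reproduced here.

```python
def s_statistic(x1, n1, x2, n2):
    """Score (S-type) chi-squared statistic for H0: p1 = p2,
    from two independent binomial samples (x successes in n trials)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # restricted MLE of the common proportion under H0
    var = p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)
    return (p1 - p2) ** 2 / var

def mn_statistic(x1, n1, x2, n2):
    """Miettinen-Nurminen (MN-type) statistic: S multiplied by (N - 1)/N,
    where N = n1 + n2 is the total sample size."""
    N = n1 + n2
    return s_statistic(x1, n1, x2, n2) * (N - 1) / N
```

For example, with 8/20 versus 4/20 successes the pooled proportion is 0.3, giving S = 0.04 / 0.021 ≈ 1.905 and an MN value of S × 39/40 ≈ 1.857; the correction always shrinks S slightly, and vanishes as N grows.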

2.
Br J Math Stat Psychol ; 73(1): 1-22, 2020 02.
Article in English | MEDLINE | ID: mdl-31056757

ABSTRACT

There is a frequent need to measure the degree of agreement among R observers who independently classify n subjects into K nominal or ordinal categories. The most popular methods are kappa-type measures. When R = 2, Cohen's kappa coefficient (weighted or not) is well known. When defined in the ordinal case with quadratic weights, Cohen's kappa has the advantage of coinciding with the intraclass and concordance correlation coefficients. When R > 2, there are more discrepancies, because the definition of the kappa coefficient depends on how the phrase 'an agreement has occurred' is interpreted. In this paper, Hubert's interpretation, that 'an agreement occurs if and only if all raters agree on the categorization of an object', is used, which leads to Hubert's (nominal) and Schuster and Smith's (ordinal) kappa coefficients. Formulae for the large-sample variances of the estimators of all these coefficients are given; these allow the different ways of carrying out inference to be illustrated and, with the use of simulation, the optimal procedure to be selected. In addition, it is shown that Schuster and Smith's kappa coefficient coincides with the intraclass and concordance correlation coefficients when the former is also defined with quadratic weights.
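For the well-known R = 2 case mentioned above, Cohen's (unweighted) kappa can be computed directly from the two raters' K × K cross-classification table. The sketch below is our own illustration of that baseline case; it does not reproduce the paper's formulae for R > 2 raters or the large-sample variances.

```python
def cohens_kappa(table):
    """Unweighted Cohen's kappa for two raters, where table[i][j] is the
    number of subjects rater 1 assigned to category i and rater 2 to j."""
    K = len(table)
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(K)) / n  # observed agreement (diagonal)
    row = [sum(table[i]) / n for i in range(K)]  # rater 1 marginal proportions
    col = [sum(table[i][j] for i in range(K)) / n for j in range(K)]  # rater 2 marginals
    pe = sum(row[i] * col[i] for i in range(K))  # agreement expected by chance
    return (po - pe) / (1 - pe)
```

For example, the 2 × 2 table [[20, 5], [10, 15]] gives observed agreement 0.70 and chance agreement 0.50, so kappa = 0.40; kappa is 1 for perfect agreement and 0 when agreement is no better than chance.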


Subject(s)
Models, Statistical , Reproducibility of Results , Computer Simulation , Data Interpretation, Statistical , Humans