Several nonparametric tests exist to detect differences among alternatives when using ranked data. Testing for differences among alternatives amounts to testing for uniformity over the set of possible permutations of the alternatives. Well-known tests of uniformity, such as the Friedman test or the Anderson test, rely on the asymptotic distributions, guaranteed by standard limiting theorems (e.g., the central limit theorem), of various summary statistics (e.g., mean ranks, marginals, and pairwise ranks). Inconsistencies can occur among these tests: different tests can yield different outcomes when applied to the same ranked data. In this paper, we describe a conceptual framework that naturally decomposes the underlying ranked data space. Using the framework, we explain why test results can differ and how their differences are related. In practice, one may choose a test based on power or on the structure of the ranked data. We discuss the implications of these choices and illustrate that for data meeting certain conditions, no existing test is effective in detecting nonuniformity. Finally, using a real data example, we illustrate how to construct new linear rank tests of uniformity.
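To make the abstract's idea concrete, here is a minimal sketch of one of the summary-statistic tests it names: the Friedman test, which reduces a set of complete rankings to column rank sums and compares the resulting statistic to a chi-squared distribution. The data set below is hypothetical and purely illustrative; it is not from the paper.

```python
def friedman_statistic(rankings):
    """Friedman chi-squared statistic for n complete rankings of k items.

    rankings[i][j] is the rank (1 = best, ..., k = worst) that respondent i
    assigns to alternative j.  Under uniformity over the k! possible
    rankings, the statistic is asymptotically chi-squared with k - 1
    degrees of freedom.
    """
    n = len(rankings)
    k = len(rankings[0])
    # Column rank sums: the summary statistic the Friedman test is built on.
    rank_sums = [sum(row[j] for row in rankings) for j in range(k)]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

# Hypothetical data: 6 respondents each rank 3 alternatives.
data = [[1, 2, 3], [1, 3, 2], [1, 2, 3], [2, 1, 3], [1, 2, 3], [1, 2, 3]]
stat = friedman_statistic(data)  # compare to a chi-squared(2) critical value
```

Because the test sees only the rank sums (here 7, 12, and 17), two data sets with the same rank sums but very different distributions over the 3! = 6 possible rankings receive the same statistic, which is one source of the inconsistencies across tests that the paper analyzes.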
Anna E. Bargagliotti, Susan E. Martonosi, Michael E. Orrison, Austin H. Johnson, Sarah A. Fefer. (2021). Using ranked survey data in education research: Methods and applications. Journal of School Psychology, 85, 17-36.