100 Statistical Tests Apr 2026

Regardless of which of the 100 tests is used, almost all of them follow a unified logic built on two competing claims. The Null Hypothesis (H0): the assumption that there is no effect or difference. The Alternative Hypothesis (H1): the claim that there is a genuine effect or difference.
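This logic can be made concrete with a small simulation. The sketch below, using only the standard library, tests a hypothetical coin (the 60-heads-in-100-flips figure is made up for illustration): H0 says the coin is fair, H1 says it is biased toward heads, and the p-value is estimated by asking how often a fair coin does at least as well.

```python
# Illustrative sketch: is a coin fair? H0: P(heads) = 0.5 (no effect);
# H1: P(heads) > 0.5. The observed count below is hypothetical.
import random

random.seed(42)

observed_heads = 60   # hypothetical result from 100 flips
n_flips = 100
n_sims = 10_000

# Simulate the null distribution: how often does a *fair* coin produce
# a result at least as extreme as the one observed?
extreme = 0
for _ in range(n_sims):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if heads >= observed_heads:
        extreme += 1

p_value = extreme / n_sims
print(f"estimated one-sided p-value ~ {p_value:.3f}")
```

The simulated p-value lands near 0.03, so under the usual conventions a fair coin would rarely do this well by chance alone.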

Parametric tests (like the t-test or ANOVA) assume the data follows a specific distribution, usually the normal distribution. Non-parametric tests (like the Mann-Whitney U or Wilcoxon signed-rank) make fewer assumptions and are used for skewed data or small samples.
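The practical difference shows up with skewed data. A minimal sketch, assuming SciPy is available (the two samples are invented, with one outlier planted in the first group): the parametric t-test is pulled around by the outlier, while the rank-based Mann-Whitney U is largely unaffected.

```python
# Sketch contrasting a parametric and a non-parametric two-sample test.
# Assumes SciPy; the data are made up for illustration.
from scipy import stats

group_a = [1.1, 1.3, 1.2, 9.8, 1.0, 1.4]   # skewed by one large value
group_b = [2.1, 2.4, 2.2, 2.5, 2.0, 2.3]

# Parametric: assumes roughly normal data within each group.
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Non-parametric: compares ranks, so the outlier carries far less weight.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test p = {t_p:.3f}, Mann-Whitney p = {u_p:.3f}")
```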

Tests like the Kolmogorov-Smirnov or Shapiro-Wilk check whether a dataset fits a theoretical distribution, which is often a prerequisite for more complex modeling.

The Logic of Hypothesis Testing
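The goodness-of-fit checks named above can be sketched in a few lines, assuming SciPy is available; the sample here is synthetic, drawn from a known normal distribution so both tests should find nothing to object to.

```python
# A minimal normality check. Assumes SciPy; the sample is synthetic.
import random
from scipy import stats

random.seed(0)
sample = [random.gauss(10.0, 2.0) for _ in range(80)]

# Shapiro-Wilk: H0 is that the sample came from a normal distribution.
sw_stat, sw_p = stats.shapiro(sample)

# Kolmogorov-Smirnov against a fully specified N(10, 2) distribution.
ks_stat, ks_p = stats.kstest(sample, "norm", args=(10.0, 2.0))

# Large p-values mean no evidence against normality.
print(f"Shapiro-Wilk p = {sw_p:.3f}, K-S p = {ks_p:.3f}")
```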

The p-value is the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true. By convention, a p-value below 0.05 suggests the result is "statistically significant."

Choosing the Right Tool
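One common (though debated) selection heuristic combines the ideas so far: pre-check normality, then pick a parametric or non-parametric test accordingly. The helper below is a hypothetical sketch, assuming SciPy; the 0.05 threshold is the usual convention, not a rule.

```python
# Sketch of a test-selection heuristic. Assumes SciPy; the helper name,
# the pre-check, and the 0.05 threshold are illustrative conventions only.
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Pick a two-sample test based on a Shapiro-Wilk normality pre-check."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        name, result = "t-test", stats.ttest_ind(a, b)
    else:
        name, result = "Mann-Whitney U", stats.mannwhitneyu(a, b)
    return name, result.pvalue

name, p = compare_groups([5.1, 4.9, 5.3, 5.0, 5.2],
                         [5.6, 5.8, 5.5, 5.9, 5.7])
print(name, round(p, 4))
```

Note that pre-testing for normality is itself contested among statisticians; this is a sketch of the logic, not a recommendation.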

These are the workhorses of research. A one-sample t-test compares a group mean to a known value, while an independent-samples t-test compares two distinct groups. For three or more groups, the F-test (ANOVA) is used.
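All three workhorses are one-liners in SciPy. The sketch below assumes SciPy is available and uses made-up measurements: one group compared to a hypothetical specification of 5.0, two groups compared to each other, and all three compared at once with one-way ANOVA.

```python
# The three workhorse comparisons. Assumes SciPy; all numbers are
# made-up illustrative measurements.
from scipy import stats

control = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8]
treat_1 = [5.6, 5.8, 5.5, 5.9, 5.7, 5.4]
treat_2 = [6.2, 6.0, 6.3, 5.9, 6.1, 6.4]

# One sample vs. a known value (a hypothetical specification of 5.0).
t1, p1 = stats.ttest_1samp(control, popmean=5.0)

# Two independent groups.
t2, p2 = stats.ttest_ind(control, treat_1)

# Three or more groups: one-way ANOVA (the F-test).
f_stat, p3 = stats.f_oneway(control, treat_1, treat_2)

print(f"one-sample p={p1:.3f}, two-sample p={p2:.3f}, ANOVA p={p3:.3f}")
```

With these numbers the control group sits close to the specification (large p), while the clearly separated treatment groups drive the two-sample and ANOVA p-values well below 0.05.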

The landscape of statistical analysis is defined by a vast toolkit of tests, often cited in the classic compendium 100 Statistical Tests by Gopal K. Kanji. These tests serve as the bridge between raw data and sound conclusions, allowing researchers to determine whether their findings represent genuine patterns or mere coincidences.

The Categorization of Tests

To manage such a large number of procedures, statisticians group them based on the nature of the data and the specific question being asked.

The sheer volume of available tests exists because real-world data is messy. You might need a test for circular data, a test for outliers (the Grubbs' test), or a test for the equality of variances (Levene's test). Selecting the wrong test, such as using a parametric test on highly non-normal data, can lead to Type I errors (false positives) or Type II errors (false negatives).

Conclusion