Experimental design is a fundamental component of research in ecology and other disciplines. When designing experiments, it is critical to understand the properties of statistical inference, including power and effect size. However, issues such as the file-drawer effect, funding logistics, and reproducibility are major concerns that are often overlooked at the start of a scientific career; a poor understanding of these problems can lead to overly conservative estimates or to claims that cannot be replicated. Here we argue that researchers can dramatically improve inferences from experiments by focusing on two key issues. First, properly manipulating treatment dispersion, the variation among levels of a quantitative factor, can improve inference without increasing replication or sample size. Second, choosing analytics judiciously, such as selecting between ANOVA and replicated regression for experimental data, can improve inference by contrasting inferential outcomes on the same data. We use plain language, simple hypothetical examples, and simulations to show that effect size and power increase with treatment dispersion. We also conducted a small meta-analysis of real data to assess whether published studies confirm that treatment dispersion affects inference. We found no association between treatment dispersion and effect size in the published literature, suggesting that some form of bias may be prevalent in what gets published. Overall, we found that by focusing on treatment dispersion and analytics, researchers can improve their ability to make sound inferences from their data without increasing sample sizes.
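To illustrate the simulation argument summarized above, the following minimal Python sketch (not the authors' code; the slope, error variance, treatment-level placements, and replicate counts are assumed values chosen for illustration) estimates the power of a regression slope test under two placements of three treatment levels with identical replication, differing only in their dispersion:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def slope_test_power(levels, slope=0.5, sigma=2.0, reps=5,
                     n_sim=2000, alpha=0.05):
    """Estimate power of the two-sided t-test on a regression slope
    by simulating a linear response with Gaussian error."""
    x = np.repeat(levels, reps)  # same number of replicates per level
    hits = 0
    for _ in range(n_sim):
        y = slope * x + rng.normal(0.0, sigma, size=x.size)
        if stats.linregress(x, y).pvalue < alpha:
            hits += 1
    return hits / n_sim

narrow = np.array([4.0, 5.0, 6.0])  # low treatment dispersion
wide = np.array([1.0, 5.0, 9.0])    # high dispersion, same mean and n

print(f"power, narrow levels: {slope_test_power(narrow):.2f}")
print(f"power, wide levels:   {slope_test_power(wide):.2f}")

With these assumed parameter values, the widely spaced levels should yield markedly higher power than the narrow ones at the same total sample size, consistent with the abstract's claim that spreading treatment levels, rather than adding replicates, can strengthen inference.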