Do you know the difference between statistical significance and clinical significance? What is a p-value, and what does effect size actually tell us?

These are some of the most common questions I get asked.

But understanding each of these terms, and how they relate to one another, is vital for interpreting your results accurately. Knowing your p-values, and whether your results are statistically and clinically significant, will affect the impact your study has in the real world.

So in this blog, I’m going to take a deep dive into the definition of each term. I’ll also explain how they help us to convey the results of our studies.

*What is statistical significance?*

Statistical significance is about how likely it is that a result as extreme as yours would occur by chance alone (that is, if there were no real effect). We measure it using a p-value.

The p-value is a quick way to gauge how certain you can be about the result of your study. P-values are probabilities, so they range from 0 to 1.

The lower the p-value, the more statistically significant your results are. This means it is less likely that the result is due to chance.

To put it another way, a lower p-value means greater certainty in our result.

Generally speaking, the more people you have in your study, the more likely your results are to be statistically significant. This is the case even if the effect size is very small.
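To illustrate that point, here's a minimal sketch using a simple two-sample z-test (the means, standard deviation and sample sizes are all made-up numbers, and a real analysis would usually use a t-test): the same tiny difference between groups is nowhere near significant with 50 people per group, but highly significant with 10,000 per group.

```python
from math import sqrt
from statistics import NormalDist

def two_sample_p(mean1, mean2, sd, n):
    """Two-sided p-value from a two-sample z-test (equal SDs and group sizes)."""
    z = (mean1 - mean2) / (sd * sqrt(2 / n))
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical example: a tiny effect (difference of 0.05, SD of 1)
p_small_study = two_sample_p(0.05, 0.0, 1.0, n=50)      # ~0.80: not significant
p_large_study = two_sample_p(0.05, 0.0, 1.0, n=10_000)  # ~0.0004: significant

print(f"n=50:     p = {p_small_study:.3f}")
print(f"n=10,000: p = {p_large_study:.4f}")
```

The effect is identical in both cases; only the sample size changes, and with it the p-value.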

*How do you calculate statistical significance?*

We use a cut-off to decide between 'certain' and 'uncertain'. The cut-off is normally 0.05, and it helps us to decide whether or not we 'trust' a result.

If p is less than 0.05 then we call it ‘statistically significant’ and we believe that the result is probably not due to chance. If p is greater than 0.05 then we call it non-significant; in other words, we believe that it is probably due to chance.

So, taking four example p-values: the first two (0.470 and 0.802) wouldn't be statistically significant because they are greater than 0.05. The other two (0.000 and 0.033) would both be statistically significant because they are less than 0.05.

As a side note, stats packages will show a p-value of 0.000 when the true value is very, very small. However, remember that p-values are probabilities, and (almost!) nothing is impossible, so by convention we never report a p-value of 0.000 in a paper. Instead, we write it as <0.001 to show that it is a very small number close to 0.
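That reporting convention is easy to automate. Here's a small helper (my own sketch, not taken from any particular style guide) that formats p-values the way you'd write them in a paper:

```python
def format_p(p, threshold=0.001):
    """Format a p-value for reporting: very small values become '< 0.001'."""
    if p < threshold:
        return f"p < {threshold}"
    return f"p = {p:.3f}"

# The software may print 0.000, but we report it as p < 0.001
print(format_p(0.0000004))  # p < 0.001
print(format_p(0.033))      # p = 0.033
print(format_p(0.470))      # p = 0.470
```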

In the past, different cut-offs have been used to determine what is deemed 'certain', which means that interpreting p-values close to 0.05 requires a little more nuance. However, today 0.05 is a fairly universal standard. The main exception is for interaction terms, where a cut-off of 0.10 is often used instead.

*Why is statistical significance important?*

Quantifying the uncertainty in your results is an important part of explaining your study to others. P-values help us to interpret results, and there are other measures of uncertainty we can use as well, such as confidence intervals.

However you decide to measure it, you should always display at least one indicator of uncertainty with your results. I usually go for a p-value and confidence interval.

*What is clinical significance?*

Clinical significance is more about the impact for patients than the certainty of results. For example, is the impact on patients proportional to the work and cost needed to achieve that result? Or could a new method help to achieve similar results as current methods but in a shorter timespan?

If the answer to one of those questions is yes, your results have clinical significance.

Let’s look at it another way: if you had to diet heavily for weeks on end to lose 0.01 kg, would you bother? Probably not.

Clinical significance is similar. It’s about looking at your results and seeing whether the effect size (your main estimate) actually means anything is likely to change in practice.

*Is clinical significance always proportional to effect size?*

In short: no!

You don’t necessarily have to have a massive effect size in order to achieve clinical significance. This is because smaller effects can add up to be important if you’re looking at something with a huge burden, or if the population that it’ll affect is really large.
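Here's a quick back-of-the-envelope sketch of that point (all the numbers are invented): a treatment that prevents events in just 0.1% of people still prevents a lot of events across a large population.

```python
# Hypothetical numbers: a small absolute risk reduction applied at scale
risk_reduction = 0.001   # 0.1% fewer events per person treated
population = 10_000_000  # people who would receive the treatment

events_prevented = int(risk_reduction * population)
print(f"Events prevented: {events_prevented:,}")  # Events prevented: 10,000
```

A tiny effect size, but preventing 10,000 events could easily be clinically significant.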

*Is statistical significance more important than clinical significance?*

Each of these measures tells us something different about how to interpret the results of a study. Neither is more or less important than the other.

The important thing is to consider BOTH statistical and clinical significance when interpreting your results.

Emphasising only one of these can be misleading, so make sure you talk about both!

*Want help analysing your data?*

I can help with that! Find out more here.