How to tell the difference between a hazard ratio, relative risk, and odds ratio

I recently had an interesting conversation about the different types of relative risk.

The crux of the conversation was: what is the difference between a hazard ratio, relative risk, and odds ratio? I wanted to expand on that conversation here because I’ve got more space, and hey, who doesn’t want to hear more about relative risks?! 

So, if you’ve ever wondered what these terms have in common and what distinguishes them from one another, read on. 

They’re all ‘relative’ risks

Before I go any further: I fully appreciate how confusing it is to have a measure called ‘relative risk’ as well as relative risk being a generic term! Who came up with that?!

Getting back on topic, risk measures can be relative or absolute.

Relative risk is risk that we compare with something else. For example, smokers are 5 times as likely to get lung cancer compared with non-smokers. I’ve made that number up to illustrate the point, but it’s something pretty high, right?!

Absolute risk is risk without comparison. For example, 1 in 4 smokers get lung cancer (again, a made-up fact but you get the point).

Understanding whether something is a relative or absolute risk is vital in order to communicate your data accurately. So, keep in mind that whenever we talk about the hazard ratio, relative risk, and odds ratio, there will always be a comparison to be made. These values are dependent on another value for context. 

Relative risks all have a baseline of 1

Relative risks are calculated like this: risk in the intervention group divided by risk in the control group.

For example, if 50% of smokers develop lung cancer and 25% of non-smokers develop lung cancer, then we can find the relative risk for smokers compared with non-smokers by calculating 50% divided by 25% = 2 (are you enjoying my made up numbers?!).

If instead, 25% of smokers develop lung cancer and 25% of non-smokers develop lung cancer then the relative risk for smokers compared with non-smokers is 25% divided by 25% = 1.

In the first example, the risk of lung cancer is higher among smokers than non-smokers, whereas in the second example the risk is equal.

Anytime we see a relative risk very close to 1 then we know that there is very little difference between the two groups. If the relative risk is below 1, the risk to the cohort of interest is lower than the control. Anything above 1 means that the risk is higher than for the control. 

For comparison, the baseline for absolute risk is 0.
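The calculation above is simple enough to sketch in a few lines of Python, using the same made-up numbers from the examples:

```python
def relative_risk(risk_intervention, risk_control):
    """Relative risk = risk in the intervention group / risk in the control group."""
    return risk_intervention / risk_control

# 50% of smokers vs 25% of non-smokers develop lung cancer (made-up numbers)
print(relative_risk(0.50, 0.25))  # 2.0 -> smokers' risk is double

# Equal risk in both groups gives the baseline value of 1
print(relative_risk(0.25, 0.25))  # 1.0 -> no difference between groups
```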

How to calculate and interpret each measure

Hazard ratios

Hazard ratios are calculated using survival data and survival analysis. You would use this if you have a one-off event as your outcome (for example, death, cancer diagnosis, or discharge from hospital) and follow people up for a variable amount of time.

Hazard ratio = (hazard rate in intervention group) / (hazard rate in control group)

The hazard ratio interpretation is a little clunky. It tells you the risk of an event in the intervention group compared with the control group at any particular point in time. For example, a hazard ratio of 0.5 tells you that, at any particular point in time, the intervention group are half as likely to be experiencing the event of interest as the control group.

(Side note: The interpretation needs this comparison to be true at any point in time. This means it’s important that the hazard rates in both groups are proportional over time, so that’s one of the main assumptions to check when you’re doing survival analysis)
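In practice, hazard ratios come out of a survival model such as Cox proportional hazards regression rather than a hand calculation. But to show the shape of the formula, here's a minimal sketch that assumes a constant hazard in each group (so the hazard rate is simply events divided by total person-time at risk). All numbers are made up:

```python
def hazard_rate(n_events, person_time):
    # Under a constant-hazard assumption, the hazard rate is the number
    # of one-off events divided by the total follow-up time at risk.
    return n_events / person_time

# Made-up follow-up data:
# intervention group: 10 deaths over 400 person-years
# control group:      20 deaths over 400 person-years
hr = hazard_rate(10, 400) / hazard_rate(20, 400)
print(hr)  # 0.5 -> at any point in time, the intervention group is
           # half as likely to be experiencing the event
```

A real analysis would fit a Cox model (and check the proportional hazards assumption mentioned above); this sketch only illustrates what the ratio is comparing.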

Relative risks

Relative risks are also called risk ratios, possibly to avoid the naming confusion! They are calculated using Poisson regression and also use event data. The main difference between this and hazard ratios is that Poisson regression usually uses counts for an event that can happen multiple times. An example of this would be the number of epileptic seizures a person has during follow-up. This type of model also allows for following people up for a variable amount of time.

Relative risks (or risk ratios) have a more intuitive interpretation as you simply interpret it as a ratio. For example, a relative risk of 1.5 would suggest a 50% increase in risk, whereas a relative risk of 0.5 would suggest a 50% decrease in risk.
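Because Poisson regression works with counts over variable follow-up time, the unadjusted quantity it estimates is a ratio of event rates. Here's a small illustration using the seizure example, with made-up numbers (a real analysis would fit a Poisson model with follow-up time as an offset):

```python
def event_rate(total_events, total_person_time):
    # For events that can recur (e.g. epileptic seizures), the rate is
    # the total count of events divided by the total follow-up time.
    return total_events / total_person_time

# Made-up counts: 90 seizures over 60 person-years in the treated group,
# 120 seizures over 40 person-years in the control group
rr = event_rate(90, 60) / event_rate(120, 40)
print(rr)  # 0.5 -> a 50% decrease in the seizure rate for the treated group
```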

Odds ratios

Odds ratios are calculated using binary outcomes (i.e. where one of only two things can happen) and logistic regression. The main difference between this and the other two measures is that there is no way of including a time element in this model. So odds ratios are best used if follow-up time is fixed, for example if the outcome is measured at a specific follow-up visit or you have a case-control study.

Odds ratio = (odds in intervention group) / (odds in control group)

Another slightly clunky interpretation here too! As an example, an odds ratio of 1.1 would suggest a 10% increase in the odds of the outcome in the intervention group compared with the control group.

What the heck are odds?! The odds of an event is the number of times that the event occurred in the group divided by the number of times it didn't occur. So an odds is itself a ratio (events to non-events), and an odds ratio is one group's odds divided by the other's. If you're not a gambling person then have a read of Wikipedia to find out more about odds!
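To make the odds calculation concrete, here's a short sketch working from a made-up 2x2 table of outcomes:

```python
def odds(n_events, n_non_events):
    # Odds = number of times the event occurred / number of times it didn't.
    return n_events / n_non_events

# Made-up 2x2 table:
#                 outcome   no outcome
# intervention      22          80
# control           20          80
odds_ratio = odds(22, 80) / odds(20, 80)
print(odds_ratio)  # 1.1 -> roughly a 10% increase in the odds of the
                   # outcome in the intervention group
```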

A note on interpretation

The technical definition of hazard ratios and odds ratio can be pretty clunky.

While I think it’s important to know this definition, I always reassure my clients that it’s not the be all and end all. The most important point is the more intuitive interpretation: a ratio below 1 means the risk is lower for the cohort of interest, and a ratio above 1 means the risk is higher.


While hazard ratios, relative risks, and odds ratios all show relative risk, there’s plenty to distinguish them from one another, particularly around how they’re calculated and interpreted. That’s why it’s important to know which one is best for your data set and research question. 

Looking for more help with your statistics?

Check out the options for working with me.