If, say, an event happens in group A at a rate of 1%, and in the control group B at a rate of 2%, then the relative risk reduction is (rateB - rateA) / rateB, i.e. (2% - 1%) / 2% = 0.5, or 50%.
In absolute terms, however, the risk reduction is simply the difference between the rates in the two groups, i.e. 2% - 1%, or 1%.
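The two definitions above can be sketched in a few lines of Python (the function names are mine, chosen for illustration; the rates are the 1% and 2% from the example):

```python
def relative_risk_reduction(rate_control, rate_treatment):
    """RRR = (control rate - treatment rate) / control rate."""
    return (rate_control - rate_treatment) / rate_control

def absolute_risk_reduction(rate_control, rate_treatment):
    """ARR = control rate - treatment rate."""
    return rate_control - rate_treatment

# Group A (treatment): 1% event rate; group B (control): 2% event rate.
rrr = relative_risk_reduction(0.02, 0.01)  # 0.5, i.e. 50%
arr = absolute_risk_reduction(0.02, 0.01)  # 0.01, i.e. 1 percentage point
print(f"RRR = {rrr:.0%}, ARR = {arr:.0%}")
```

The gap between the two numbers (50% vs 1%) is exactly why the choice of measure matters for how impressive a result sounds.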
Absolute risk reductions, the paper reminds us, "tend to be ignored because they give a much less impressive effect size than RRRs".
Or, as a leaflet possibly for medical students puts it: "If a patient is told that treatment B reduces their risk of dying by 25% (the relative risk reduction), they may make a different decision to the one they would make when told that treatment B reduces their risk of dying by 2.5%."
"Absolute risk reduction (ARR) – also called risk difference (RD) – is the most useful way of presenting research results to help your decision-making," says a book for patients, published on the NCBI website.
The ARRs for the different vaccines listed in the article are soberingly different from the RRRs, which were the figures communicated to the public.
And although I am sure I studied this once, I have completely forgotten it by now.