# Alt.smokers RR Discussie

The following discussion on the interpretation of the RR value in
epidemiological research unfolded on the alt.smokers newsgroup between
Dave Hitt, a pro-smoking activist, and a physician describing himself
as an epidemiologist.

"CBI" <nospam@mindspring.com>

wrote:

>"Audrey" <102776.1270@compuserve.com>

wrote in message

>news:3AA89C2C.14735E92@compuserve.com…

>>
>> I understand your intent to show you’re open to questioning the study by
>> capitalizing particular words. Just wanted to let you know I see that.
>> But, epidemiological experts have said that ANY relative risk between 1
>> and 2 is a VERY weak link and statistically *insignificant.* In that
>> respect it cannot be *clinically* significant either.
>>
>

>This is just plain wrong. What experts? Are these the same "doctors" whose
>research is featured in 3AM infomercials?

Hardly.

*"As a general rule of thumb, we are looking for a relative risk of 3
or more before accepting a paper for publication." – Marcia Angell,
editor of the New England Journal of Medicine*

*"My basic rule is if the relative risk isn’t at least 3 or 4, forget
it." – Robert Temple, director of drug evaluation at the Food and Drug
Administration*

*"Relative risks of less than 2 are considered small and are usually
difficult to interpret. Such increases may be due to chance,
statistical bias, or the effect of confounding factors that are
sometimes not evident." – The National Cancer Institute*

*"An association is generally considered weak if the odds ratio
[relative risk] is under 3.0 and particularly when it is under 2.0, as
is the case in the relationship of ETS and lung cancer." – Dr. Kabat,
IAQC epidemiologist*

>Whether something is statistically significant or not is a mathematical
>argument based on probability theory. If the study is large enough there is
>no reason why a RR of 1.00000000001 can’t be statistically significant.

True, as long as you’ve carefully studied more people than ever lived
on this planet. That’s what would be necessary to accurately measure a
difference that small. And even then it would be far more likely that
some unknown confounder generated your number.
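The "more people than ever lived" claim can be sanity-checked with the standard two-proportion sample-size approximation. This is only a back-of-the-envelope sketch: the 1% baseline risk, 95% confidence level, and 80% power are my assumptions, not figures from the thread.

```python
from math import ceil

def n_per_group(p0, rr, alpha_z=1.96, power_z=0.84):
    """Approximate subjects needed per group to detect relative risk `rr`
    against baseline risk `p0`, using the usual two-proportion formula.
    Default z-values correspond to a two-sided 5% test with 80% power."""
    p1 = p0 * rr
    num = (alpha_z + power_z) ** 2 * (p0 * (1 - p0) + p1 * (1 - p1))
    return ceil(num / (p1 - p0) ** 2)

# Assumed 1% baseline risk; the RR of 1.00000000001 is from the thread.
n = n_per_group(0.01, 1.00000000001)
print(f"{n:.2e}")  # astronomically large -- far beyond the ~10**11 humans ever born
```

With any plausible baseline, the required sample dwarfs the total number of people who have ever lived, which is the poster's point.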

> When
>someone says a study is statistically significant what they are saying is
>that the chances that the results are merely due to chance are below a
>certain percentage (usually arbitrarily defined as 5%). When someone lists
>the "p-value," this is the calculated chance that the study results were due
>to chance. A p=0.05 would be a 5 in a hundred chance of random results. If
>the p=0.01 that is a 1:100 chance, etc.

Wrong again! Something is statistically significant when the
confidence interval, which is determined using p, doesn’t straddle
1.0. If it does, it is considered to confirm the null hypothesis,
which is as close as science ever gets to proving a negative.
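The rule being described here, that a relative risk is judged significant only when its confidence interval excludes 1.0, is mechanical enough to sketch in a few lines (a toy illustration; the intervals themselves would come from a study):

```python
def is_significant(ci_low, ci_high):
    """A relative risk is statistically significant (at the interval's
    confidence level) only if the CI does not straddle 1.0."""
    return ci_low > 1.0 or ci_high < 1.0

print(is_significant(0.93, 1.44))  # -> False (interval contains 1.0)
print(is_significant(0.64, 0.96))  # -> True  (entirely below 1.0)
```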

>Whether something is clinically significant is another matter altogether.
>This refers to whether the effect is large enough to make a difference in
>the real world and is mostly a matter of opinion. A drug company may do a
>large trial to show that their product lowers blood pressure by 1 mm Hg
>(with p=0.0001) and say it is "significant" (meaning in the statistical
>sense) but most doctors would not consider this significant in the clinical
>sense. Similarly a RR of 1.5 may be statistically significant but clinically
>not if the disease is rare. It does not matter much if your chances of
>getting a disease go from 1:10,000,000 to 1.5:10,000,000. Both are
>sufficiently unlikely to not be considered real risks for most people. On the
>other hand, if your chances go from 50% to 75% some may consider that worth
>the effort to avoid.
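The rare-disease point can be made concrete: the same relative risk implies wildly different absolute risk increases depending on the baseline. A quick sketch using the baselines from the paragraph above:

```python
def absolute_increase(baseline, rr):
    """Extra absolute risk implied by a relative risk at a given baseline."""
    return baseline * (rr - 1)

# RR of 1.5 on a one-in-ten-million baseline: 5 extra cases per 100 million.
print(absolute_increase(1e-7, 1.5))  # -> 5e-08
# The same RR of 1.5 on a 50% baseline: 25 extra percentage points of risk.
print(absolute_increase(0.50, 1.5))  # -> 0.25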

>
>The notion that a RR between 1 and 2 cannot be clinically or statistically
>significant is hogwash. I challenge you to find a reputable source that
>states this.

See above. Although, instead of providing you with *a* source, I’ve
given you four.

You’re welcome.

>> When the WHO came out with a RR below 2.0 that SHS causes cancer in
>> adults in their well done study it was deemed statistically
>> insignificant.
>
>Only by RJR Reynolds and crowd. Pretty much everyone, outside of North
>Carolina, thought it was pretty scary in addition to highly significant (in
>both the statistical and clinical sense).

Bullshit. Anyone who knows *anything* about statistics (i.e. not you)
knows better. The study found a Relative Risk for spousal exposure of
1.16, with a Confidence Interval of .93 – 1.44. That means the real
number could be a 25% increase. Or a 44% increase. Or a 7% decrease.
(If you knew what you were talking about I wouldn’t have to explain
this to you.) But the bottom line is the CI straddles 1.0. That means
the data are entirely consistent with no effect at all, and *that* is
what makes it statistically insignificant.

The RR for workplace ETS was 1.17 with a CI of .94 – 1.45.

The RR for exposure from both a smoking spouse and a smoky workplace
was 1.14, with a CI of .88 – 1.47.

People with no knowledge of stats might wonder why the risk for combined exposure, in both the home and the workplace, comes out lower than for exposure in just one place or the other. The reason is quite simple – the crudeness of epidemiology. That much variation is entirely expected.

Oh, and there was one stat in that study, and only one, where the CI didn’t straddle 1.0. It was, therefore, statistically significant.

Ready for it, Sparky? Here it is:

The RR for exposure during childhood was 0.78, with a CI of .64 – .96.

This indicates a protective effect! Children exposed to ETS in the home during childhood are 22% *less* likely to get lung cancer, according to this study. Note that this was the only result in the study that did not include 1.0 in the CI. Note that this result was also completely ignored by both the WHO and the press.
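Taken together, the four results quoted from the study can be checked mechanically. The numbers below are exactly the ones cited in the thread; "significant" here means only that the 95% CI excludes 1.0:

```python
# RR and 95% CI for each exposure category, as quoted in the thread.
results = {
    "spousal":            (1.16, 0.93, 1.44),
    "workplace":          (1.17, 0.94, 1.45),
    "spouse + workplace": (1.14, 0.88, 1.47),
    "childhood":          (0.78, 0.64, 0.96),
}

for name, (rr, lo, hi) in results.items():
    significant = lo > 1.0 or hi < 1.0
    direction = "increase" if rr > 1 else "decrease"
    print(f"{name}: RR {rr} ({abs(rr - 1) * 100:.0f}% {direction}), "
          f"significant: {significant}")
```

Only the childhood-exposure row (a 22% decrease) clears the significance bar, which is the thread's closing point.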