The Shit Is Hitting The Fan
After years of plaguing the world with junk science about the supposed effects of secondhand smoke, arch anti-smoker Stanton Glantz of the University of California now seems to have seriously overplayed his hand. We reported earlier on these pages about his Helena study (published in the British Medical Journal), in which he claimed to have shown that the introduction of a smoking ban led to 40% fewer acute heart attacks in a small town in the state of Montana.
Today that same British Medical Journal published three devastating critiques of his study, which will do his credibility as a scientist no good at all and may even damage the credibility of anti-smoking research as a whole (and of medical research in general).
Enjoy the sight of one of the most prominent anti-smoking scientists, funded to the tune of 4 million pharmaceutical nicotine dollars a year, being backed into a corner!
Stanton Glantz thought he was quite safe with his sham studies. After the Helena study had gone unchallenged for a long time, and its results were being used to justify smoking bans in various American states and municipalities, he presented a similar study, on the town of Pueblo, at a conference. It came wrapped in the familiar anti-smoking rhetoric: “See, smoking bans have a direct effect on health.”
When, a month ago, criticism of his studies began mounting via Michael Siegel’s blog, Mr Glantz started tying himself in knots but refused to address the criticism on its merits. He tried to instruct his followers by e-mail on how to respond in public to the expected criticism. He even cited New York as proof that the smoking ban there showed the same effect, but that claim too was quickly debunked.
And today two more independent scientists joined the fray, demonstrating convincingly that Stanton Glantz’s findings are nothing more than natural fluctuations in the heart attack figures and can in no way be taken as evidence of the effectiveness of smoking bans.
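How much of this can be pure noise? Below is a minimal simulation sketch of our own (an illustration, not data from either study): it assumes a constant underlying rate of about six acute heart attacks per month, in line with the figure McFadden quotes further down, and looks at how far two six-month totals can drift apart by chance alone. At counts this small, swings of several tens of percent in either direction are entirely routine, which is exactly the “natural fluctuation” objection.

```python
import math
import random

# Rough, self-contained sketch (our illustration, not the studies' data):
# monthly acute-MI counts in a Helena-sized town are modelled as Poisson
# noise around a constant rate, to show how far six-month totals can swing
# when nothing at all has changed.
random.seed(1)

MEAN_PER_MONTH = 6       # assumed average, roughly the per-month figure cited in the letter
MONTHS_PER_PERIOD = 6    # comparison window used in the Helena study
TRIALS = 50_000

def poisson(lam):
    # Knuth's method; keeps the sketch free of external dependencies.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

changes = []
for _ in range(TRIALS):
    before = sum(poisson(MEAN_PER_MONTH) for _ in range(MONTHS_PER_PERIOD))
    during = sum(poisson(MEAN_PER_MONTH) for _ in range(MONTHS_PER_PERIOD))
    if before:
        changes.append(100.0 * (during - before) / before)

changes.sort()
print("Change between two six-month periods, constant underlying rate:")
print(f"   5th percentile: {changes[len(changes) // 20]:+.0f}%")
print(f"  50th percentile: {changes[len(changes) // 2]:+.0f}%")
print(f"  95th percentile: {changes[19 * len(changes) // 20]:+.0f}%")
```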
One of the three responses explicitly calls for measures against Stanton Glantz:
The preceding response by Stanton Glantz and Drs. Shepard and Sargent is sadly inadequate in addressing either the Kuneman/McFadden study (1) or the longstanding criticisms of the original Helena study. Their response ignores the many substantive criticisms of their work, the important questions raised about that work, and the impact of new and more comprehensive research that challenges their public conclusions. It does this while emphasizing the support of another small and similarly flawed study.
It was gratifying though to see that the first sentence of the Helena researchers’ response emphasized the tenuous nature of the claim for statistical significance ascribed to their study. A Confidence Interval extending from 1% to 79% is a crystalline indicator of the weak foundation upon which any claims of significant correlation, much less causality, would be based. The slightest jiggle of a single AMI could easily have moved that lower boundary of 1% into the negative realm of non-significance and it was good of the authors to remind us of this.
Unfortunately that bright beginning is immediately followed by a logical and scientific fallacy. The claim that a community has to be “small” in order to detect a natural experimental effect has no basis. Indeed, the smaller the subject pool in a population with many uncontrolled and potentially confounding variables at work, the smaller the chances that any scientific finding will have any meaning at all.
The Kuneman/McFadden study was enormously more robust in this regard, utilizing a patient and population base over 1,000 times as large as Helena’s. Yet oddly it was criticized not only for being large but because it was not “isolated” to a single hospital or two. Actually it was indeed quite well “isolated” in the larger sense that the great bulk of state populations stay within the borders of those states for most of their working, recreational and medical needs — perhaps even more so than in a smaller geographically defined population such as Helena’s.
The Helena authors give a “for example” to illustrate that the K/M study did not properly meet the “smallness” and “isolation” requirements they emphasize, but the “example” they give seems somewhat confused since it addresses neither requirement but instead discusses the phasing in of smoking bans over time. They posit that this would result in a “smearing out” of any 30 or 40% post-ban declines in heart attacks.
While it’s true that such effects might be somewhat attenuated over time it should be pretty clear that there is simply no way that a 40% *drop* in heart attacks could be “smeared into” a 6% increase, as found in California, or a 32% increase over three years as observed in Massachusetts. (2)
And the authors’ further note about Florida’s “snowbird” population is simply irrelevant to the K/M study since any snowbird effect would have clearly existed both before and after the introduction of Florida’s smoking ban. Their concern about the retiree population in Florida also seems poorly based since that population would have remained stable and would likely require the same size or even larger affected hospitality workforce.
Ignoring the new K/M data and the weakness implicit in the wide CI of their own study, the Helena authors go on to assert once again that their findings clearly indicate a real result due to the interplay of two, and only two, factors: smokers quitting and reduced secondary smoke exposure. Amazingly they repeat this assertion despite having failed to gather any specific data on either factor and despite a failure to even analyze AMIs in nonsmokers.
They do note that smoking histories for Helena/Pueblo were “spotty at best,” but do not mention their strange neglect to specify what findings they got or the oddity that despite early consultations with the Pueblo authors those researchers failed to even gather such vitally important data. Could it be that such data was not expected to support claims promoting smoking bans?
The statement that the relative contribution of ETS effects on non-smokers was simply “not important” is quite disingenuous given the deliberate public portrayal of these studies as indicating a “threat” to innocent nonsmokers and given the focus of Helena’s text. Relegating consideration of the magnitude of such an effect to the realm of “non-importance” is simply ridiculous when one views the use of these studies in promoting smoking bans principally based on such a threat.
While the Helena authors have largely continued their policy of not responding to questions and criticisms about their original work, they did at least attempt to address “alternative explanations” by raising specific points to be considered. Before addressing these myself I should note that Dr. Michael Siegel has thoroughly addressed these from the viewpoint of “random variation” in his excellent internet blog at http://tobaccoanalysis.blogspot.com/
In their first point, the Helena authors note that there was a drop in AMIs in Helena. They incredibly make no reference to the lack of such a drop in either the K/M study or in Siegel’s extended analysis. The authors are certainly familiar with the study, both from its presentation here and through the fact that Dr. Siegel’s efforts were swiftly followed by his expulsion from the tp-talk discussion list-serve for tobacco control. (3)
In their second point the authors reassert that there was no drop in AMIs in the area surrounding Helena, but somehow fail to mention the very important counterpoint: there was an *increase* in surrounding AMIs. Perhaps the increase was not statistically significant, but it was certainly large enough to account for a real portion of the barely significant drop found in Helena proper. Rather than ask for an alternative explanation as to why there was no drop in surrounding areas, the Helena authors should attempt to explain either why there was an increase or why that increase was ignored by them.
The third point focuses on the “rebound” in AMIs after the Helena ban was lifted. In reality, as the authors are fully aware, most of that “rebound” actually occurred *not* after the ban was lifted, but actually during the second half of the ban period itself. While the graph indicating this was made available during the initial press release parties in 2003 and was displayed on the Internet, the incriminating data was eventually removed from both the net and the final BMJ publication. However, while it has been removed from normal Internet access, there is a little-known archival engine called the Wayback Machine that will allow researchers to access it. (5) The original powerpoint graph shows that during the first three warmer months of the ban when a lot of angry Helena smokers and their friends probably partied out of town AMIs dropped from 6 down to 2 per month. In the second three colder months of the ban it bounced back up to 5 per month. The bounce back did not occur *after* the ban as claimed.
The final two notes by the Helena authors are also puzzling. The claim that no alternative explanations have been offered is mystifying given the Rapid Responses and other critiques that have been offered and ignored over the past thousand days (6) (7).
And the final comment, “These large drops in AMI admissions… are consistent with the large and immediate effects that secondhand smoke has on blood platelets, vascular reactivity, and other determinants of cardiovascular function.” has a puzzling set of references. None of those references seem to clearly show “large and immediate effects” of the type described from the levels of exposure that would commonly be encountered in most businesses affected by smoking bans.
As noted earlier, the Helena authors’ attempt at responding to their critics and buttressing their case is sadly inadequate. The Rapid Response titled “Helena: 100 Days” enumerated 14 questions and criticisms raised within just the first 10 days of the study’s publication. To offer a response 900 days later that deliberately ignores those and other concerns is reprehensible. To end that response once again with a statement of “no competing interests” is even more so.
I would like to conclude by once again calling upon the BMJ to take some form of corrective action, particularly since their publication of this study has had such wide-ranging impact on the lives and livelihoods of so many.
Michael J. McFadden
British Medical Journal
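McFadden’s confidence-interval argument is also easy to check numerically. The sketch below uses hypothetical admission counts on the Helena scale (40 heart attacks in the comparison period versus 24, and then 25, during the ban; chosen for illustration, not taken from the study) together with the standard normal approximation on the log rate ratio. Shifting a single case is enough to push the lower bound of the estimated “drop” below zero, which is exactly the fragility behind an interval running from 1% to 79%.

```python
import math

def drop_ci(before, during, z=1.96):
    """Approximate 95% CI for the percentage drop in admissions,
    treating both counts as Poisson and using the usual normal
    approximation on the log rate ratio."""
    log_rr = math.log(during / before)
    se = math.sqrt(1.0 / during + 1.0 / before)
    lo_rr = math.exp(log_rr - z * se)
    hi_rr = math.exp(log_rr + z * se)
    # A drop is 1 minus the rate ratio, so the CI bounds swap.
    return (1.0 - hi_rr) * 100.0, (1.0 - lo_rr) * 100.0

# Hypothetical counts on the Helena scale (not the study's exact data):
for during in (24, 25):
    lo, hi = drop_ci(before=40, during=during)
    print(f"40 -> {during} admissions: drop CI roughly {lo:+.1f}% to {hi:+.1f}%")
```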
Michael Siegel on the responses in the BMJ [post2] [post3] [post4]
The media are beginning to pick up on the scandal