Meta-analysis is a statistical procedure that combines the results of several independent clinical trials (1,2). Suppose two clinical trials evaluating the effect of a chemical poison on cancer end with conflicting conclusions. Which one is correct, and could the controversy be settled by pooling them? Obviously one ought to match independent factors, e.g., cancer stage, age, sex, social status, diagnostic means, etc. (3). Still, it is not at all obvious how to pool them. Difficulties arise when one tries to reconcile the conflicting predictions of several trials, e.g., the seven clinical trials on preventing death with aspirin. The first five were apparently matchable, predicting that aspirin was beneficial (p<0.01), while the sixth, known as the AMIS trial, was inconclusive and differed significantly from the previous five. When all six were pooled, the prediction was no longer significant. Then came the seventh trial, ISIS-2, which reversed the statistically non-significant effect of aspirin observed in the previous six. Could this indicate an inconsistency in the method?
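The pooling step described above is commonly done by inverse-variance weighting under a fixed-effect model. The following is a minimal sketch in Python; the effect estimates (log odds ratios) and variances are hypothetical numbers invented for illustration, not the actual aspirin trial data. They merely show how one large inconclusive trial can wash out several smaller positive ones:

```python
import math

def pool_fixed_effect(estimates, variances):
    """Inverse-variance fixed-effect pooling of per-trial effect estimates."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical log odds ratios: five small "beneficial" trials, then one
# large null trial (its small variance reflects its large sample size).
estimates = [-0.30, -0.25, -0.35, -0.28, -0.32, 0.02]
variances = [0.02, 0.03, 0.025, 0.02, 0.03, 0.001]

five, se5 = pool_fixed_effect(estimates[:5], variances[:5])
print(f"first five pooled: z = {five / se5:.2f}")  # clearly significant
six, se6 = pool_fixed_effect(estimates, variances)
print(f"all six pooled:    z = {six / se6:.2f}")   # no longer significant
```

The large trial dominates the weighted average, so the pooled estimate is pulled toward its null result; this is the arithmetic behind the reversal the text describes.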
How should one evaluate the results of clinical trials? Should one analyze all studies or only the "good" ones? All studies, or only those that were published? The latter choice introduces a bias known as the "file drawer phenomenon", which results from the tendency not to submit inconclusive results for publication. And how should one assess "combinability"? Since clinical trials are so complex, one wonders what is done with other complex phenomena, e.g., weather forecasting or the stock market. With modern, powerful computers at hand, weather forecasting should be simple: one feeds all satellite data into the computer and lets it "crunch out" the dynamic equations. This was also the belief of Edward Lorenz (4), who built powerful weather models on his computer, until he realized that the outcome depended on seemingly trivial assumptions like variable grouping or matching. Whenever he changed the initial conditions of the model, even by a minute amount, the predicted weather was different. This extreme instability of complex systems with respect to initial values is known as Chaos. In the most complex weather models, even the soft flapping of butterfly wings may end in a hurricane. Predicting the final result of any chaotic system is impossible, simply because measuring its initial conditions with infinite precision is impossible.
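Lorenz's observation can be reproduced in a few lines. The sketch below integrates the classic Lorenz equations with a simple forward-Euler scheme (the step size, step count and size of the perturbation are illustrative choices, not Lorenz's own settings) and tracks how far apart two almost identical starting points drift:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def max_separation(s1, s2, steps):
    """Largest Euclidean distance reached between the two trajectories."""
    worst = 0.0
    for _ in range(steps):
        s1, s2 = lorenz_step(s1), lorenz_step(s2)
        d = sum((p - q) ** 2 for p, q in zip(s1, s2)) ** 0.5
        worst = max(worst, d)
    return worst

# Two starting points that differ by one part in 10^8 — the "butterfly"
gap = max_separation((1.0, 1.0, 1.0), (1.0 + 1e-8, 1.0, 1.0), 5000)
print(f"maximum separation over 5000 steps: {gap:.2f}")
```

Despite an initial difference of 10⁻⁸, the two trajectories end up macroscopically far apart: errors in the initial conditions grow exponentially, which is why long-range prediction fails.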
Chaos marks the transition from the linear universe conceived by Newton to a non-linear one: a universe of streams, turbulences and transitions that makes up the world of biology (5). Chaos theory provides the means to distinguish between predictable and unpredictable phenomena. Predictable phenomena are "attracted" to a solution, known as the "attractor" of the system. The human organism may be regarded as such a solution: composed of myriad streams, its appearance is an attractor in a multi-dimensional chaotic space.
"Return to the patient!" is the main message of Chaos to medicine. Since the organism, and the society in which it operates, are chaotic, any grouping or pooling will end in different outcomes. From the viewpoint of Chaos, meta-analysis and clinical trials are meaningless, mainly because their underlying statistical method presumes that epidemiological phenomena are linear and may be grouped, while in reality they are non-linear. The central limit theorem, which forms the basis of epidemiology, presumes that the random phenomena observed in epidemiology will ultimately settle at a "central limit value", which according to Chaos is impossible, since, as in weather prediction, even the soft flapping of butterfly wings may change the epidemiological outcome. This is why leading epidemiologists cannot agree on how to interpret their observations (3).
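What the central limit theorem actually guarantees, under its assumption of independent and identically distributed observations, can be seen in a short sketch: the sample mean settles around the true value, with a spread that shrinks roughly like 1/√n. The argument above is that chaotic systems do not satisfy these assumptions. The numbers of draws and repetitions below are arbitrary illustrative choices:

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

def sample_mean(n):
    """Mean of n independent uniform(0, 1) draws (true mean 0.5)."""
    return sum(random.random() for _ in range(n)) / n

# Under independence, the spread of the sample mean shrinks like 1/sqrt(n)
for n in (10, 100, 1000):
    means = [sample_mean(n) for _ in range(500)]
    print(n, round(statistics.stdev(means), 4))
```

Each tenfold increase in sample size cuts the spread by roughly √10, which is the "settling" behavior the theorem predicts; the text's claim is that epidemiological observations, being non-linear and interdependent, need not behave this way.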
In order to make clinical trials meaningful, epidemiology turns to "best-evidence synthesis" (2): "This approach considers that the best evidence in any field comes from studies having the highest internal and external validity. . . Such syntheses emphasize numeric findings but the conclusions need not depend on statistical significance (sic!). . ." In other words, epidemiologists are advised to rely on intuition, just as medicine has done for ages. Yet epidemiology was created in order to rescue medicine from its "non-scientific" intuition. Now that its mission has failed, why not abandon it and return to the patient, as suggested by chaos theory? It is striking that one clinical trial on the effect of aspirin was conducted on physicians, who were ready to trade their valuable experience for a futile trial. Apparently many of them appreciate the benefit of relaxation, e.g., yoga, meditation or prayer, for delaying death from arteriosclerosis. These methods seem safer than aspirin; at least they do not cause gastro-intestinal bleeding. And yet, since relaxation cannot be investigated by a clinical trial, it is regarded by modern medicine as non-scientific. At best, relaxation could find its place among the "file drawer phenomena" that are never published.
Long before Chaos was discovered, medicine applied simple means for dealing with "strange attractors" known as diseases. Medical reasoning was too complicated for traditional mathematics, mainly because it is inherently non-linear. This is why medicine had to pose as an art, in spite of being a science. Chaos theory promises to provide medicine with new mathematical means that will remodel it into an exact science. Medical science will then be affiliated with the science of complexity, which regards Newtonian dynamics as a particular case.
1. Fleiss JL, Gross AJ. Meta-analysis in epidemiology, with special reference to studies of the association between exposure to environmental tobacco smoke and lung cancer: a critique. J. Clin. Epidemiol. 44:127-139, 1991.
2. Spitzer WO. Meta-meta-analysis: unanswered questions about aggregating data. J. Clin. Epidemiol. 44:103-107, 1991.
3. Zajicek G. Cancer wars. The Cancer J. 4:4-5, 1991.
4. Gleick J. Chaos: Making a New Science. New York: Penguin, 1989.
5. Zajicek G. Chaos and biology. Meth. Inform. Med. 30:1-3, 1991.