The Multiplicity Problem
Every day, one can find in the newspaper or elsewhere in the popular press
some claim of association between a stimulus and an outcome, with
consequences for the health or general welfare of the population at large.
Many of these associations are suspect at best, and often they do not hold
up under scrutiny. Examples of such associations include coffee and heart
attacks, vitamins and IQ, tomato sauce and cancer, and so on. Many of
these claims rest on shaky foundations, and some have not been replicated
in further research. With so much conflicting information in the popular
press, the general public has learned to mistrust statistical studies and
to shy away from the use of statistics in general.
There are several reasons that such incorrect conclusions make their way
into the scientific and popular press; scientists usually fault things
such as improper study design and poor data. Another source of these
claims is large studies in which data analysts report every test that is
"statistically significant" (usually defined as p < 0.05, where "p"
denotes the p-value) as a "real" effect. On the surface, this practice
seems innocuous, since it is the rule taught in statistics classes. The
problem arises when multiple tests are performed: a "p < 0.05" outcome
can then occur even when there is no real effect at all. Historically,
the "p < 0.05" rule was devised for a single test, with the following
logic: if a p < 0.05 outcome is observed, the analyst has two options.
Either he/she can believe that there is no real effect and that the data
are so anomalous that they fall within a range of values that would be
observed only 1 time in 20, or he/she can choose to believe that the
observed association is real. Because the 1 in 20 chance is relatively
small, the common practice is to "reject" the hypothesis of no real
effect and "accept" the conclusion that the effect is real.
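
To see why this rule is calibrated for a single test, consider the
following simulation. It is a minimal sketch, not part of the original
essay: it repeatedly compares two groups drawn from the same distribution
(so no real effect exists) and counts how often a standard t-test reports
p < 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_per_group = 10_000, 50

false_positives = 0
for _ in range(n_trials):
    # Both groups come from the same distribution: no real effect.
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

# A well-calibrated test flags "significance" about 1 time in 20
# when the null hypothesis is true.
print(false_positives / n_trials)  # roughly 0.05
```

For a single test, this 5% error rate is exactly the risk the analyst
knowingly accepts.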
The logic breaks down when more than one test or comparison is considered
in a single study. If one considers 20 or more tests, then one expects at
least one "1 in 20" significant outcome even when none of the effects are
real. Thus the "1 in 20" rule offers little protection, and incorrect
claims can result.
Although incorrect conclusions can be blamed on poor design, bad data,
and so on, one should be aware that multiplicity alone can cause faulty
conclusions, and it should be guarded against in large studies that
include many tests and comparisons.
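
One standard safeguard, offered here as an illustration (the essay does
not name a specific method), is the Bonferroni correction: test each of
k hypotheses at level 0.05/k, so that the overall chance of any false
positive across the family of tests stays at or below 0.05.

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Return, for each p-value, whether it remains significant
    after a Bonferroni correction for len(p_values) tests."""
    k = len(p_values)
    return [p < alpha / k for p in p_values]

# Hypothetical example: 20 tests, 19 marginal results and one strong one.
# At the corrected threshold 0.05/20 = 0.0025, only the strong result
# survives; all 19 marginal p = 0.04 results are set aside.
p_values = [0.04] * 19 + [0.001]
print(bonferroni_significant(p_values))
```

The price of this protection is reduced power: genuinely real but modest
effects may fail to clear the stricter threshold.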
One example of this kind of study is subgroup analysis in a clinical
trial.
As part of the pharmaceutical development process, new therapies are
usually evaluated using randomized clinical trials. In such studies,
patients are randomly assigned to either the active therapy or a placebo.
At the conclusion of the study, the active and placebo groups are
compared to see which is better, using a single pre-defined outcome of
interest. At this stage there is no multiplicity problem, since there is
only one test. However, there are many reasons to evaluate patient
subgroups. The therapy might