Sorting out Sources: Guidelines for Evaluating
Scientific Information
The goal of this page is to help you evaluate information, specifically
scientific information. This skill is critical not only when conducting research
using the internet or traditional print media, but also in helping you sort out claims you
may encounter in your daily life.
Be aware of pseudoscience (false science).
Pseudoscientists make claims that may appear scientific, but don't follow scientific
principles. Distinguishing between science and pseudoscience can be difficult.
When trying to discern whether something is scientific, check the following:
INTEREST
Who is funding the research and
who may profit from it?
Biased organizations may give themselves neutral-sounding names, and an organization
often has an interest in the outcome of a study it is funding.
AUTHOR and PUBLISHER
Who conducted the research?
Where was it done? Where was it published?
Look at the background of the people involved in the research, if possible. What
kind of training have they had? Have they done extensive research in the
field? Have they published other papers on the topic? Do others frequently
cite them? Was the work conducted at an established facility, which could provide
the support necessary to conduct thorough research?
Scientists publish their results in peer-reviewed journals so that others in the same
field can critically evaluate their work. View with suspicion any discoveries that
are secret or rely on secret formulas. Results originally published in journals such as
Science, Nature, or the New England Journal of Medicine will have been examined more
closely, and are therefore more reliable, than results announced directly to the media.
HYPOTHESES
Are hypotheses testable and capable of being falsified?
Hypotheses and theories (even those which cannot be tested directly) should be able to be
used to make predictions and allow the collection of evidence to test those
predictions. Often pseudoscientific claims can't be proven wrong by any
possible evidence. For example, there is no way to disprove the claim that only
someone with special powers can sense a certain phenomenon.
There is a large body of knowledge in science that is not influenced by trends in
public opinion and is not likely to change. However, scientific ideas should be
capable of changing should new evidence arise. In contrast, ideas in pseudoscience
either stay the same (if there is an unchanging idea behind them) or change randomly (if
there are no criteria for accepting some ideas and rejecting others).
PROCEDURE
1. Are experiments repeatable? Have they been repeated?
Experimental procedures are reported so that others may repeat them. Valid results can be
reproduced by others. Check to see that there has been more than one study, and that
the studies support past research. A single study may produce results that other
studies can't repeat. The more independent studies support a claim, the more likely
it is to be true.
2. Are specific, well-defined predictions made?
Scientists use careful, precise language and make quantitative predictions if
possible. Pseudoscientists use vague and imprecise terms that can be interpreted in
many different ways, such as the language used in many horoscopes.
3. Are appropriate controls used?
If a drug is being tested, for example, scientists compare an experimental group (getting
the treatment) with a control group (not receiving the treatment).
Controls (which should be identical to the experimental group except for the factor
being tested) ensure that results are due to the drug itself and not some other
factor. Test subjects should be randomly assigned to either group
(randomized). Blind studies (subjects don't know which group they
are in) and double-blind studies (neither subjects nor researchers know which group
subjects are in) provide additional safeguards.
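As a rough illustration of random assignment, the Python sketch below (with made-up subject labels; it is not drawn from any of the sources cited here) splits a list of subjects into two coded groups. In a double-blind design, the key linking the codes "A" and "B" to treatment and placebo would be held by a third party until the analysis is complete.

    import random

    def randomize(subjects, seed=42):
        """Randomly assign subjects to two equal-sized, coded groups ("A" and "B")."""
        rng = random.Random(seed)  # fixed seed only so the example is reproducible
        shuffled = subjects[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        # Neither subjects nor the researchers handling them need to know which code
        # means "treatment"; that key can stay sealed until the data are analyzed.
        return {subject: ("A" if i < half else "B") for i, subject in enumerate(shuffled)}

    assignments = randomize(["subject_%02d" % i for i in range(1, 21)])
    print(assignments)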
4. Was a representative sample used? Was it large enough?
Were enough trials done?
Scientists use samples that represent larger groups. If only men were used in a
study, claims about how the study applies to women would be suspect.
Pseudoscientific or unproven claims often rely on case histories, anecdotal evidence, or
personal testimonials ("Jane lost 30 lbs. in two weeks with Slim-X!"). While case
studies might be a starting point for future research, scientists require many trials
combined with statistical analysis in order to evaluate such claims. Furthermore,
ethical scientists wouldn't reveal the names of people involved in tests.
Sometimes a statistical claim is made without reference to the sample size:
"3 out of 4 dentists surveyed..." (but how many were surveyed?). The larger the
sample size and the longer the study, the more confident scientists can be
about the results.
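As a rough illustration of why sample size matters, the Python sketch below (using the standard normal approximation for a proportion, with invented survey sizes) estimates how uncertain a "3 out of 4" result is when it comes from 4, 40, 400, or 4,000 respondents.

    import math

    def margin_of_error(p_hat, n, z=1.96):
        """Approximate 95% margin of error for an observed proportion p_hat
        from a sample of size n (normal approximation; crude for very small n)."""
        return z * math.sqrt(p_hat * (1 - p_hat) / n)

    # "3 out of 4 dentists" is far less convincing from 4 dentists than from 4,000.
    for n in (4, 40, 400, 4000):
        moe = margin_of_error(0.75, n)
        print("n = %4d: 75%% +/- %.1f percentage points" % (n, moe * 100))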
RESULTS
1. Were the results statistically significant?
Statistical significance measures how often a
particular result would occur due to chance alone, assuming that the experiment were
repeated many times. The convention is to say that results are statistically
significant if there is a 5% probability or less that the results were due to chance
alone.
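One way to make the idea concrete is a permutation test: reshuffle the group labels many times and count how often chance alone produces a difference at least as large as the one actually observed. The Python sketch below uses invented measurements purely for illustration; it is not taken from any study discussed here.

    import random
    from statistics import mean

    def permutation_p_value(treated, control, n_permutations=10000, seed=0):
        """Estimate how often a difference in group means at least as large as the
        observed one would arise by chance alone (i.e., if the group labels carried
        no information), by repeatedly reshuffling the labels."""
        rng = random.Random(seed)
        observed = abs(mean(treated) - mean(control))
        pooled = list(treated) + list(control)
        extreme = 0
        for _ in range(n_permutations):
            rng.shuffle(pooled)
            diff = abs(mean(pooled[:len(treated)]) - mean(pooled[len(treated):]))
            if diff >= observed:
                extreme += 1
        return extreme / n_permutations

    # Hypothetical measurements; by the usual convention, p <= 0.05 counts as significant.
    treated = [5.1, 4.8, 6.0, 5.5, 5.9, 6.2, 5.4, 5.8]
    control = [4.9, 4.7, 5.0, 4.6, 5.2, 4.8, 5.1, 4.5]
    print("estimated p-value:", permutation_p_value(treated, control))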
2. Are logic and statistical analysis used to help distinguish between coincidence (chance), correlation (association),
and causation?
Correlation and causation are commonly confused with each other. For example,
"people who exercise have a lower risk of heart attack" is a statement of
correlation, but "exercise lowers the risk of heart attack" is a statement
of causation. (A short simulation illustrating the difference appears at the end of this item.)
It is very hard to prove causation (that A causes B). In order to do so, one
needs to show that A must always be present for B to occur, and that B will always occur
when A is present (A is both a necessary and a sufficient cause of B).
An example of how this can be done in science is the use of Koch's postulates for
determining whether a microorganism causes a particular disease:
- The organism must be associated with every case of the disease.
- A pure culture of the organism must be able to be grown outside the body.
- When introduced into a healthy subject, the pure culture of the organism must cause the disease to occur.
- The organism must be recovered from the subject and cultured again.
Because of limits on time or funding, or because of ethical considerations, often the
best that can be done is to evaluate a relationship using logic and the laws of probability.
When looking for a cause of an illness, scientists would look for large differences
between people who had and didn't have exposure to a suspected cause. They
would check to see that those differences are present between groups that would otherwise
be at similar risk for developing an illness. Scientists would also check that a
logical reason for a suspected relationship exists.
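Returning to the exercise example above, the toy simulation below (Python, with invented numbers; the variable names are purely illustrative) shows why a correlation by itself cannot settle such questions: two quantities that are each driven by a hidden third factor come out strongly correlated even though neither one causes the other.

    import random

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length lists."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
        sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
        return cov / (sd_x * sd_y)

    rng = random.Random(1)
    hidden = [rng.gauss(0, 1) for _ in range(1000)]      # an unmeasured third factor
    exercise = [h + rng.gauss(0, 0.5) for h in hidden]   # hypothetical "hours of exercise"
    risk = [-h + rng.gauss(0, 0.5) for h in hidden]      # hypothetical "heart-attack risk score"

    # exercise and risk are strongly (negatively) correlated, yet neither causes the
    # other: both merely track the hidden factor.
    print("correlation:", round(pearson_r(exercise, risk), 2))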
3. Are new ideas or results viewed critically and with skepticism?
Scientists should ideally presume a new idea is wrong until it is well supported by
evidence.
Pseudoscientists aren't skeptical of their own results, but are skeptical of the
results of others.
Types of Arguments and Persuasive Devices
Certain techniques are commonly used to attempt to convince the reader of the validity
of an argument. Be aware of some of these techniques when you are evaluating a
source.
The following types of arguments are discussed in What Science Is and How It Works,
by Gregory Derry:
1. Straw Man
An argument directed not at someone's actual position, but at a weaker version (the
"straw man") created by the opponent. This weaker version is made to seem, for
example, illogical or irrelevant.
2. Ad Hominem ("to the man")
An argument directed at an individual, rather than at the individual's position.
The person is attacked, rather than the evidence or the logic of their
argument.
3. False Dilemma
Two choices are proposed, and one of these is more easily attacked. This leaves the
other choice as the only obvious possibility. However, in reality there may be many
other alternatives or complexities which are not addressed.
4. Begging the Question
This type of argument (also called "circular reasoning") assumes the truth of
its conclusion as part of the reasoning leading up to that conclusion.
5. Slippery Slope
An argument in which the position argued against is depicted as leading to something
terrible. The terrible result is then argued against, rather than the position
itself.
The following types of persuasive devices are described in Forests: Identifying
Propaganda Techniques, by Anderson and Buggey:
6. Bandwagon
"Everyone else is doing it." This technique takes advantage of the desire
of many people to feel as though they belong to a group. The argument is that if
most people believe a certain way, then the reader should also feel that way.
7. Slanted Words or Phrases
In this technique, emotionally charged or biased words are used to convince the reader of
a certain position (contrast "mature citizen" with "old fogy").
8. Scare Tactics
This technique tries to scare the reader into siding with a particular position. The
argument is evaluated on the basis of emotion (fear) rather than logic and reason.
REFERENCES
Aaseng, Nathan. Science vs. Pseudoscience. New York: Franklin Watts, 1994.
American Cancer Society: ACS Newsstand, "Interpreting the Science in Scientific Studies"
(1997), http://www.cancer.org/media/1mar4.html (accessed 7/5/97).
Anderson, Robert, and JoAnne Buggey. Forests: Identifying Propaganda Techniques. San
Diego, CA: Greenhaven Press, Inc., 1992.
Arthritis: Unproven Remedies, Arthritis Foundation, Atlanta, Georgia, 1987.
Derry, Gregory. What Science Is and How It Works. Princeton, NJ: Princeton University
Press, 1999.
Park, Robert. Voodoo Science: The Road from Foolishness to Fraud. Oxford University
Press, 2000.
Weiss, Noel S. "Distinguishing Cause From Coincidence." Alaska Airlines/Horizon Air
Magazines, July 1993.
Special thanks to:
Cynthia McClellan, Steve Collins, Nancy Hutchison,
Karen Peterson, Diane Rosman, and Dave Vannet.
Please send comments or suggestions to:
Jeanne Chowning
E-mail: jtchowning@comcast.net