Statistics important in recognizing valid research

By Leah Elison

Every day, readers find information about the latest studies in the pages of newspapers and journals, with results that range from the outrageous to the obvious.

Differentiating between valid research and unreliable data can be difficult for readers without a strong background in the field of statistics.

'This is a very legitimate problem,' said John Bell, associate dean of the College of Biology and Agriculture. 'The average public is sort of shielded from the actual science, and that is not necessarily a good thing. They have to rely on the popular press or things they hear from their neighbors and friends.'

Experts like Bell have offered a few guidelines to help readers determine how dependable a study is and how seriously to consider results.

Source

When reading about a study, the first reliability indicator to look at is the source of the information, said Scott Grimshaw, associate professor of statistics.

'No one expects the New York Times to go study heart medication,' he said. 'I am never impressed that this is the Wall Street Journal's survey, because I know the Wall Street Journal is not in the business of doing surveys.'

Grimshaw said he never trusts studies done by a reporter or a news agency, such as over-the-phone opinion polls shown during news broadcasts.

Articles about studies need to borrow their credibility from the experts whose research they report, he said.

'I think that journalism's role is to report,' Grimshaw said. 'Good journalists are part of the bridge between the journals and the public.'

Many news agencies will commission research companies to conduct studies or will report information collected by an established research organization, Grimshaw said.

When the research company has a good reputation, the results can usually be trusted, he said.

Companies like The Gallup Organization, The Roper Center, Nielsen Media Research and Wirthlin Worldwide have shown reliability over the years, Grimshaw said, as well as Valley Research Incorporated and Dan Jones & Associates locally.

Peer Review

When the source of the information is a trade journal, the journal's peer review process can indicate dependability, Bell said.

'Different journals vary quite a bit in quality,' he said. 'If the article is published in a journal that does anonymous peer review, and the review is rigorous, you know experts looked at it carefully before it was published.'

The peer review process involves sending the research report to experts in a particular field, who verify that the study uses correct methods and draws logical conclusions.

For example, the editors of the New England Journal of Medicine submit all of the articles they consider publishing to a four- or five-step peer review process, said editor-in-chief Jeffrey Drazen, a professor of medicine at Harvard.

Drazen said he reviews all articles submitted and sends those that meet the Journal's standards on to associate editors who have specific expertise in the area of the study.

If the study earns the approval of the associate editors, who are professors at local universities, they send the article out for a statistical review and critique by two other experts in that field, Drazen said.

'The veracity of a paper is in direct proportion to the difficulty of the peer review process,' he said. 'It is like sending a paper out to get a public grade.'

The peer reviewers return the study to the associate editor with a list of questions. Approximately one out of 10 articles that reach this point in the editorial process runs in the Journal, Drazen said.

Editors work with the researchers to resolve problems discovered during the peer review process.

'When you work with an author, you know the ones that really know their data,' Drazen said. 'If they don't, they don't last too long around here; they melt in the heat of the lights.'

Method and Data

Part of the purpose of the peer review process is to ensure researchers use valid methods to complete the study, but Patti Collins, a teacher in the Department of Statistics, recommends that readers examine the methods themselves.

'You need to have done an actual experiment, not just looked at data,' she said. 'It needs to be a correctly done experiment, not just a fly-by-night study.'

Collins said experiments should use random selection of subjects and controls to prevent human subjectivity from biasing the results.

Observational studies, which involve looking at data instead of developing experiments with controlled variables, cannot be used to determine cause-and-effect relationships, she said.
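
As a rough sketch of the distinction Collins draws, the short fragment below (plain Python, with invented subject names and group sizes) shows the random assignment step that separates a controlled experiment from simply looking at existing data:

    import random

    # Hypothetical illustration of random assignment: chance, not a
    # researcher's choice, decides who gets the treatment, which is the
    # safeguard against bias Collins describes.
    subjects = ["subject_%d" % i for i in range(1, 21)]  # 20 invented participants
    random.shuffle(subjects)       # shuffle removes human judgment from the split
    treatment = subjects[:10]      # would receive the intervention
    control = subjects[10:]        # would receive a placebo for comparison

    print("Treatment:", treatment)
    print("Control:  ", control)

An observational study, by contrast, starts with groups that formed themselves, so any difference between them may reflect how the groups formed rather than the treatment itself.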

As an example, she cited a study released by the Dairy Council of Utah that claimed calcium could reduce premenstrual discomfort.

The study used only nine subjects, and all were aware that the council was testing whether excessive amounts of calcium could relieve premenstrual syndrome, she said.

'The results were self-reported and not blinded, so they saw what they expected to see,' Collins said. 'That was a terrible experiment.'

Data should show that controls were used to reduce bias and that replication was used to ensure precision, Bell said.

After examining the method, he said, readers should determine whether the data support the conclusions, because data can be manipulated to support almost any hypothesis.

'You could write an article with a headline saying "pickles will kill you,"' he said. 'What is the evidence? Everyone who ate pickles before 1900 is dead. Just because a conclusion is drawn does not mean the conclusion is logical.'

Thunder Jalili, assistant professor in the Division of Nutrition at the University of Utah, said readers should not take studies too seriously until they have achieved credibility in several ways.

Withstanding the scrutiny of peer review and showing reliable data and methods are a start, he said, but a study should also have the support of about five to 10 other studies.

He said many weight loss studies come from sources that have a commercial interest in making a product look effective, creating an incentive to release biased information.

'For every one success you hear about on a supplement or crazy diet, you have 99 failures,' he said. 'Weight loss is easy in concept and hard in reality because of discipline.'