Bias in Research and Why Alternative Treatments Are Often Proven Bogus

Conducting Research is a Pain in the Butt

It’s so easy to criticize, but at the heart of it, conducting research is hard.

I spent a year between my third and fourth years of medical school cooped up in a basement-level neurology laboratory conducting basic science research on Amyotrophic Lateral Sclerosis (ALS). This was pre-ice bucket challenge. The greatest discovery I made during my year, other than learning a great deal about study design and research analysis, and how to live on a $20,000-a-year salary, was that I am not very good at pipetting and even worse at running PCR gels. Also, ironically, a distressing number of rats that I anesthetized for my studies died from poorly administered anesthesia. (I swear I’ve gotten much better.)

My point is, designing and conducting a good study is extremely difficult; designing and conducting a bad one is surprisingly simple.

Oh but I’m not Biased, right?

One key challenge to conducting quality research is overcoming inherent study bias.

We frequently talk about financial bias: the researcher whose study is funded by the same drug company that makes the drug (certainly a conflict of interest, especially if the researcher wants to do another study).

I will say, though, in defense of the often maligned drug companies: their pens are just awesome! When you are a medical student or resident, pens are pretty much all you’ve got. You’re poor, tired, overworked, but damn it, when you have a drug-rep pen in hand that fits snugly in your palm and the beautiful non-smudge ink flows gloriously from the ballpoint, it inspires you to dig deep down inside yourself and fuels your mind and body to make it through the long and arduous call night. I can remember, as a medical student, before Michigan Hospital banned drug reps from the campus, walking around with a collection of the best pens I’ve ever owned, each prized writing utensil covered with a label for an obscure sub-specialist drug whose purpose to this day remains a mystery to me.

Those were the days. Today, residents have abandoned handwritten tasks and mostly use computers. Lunch- and pen-bringing drug reps are banned from many academic centers. I don’t know how the students cope. And because they sign everything electronically now, they miss out on the esteemed doctor tradition, occurring slowly over years, of evolving one’s signature from a fully formed, legible rendering of one’s actual name into a one-centimeter squiggly line. That squiggle became convenient at the end of residency in terms of plausible deniability after a mistakenly signed order, since everyone had the same signature.

I digress though.

Back to the Biases 

We talk about bias in research, but there is a much more fundamental bias than financial conflict of interest, one we almost never talk about. It really is the elephant in the room: research is done by people whose profession is to do research.

A thought experiment:

Think about it: you’re a young researcher in an academic center. Sure, you treat patients, but the advancement of your career hinges on your research activities. Publish or perish, they say. Sure, case reports and retrospective studies fill your time and expand your resume, but your tenure depends on conducting quality randomized controlled studies. And journals rarely publish negative results: “Yes, Assistant Professor Consilvio, your elegantly designed study showing breakdancing does not cause an increase in pneumonia is flawlessly designed and conducted, but we won’t be publishing it because nobody gives a shit about that.” So what is a newbie researcher to do? They find ways to design studies where the odds are ever in their favor.

So the next time you sit down with a journal article, peruse the exclusion criteria and see who is not allowed to be in the study. These are all the patients, real or potential, who are evaluated for a study and told they can’t join for specific reasons. Maybe they are too old or too young, have just had surgery, are too fat or too thin; maybe they have cancer or had cancer or cardiac surgery or heart failure; or they show up to the hospital during certain hours. The list can go on and on. Researchers routinely exclude lots of different types of patients from a study.

Don’t believe me? Pick up an article from the New England Journal of Medicine, basically the best of the best in medical journals, find the appendix of the paper and read the exclusion criteria.

There are a lot of people who get to the threshold of their lifelong dream of joining a clinical study only to have the door slammed in their face.

Now, if I put my researcher hat on, I completely understand why investigators need to do this: confounding variables decrease the odds of finding a statistically significant effect, so they want to keep the study population as simple as possible.

That is good research design.

In short, Assistant Professor Consilvio, after learning from his mistakes, will most likely in his next research endeavor not only choose a better study question, but also use good research design and carefully consider the inclusion and exclusion criteria of his study. This a priori adjustment might consequently increase his likelihood of finding a positive result. Hey, the guy has a young family and needs to do anything he can to increase the possibility of future tenure. You gotta do what you gotta do.
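To make that concrete, here is a toy Monte Carlo sketch in Python. Every number in it is invented for illustration and has nothing to do with any real trial; it simply shows why a tightly screened study population is so attractive: the same true treatment effect is detected far more often once the confounder-driven noise has been excluded.

```python
# Toy power simulation (hypothetical numbers only): the same true drug
# effect reaches "statistical significance" far more often once
# confounder-driven variability is screened out of the study population.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

def chance_of_positive_study(confounder_sd, true_effect=2.0, baseline_sd=5.0,
                             n_per_arm=100, n_trials=2000):
    """Fraction of simulated trials reaching p < 0.05 for a fixed true effect."""
    hits = 0
    for _ in range(n_trials):
        # Both arms share the same baseline noise; the confounder term models
        # the extra variability of an unscreened, real-world population.
        control = rng.normal(0.0, baseline_sd, n_per_arm) + rng.normal(0.0, confounder_sd, n_per_arm)
        treated = rng.normal(true_effect, baseline_sd, n_per_arm) + rng.normal(0.0, confounder_sd, n_per_arm)
        if ttest_ind(treated, control).pvalue < 0.05:
            hits += 1
    return hits / n_trials

print("Messy, real-world population:", chance_of_positive_study(confounder_sd=10.0))
print("Tightly screened population :", chance_of_positive_study(confounder_sd=0.0))
```

In this made-up example, trimming out the noisy patients takes the chance of a “positive” study from roughly one in four to roughly four in five, without changing the drug at all.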

So what to do?

But the real question is, how does one take knowledge from a clinical study’s results and apply it to real-life clinical practice? Can we apply the study to patients who have all the confounding variables that would have kept them out of the clinical study? Or are these people different?

Nobody really knows what the right thing to do is

When I stroll around my ICU, I often realize that none of my patients would have been allowed into many clinical studies.

In the ICU, I’ve realized that as patients get sicker and their pathology more complex, the number of variables, both known and unknown, affecting the treatment of an individual patient increases to a point where it becomes very difficult to control one variable at a time, which is a fundamental requirement of a controlled clinical study. The clinical situation can change over the course of minutes; one intervention causes another problem, which must be solved, and so on and so on.

When I read studies conducted in the ICU, I realize that for an intervention to show a statistically significant effect, it has to be powerful enough to overwhelm all of the other variables changing moment to moment. So when we do find something strongly positive, it can be a great discovery and can confidently be added to our library of clinical knowledge, something like giving an aspirin to a person having a heart attack. That really helps a lot, by the way.

But when a study with so many variables comes up negative, it may be more difficult to interpret. The effect of an intervention may be small, or only work in certain situations, at certain points in a disease process, or in a certain subpopulation. Unless our study is set up perfectly to test this, we may miss it. In academic medicine, those of us who are not researchers but who keep up with journal articles to guide our practice often vastly underestimate the limitations of the studies we are reading.
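Here is another toy sketch, with the same caveat that every number is invented: a treatment that strongly helps only a small subgroup of patients is easy to miss when the whole trial population is analyzed together.

```python
# Hypothetical sketch: a treatment with a large benefit in a small subgroup
# is easy to miss when everyone is lumped into one comparison.
# All numbers are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def detection_rate(responder_fraction, benefit=12.0, n_per_arm=100, n_trials=2000):
    """Fraction of simulated trials where the overall comparison reaches p < 0.05."""
    hits = 0
    for _ in range(n_trials):
        control = rng.normal(0.0, 10.0, n_per_arm)
        responder = rng.random(n_per_arm) < responder_fraction  # who actually benefits
        treated = rng.normal(0.0, 10.0, n_per_arm) + np.where(responder, benefit, 0.0)
        if ttest_ind(treated, control).pvalue < 0.05:
            hits += 1
    return hits / n_trials

# Same drug, same subgroup benefit; only the enrolled population differs.
print("Detected, everyone lumped together:", detection_rate(responder_fraction=0.15))
print("Detected, responders-only trial   :", detection_rate(responder_fraction=1.0))
```

The drug and the size of the benefit are identical in both runs; the only thing that changes is who gets enrolled, which is exactly the kind of limitation a quick read of a “negative” trial can miss.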

Researching Alternative Medicine…

So when I read a study concluding that an alternative medical treatment that has been used for thousands of years in Chinese culture is no better than a placebo, I should probably wonder if the study design was sound for that particular clinical question.

For example, let’s say you wanted to study whether acupuncture works to treat a disease like lupus. First, you would be taking a Western-defined disease (and trust me, we hardly know what defines lupus even in Western medicine) and then attempting to study the effects of a Chinese treatment, acupuncture, which is based on a completely different paradigm.

Another thought experiment…

Imagine this: we have two different patients whom “Dr. Western,” a rheumatologist, diagnoses with lupus. But when these same patients visit “Dr. Eastern,” a specialist in Chinese medicine, she concludes that they have two completely different disease states: say, one has an overabundance of yin, the other an overabundance of yang.

So who is right? Maybe they both are, from different perspectives.

It thus wouldn’t be surprising if our study concluded that acupuncture doesn’t help treat lupus. That wouldn’t necessarily be because the acupuncture wasn’t effective, but rather because we took a Western diagnosis, lupus, and attempted to treat it with a Chinese treatment, acupuncture.

We may be vastly underestimating the differences in the way different systems of medicine diagnose disease states. Sometimes the treatment may be an effective way to harness the benefits of the placebo effect. Of course sometimes, the treatments we are testing are just pure bullshit.

The medicine I learned in school is really good at diagnosing certain types of disease. We are really good at infectious disease, trauma, and curing diseases with a definable cause.

We aren’t so great at treating less definable diseases or syndromes, like chronic inflammation, chronic pain, and autoimmune diseases. In fact, we kinda suck at it. It is certainly possible that our lack of success in these realms is rooted in diagnostic failure, not just treatment failure. I think that may be why alternative treatments appeal to so many people: they offer an alternative to the crap that hasn’t worked for their problem, and people are willing to try something new, proven or not.

Please Don’t Abandon Science

I don’t want to come off as someone who doesn’t respect well-conducted studies, or who doesn’t use their conclusions to adjust my clinical practice, because I do, all the time. Not to would be even more idiotic than accepting them as unquestionable fact. We must constantly reassess what we know.

We must conduct sound science on alternative medicine therapies, while constantly questioning whether our study designs actually fit the treatments we are testing. This is the constant challenge of medical research in every field.

The tedious slog of science must continue, even with all its inherent bias. But we need to take the time to tease out our biases, consider the reality of confounding variables in clinical situations, and somehow find the right questions to ask. So keep searching, Assistant Professor Consilvio.
