New Tools Promise Life-Saving Treatments from Basic Science

Done well, translational science can save lives. (Flickr/kaibara87), CC BY-SA

A mouse with cancer dies from a trial treatment that cured its genetically identical sibling. A stem cell in a dish reacts differently than usual to a chemical cocktail and morphs into something unexpected.

This is the reproducibility crisis, and it is killing translational science.

New tools for critically reviewing research, created by researchers in the U.K., promise to tackle this crisis by pinpointing the flaws in the way we decide what is “good” data. They also promise to keep scientists on track and make sure only the most promising research moves towards the clinic.

Saving lives

If it’s good, translational science — that is, science that makes the jump from the lab into the clinic — can save patients’ lives. If it’s bad, millions of research dollars can be wasted pursuing phantom treatments that should have ended up on the science scrapheap.

As a researcher in biology, immunology, virology and cancer, I’ve spent the last 13 years working at the lab bench and in the clinic. Now, I’m working on collecting and critically analyzing published research studies to make sure that the best drugs and treatments get licensed for patients.

Taking a deep dive

Scientific research typically falls into three camps: basic in vitro research (mostly done in a dish), pre-clinical in vivo research (tested in animals) and clinical research (performed in patients). We already have a great technique for deciding whether pre-clinical and clinical research is good enough to make it into patients: a deep search-and-report approach called a systematic review.

A systematic review takes a hard look at the results from every high-quality study that’s ever been published in a particular area of research and pools them all to produce a more powerful, overall conclusion about whether a treatment or technology is really effective, all the while minimizing bias and random error. Systematic reviews are used by national health boards around the world to decide whether new drugs or diagnostic tools can be approved for patient use.
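To get a feel for what “pooling” means here, the sketch below shows the inverse-variance weighting at the heart of one common pooling method, a fixed-effect meta-analysis: more precise studies count for more, and the pooled estimate ends up more precise than any single study. The numbers are hypothetical and purely illustrative, not data from any study mentioned in this article.

```python
# A minimal sketch of fixed-effect meta-analysis pooling (illustrative only).
# Each study reports an effect estimate and its variance; studies with smaller
# variance (more precise results) receive proportionally more weight.

def pool_fixed_effect(effects, variances):
    """Return the inverse-variance weighted pooled effect and its variance."""
    weights = [1.0 / v for v in variances]
    pooled_effect = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)  # smaller than any single study's variance
    return pooled_effect, pooled_variance

# Hypothetical effect sizes (e.g. log odds ratios) from three studies.
effect, variance = pool_fixed_effect([0.40, 0.55, 0.35], [0.04, 0.09, 0.02])
print(effect, variance)  # the pooled estimate leans toward the most precise study
```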

But systematic reviews are hardly ever applied to basic research. This is a bit ironic, given that basic research is the foundation that every pre-clinical and clinical study is built upon. If basic research is undermined or unreliable, the whole translational system that pushes new treatments into the clinic risks collapse.

Flaws in the system

In a new paper, published in PLoS One, British researchers have highlighted some of the fundamental flaws that go unchallenged in our research system when scientists fail to apply systematic approaches to their basic research. After investigating a test case — looking at how a cell’s machinery gets unequally divided after the cell splits in two — they found that just seven per cent of studies in this area could be considered reliable. Most were too ambiguous to be useful, and some were downright alarming. The team then custom-built a brand-new set of research tools to take a closer look at these studies.

One of the tools analyzed whether a particular model system — from starfish, sea urchins, worms and fruit flies to mice, monkeys, humans and hamsters — was likely to produce a reliable answer to a particular research question. While most studies (61 per cent) used a relevant model, seven per cent were not fit for purpose.

In one particular study, scientists claimed to have created a new type of stem cell-like model, but they never did a basic check to confirm that the cells behaved in a stem cell-like way.

Stem cells can divide into different types of cells, so it’s important to have good markers to fingerprint their offspring. (swiftscientist/pixabay)

Another tool was designed to find out whether the tags researchers use to shine a spotlight on different parts of the cell’s machinery had been tested ahead of time to check that they would work. The outcome: most papers (57 per cent) never checked these key research tools before using them.

Inevitably, this led to some contradictory results. For example, one study used two different types of tag to highlight the same cell structure — the mitochondria, the powerhouse of the cell — and reported totally opposite results.

Research crunch

Perhaps most shockingly, 83 per cent of studies failed to include any basic internal checks: positive controls (which are designed to work every time), negative controls (which are designed to fail every time) and experimental repeats (which are designed to make sure the results are real). These are the most basic building blocks of scientific research. Every researcher gets drilled to include them when they start out in science. The fact that they are being missed so often — not just by the scientists themselves, but by the peer reviewers who comb through their papers before publication — is truly alarming.

Encouraging basic researchers to apply more rigour in how they gather, analyze and critically evaluate the data they generate in the lab is an important issue at the very heart of research. New tools, such as the ones developed by this British team, will help researchers to systematically review their basic research findings before they get moved towards the clinic. They should also help to address some of the core issues that lead translational studies to fail, and millions of research dollars to be wasted.

Ultimately, these approaches will help “landmark” research findings, which are all too often wildly exaggerated in the current research climate, become a reality, and actually help the patients in the clinic who need them the most.

Stephanie Swift, Researcher, University of Ottawa

This article was originally published on The Conversation. Read the original article.
