
Friday, October 25, 2013

Scientific research is in a serious crisis

This weekend's column from the Kingston Whig-Standard.

A short while ago I wrote a column attacking homeopathy and other pseudo-scientific solutions to serious health problems. Not unexpectedly, my Twitter feed attracted friends of homeopathy, who slammed me variously for not reporting the supposedly amazing scientific research supporting homeopathy and for uncritically reporting mainstream science’s take on the issue.
Well, as to the former, there is zero evidence that I am aware of, to date, that homeopathic concoctions work. There is no homeopathic ‘remedy’ that will protect you against the flu, for instance.
As to the second point, though: while there is no alternative to scientific research, not all is well in the scientific research enterprise either. And it’s not, as the friends of homeopathy would have us believe, one big conspiracy instigated by nasty pharmaceutical multinationals. It turns out there is a lot of scientific evidence accumulating (how ironic) that things are going pretty badly wrong because of the way scientific excellence is measured today by folks in the business of ranking universities, as well as by research funders, governments and, sadly, many administrators in the academy.
Scientific knowledge relies fundamentally on evidence that can be replicated. Say I undertake a clinical study with a particular experimental agent, and it turns out that this agent works, by some standard, better than an alternative drug given to trial participants. My trial could eventually provide sufficient evidence for the experimental agent to become an approved drug.
Of course, much relies on my trial having been methodologically sound, my statistical analysis of the results having been correct, and so on. Typically I will publish my results in a scientific journal, where my data and analysis are scrutinized by specialist peer reviewers. These reviewers are tasked by the journal’s editors with checking that my trial was ethical, that it was methodologically sound, and that my conclusions are actually supported by my data. Of course, even the best reviewers make mistakes, or they may lack the competence to evaluate the relevant material, and so bad science slips through and gets published.
Sadly, we have plenty of evidence that large numbers of peer reviewers fail to pick up even the most basic errors in scientific manuscripts. This might have to do with the fact that they are typically expected to volunteer their time and that their university employers usually give them no credit for this sort of work. In fact, as a journal editor, I can tell you that frequently the most seasoned academics refuse to undertake these vital reviews because the work adds nothing to their CVs, their peer recognition, you name it.
Well, the error-control mechanism that science relies on is that someone will try to replicate scientific studies, and the erroneous ones will then come to light. It really is trial and error. That’s the theory. The practice is that very many, if not most, scientific studies are never replicated. The reason is that research funders offer little incentive to do so. The buzzword is ‘innovation.’ Oh yes, the academy has not remained above the vacuous babble of modern management talk. We all want to be ‘excellent,’ ‘innovative’ and ‘path-breaking’ at pretty much everything that we do.
In fact, our research funders expect no less of us. You are not innovative if you simply check whether someone else has done a proper job. You don’t have to be a scientist to realize how foolhardy such funding policies are. Some efforts at replicating so-called landmark studies have been made. The British magazine The Economist reports that only six of 53 landmark studies in cancer research could be replicated. Another group reported that it managed to replicate only about 25% of 67 similarly important studies.
The good news is that this has been done at all. The bad news is that this isn’t standard fare in the sciences, biomedical or other. Verification of other scientists’ research just isn’t a good career move in a research enterprise that doesn’t value such vital work.
Another problem is that scientists are demonstrably reluctant to tell us when their experiments fail. The reason could be that a commercial sponsor doesn’t want the world to find out too quickly that one of the ‘promising’ drugs in its pipeline is actually a dud. Commercial confidentiality agreements stand in the way of serving the public interest. Shareholder interests typically trump the public good, and scientific researchers more often than not collude. The likely outcome of this situation is that at some point someone else will test the same compound again. Time and resources are therefore wasted. So-called negative results currently feature in only about 14% of published scientific papers. Of course, in reality the odds are that very many more research studies fail. Scientific progress, to a large extent, depends on failure.
Alas, our current systems don’t reward the reporting of failure. Academic journals have their quality measured by a foolish tool called the Impact Factor. Basically this tool measures how frequently articles published in a journal are cited over a two-year period. Obviously, you won’t be able to become a high-impact journal with papers that report failed studies. People rarely cite such results. Accordingly, many researchers don’t even submit such important study outcomes. Don’t we all just love success? Have you ever seen a university marketing department celebrating a researcher’s failure? Me neither. It’s not how we roll in the academy. To be fair to the marketing folks, not many media outlets would report Professor C. Ancer’s failure to replicate a landmark, breakthrough cancer study, despite the fact that the much-reported landmark study has since been shown to be of questionable quality, if not outright flawed.
Some efforts are currently under way to register all trials and to ensure that outcomes are reported, if not in a scientific journal then in some other easily accessible forum. The same holds true for the raw data gathered in a trial. Still, progress on this front is far from satisfactory.
New commercial publishing models in the academy further aggravate some of the problems just described. A business model called Open Access relies overwhelmingly on authors (as opposed to subscribing university libraries) paying for the publication of their work. Such journals’ commercial success depends on uploading as many academic outputs to their web servers as possible: the more they publish, the higher their profit. Recently a science journalist submitted an error-ridden manuscript to 304 such Open Access journals. A total of 157 happily accepted it for publication, subject to the article upload fee I mentioned earlier. From there the paper would have gone straight into the relevant biomedical databases as a peer-reviewed paper, on the face of it a sound scientific publication.
Sadly, because of the publish-or-perish mentality, which is no myth in the academy, quite a significant number of academic researchers engage in academic misconduct of some shape or form. One recent survey reports that about 28% of scientists know of researchers who engage in scientific misconduct in their work. It is not clear whether all of that misconduct necessarily translates into fraud or useless research outputs, but a significant amount of it almost certainly does.
I could go on in this vein for quite a while, because there is plenty of dirt to be found wherever there is scientific research. It is high time universities and research funders took a serious look at the kinds of systems they have created to measure and incentivize research activities. What is currently in place appears to incentivize unethical conduct to a significant extent. That must change.
And yet, keeping Winston Churchill’s dictum in mind that ‘democracy is the worst form of government except for all those other forms that have been tried from time to time,’ much the same can be said for scientific research. It is the best we’ve got, but that shouldn’t stop us from fixing the problems we are aware of.
Udo Schuklenk holds the Ontario Research Chair in Bioethics at Queen’s University. He is a Joint Editor-in-Chief of Bioethics, the official publication of the International Association of Bioethics. He tweets @schuklenk.

Thursday, August 22, 2013

Research ethics scandal in Canada

So there we have our own research ethics scandal, and as is the habit with scandals, they tend to widen. Initially we learned that aboriginal children in the residential school system were subjected to research with nutritional supplements. Now we learn about even more exploitative clinical research. And the Truth and Reconciliation Commission is still digging.
Quite possibly this research was undertaken with the best of intentions, namely to improve the lot of malnourished aboriginal people. To some extent there seem to be parallels to research that is undertaken even today in some developing countries, where cheaper medicines are tested on impoverished trial populations because they are unable to afford the patented mainstream medicines that we take for granted.
There is an argument to be had that such research could be ethically acceptable if reasonable measures are taken to ensure that the trial populations will ultimately benefit from the research findings. After all, it is not the researchers’ fault that pharmaceutical companies price many life-preserving medicines out of the reach of the world’s poor. To blame them for trying to develop cheaper drugs to address genuine health issues seems unfair.
The intellectual property rights system leads to this unfortunate situation, and it should probably change, but it’s not something individual clinical researchers have any chance to influence one way or another.
Quite possibly similar motivations drove the researchers in Canada at the time. They were confronted with a population that was severely malnourished, and it was beyond their means to implement policies that would have ensured a supply of foods capable of providing a balanced diet for aboriginal Canadians living in remote areas of the country.
That the situation aboriginal people found themselves in was unacceptable is obvious, and the government of the day should be condemned for failing Canada’s aboriginal peoples. Where it gets trickier is to understand what exactly it was that made this research unethical. After all, quite possibly the motives were actually noble.
If it had turned out to be the case that nutritional supplements could have been a substitute for more expensive-to-provide regular food products, it might have been possible to improve the well-being of aboriginal Canadians. In all fairness, I am guessing here. I don’t know whether this is what motivated the researchers of the day. Even if it didn’t, this could have been a possible outcome. Unfortunately, the research (remember, it occurred during the 1940s and 1950s) wasn’t terribly methodologically sound, so it turned out to be time well wasted, even on that front.
A crucial issue in any kind of clinical research is the need for first-person informed consent, or consent given by a properly authorized legal guardian acting in the best interest of a potential trial participant who is unable to consent on her or his own behalf. Underage people are a good example of the latter.
So, with regard to the research involving nutritional supplements, what exactly went wrong? For starters, consent was not sought and was not given. A clear no-go, certainly not only today, but also at the time. International standards at the time rightly declared clinical research with human participants in the absence of voluntary first-person informed consent unethical.
What else was unethical, even at the time? Withholding food (e.g. milk) from the involuntary trial participants to establish a particular clinical baseline for comparison was unethical. Not providing dental care to the involuntary participants, in order to see more clearly what impact the nutritional supplements or their absence would have, was unethical.
Why was it unethical? Because the involuntary trial participants did not agree to the risk to their health. In addition, one principle of research ethics has always been that any worsening of a trial participant’s baseline needs to be well justified and is usually subject to some kind of compensation. For instance, today trial participants in sub-Saharan Africa who become HIV-infected in HIV vaccine trials typically receive a lifetime supply of AIDS medication to compensate them for the harm they incurred as trial participants.
The aboriginal Canadians received nothing to make up for the harm they incurred while they were reduced to the status of involuntary lab rats by government researchers.
Since the nutritional research was unearthed it has been reported that other research was also undertaken. This time it’s not just about nutritional supplements. It turns out clinical research was undertaken in aboriginal communities and residential schools. Again, informed consent was neither sought nor given. Incredibly, the drugs that were investigated were – insofar as they were successful – provided to the general population but not to the aboriginal communities without whose involuntary sacrifices they were made possible in the first place.
This kind of exploitation increases the ethics failures of those involved at the time quite significantly. Whereas in the initially reported nutritional supplement research we could have given the investigators the benefit of the doubt at least as far as their motives were concerned, we cannot reasonably do the same with regard to the now-reported clinical research.
What I find most disturbing about this widening scandal is that these events occurred right during the Nuremberg trials. Admittedly, there was no Internet at the time, but still, the crimes of the Nazi doctors in the German concentration camps were front-page news the world over. And yet, in Canada, at about the same time, a vulnerable, arguably captive population of aboriginal Canadians was essentially coerced into clinical research as if the reports of what happened under Nazi researcher Josef Mengele had never been published.
The 1947 Nuremberg Code established that first-person informed consent is essential for any clinical research involving human participants to be ethical. It is clear now that Canadian clinicians continued their research as if the worldwide outcry about the Nazi research never occurred. Nazi research happened in Nazi Germany; clearly that had nothing to do with their activities here in Canada, or so they must have thought.
There is a court order in place requiring our government to turn over all documents related to these events to the Truth and Reconciliation Commission. This court order was issued in January. The Commission is still waiting for the relevant documents to be forwarded to its staff. Given that the Commission’s mandate expires in less than a year’s time, time is of the essence.
Udo Schuklenk holds the Ontario Research Chair in Bioethics and Public Policy at Queen’s University. Follow him on Twitter @schuklenk
