Abbreviated Journal Title | 2014 Total Cites | Impact Factor |
AM J BIOETHICS | 1363 | 5.288 |
DEV WORLD BIOETH | 238 | 2.054 |
HASTINGS CENT REP | 988 | 1.684 |
J MED ETHICS | 2845 | 1.511 |
BMC MED ETHICS | 404 | 1.495 |
BIOETHICS | 982 | 1.483 |
NEUROETHICS-NETH | 205 | 1.311 |
J EMPIR RES HUM RES | 365 | 1.25 |
PUBLIC HEALTH ETH-UK | 190 | 1.182 |
J LAW MED ETHICS | 1189 | 1.097 |
HEALTH CARE ANAL | 357 | 0.958 |
KENNEDY INST ETHIC J | 285 | 0.867 |
J MED PHILOS | 675 | 0.851 |
ACCOUNT RES | 173 | 0.826 |
J BIOETHIC INQ | 165 | 0.747 |
NANOETHICS | 146 | 0.703 |
MED HEALTH CARE PHIL | 448 | 0.7 |
CAMB Q HEALTHC ETHIC | 344 | 0.682 |
MED LAW REV | 153 | 0.65 |
THEOR MED BIOETH | 325 | 0.537 |
INT J FEM APPROACHES | 48 | 0.486 |
REV ROM BIOET | 112 | 0.462 |
ETHIK MED | 56 | 0.326 |
ACTA BIOETH | 45 | 0.074 |
Friday, June 19, 2015
2014 JCR Impact Factor for Bioethics and Medical Ethics Journals
Caution: this list does not reflect 'quality' or identify the 'best' journals, unless you assume that a lot of people citing and criticising obviously flawed content published in a journal demonstrates that the journal is a high-quality outlet. All it indicates is how frequently articles in a journal were cited over a predetermined period of time. It tells us nothing about the reasons for those citations. I included journals I found in JCR's medical ethics list as well as its ethics list. - Not that it should matter, but for the sake of full disclosure, I co-edit the journals ranked 2nd and 6th on this list.
Friday, October 25, 2013
Scientific research is in a serious crisis
This weekend's column from the Kingston Whig-Standard.
A short while ago I wrote a column attacking homeopathy and other pseudo-scientific solutions to serious health problems. Not unexpectedly, my Twitter feed gathered followers among friends of homeopathy, who slammed me variously for not reporting the amazing scientific research supporting the use of homeopathy, and for uncritically reporting mainstream science's take on the issue.
Well, as to the former, there is zero evidence that I am aware of, to date, that homeopathic concoctions work. There is no homeopathic ‘remedy’ that will protect you against the flu, for instance.
As to the second point though, while there is no alternative to scientific research, not all is well in the scientific research enterprise either. And it's not, as the friends of homeopathy would have us believe, one big conspiracy instigated by nasty pharmaceutical multinationals. It turns out there is a lot of scientific evidence — how ironic — accumulating that things are going pretty badly wrong, thanks to the way scientific excellence is measured today by the folks in the business of ranking university excellence, as well as by research funders, governments and, sadly, many administrators in the academy.
Scientific knowledge relies fundamentally on evidence that can be replicated. Say I undertake a clinical study with a particular experimental agent, and it turns out that this agent works — by some standard — better than an alternative drug given to trial participants. My trial could then eventually provide sufficient evidence for the experimental agent to become an approved drug.
Of course, much relies on my trial having been methodologically sound, my statistical analysis of the results having been correct, and so on and so forth. Typically I will publish my results in a scientific journal, where my data and analysis are scrutinized by specialist peer reviewers. These reviewers are tasked by the journal's editors with checking that my trial was ethical, that it was methodologically sound, and that my conclusions are actually supported by my data. Of course, even the best reviewers make mistakes, or might lack the competence to evaluate the relevant material, and so bad science slips through and gets published.
Sadly, we have plenty of evidence that large numbers of peer reviewers fail to pick up even the most basic errors in scientific manuscripts. This might have to do with the fact that they are typically expected to volunteer their time, and that their university employers usually don't give them credit for this sort of work. In fact, as a journal editor, I can tell you that frequently the most seasoned academics refuse to undertake these vital reviews because the work doesn't add to their CV, their peer recognition, you name it.
Well, the error-control mechanism that science is based on is that someone will try to replicate scientific studies, and the erroneous ones will then come to light. It really is trial and error. That's the theory. The practice is that very many, if not most, scientific studies are never replicated. The reason is that research funders provide little incentive to do so. The buzzword is 'innovation.' Oh yes, the academy has not remained above the vacuous babble of modern management talk. We all want to be 'excellent,' 'innovative' and 'path-breaking' at pretty much everything we do.
In fact, our research funders expect no less of us. You are not innovative if you simply check whether someone else has done a proper job. You don’t have to be a scientist to realize how foolhardy such funding policies are. Some efforts at replicating so-called landmark studies have been made. The British magazine The Economist reports that only six of 53 landmark studies in cancer research could be replicated. Another group reported that it managed to replicate only about 25% of 67 similarly important studies.
The good news is that this has been done at all. The bad news is that this isn’t standard fare in the sciences, biomedical or other. Verification of other scientists’ research just isn’t a good career move in a research enterprise that doesn’t value such vital work.
Another problem is that scientists are demonstrably reluctant to tell us when their experiments fail. The reason could be that a commercial sponsor doesn't want the world to find out too quickly that one of the 'promising' drugs in its pipeline is actually a dud. Commercial confidentiality agreements stand in the way of serving the public interest. Shareholder interests typically trump the public good, and scientific researchers more often than not collude. The likely outcome of this situation is that at some point someone else will test the same compound again. Time and resources are therefore wasted. So-called negative results currently feature in only about 14% of published scientific papers. Of course, in reality the odds are that very many more research studies fail. Scientific progress, to a large extent, depends on failure.
Alas, our current systems don't reward the reporting of failure. Academic journals have their quality measured by a foolish tool called the Impact Factor, owned these days by the Canada-based company Thomson Reuters. Basically, this tool measures how frequently articles published in a journal are cited over a two-year period. Obviously, you won't be able to become a high-impact journal with papers that report failed studies. People rarely cite such results. Accordingly, many researchers don't even submit such important study outcomes. Don't we all just love success? Have you ever seen a university marketing department celebrating a researcher's failure? Me neither. It's not how we roll in the academy. To be fair to the marketing folks, not many media outlets would report Professor C. Ancer's failure to replicate a landmark, breakthrough cancer study, despite the fact that the much-reported landmark study has thereby been shown to be of questionable quality, if not outright flawed.
Some efforts are currently under way to register all trials and to ensure that outcomes are reported, if not in a scientific journal then in some other easily accessible forum. The same holds true for the raw data gathered in a trial. Still, progress on this front is far from satisfactory.
New commercial publishing models in the academy further aggravate some of the problems just described. A new business model, called Open Access, relies overwhelmingly on authors (as opposed to subscribing university libraries) paying for the publication of their work. The journals' commercial success depends on uploading as many academic outputs to their webservers as possible: the more they publish, the higher their profit. Recently a science journalist submitted an error-ridden manuscript to 304 such Open Access journals. A total of 157 happily accepted it for publication, subject to the article upload fee I mentioned earlier. From there said paper would have gone straight into the relevant biomedical databases as a peer-reviewed paper: on the face of it, a sound scientific publication.
Sadly, thanks to the publish-or-perish mentality that is no myth in the academy, quite a significant number of academic researchers engage in academic misconduct in some shape or form. One recent survey reports that about 28% of scientists know of researchers who engage in scientific misconduct in the research they undertake. It is not clear whether all of that misconduct necessarily translates into fraud or useless research outputs, but a significant amount of it almost certainly does.
I could go on in this vein for quite a while, because there is plenty of dirt to be found wherever there is scientific research. It is high time universities and research funders had a serious look at the kinds of systems they have created to measure and incentivize research activities. What is in place currently appears to incentivize unethical conduct to a significant extent. That must change.
And yet, keeping Winston Churchill’s dictum in mind that ‘democracy is the worst form of government except for all those other forms that have been tried from time to time,’ much the same can be said for scientific research. It is the best we’ve got, but that shouldn’t stop us from fixing the problems we are aware of.
Udo Schuklenk holds the Ontario Research Chair in Bioethics at Queen's University. He is a Joint Editor-in-Chief of Bioethics, the official journal of the International Association of Bioethics. He tweets @schuklenk.
Thursday, July 14, 2011
CMAJ Impact Factor and Impact on Authors
I got an interesting email from the Canadian Medical Association Journal today. The CMAJ informs me that its Impact Factor has increased from 7.3 to 9. So, on average, each paper published in the journal during the preceding two years was cited 9 times in the census year. Congratulations to my colleagues editing the CMAJ. The journal I edit jointly with Ruth Chadwick, Bioethics, improved its Impact Factor sufficiently to jump into second place among journals publishing primarily bioethics content. We're currently standing at 1.64. This gives us about twice the impact of reportedly more 'prestigious' journals such as, for instance, Ethics, which languishes in the vicinity of 0.8 if I am not mistaken. Philosophers, no doubt, will point to the amazing 'quality' of what Ethics publishes; suffice it to say that that quality doesn't seem to result in a great deal of citations (ie use). Now, if a journal does great quality publishing but there's not much evidence of interest in that quality in terms of academics actually using it in their own published research, how do those claiming 'quality' demonstrate that quality? I'm not suggesting that impact equates to quality either, by the way, but at least impact points to utility: peer reviewed content is demonstrably being used by academics in their peer reviewed outputs. It's a reasonable start toward measuring a journal's relevance as an academic outlet.
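For what it's worth, here is a minimal sketch of the two-year Impact Factor arithmetic, written in Python. The citation and article counts in it are invented purely for illustration; they are not actual CMAJ figures.

```python
# Minimal sketch of the two-year Impact Factor arithmetic.
# The numbers below are invented for illustration; they are NOT actual CMAJ data.

def two_year_impact_factor(citations_in_census_year, citable_items_prior_two_years):
    """Citations received in the census year to items published in the previous
    two years, divided by the number of citable items published in those two years."""
    return citations_in_census_year / citable_items_prior_two_years

# An Impact Factor of 9 means that, on average, each paper from the prior two
# years was cited 9 times in the census year, e.g.:
print(two_year_impact_factor(4500, 500))  # 9.0
```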
Anyhow, I digress, I meant to write about the CMAJ email. Its marketing spiel (marked as 'this is not spam') is aimed at attracting authors to the journal based on its improved impact. Here's the offending line from said email: 'This is good news for authors who publish in CMAJ and hope to have their work cited.' This seems nonsense to me, to be honest. An improved Impact Factor as such is neither here nor there for authors who hope to see their work cited. Here is the reason: most academics searching for research papers relevant to their own work will not look for particular journals. They will key in keywords in specialist databases (as well as, possibly, Google Scholar). Once they find relevant content they will download it via their library's on-line services. Nobody goes into the library any longer to browse a particular journal issue in the hope of finding relevant content there. It would be highly inefficient to do something like that.

What determines whether someone cites your work, in this day and age, is whether the journal is widely available on-line, whether its content is indexed widely in the relevant databases, and whether you got the right title, keywords and abstract as well as the right content. The Impact Factor as such has no impact on these crucial features that determine whether your paper will be cited. What it does tell us is that the editors of the journal made prudent choices, aimed at increasing citations, with regard to the papers they accepted, no more, no less. As any investment guru will tell you, current performance is no guarantee of future performance, so as an author you are on your own here. There's no way you could ride on (ie 'benefit' from) the coat tails of the journal's improved Impact Factor. It's as simple as that. Let that not stop you from submitting relevant content to the CMAJ; just keep in mind that whether or not a paper they accept gets cited is down to factors other than their current (or future) Impact Factor.
Sunday, July 06, 2008
Academic shenanigans - Impact factors and such
Conflict of interest declaration: I am an editor of two academic journals published by a commercial publisher.
The 'standing' of academic journals in the scientific community is these days evaluated by two sets of criteria.
The old-fashioned one: a journal's relative importance is measured by peer esteem, ie how many really, really famous people (let's call them peers) publish in a given journal, how long it takes the journal to review a manuscript (inefficiency of the review process is here taken as a measure of the journal's importance), how many manuscripts are rejected (if you published only one manuscript per year, you'd probably be the most competitive journal to get into, ergo the 'best' journal), and so on and so forth. It is very much like the ongoing evaluation of graduate philosophy programs on a US-based website that relies on peer gossip (ie someone chooses someone else as a peer, enough people play along, and voila, you have a 'system' of evaluation). Most, if not all, of this old-fashioned stuff really is quasi-religious in nature and can safely be discarded.
The supposedly scientific one: well, Thomson Scientific has more or less cornered the market with its ISI citation data. What they do, roughly, is measure a journal's impact based on how frequently its articles are cited in the two-year period after publication; that citation count is then divided by the overall number of papers the journal published in the same time frame. This, of course, equates importance and quality with citations, ie a quantitative measure. Disciplines like medicine and law are well served by this, because it is part and parcel of those disciplines' academic papers to reference meticulously. It also helps, of course, that many more people work in such disciplines than in, say, theology, so there are more publications, and more citations, going around. The result is that such journals tend to rank much higher in terms of impact than even the best theology journal. It is also easy to manipulate this impact factor game, simply by publishing content that is likely to be sufficiently controversial to generate lots and lots of citations. Someone pointed out, rightly, that the paper published by the fraudulent South Korean cloning researcher, which has since been retracted, helped that journal's impact factor, because it gets cited by everyone as an example of scientific misconduct.
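To make that manipulation worry concrete, here is a small Python sketch with invented citation counts showing how a single much-cited (even retracted) paper drags up a journal's two-year average:

```python
# Illustrative sketch with invented citation counts: the two-year Impact Factor
# is the mean number of census-year citations to a journal's articles from the
# prior two years, so one heavily cited paper lifts the whole journal.

citations_per_article = [0, 1, 2, 0, 3, 1, 0, 2]   # a small, ordinary journal
print(round(sum(citations_per_article) / len(citations_per_article), 2))  # 1.12

citations_per_article.append(120)                   # one notorious, much-cited paper
print(round(sum(citations_per_article) / len(citations_per_article), 2))  # 14.33
```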
None of this is really newsworthy, however. What is newsworthy is this: journal editors at the research-intensive Rockefeller University in New York City bought from Thomson the data sets covering their own journals in order to replicate its findings (ie their journals' impact factors). They could not reproduce Thomson's results, even when using Thomson's own method. Worse, on request Thomson was unable to verify its own results. This is of serious concern, because many academics, myself included, despite misgivings about the counting game to which Thomson reduces academic excellence, thought that this was the best there is. Well, it turns out that it is a very unreliable best.
Perhaps it is time to junk the ISI rankings and go along with the Rockefeller scientists' suggestion that publishers 'make their citation data available in a publicly accessible database, and thus free this important information from Thomson Scientific's (and other companies') proprietary stranglehold.' This seems a sensible proposition. Surely it cannot be acceptable that academics' careers continue to depend on proprietary commercial data that cannot be independently verified, and that, as the Rockefeller people have shown, cannot even be verified by the company itself.