Wednesday, August 31, 2011

My top 5 reasons for rejecting a manuscript

Here are five manuscript transgressions that make me hit "Reject" faster than you can blink. The first four in particular do not instill confidence in what you actually did in the study.

1. Matched cohort masquerading as a case-control
This happens quite a bit with papers submitted to third- and fourth-tier journals, but watch out for it anywhere. The authors claim to have done a matched case-control study, and there is indeed matching. However, the selection of participants is based on the exposure variable rather than the outcome. Why is this important? Well, for one, the design informs the structure of the analyses. But even more fundamentally, I am really into definitions in science, because they allow us to make sure we are talking about the same thing. And the definition of a case-control study is that it starts with the end -- that is to say, the outcome defines the case. So, if you are exploring whether a Cox-2 inhibitor is associated with mortality from heart disease, do not tell me that your "cases" were defined as those who took the Cox-2 inhibitor and your controls as those who did not. If you are enrolling based on exposure, even if you are matching on variables such as age, gender, etc., this is still a COHORT STUDY! Whether this is the most efficient way to answer this particular question is a separate matter; a real case-control study might well be better. To call your study a case-control study, you need to define your cases as those who experienced the outcome -- death, in our example -- making the controls those who did not die. I know that this explanation leaves a thick shroud over some very important details, such as how to choose the counterfactual, but that is outside the scope here. Just get the design right, for crissakes.
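If it helps to see the distinction laid out concretely, here is a minimal toy sketch in Python. The records and field names (took_cox2, died) are made up purely for illustration; the point is only that selecting on the outcome gives you cases and controls, while selecting on the exposure, matched or not, gives you the exposed and unexposed arms of a cohort.

```python
# A minimal toy sketch of the distinction; the records and field names
# (took_cox2, died) are made up purely for illustration.
people = [
    {"id": 1, "age": 54, "took_cox2": True,  "died": True},
    {"id": 2, "age": 61, "took_cox2": True,  "died": False},
    {"id": 3, "age": 47, "took_cox2": False, "died": False},
    {"id": 4, "age": 70, "took_cox2": False, "died": True},
    {"id": 5, "age": 66, "took_cox2": True,  "died": False},
    {"id": 6, "age": 58, "took_cox2": False, "died": False},
]

# Case-control: selection starts from the OUTCOME.
cases    = [p for p in people if p["died"]]          # the outcome defines the case
controls = [p for p in people if not p["died"]]      # sampled (possibly matched on age, etc.)

# Selecting on the EXPOSURE instead -- even with matching -- defines a cohort.
exposed   = [p for p in people if p["took_cox2"]]
unexposed = [p for p in people if not p["took_cox2"]]  # matched "controls" in name only

print(len(cases), len(controls))     # 2 4
print(len(exposed), len(unexposed))  # 3 3
```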

2. Incidence or prevalence, and where is the denominator?
I cannot tell you how annoying it is to see someone cite the incidence of something as a percentage. But as annoying as that is, it alone does not get an automatic rejection. What does is when someone reports this "incidence" in a study that is in fact a matched cohort. By definition, matching means that you are not including the entire denominator of the population of interest, so whatever the prevalence of the exposure may seem to be in a matched cohort is the direct result of your muscling it into this particular mold. In other words, say you are matching 2:1 unexposed to exposed, the exposure is smoking, and the outcome of interest is the development of lung disease. First, if you are telling me that 10% of the smokers developed lung disease over the study period, please, please, call it a prevalence and not an incidence. Incidence must incorporate a uniform time factor in the denominator (e.g., per person-year). And second, do not tell me what the "incidence" of smoking was based on your cohort -- by definition, in your group of subjects smoking will be present in 1/3 of them. Unless you have measured the prevalence of smoking in the parent cohort BEFORE you did your matching, I am not interested. This is just stupid and thoughtless, so it definitely gets an automatic reject (or a strong question mark at the very least).
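To make the arithmetic concrete, here is a back-of-the-envelope sketch; every number in it is invented for illustration. Prevalence is a plain proportion, an incidence rate carries person-time in its denominator, and the apparent exposure "prevalence" in a 2:1 matched cohort is 1/3 no matter what the parent population actually looks like.

```python
# A back-of-the-envelope sketch of the arithmetic; every number here is
# invented for illustration only.

# Prevalence is a proportion: affected people / people examined. No time unit.
smokers = 1000
with_lung_disease = 100
prevalence = with_lung_disease / smokers                 # 0.10, i.e. 10%

# An incidence rate carries person-time in the denominator.
new_cases = 100
person_years = smokers * 5                               # 1,000 smokers each followed 5 years
incidence_rate = new_cases / person_years                # 0.02 cases per person-year

# In a 2:1 matched cohort (unexposed:exposed), the apparent "prevalence" of
# the exposure is fixed by the design, not measured from the parent population.
exposed, unexposed = 500, 1000
apparent_exposure_prevalence = exposed / (exposed + unexposed)   # 1/3 by construction

print(prevalence, incidence_rate, apparent_exposure_prevalence)
```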
  
3. Analysis that does not explore the stated hypothesis
I just reviewed a paper that initially asked an interesting question (this is how they get you to agree to review), but it turned the hypothesis on its head and ended up being completely inane. Broadly, the investigators claimed to be interested in how a certain exposure impacts mortality, a legitimate question to ask. As I was reading through the paper, and as I could not make heads or tails of the Methods section, it slowly began to dawn on me that the authors went after the opposite of what they promised: they set out to look for the predictors of what they had framed as the exposure variable! Now, this can sometimes still be legit, but the exposure variable needs to be already recognized as somehow relating to the outcome of interest (hey, surrogate endpoints, anyone?). That was not the case here. So, please, authors, do look back at your hypothesis once in a while as you are actually performing the study and writing up your results.

4. Stick to the hypothesis, can the advertising
I recently rejected a paper that asked a legitimate question but, in addition to doing a shoddy job with the analyses and the reporting, did the one thing that is an absolute no-no: it reported a specific analysis of the impact of a single drug on the outcome of interest. And yes, you guessed it, the sponsor of the study was the manufacturer of the drug in question. And naturally, the drug looked particularly good in the analysis. I am not against manufacturer-sponsored studies, or even against those that end up shedding positive light on their products. What I am against is random results of random analyses that happen to look positive for the sponsor's drug without any justification or planning. The situation might still have been tolerable had the authors made a credible case for why it was reasonable to expect this drug to have the salutary effect, citing either theoretical considerations or prior evidence, and had they incorporated it into their a priori hypothesis. Otherwise this is just advertising, a random shot in the dark, not an academic pursuit of knowledge.

5. Language is a fraught but important issue
I do not want to get into the argument about whether publishing in English-language journals brings more status than publishing in non-English-language ones. That is not the issue. What I do want to point out, and this is true for native and non-native English speakers alike, is that if you cannot make yourself understood, I have neither the time nor the ability to read your mind. If you are sending a paper to an English-language journal, do make your arguments clearly, do make sure that your sentence structure is correct, and do use constructions that I will understand. It is not that I do not want to read foreign studies, no. In fact, you have no idea just how important it is to have data from geopolitically diverse areas. No, what I am saying is that I volunteer my time to sit on Editorial Boards and to peer review, and I just do not have the leisure to spend hours unraveling the hidden meaning of a linguistically encrypted paper. And even if I did, I assure you, you would be leaving a lot to the reviewer's idiosyncratic interpretation. So, please, if you do not write English well, give your data a chance by having an editor look at your manuscript BEFORE you hit the submit button.

1 comment:

  1. I love reason #4. In my former (academic) life it was one of my greatest pet peeves...
