Thursday, December 22, 2011

3 ways to sink a new drug

I don't just rant about methods and evidence -- in my work life I also rant about health economics and outcomes! This is why I was so interested in this post by the health economist Ulf Staginnus called "New Models for Market Access." A hat tip to the reader who pointed me to it.

The thesis of the article is that we need to refocus our discussion from market access to true innovation in the biopharma sector. There are some priceless quotes here, like this one, for example:
It is amusing, at least to me, to see the continued flood of articles, consultant presentations, blogs, congress announcements, workshops, summits, reorganizations, speeches, etc. all over the place, basically suggesting how the industry just needs to throw a few more people with fancy titles here and there, coupled with slight organizational changes, onto the problem and involve stakeholders and—guess what?!—actually talk to patients and perhaps even payers and all of a sudden, like Alice in Wonderland, everything will be good, after all.
The uncomfortable truth is, it won't be. All this “noise” is only good for one thing, paying the bills of the consultants, which is fine, too, as I have been one myself so I can understand. But it will not address the problem the research-based pharmaceutical industry and its employees are facing. Without a substantial increase in R&D productivity, the pharmaceutical industry's survival (let alone its continued growth prospects), at least in its current form, is in great jeopardy.
Don't you love it? It is hard to disagree. He also calls for more of a focus on the long-term returns than the short-term (duh!), as well as more internal honesty, or having the courage to stand up to the pathologic internal enthusiasm about a late-stage product that will obviously go nowhere. And all of this is on target.

He has this to say about health economics and outcomes and such:
Of course, you need experience in areas such as HE, outcomes research, pricing, economics, policy, advocacy, etc. and all needs to work in sync and early on and with the payer in mind and, yes, most people have understood that by now. So the problem is essentially not in the capabilities, although some are more advanced than others, but rather in the company cultures.
And this, I think, is where I have to disagree with him. In my experience there are glaring deficits in the approach to HEOR within biopharma, though of course there are exceptions to the rule. It starts with the fact that disease burden, and especially its costs, are initially assessed through less than, shall we say, rigorous methods. I have seen this critical information pieced together through market "research", where 5-10 "thought leaders" are asked for their opinions, and the quantification rests on this tiny, non-representative sample of nothing more than guesses. This is a shame, because data that can give a far more credible estimate of the extent of the problem usually already exist.

The second problem is that articulating the value proposition of a nascent technology is usually an afterthought. In fact, it is self-evident that drug pricing must be informed by the burden of disease and by the impact the new technology can make in mitigating that burden. Unfortunately, time and time again I see companies backing into a price simply in reaction to what their Boards perceive the returns should be. And frequently this is based on the overly optimistic market projections flowing from, you guessed it, market "research."

So, the direct result of all this short-sightedness and business as usual is that even innovative, useful products are driven into oblivion, because there is no realistic look at what the technology is worth or where best to use it. And fixing the problem after the drug or device is on the market is a much bigger challenge, for several reasons. First, the acquisition costs of new technologies are bound to be higher than those of technologies already in use. This puts them at a disadvantage in that they get niched into populations that have much greater burdens of illness and therefore less of a chance of doing well. In other words, they are used as last-ditch therapy, which very rarely ends well. Ironically, these are usually not the populations who were studied in the pre-approval trials, and thus the use turns out to be off-label. But here is the real problem: when these technologies land in the "kitchen sink" category, they will almost always end up looking worse than their older counterparts in terms of outcomes. And to the untrained eye, or an eye that does not have the time to discern the truth, particularly in the setting of perceived high spending on the new product, this sounds the death knell for the drug. The reality, of course, is that the abysmal outcomes are the result of confounding by indication, where the drug was inappropriately given to patients who were very unlikely to benefit from it in the first place. But you see how this early lack of attention to the articulation of appropriate populations and health economic data can snowball into the failure of a promising therapy.
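Confounding by indication is easy to demonstrate with a toy simulation. Every number below (severity prevalence, prescribing probabilities, baseline mortality, a 20% relative risk reduction) is invented for illustration and describes no real product:

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Toy model of confounding by indication: a mildly protective new
    drug is channeled to the sickest patients. All rates are made up."""
    rows = []
    for _ in range(n):
        severe = random.random() < 0.5             # half the patients are very ill
        # channeling: severe patients are far more likely to get the new drug
        new_drug = random.random() < (0.8 if severe else 0.1)
        base_death = 0.60 if severe else 0.05      # illness severity drives mortality
        rr = 0.8 if new_drug else 1.0              # drug cuts mortality risk by 20%
        died = random.random() < base_death * rr
        rows.append((severe, new_drug, died))
    return rows

def death_rate(rows, drug):
    sel = [died for severe, new_drug, died in rows if new_drug == drug]
    return sum(sel) / len(sel)

rows = simulate()
crude_new, crude_old = death_rate(rows, True), death_rate(rows, False)
# Crude comparison: the new drug "looks" much worse...
print(f"crude mortality  new={crude_new:.2f}  old={crude_old:.2f}")

# ...but stratifying by severity recovers the protective effect.
for severe in (True, False):
    strat = [r for r in rows if r[0] == severe]
    print(f"severe={severe}:  new={death_rate(strat, True):.2f}  "
          f"old={death_rate(strat, False):.2f}")
```

The crude comparison makes the protective drug look disastrous, simply because it goes to the sickest patients; stratifying by severity recovers the benefit. This is exactly the dynamic that can sink a niched product in real-world outcomes data.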

So, if you want your drug to fail, do the opposite of what I recommend below. In other words, DO NOT:
1. Develop your market understanding
Build it not on the opinions of a handful of "experts" -- experts will rarely tell you the whole truth. Instead, conduct epidemiology studies of your population and its subpopulations, so as to get the most realistic picture of the disease.
2. Start thinking about the value proposition early 
At the end of a successful Phase 2 program is a good time to do this. The surprise to most companies is how little HEOR studies cost in comparison with their clinical trials program. Yet, as you can see from above, this drop in the ocean can make or break a product.
3. Focus on transparent pricing methods
When pricing the technology, be very very sure that you have all of the ducks in a row, meaning:
                    a. do understand your market 
                    b. do understand the burden and costs of the disease
                    c. do understand how your product impacts these costs
                    d. do price the product to reflect this balance
It is truly embarrassing to have to admit that your price reflects nothing more than the greed of your investors. Trust me, you will not score points with your customers.

Staginnus makes one other important point which I generally agree with:
And let's face it, if you need a major workshop and intensive external “coaching” to help define the value of your product … well, there actually is little to none. If it was really good, it would have been obvious from the start. So maybe we ought to stop beating around the bush and move on if there is nothing to be done anymore. 
There is a nuance here, however, as in most things. Given what I have said above, products are more likely to sink than to sell themselves. So, while I agree that you do not need a throng of consultants in suits and hair gel to pollute your offices, you do need to understand how to articulate this value, even when it appears obvious.

Tuesday, December 13, 2011

When end of life is not

Twenty years ago, I helped save a man's life.
So begins this New York Times essay by Peter Bach, MD, in which he talks about the inadequacy of end-of-life resource use as a policy metric. Now, I am not very fond of policy metrics, as most of you know. So, imagine my surprise when I found myself disagreeing vehemently with Peter's argument. Well, to be fair, I did not disagree with him completely. I only disagreed with the thesis that he constructed, skillfully yet transparently fallaciously (wow, a double adverb, I am going to literary hell!). Here is what got me.

He describes a case of a middle-aged man who was experiencing a disorganized heart rhythm, which ultimately resulted in dead bowel and sepsis. The man became critically ill, the story continues, but three weeks later he went home alive and well. This, Dr. Bach says, is why end of life resource utilization is a bad metric: if this guy, who had a high risk of dying, had in fact died in the hospital, the resources spent on his hospital care would have been considered wasted by the measurement. And I could not agree more that lumping all terminal resource use under one umbrella of wasteful spending is idiotic. Unfortunately, knowingly or not, Peter presented a faulty argument.

The case he used as an example is not the case. Indeed it is a straw man constructed for the cynical purpose of easy knock-down. When we talk about futile care, we are not referring to this middle-aged (presumably) relatively healthy guy, no. We are talking about that 95-year-old nursing home patient with advanced dementia being treated in an ICU for urosepsis, or coming into the hospital for a G-tube placement because of no longer being able to eat or drink. We are talking about patients with advanced heart failure and metastatic cancer, whose chances of surviving for the subsequent three months are less than 25%. And yes, we are also talking about some middle-aged guy with gut ischemia, sepsis and worsening multi-organ failure whose chances of surviving to hospital discharge are close to nil; but in his case, instead of being clear from the beginning, the situation evolves.

So, yes, the costs of end of life care, and specifically hospitalizations, are staggering. But more importantly, among patients with terminal illnesses like metastatic cancer, advanced heart failure and dementia, hospitalizations and heroic interventions at the end of life cause unnecessary pain and suffering, and without much, if any, benefit in return. Their families and caregivers suffer as well, and many studies suggest that these caregivers are not interested in prolonging suffering, provided they are aware of the prognosis. Unfortunately, just as many studies suggest that communication between doctors and patients' families about these difficult issues is less than stellar.

So, let me play the devil's advocate and pretend that I support end of life resource utilization as a quality metric. If I did, I certainly would not be interested in depriving Dr. Bach's middle-aged acutely ill patient of the chance to survive. In fact, my aim would be to make sure that we align resource use with where it can do most good, and turn away from interventions that are apt merely to prolong dying.        

Tuesday, November 22, 2011

Lessons from Xigris

I have been wanting to write for a while about the demise of Xigris, but work and other commitments have stalled my progress. But it is time.

Here is my disclosure: I have received research funding from BioCritica, a daughter company of Eli Lilly, the manufacturer of Xigris. I also happen to know well, and hold in high esteem for their depth of knowledge and integrity, several colleagues who worked on Xigris internally at Lilly.

But on to the story. Xigris had a short and bumpy life. When the PROWESS study, the Phase III Xigris trial, was first published in the NEJM in 2001 [1], it was the first therapy to succeed in sepsis, reducing absolute mortality by about 6 percentage points, from roughly 31% to 25%, and yielding a number needed to treat (NNT) of 16. This was huge, as so many prior trials had failed, and no progress had been made in sepsis management for years. These data opened the door to FDA approval, despite a hung advisory committee in which equal numbers of members voted for and against approval. The controversy centered on concerns about bleeding complications, as well as some protocol changes during the trial and a switch in the manufacturing process. The latter concern was allayed by the Agency's detailed analysis and its finding of equivalence. There was a signal in a subgroup analysis that the drug might have been most effective among the most ill patients, those with a high probability of death, but not in their less ill counterparts. And despite the fact that the pivotal trial was not specifically performed in these patients, the approved indication specified just such a population.
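As an aside, the NNT is just the reciprocal of the absolute risk reduction, and the arithmetic can be sketched in a few lines. Note that the rounded rates quoted above give roughly 17, while the trial's exact mortality rates (30.8% vs 24.7%) yield the published figure of 16:

```python
def nnt(control_rate: float, treated_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction (ARR)."""
    arr = control_rate - treated_rate
    if arr <= 0:
        raise ValueError("no absolute risk reduction")
    return 1 / arr

# Rounded rates quoted in the text: ~31% vs ~25% mortality.
print(round(nnt(0.31, 0.25)))    # ~17 patients treated to prevent one death
# Exact PROWESS rates: 30.8% vs 24.7% -> the published NNT of 16.
print(round(nnt(0.308, 0.247)))
```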

So, despite the controversy, the drug was approved, though several post-marketing commitment studies were mandated. ENHANCE [2, 3] was an international study whose findings broadly confirmed the safety and efficacy of the drug, while the ADDRESS study [4], done in patients at low risk for death, was terminated early for lack of efficacy.

It seemed that PROWESS ushered in an era of positive results in sepsis. Shortly after its publication, other studies on the use of early goal-directed therapy [5], low-dose steroids [6] and tight glucose control [7] appeared in high impact journals, and the years of failure in sepsis management seemed to be over.  

In the meantime, and amid further controversy [8], Lilly supported the creation of the Values, Ethics and Rationing in Critical Care (VERICC) Task Force [9, 10], in addition to funding the international Surviving Sepsis Campaign (SSC), which produced the evidence-based practice guideline for sepsis management [11, 12] and an implementation program for the sepsis bundles, jointly sponsored by the SSC and the Institute for Healthcare Improvement [13]. The latter 2-year program enrolled over 15,000 patients worldwide and achieved a doubling of bundle compliance from 18% to 36%, with a concurrent 5% drop in adjusted mortality. Because of several methodological issues and the lack of transparency about what it took to implement the bundle, it has never been clear to me (a) whether the relationship between the bundle and mortality was causal, and (b) whether the effort was cost-effective.

But that aside, Xigris continued to stir up controversy, and there were still safety concerns. Some very well-done observational studies, however, continued to confirm its effectiveness and safety in the real-world setting [14]. Yet the final trial, PROWESS-SHOCK (done because of fears of an increase in bleeding complications), in which patients in septic shock received Xigris as a part of their early management, brought doom. It was this study, whose preliminary results appeared in a press release on October 25, 2011, that prompted Lilly to pull the drug off the world market, since no difference in 28-day mortality was detected between the placebo and Xigris arms. Ironically, the preliminary reports indicate that no excess bleeding was noted in the treatment arm.

So, after roughly 10 years and millions of dollars, Xigris disappeared. But what can we learn from its story? There are many lessons to carry away, some about the way we do research, some about marketing practices, but all of them about the need for a higher level of conversation and partnership. The biggest elephant in this room is whether a manufacturer should be allowed to fund guideline development. It is a complicated issue, particularly given our innate proneness to cognitive bias, but in my opinion the answer is yes. It certainly cannot be done in a quid pro quo way. Perhaps this is naïve, but should it not simply be a question of good data? And why wouldn't a manufacturer give money for the development of sensible guidelines, without strings attached, when the data are good?

Unfortunately, to me, Xigris is the poster child for how broken our research enterprise is, as I have discussed in this JAMA commentary [15]. Until all stakeholders start talking to each other and arriving at common, useful and achievable goals, this is a story that will repeat itself again and again. The fact that regulatory trials, with all of their expensive and flashy internal validity, concern themselves only with statistical issues and care nothing about what happens in the real world is a travesty on many levels. The fact that it costs nearly $1 billion to bring a drug to market means that only big Pharma can bankroll such a gamble, and in return it must demand big profits. The fact that this $1 billion fails to buy us studies that help clinicians and policy makers understand how to optimize the use of a drug once it is on the market is inexcusable. What we need are more intellectually honest discussions leading to novel, pragmatic ways to answer the relevant questions in a timely manner and without bankrupting the system.

So, does the obvious financial interest mean that manufacturers should stay out of these discussions? I happen to think that they need a prominent place at the table. I actually think that the current fiasco is largely the result of too little interaction and too little cross-pollination of ideas: when we all sit around the table and nod in agreement, there is little progress. Deeper and novel understanding is built on disagreement and debate. Therefore, to leave the manufacturers out would invite further irrelevance. The bottom line is that we are all conflicted, and, according to the editors of PLoS, non-financial conflicts of interest, though more subtle and difficult to discern, may present an even bigger threat to much of what we do [16]. Elbowing out a party with an obvious conflict may have the unintended consequence of leaving some of the more insidiously conflicted others to run the show. And although we can argue whether profit is the healthiest driver for performance in healthcare, the reality is that our entire healthcare "system" is built around profit-making. Therefore it is disingenuous to single out one player over the others.

On the positive side, the halo effect around Xigris brought a ton of attention to sepsis and its management. As Wes Ely conjectured in this piece, our improved understanding of sepsis (largely due to all the attention Xigris brought to it, in my opinion) is probably what rendered the drug useless in PROWESS-SHOCK. So, after all the hype, the noise and the hoopla, what is left is a company short one drug and hundreds of millions of dollars, and a disease area that received what amounted to a public health investment and now enjoys a vastly improved understanding of the disease state. How much is that benefit worth?


[1] Bernard GR, Vincent JL, Laterre PF, et al: Efficacy and safety of recombinant human activated protein C for severe sepsis. N Engl J Med 2001; 344:699–709
[2] Bernard GR, Margolis BD, Shanies HM, et al. Extended Evaluation of Recombinant Human Activated Protein C United States Trial (ENHANCE US). A Single-Arm, Phase 3B, Multicenter Study of Drotrecogin Alfa (Activated) in Severe Sepsis. Chest 2004;125:2206-16
[3] Vincent JL, Bernard GR, Beale R, et al. Drotrecogin alfa (activated) treatment in severe sepsis from the global open-label trial ENHANCE: further evidence for survival and safety and implications for early treatment. Crit Care Med 2005;33:2266-77
[4] Abraham E, Laterre P-F, Garg R, et al. Drotrecogin Alfa (Activated) for Adults with Severe Sepsis and a Low Risk of Death. N Engl J Med 2005;353:1332-1341
[5] Rivers E, Nguyen B, Havstad S, et al. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med 2001;345:1368-1377
[6] Annane D, Sébille V, Charpentier C, et al. Effect of treatment with low doses of hydrocortisone and fludrocortisone on mortality in patients with septic shock. JAMA 2002;288:862-871
[7] van den Berghe G, Wouters P, Weekers F, et al. Intensive insulin therapy in the critically ill patients.  N Engl J Med 2001;345:1359-1367 
[8] Eichacker PQ, Natanson C, Danner RL. Surviving Sepsis -- Practice Guidelines, Marketing Campaigns and Eli Lilly. N Engl J Med 2006;355:1640-2
[9] Sinuff T, Kahnamui K, Cook DJ, et al. Rationing critical care beds: A systematic review. Crit Care Med 2004;32:1588-97
[10] Truog RD, Brock DW, Cook DJ, et al. Rationing in the intensive care unit. Crit Care Med 2006;34:958-63
[11] Dellinger RP, Carlet JM, Masur H, et al. Surviving Sepsis Campaign guidelines for management of severe sepsis and septic shock. Crit Care Med 2004;32:858-73
[12] Dellinger RP, Levy MM, Carlet JM, et al. Surviving Sepsis Campaign: International guidelines for management of severe sepsis and septic shock: 2008. Crit Care Med 2008;36:296-327. Erratum in Crit Care Med 2008;36:1394-96
[13] Levy MM, Dellinger RP, Townsend SR, et al. The Surviving Sepsis Campaign: Results of an international guideline-based performance improvement program targeting severe sepsis. Crit Care Med 2010;38:367-74
[14] Lindenauer PK, Rothberg MB, Nathanson BH, et al. Activated protein C and hospital mortality in septic shock: A propensity-matched analysis. Crit Care Med 2010;38:1101-7
[15] Zilberberg MD. The clinical research enterprise: Time for a course change? JAMA 2011;305:604-5
[16] The PLoS Medicine Editors. Making Sense of Non-Financial Competing Interests. PLoS Med 2008;5(9):e199. doi:10.1371/journal.pmed.0050199

Monday, November 21, 2011

Massachusetts' unwinnable gamble

It is ironic how, just a few days following the startling (?) confirmation by the Robert Wood Johnson Foundation-funded research that an ounce of prevention is indeed worth a pound of cure, the Massachusetts legislature with reckless abandon ushered in yet another mechanism for the erosion of public health: legalized gambling. Really, I have nothing against a little gambling. The issue is that this legislative move does not just open the door to a trickle of small local gambling operations. No, what it does is turn the crank to open a fire hose of "big box" gambling establishments descending upon our state. And it is not just anywhere in the state: it is in the Western part, far removed from the back yards of the legislators who are salivating over the projected licensing and tax revenues.

But I don't want to get into the NIMBY aspect of this misguided bill. I would rather stick to the real issue: selling us out to raise short-term revenue. The move projects 15,000 new jobs (mostly menial and without benefits), $40 million annually in tax income, on top of $85 million in licensing fees from each of the three casinos, all this in addition to construction investment and the like. Already the bill allocates $50 million to overhauling healthcare reimbursements in the state. As well, there is a $25 million provision to shore up research into, and prevention of, problem gambling. And even people in staunch opposition to legalized gambling seem appeased by this provision, which they say makes it the best bill of its kind. But we still have to ask: if prevention is better than cure, why settle for good mitigation strategies when we already have the best prevention available to us: keep casinos out!

Some of you will probably say that I am naive. After all, reason fades when we are talking about such big bucks for the state coffers. Well, just because this kind of a trade-off is something we have come to expect from our politicians does not mean that we should tolerate it. Others will bring up the old free will argument. No, I am not against people exercising their personal decision making, but haven't you read "Nudge?" We are all deeply flawed human beings, and in the face of temptation we fail miserably! And since we know that casinos increase the risk of problem gambling, why not just steer clear of them altogether? This is simply not a winnable gamble.

I hope that some of you are hearing echoes of the food-obesity debate. We deem it an individual rather than a societal problem, and look how well we have done mitigating the obesity epidemic! There is no rocket science here, and it is disingenuous to say that we do not understand the causes of obesity. Human physiology has not changed over a couple of generations, no. What has changed is our constant access to high-calorie cheap concoctions that pass for food; what has changed is our limited access to physical activity; and what has changed is the degree to which we as a society are willing to sign on to corporate and political propaganda designed to get votes and make money at the expense of our health.

So, am I shocked that this casino bill is likely to become law? Not at all. Am I surprised that the public is allowing this to happen in a pathetic perversion of personal freedom? Of course not. Am I going to shut up about what a mistake this is? You bet I am not. And in a decade I will say "I told you so." But I am sure that then, not unlike now, no one will be listening.              

Thursday, September 29, 2011

An open letter to my past and future students

As most of my readers know, I teach Public Health graduate students at the University of Massachusetts, sometimes on campus and sometimes online. This is an open letter to all of my past and future students.

First, I want to thank everyone in my June course who took the time to complete the evaluation -- the feedback is very helpful to me. I also want to thank the whole class (and all my previous and future classes) for the privilege of learning together with you. Looking at this and previous rounds of evaluations made me realize that I need to make a public statement that all students contemplating taking my courses can read before they commit.

My evaluations tend to be bimodal -- a peak around "love", a (smaller, thankfully) peak around "hate", and mostly a trough in between. Oddly, the reasons for the love and the hate are the same: little imposed structure, few interim formal performance evaluations, and plenty of opportunity for discussion and questions. Everyone seems to appreciate my effort to make the material interesting and relevant, but a substantial number do not seem to like the format.

I am actually fascinated by the convergence of the reasons for liking and disliking. By way of inference, I am going to suggest that where you fall on this spectrum says more about your learning style than about how I teach. To clarify, let me spell out my philosophy of teaching.

Whether I teach on campus or online, I limit my classes to graduate students. The reason is not that I think undergraduates cannot handle the material. Rather, it is because I believe nothing replaces time with a topic for developing a depth of understanding and discussion about it. So, I view my classes as incubators of ideas. I do not see myself as the oracle delivering answers. My role is to get you excited about the questions. Furthermore, it is not my questions that should excite you, but the questions that you come to at the limits of your knowledge, seen through the prism of the class material and discussions.

To be sure, I realize that this is not a comfortable place for many. Most of us glide through an educational system that convinces us that there is a single correct answer, and, after teaching us to parrot it compliantly, punishes us if we stray. So, swimming in the sea of questions, seemingly answering them only to realize that the answers lead to further questions, is disquieting. Yet it is at this edge that we gain access to the next level of understanding of our universe. Here, the feedback is not about an arbitrary letter, but about the exuberance of ideas, the discussion generated and the richness of asking the questions.

I cannot tell you how much I love the learning environment that we create together. I gain something from each and every one of you, and I hope that each of you walks away with at least one idea that is new. What I suggest to you if you are a potential student considering taking a course with me in the future is to contemplate the boundaries of your own comfort zones in learning. If you like the feeling of vertigo that you get when old dogmatic answers are shattered and uncertainty reigns, take my classes. If you are worried about how it might feel, but curious to try, I will meet you where you are and help you weave a net to cradle your fall. But if you know that uncertainty cripples you, that you would rather have a map for every step of the way, my classes may be the wrong stop along your educational path at this time. But perhaps sometime in the future?

Again, thanks to everyone for enriching my learning. I miss you and look forward to future opportunities for exchanging ideas.                    

Friday, September 23, 2011

Clinician as the Politburo of medicine?

Do you think that medicine in the US is centralized? I do, but not in the way that we generally understand centralization. And furthermore, it is this centralization that I believe is making the idea of shared decision making so intimidating to some. Here is what I mean.

If you read management texts, centralization refers to an organization that is run predominantly top-down. In other words, a couple of oligarchs at the top of the ladder make all the decisions without consulting anyone below, so that all the power is concentrated in the hands of the few. In antithesis, a decentralized organization incorporates grassroots input and initiatives into its fabric. And while in times of great crisis, when rapid decisions are necessary, the benefits of centralization may outweigh its risks, during normal day-to-day operations such unilateral power can have obviously negative consequences, from discontent among employees to making the wrong choices. Furthermore, as organizations grow in size, it becomes that much more difficult to run them effectively within the centralized paradigm.

Now, let us look at medicine. The traditional model of the doctor-patient relationship relies on the clinician to know what is right for the patient: take this pill and don't worry about the side effects, dear. Now, clearly, when someone shows up to the emergency room in septic shock, there is very little room for a democratic process; we want the doctor to do rapidly what needs to be done to save the patient. But this catastrophic case is the exception to the rule of what modern medicine deals with. From pre-diabetes to pre-hypertension to "borderline cholesterol" to osteopenia to mild depression, these are the "diseases" that prevail in the office of the 21st century. None of them is particularly urgent or life-threatening. And if we are honest with ourselves, even a devastating diagnosis of cancer does not demand an instantaneous intervention: in the vast majority of cases there is ample time for discussion and contemplation. So, the centralized approach is the wrong way to go. Enter the robust discussion about shared decision making.

Another reason that centralization of medical decisions is crumbling is the expanding patient panel that clinicians must engage with in order to stay solvent, all within the context of increasing compliance and regulatory burdens and decreasing reimbursements. Without an equal growth in one's cognitive capacity to multi-task, this escalating imbalance creates a rising risk that unilateral decisions will be plain wrong.

So, in my mind, this is yet another argument for all parties to embrace shared medical decision making to the extent we as patients are willing and able to do so. Because what is the alternative?          

Wednesday, September 21, 2011

Why patient lab data should be liberated, with a few caveats

I am admittedly not an expert on health IT, but I am a firm believer in empowering patients to be the drivers of their own health decision making. So this whole discussion about lab data being available directly to the patient is of great interest to me. But it does seem like yet another instance of the two sides coming together not to listen to each other but to be heard by the other side. And as we all know, this works so well for any relationship!

Each side's view runs roughly as follows:
Patients -- these are my data, and I have the right to access them as soon as they are available.
Doctors -- we are worried that the sheer volume, complexity and irrelevance of (much of) the data will confuse and unnecessarily alarm the patient.
Both arguments are valid, of course. But it is important to ask what lurks below the visible portion of each iceberg.

Let's take the patient view. Why do I want immediate access to my data? Well, obviously, because it is mine, it represents the results of testing on my body, and the record should belong to me. I should be able to access it freely whenever I damned well please. I am also more than a little exasperated with having to wait sometimes days to hear from my doctor's office about a result that has been available for a while, but was buried under the reams of paperwork on the MD's desk or his/her assigning a low priority to my data. And I am most exasperated when my lab results get lost or otherwise never make it to me at all. Perhaps if I have direct and unfettered access, this will make things more efficient for me as an individual.

The doc's view, on the other hand, is that the patient does not necessarily understand what the notation of "low" connotes in reference to, say, total bilirubin, or how to interpret the RDW data. Even more importantly, what if there is an outrageously abnormal value for some important test? Surely the patient will desire an immediate explanation of it and its implications.

So, clearly, both sides have valid concerns. I do think that those of access predominate, as ethically it just makes sense. But for a non-medical person, looking at a lab sheet is like trying to read a report about yourself written in Chinese: unless you can read Chinese, the information is inaccessible. So, before that horse leaves the barn, we should think through how to execute this most sensibly. For example, perhaps it is not sensible to have the lab computer directly vomit all of the inane values that no one really looks at right to the patient's account. And backing up a step, perhaps it is time for our lab use to be driven not by the lab equipment packages and processes, but by testing only for factors that are of value. If I want to know the patient's creatinine, maybe the other 6 components of the Chem-7 should not be run, or at least not reported. And obscure values like the ones I mentioned above, e.g., RDW, MCHC, etc., should only be available when the situation actually makes them useful, and not just distracting.

I can see a potential positive unintended consequence of this development as well: maybe clinicians will be less trigger-happy ordering all kinds of labs for all kinds of oblique reasons. Maybe, just maybe, this apprehension about the patient's access to all the labs will result in more Bayesian thinking in the office and a lot less shot-gunning. Finally, not all patients will choose to access their data. Let us hope that the selection bias does its job and assures that only those who are truly ready to be educated and empowered decide to do so.

All in all, I am looking forward to the liberation of my lab data. What I worry about is all the calls I will be getting from friends and family to help them understand them. All the same, I will do my part for the education and empowerment that absolutely needs to happen for this to be a successful and meaningful change.        

Tuesday, September 20, 2011

Eminence or evidence, or how not to look like a fool when reporting your own data

A study presented at the ICAAC meeting and reported by Family Practice News piqued my interest. Firstly, it is a study on C. difficile infection treatment, and secondly, it runs counter to the evidence that has accumulated to date. So, I read the story very carefully, as, alas, the actual study presentation does not appear to be available.

Before I launch into the deconstruction of the data, I need to state that I do have a potential conflict of interest here. I am very involved in the CDI research from the health services and epidemiology perspective. But equally importantly, I have received research and consulting funding from ViroPharma, the manufacturer of oral Vancocin that is used to treat severe CDI.

And here is an important piece of background information: the reason the study was done. The recent evidence-based guideline on CDI developed jointly by SHEA and IDSA recommends initial treatment with metronidazole in the case of an infection that does not meet severe criteria, while advocating the use of vancomycin for severe disease. We will get into the reasons for this recommendation below.  

OK, with that out of the way, let us consider the information at hand.

My first contention is that this is a great example of how NOT to conduct a study (or how not to report it, or both). The study was a retrospective chart review at a single VA hospital in Chicago. All patients admitted between 1/09 and 3/10 who had tested positive for C. difficile toxin were identified and their hospitalization records reviewed. A total of 147 patients were thus studied, of whom 25 (17%) received vancomycin and 122 (83%) metronidazole. It is worth mentioning that of the 122 initially treated with metronidazole, 28 (23%) were switched over to vancomycin treatment. The reasons for the switch as well as their outcomes remain obscure.

The treatment groups were stratified based on disease severity. Though the abstract states that severity was judged based on "temperature, white blood cell count, serum creatinine, serum albumin, acute mental status changes, systolic blood pressure<90, requirement for pressors," the thresholds for most of these variables are not stated. One can only assume that this stratification was done consistently and comported with the guideline.

Here is how the severity played out:

Nowhere can I find where those patients who were switched from metronidazole to vancomycin fell in these categories. And this is obviously important.

Now, for the outcomes. Those assessed were "need for colonoscopy, presence of pseudomembranes, adynamic ileus, recurrence within 30 days, reinfection > 30 days post therapy, number of recurrences >1, shock, megacolon, colon perforation, emergent colectomy, death." But what was reported? The only outcome to be reported in detail is recurrence in 30 days. And here is how it looks:

The other outcomes are reported merely as "M was equivalent to V irrespective of severity of illness (p=0.14). There was no difference in rate of recurrence (p= 0.41) nor in rate of complications between the groups (p=0.77)."
What the heck does this mean? Is the implication that the p-value tells the whole story? This is absurd! In addition, it does not appear to me from the abstract or the Family Practice News report as if the authors bothered to do any adjusting for potential confounders. Granted, their minuscule sample size did not leave much room for that, but the absence of even an attempt invalidates the conclusion.

Oh, but if this were only the biggest of the problems! I'll start with what I think is the least of the threats to validity and work my way to the top of that heap, skipping much in the middle, as I do not have the time and the information available is full of holes. First, in any observational study of treatment there is a very strong possibility of confounding by indication. I have talked about this phenomenon previously here. I think of it as a clinician noticing something about the patient's severity of illness that does not manifest as a clear physiologic or laboratory sign, yet is very much present. Such a patient, although looking to us on paper much like one without severe disease, will be treated as someone at a higher threat level. In this case it may translate into treatment with vancomycin of patients who do not meet our criteria for severe disease, but nevertheless are severely ill. If present, this type of confounding blunts the observed differences between groups.

The lack of adjustment for potential confounding of any sort is a huge issue that negates any possibility of drawing a valid conclusion. Simply comparing groups based on severity of CDI does not eliminate the need to compare based on other factors that may be related to both the exposure and the outcome. This is pretty elementary. But again, this is minor compared to the fatal flaw.

And here it is, the final nail in the coffin of this study for me: sample size and superiority design. Firstly, the abstract and the write-up say nothing of what the study was powered to show. At least if this information had been available, we could make slightly more sense out of the p-values presented. But, no, this is nowhere to be found. As we all know, finding statistical significance depends on the effect size and the variation within the population: the smaller the effect size and the greater the variation, the more subjects are needed to show a meaningful difference. Note, I said meaningful, NOT significant, and this they likewise neglect. What would be a clinically meaningful difference in the outcome(s)? Could an 11% difference in recurrence rates be clinically important? I think so. But it is not statistically significant, you say! Bah-humbug, I say, go back and read all about the bunk that p-values represent!
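For the curious, the back-of-the-envelope arithmetic is easy to do with the standard normal-approximation formula for comparing two proportions. The recurrence rates below are entirely hypothetical, chosen only so that they differ by 11 percentage points; the point is the order of magnitude of the required sample:

```python
from math import ceil, sqrt
from statistics import NormalDist

def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Per-group n needed to detect p1 vs p2 with a two-sided z-test,
    using the standard normal-approximation sample-size formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)            # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical recurrence rates 11 percentage points apart:
n_per_group = two_proportion_sample_size(0.15, 0.26)
print(n_per_group)  # 211 per group -- far more than the 147 patients in the whole study
```

So even under generous assumptions, detecting a difference of this size would take several hundred patients per arm, which is why a "non-significant" p-value from 147 charts tells us close to nothing.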

One final issue: a superiority study is the wrong design here in the absence of a placebo arm. In fact, the appropriate design is a non-inferiority study, with a very explicit development of valid non-inferiority margins that have to be met. It is true that a non-inferiority study may signal a superior result, but only if it is properly designed and executed, which this one was not.
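To illustrate the distinction, here is a minimal sketch of how a non-inferiority comparison works: you pre-specify a margin and then check whether the lower confidence bound for the treatment difference clears it. The cure counts and the 10-point margin below are hypothetical, but they show that "no significant difference" does not establish non-inferiority:

```python
from math import sqrt
from statistics import NormalDist

def non_inferior(events_new, n_new, events_std, n_std, margin):
    """Declare non-inferiority only if the lower bound of the 95% Wald CI
    for (new minus standard) response-rate difference stays above -margin."""
    p_new, p_std = events_new / n_new, events_std / n_std
    se = sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    z = NormalDist().inv_cdf(0.975)
    lower_bound = (p_new - p_std) - z * se
    return lower_bound > -margin

# Hypothetical: 82/100 cures on the new drug vs 85/100 on standard therapy.
# The difference is nowhere near "significant," yet with a 10-point margin:
print(non_inferior(82, 100, 85, 100, margin=0.10))  # False -- non-inferiority NOT shown
```

With ten times the sample size and the same rates, the interval tightens and the same margin is met; the conclusion is driven by the pre-specified margin and the precision, not by a superiority p-value.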

So, am I surprised that the study found "no differences" as supported by the p-values between the two treatments? Absolutely not. The sample size, the design and other issues touched on above preclude any meaningful conclusions being made. Yet this does not seem to stop the authors from doing exactly that, and the press from parroting them. Here is what the lead author states with aplomb:
              "There is a need for a prospective, head-to-head trial of these two medications, but I’m not sure who’s going to fund that study," Dr. Saleheen said in an interview at the meeting, which was sponsored by the American Society for Microbiology. "There is a paucity of data on this topic so it’s hard to say which antibiotic is better. We’re not jumping to any conclusions. There is no fixed management. We have to individualize each patient and treat accordingly."
OK, so I cannot disagree with the individualized treatment recommendation. But do we really need a "prospective head-to-head trial of these two medications"? I would say "yes," were there not already not one but two randomized controlled trials addressing this very question: one by Zar and colleagues, and another done as a regulatory study of the failed Genzyme drug tolevamer. Both trials contained separate arms for metronidazole and vancomycin (the Genzyme trial also had a tolevamer arm), and both stratified by disease severity. Zar and colleagues reported that in the severe CDI group the rate of clinical response was 76% in the metronidazole-treated patients versus 97% in the vancomycin group, with p=0.02. In the tolevamer trial, presented as a poster at the 2007 ICAAC, there was an 85% clinical response rate to vancomycin and 65% to metronidazole (p=0.04).

We can always desire a better trial with better designs and different outcomes, but at some point practical considerations have to enter the equation. These are painstakingly performed studies that show a fairly convincing and consistent result. So, to put the current deeply flawed study against these findings is foolish, which is why I suspect the investigators failed to mention anything about these RCTs.

Why do I seem so incensed by this report? I am really getting impatient with both scientists and reporters for willfully misrepresenting the strength and validity of data. This makes everyone look like idiots, but more importantly such detritus clogs the gears of real science and clinical decision-making.

Monday, September 19, 2011

Adventures with American or Flying, American Style

So, we all know that the time for a doctor's appointment is merely a suggestion, not a mandate, as the doctor is hardly ever on time. We have even started of necessity to apply this theory to air travel. But to end up arriving 5 hours late and... to the wrong city? Well, this was a new one on me. Here is what happened.

I went to ICAAC for the day yesterday to present some of our data on predictors of a mixed skin and soft tissue infection. If it had not been a podium presentation, I would have considered skipping the whole meeting, since it was my son's birthday. Instead, I decided to swoop in for the day.

I chose American Airlines, as it is one of the few carriers that can get me to Chicago without a lay-over. The flight there was fine, and I had plenty of time to get myself to McCormick Convention Center, have lunch with a colleague, get to the session and do my thing. My schedule was such that I had to get in the cab immediately after presenting to get to my flight, which I did.

Getting to the airport was a challenge, as after 3:00 PM the perennial Chicago traffic jams are just a fact of life. But get there I did, with time to spare. The lines at the check-in counters were daunting, and luckily I was able to find a self-service kiosk, where I swiped my credit card. With a predictable automaticity I went to press the "Print boarding pass" icon when something caught my attention: this did not look like my itinerary! I had been meant to fly to Bradley International Airport on the 5:25 PM flight. Now the screen was trying to push someone else's trip on me that went from Chicago to Bradley via Dallas-Fort Worth. Luckily there was an option on the screen to decline the itinerary as not owned by me, which I did. Alas, the second try resulted in the same baffling error.

A lovely young man in an American Airlines uniform standing next to my kiosk noticed my confusion and asked how he could help. I pointed to the screen. He smiled broadly and sweetly and delivered the bad news that my flight had been canceled, and I was being re-routed via Texas, and that instead of getting to Hartford around 8:00 PM, I would be getting there well after midnight. The shock prevented me from stopping him from printing out the boarding pass. Before I had recovered my ability to speak, he furrowed his brow while examining the pass. Glancing at his watch, which read 3:30 PM, seemingly speaking to himself, he said "I wonder why they put you on a 3:05 flight when it is already 3:30?" As I was catching my breath, he continued "Oh, this flight is tomorrow afternoon!"

With these words my hopes of seeing my son before the end of the day of his birth hopelessly vanished. Yet I was not ready to be defeated. Although I was told that a later flight to Hartford was still happening, it was already overbooked, and I was not ready to give up. Instead, I asked how close to Hartford they could get me on the same night. It turned out that there was a 5:05 to Boston, which was now going to be delayed until 6:30, but there was a coveted seat available, and did I want it? Well, given my choices, I desperately wanted it, and luckily got it. So, now, instead of getting home by 10:00 PM I was looking at getting into Boston at a yet undetermined time, renting a car to get myself to Bradley, picking up my car from the garage there and driving home. If luck was with me, I would be home before 3:00 AM. Well, at least I would not have to go to Texas. Tomorrow.

Now that I had indefinite time before boarding, I treated myself to a latte and a book, Paulo Coelho's The Alchemist. It felt somehow luxurious to be suspended in this space without time, where I could just concentrate on the rich story and language and question my Personal Legend, though I was pretty sure that awaiting a late flight at O'Hare only to land in the wrong city and then drive for hours to get home was not it. Nevertheless, peculiarly, this departure from the rush of traveling felt like a little spa break, albeit in the midst of a chaotic throng craning their necks for the arrival of the aircraft.

Well, we finally were airborne a little after 8:00 PM, the flight was uneventful, I rented a car, got to Bradley before Avis closed (no, they are not open 24 hours), got into my car and got myself home just before 3:00 AM. All in all, it could have been a lot worse (I could have had to go through Dallas... today). And everyone was so frightfully nice, especially the young man with large brown eyes and a feline manner at O'Hare. The epiphany for me was that I could not even be angry -- there was no one and nothing to be angry at, and anger would have been disempowering, impotent. Instead, the lesson was that this is air travel in the 21st century -- crowded, uncertain, thoroughly unappealing. The only thing one can do, aside from avoiding it, is to accept it for what it is and go on. Though I have to say that being screwed is much nicer with a smile.


Wednesday, September 14, 2011

So I got my son a tracfone...

Wanted to post some follow up to my absurd interaction with AT&T, the beginning of which can be found here. Briefly, all I wanted to do was add one more line to my family plan for $9.99 per month. Instead of making it simple, the web site demanded personal information (including my social security number) for the purpose of doing a credit check. Now, I already have a monthly family plan for which I pay over an order of magnitude more than $9.99, and on time. Yet now they wanted to subject me to an additional credit check. Well, I said no and wrote the post in question.

What was interesting and even encouraging to me was that I got the following comment from an AT&T customer service rep, and responded in kind:

attcathyw said...
Hi Marya- I am with AT&T and saw your post. We would like to assist with and answer any questions/concerns that you may have. Please email your contact information to one of our managers at and include your name in the subject line. Thanks.
Marya Zilberberg said...
Thanks to all of you who tweeted and facebooked (as a verb?) me about this post. My impression that there is growing dissatisfaction out there with the way business is done is being confirmed. I wanted to update you all on what the follow-up has been. I e-mailed Cathy (see comment above) yesterday. Last night I got this message from her coworker: "Hi Dr. Zilberberg, Cathy is out of the office so I am reaching out on her behalf. In order for me to research your wireless account further, I would need either the account number or the mobile number in question. In additionally, a good contact number for you would be great. Once I receive the information, I will partner with a wireless customer service manager to see what we can possibly do. I do hate to see the frustration in your post. While I cannot guarantee, I will do my best to see what can be done to provide the extra line without the credit verification. Of course we will not make any changes without your consent. Please email your contact information, account number or mobile number to me at and include your name in the subject line. Please provide a best time to contact you. Thanks and have a good evening! Thanks, ATTAnnelle M" OK, so again I am the one wasting my time having to contact yet another person because the original individual I was told to contact for customer service is not there? Wow! Nevertheless, this morning I e-mailed Annelle and let her know that I simply do not have the time to keep explaining the issue, and that unless she can do what I need without wasting my time, she should go ahead and let me know by mail. I reiterated what the issues were and also that this is a corporate culture problem that needs to be rethought in light of its ethical implications. Will let you know what happens.

The following correspondence ensued between me and Annelle (emphasis mine; numbers x'ed out to protect my privacy, which I am still interested in maintaining, despite Mark Zuckerberg's assertions to the contrary):

September 8: me
Dear Annelle,

Thank you for your e-mail. Unfortunately, I do not have the time to spend explaining (again!) what I need and waiting for you to see what can be done without any guarantee of the outcome. I hope that you see my point about the futility of a credit check for a $10/month line, since for the last 6 years I have paid my approximately $xxx monthly mostly on time. Additionally, no credit checks were necessary for the last x family lines added, and I was also able to increase my service by about $xx without a credit check. And this does not even get into the minimum $20 for the texting feature on my son's line -- one of my family lines has 50 messages for $2.99, but this option no longer seems to be available.

I realize that you are trying to help me, and I thank you for it. My cell number is xxx-xxx-xxxx. But as I said I simply do not have the time to spend on these endless phone calls with your company. If you can resolve something for me, great, let me know via e-mail. If not, it is fine, as I plan to visit my local AT&T store shortly. 

The bigger issue is how user-unfriendly this whole process has become, and it is your marketing executives that need to do some soul searching about whether this is an ethical or sustainable way to continue doing business.

Thanks once again.
September 8: Annelle
Hello Dr. Zilberberg,
I appreciate your candid response and your patience.  I know your time is valuable therefore I will keep this brief...
What kind of wireless device were you interested in for your son?  I know from your email that you want to keep your same plan and to place him on texting pay per use at a cost of .20 per text (incoming/outgoing, each are charged per text).
Please advise and I will continue to work on this end.  Hopefully you will not have to travel to the store if I can get this done for you.
Thanks so much!
Annelle M
September 8: me

Thank you for following up.
I want to get him a very simple flip phone that is free or very cheap. It does not have to have a camera or any other fancy features.

Thank you once again.
 September 8: Annelle
Good evening Dr. Zilberberg,
After great effort, I have found that ALL entities (including the store) will require a credit verification for new lines of service even though the monthly cost is not very great. The other option, without performing the credit verification, is prepaid. 
I know this is disappointing and I hate to be the bearer of bad news.  However, if you do consent to the credit verification, I can have a nice phone ordered for you son at no cost sent directly to your home in roughly 2 days. There are approximately 15 different devices available for new activations such as samsung solstice 2, palm pixi, sharp fx, pantech impact, samsung strive, etc.
Please kindly respond and let me know how you would like to proceed either postpaid or prepaid.  I will be looking for your response.
Annelle M
September 9: me
Dear Annelle,

Thank you for your diligence.
There is absolutely no way I am consenting to disclosing this highly sensitive information for a $9.99/month fee. Furthermore, my impression was that AT&T charges one month ahead, so there is no credit involved. Is this not correct?  
This feels like a frivolous usurpation of what is reasonable. And given the frequent reports of confidential data breaches, I am not interested in subjecting myself to this risk. I will have to withhold this portion of my business from AT&T and go with the pay-as-you-go scheme. 
Thank you. I would appreciate if this issue were brought to the attention of the management of AT&T so that a more reasonable policy can be developed. 
This was the last correspondence, and I purchased a Tracfone for him. Researching it made me think seriously about why more people are not using it and why we continue to allow ourselves to be strong-armed by these mammoth corporations who in a purely Orwellian example of doublespeak want to convince us that handing over the reins to them is a good thing for us. Tracfone: no activation fee, no contract, no intrusive credit check, pay as you go, full texting, web and e-mail capabilities, reasonable rates. What more does one need, especially as an emergency use phone for a kid?

So, thank you, AT&T, I am eternally impressed with and grateful for your customer service.  


Tuesday, September 6, 2011

Supersize me, the AT&T way

File this under "etc."

I had one simple goal -- add a line to my AT&T family wireless plan for my son. I know, I know, I did check other carriers' plans and deals, since my contract with AT&T ran out long ago. But for various reasons I decided to stay with them. Why trade a known headache for an unknown one? Anyway, I went online with every intention of being finished in the space of 10 minutes. How naïve...

I wanted to get a basic plan, with just voice and minimal text messaging -- say 20 texts per month -- so that he can reach us in case of an emergency, but not abuse this disembodied conversation mode. And you would think that it would be easy to get this, right? Well, not so much. The only text option that came up on the screen was unlimited texting for $20/month. This was infinitely more than I needed, so I initiated a chat with a representative. He was effusively polite, and every time I volunteered information or responded to a question he thanked me very much for sharing this information. After several volleys, in which I thought I had conveyed my dilemma clearly and succinctly, he came back with "So, if I am understanding you correctly, you wish to change the text messaging plan on your phone." I did a rapid-fire "No, no, no," trying to get ahead of his rogue fingers as I imagined them poised to hit "change plan." You see, I am still traumatized from a recent experience with the cordial AT&T customer service representatives who "helped" me with an issue.

Just to give you the flavor for that episode, in the process of resolving a signal issue, they implemented such changes in my texting and data plans as to require several weeks of phone calls with the AT&T business office, a call and a correspondence with the Attorney General of Massachusetts, and a follow-up call with AT&T on the heels of their communication with the AG. So, no, I am not interested in having them "help" me with plan changes. I politely excused myself from the chat and decided to tackle it on my own.

But first I chose to distract myself from the problem at hand by doing another task that I had meant to do: increase my monthly minutes. This I was able to do without any glitches, and my success encouraged me to try again with the new line. Perhaps I missed something the first time?

Having considered my choices at this point, I made the decision that I would pay for a limited number of text messages for my son, at $.20/message, and this would take care of things. Smugly congratulating myself on such a creative solution, I went to complete the purchase. After addressing just a couple of minor issues stemming from the fact that my billing address is a PO Box and not a street address, and because lately everyone except the USPS has decided that I have played a joke on them by giving a non-existent address (don't get me started on the joys of living in rural America), I was almost home. I just needed to input my credit card information and... Wait a second! What's this? A credit check consent? I had to give them my social security number and consent to a credit check? Because I tacked an additional $9.99 monthly service fee to my (much larger than that) bill? After being a customer for nearly 6 years? After being able to up my monthly minutes by more than $9.99/month, without being subjected to a credit check?!!

Well, I did what anyone in my position would have done: I ignored this prompt hoping that it was optional. But it wasn't. Fill it in or else take yourself to a brick-and-mortar store to get this settled. Which is what I am choosing to do. But there is a larger moral here.

How did we get to this place, where our anti-trust protections have resulted in basically two gargantuan corporations screwing the public in any way they see fit? Why do they get to dictate the devices and services that I need to purchase? How is this a free market? And how is it that this technology, whose intent is to make our lives so much easier, made me go through a bewildering amount of useless machinations only to end up with what? An offer to take my social security number and subject me to a credit check? Really?

And lest AT&T feel singled out by my rant, this is the trend with many events and purchases in life. Home insurance, for example, which, despite rising premiums, does not give you a penny towards rebuilding a retaining wall that collapses in a flood. In fact, the system is set up in such a way as to require you to file a claim, get it rejected and give the company the reason to fire you as a customer for filing too many claims. Health insurance (I don't have to remind you of the galloping pace of the rise in those premiums), which covers less and less every year. Cable companies, computer manufacturers, automobile vendors, they are the ones that seem to know better than I what it is that I need, and they constantly and with impunity wrestle me into straightjackets of their packages. Where am I, the customer, in all this? This old familiar strategy to maximize returns has been so successful in the food business that its legacy is the obesity epidemic, proliferation of chronic disease and shortening life spans. Is this really how we want to continue?

I will get that line for my son, and I will get only what I need. I would prefer not to be so dependent on this stuff; alas, I am. But mark my words, there is enough bad taste building among my fellow humans to start exploring alternatives. I only wish that the government were really in the business of protecting its citizens from unethical practices rather than pandering to the highest bidder. I am ready to stop being viewed as a giant walking ROI potential, and start being respected as a citizen and a human. How about you?                

Thursday, September 1, 2011

You want to know #6?

Actually, it should really be #1. I am referring to the list I blogged yesterday of my top 5 reasons for rejecting a manuscript. The most important reason, which I failed to mention, is...

... drum roll, please...

6. No "Limitations" paragraph
This is something that no manuscript should neglect, as every study, even the most well designed and executed randomized controlled trial, has limitations. So, in every paper that I write, my third paragraph from the end is devoted to the laundry list of limitations. And it should not be merely a laundry list, no. Each limitation mentioned needs to be put into the context of how it may have influenced the results, directionally and magnitudinally (oh, whatever), if applicable.

So, no limitations paragraph, no "Accept" from me!  

Wednesday, August 31, 2011

My top 5 reasons for rejecting a manuscript

Here are five manuscript transgressions that make me hit "Reject" faster than you can blink. The first four in particular do not instill confidence in what you actually did in the study.

1. Matched cohort masquerading as a case-control
This happens quite a bit with papers submitted to third and fourth tier journals, but watch out for it anywhere. The authors claim to have done a matched case-control study, where there is indeed matching. However, the selection of participants in the study is based on the exposure variable, rather than the outcome. Why is this important? Well, for one, the design informs the structure of the analyses. But even more fundamentally, I am really into definitions in science because they allow us to make sure we are talking about the same thing. And the definition of a case-control study is that it starts with the end -- that is to say, the outcome defines the case. So, if you are exploring whether a Cox-2 inhibitor is associated with mortality from heart disease, do not tell me that your "cases" were defined by taking the Cox-2 and controls were the ones that did not take it. If you are enrolling based on exposure, even if you are matching on such variables as age, gender, etc., this is still a COHORT STUDY!  It is a different story that this may not be the most efficient way to answer the particular example question, and a real case-control might be better. In order to call your study case-control, you need to define your cases as those who experienced the outcome, death in our example, making the controls those that did not die. I know that this explanation leaves a thick shroud over some of the very important details of how to choose the counterfactual, etc., but that is outside the scope here. Just get the design right, for crissakes.
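To make the definitional point concrete, here is a toy sketch with entirely made-up records. The only thing it illustrates is which variable drives enrollment in each design:

```python
# Hypothetical patient records for illustration only.
patients = [
    {"took_cox2": True,  "died": True},
    {"took_cox2": True,  "died": False},
    {"took_cox2": False, "died": True},
    {"took_cox2": False, "died": False},
]

# Cohort study: enrollment keys off the EXPOSURE, then follows for the outcome.
cohort_exposed   = [p for p in patients if p["took_cox2"]]
cohort_unexposed = [p for p in patients if not p["took_cox2"]]

# Case-control study: enrollment keys off the OUTCOME, then looks back at exposure.
cases    = [p for p in patients if p["died"]]
controls = [p for p in patients if not p["died"]]
```

If your selection code looks like the first pair of lists, you ran a cohort study, no matter how much matching you did afterwards.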

2. Incidence or prevalence, and where is the denominator?
I cannot tell you how annoying it is to see someone cite the incidence of something as a percentage. But as annoying as this is, it alone does not get an automatic rejection. What does is when someone reports this "incidence" in a study that is in fact a matched cohort. By definition, matching means that you are not including the entire denominator of the population of interest, so whatever the prevalence of the exposure may seem to be in a matched cohort is the direct result of your muscling it into this particular mold. In other words, say you are matching 2:1 unexposed to exposed, the exposure is smoking, and the outcome of interest is the development of lung disease. First, if you are telling me that 10% of the smokers developed lung disease in the time frame, please, please, call it a prevalence and not an incidence. Incidence must incorporate a uniform time factor in the denominator (e.g., per person-year). And second, do not tell me what the "incidence" of smoking was based on your cohort -- by definition, in your group of subjects smoking will be present in 1/3 of the group. Unless you have measured the prevalence of smoking in the parent cohort BEFORE you did your matching, I am not interested. This is just stupid and thoughtless, so it definitely gets an automatic reject (or a strong question mark at the very least).

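To make the arithmetic concrete, here is a minimal sketch with made-up numbers (1,000 smokers, 4,200 person-years of follow-up, 100 cases -- all invented for illustration). The point is simply that a percentage has people in the denominator, while an incidence rate has person-time:

```python
# Hypothetical follow-up data for the smokers in the example above.
cases = 100            # developed lung disease during follow-up
n_subjects = 1000      # smokers enrolled
person_years = 4200.0  # total observed person-time

# A simple proportion: cases per subject, no time in the denominator.
proportion = cases / n_subjects        # 0.10, i.e., "10% developed disease"

# An incidence rate: cases per unit of person-time.
incidence_rate = cases / person_years  # ~0.0238 cases per person-year

print(f"Proportion affected: {proportion:.1%}")
print(f"Incidence rate: {incidence_rate * 1000:.1f} per 1,000 person-years")
```

The first number is legitimate to report; it just must not be labeled an incidence, because it carries no time factor.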
3. Analysis that does not explore the stated hypothesis
I just reviewed a paper that initially asked an interesting question (this is how they get you to agree to review), but ended up turning the hypothesis on its head and being completely inane. Broadly, the investigators claimed to be interested in how a certain exposure impacts mortality, a legitimate question to ask. As I was reading through the paper, and as I could not make heads or tails of the Methods section, it slowly began to dawn on me that the authors went after the opposite of what they promised: they started to look for the predictors of what they had set up as the exposure variable! Now, this can sometimes still be legit, but the exposure variable needs to be already recognized as somehow relating to the outcome of interest (hey, surrogate endpoints, anyone?). This was not the case here. So, please, authors, do look back on your hypothesis once in a while as you are actually performing the study and writing up your results.

4. Stick to the hypothesis, can the advertising
I recently rejected a paper that asked a legitimate question, but, in addition to doing a shoddy job with the analyses and the reporting, did the one thing that is an absolute no-no: it reported on a specific analysis of the impact of a single drug on the outcome of interest. And yes, you guessed it, the sponsor of the study was the manufacturer of the drug in question. And naturally, the drug looked particularly good in the analysis. I am not against manufacturer-sponsored studies, and even those that end up shedding positive light on their products. What I am against is random results of random analyses that look positive for their drug without any justification or planning. So, all of this notwithstanding, the situation might have been tolerable, had the authors made a credible case for why it was reasonable to expect this drug to have the salutary effect, citing either theoretical considerations or prior evidence. They of course would have had to incorporate it into their a priori hypothesis. Otherwise this is just advertising, a random shot in the dark, not an academic pursuit of knowledge.

5. Language is a fraught but important issue
I do not want to get into the argument about whether publishing in English language journals brings more status than in non-English language ones. This is not the issue. What I do want to point out, and this is true for both native and non-native English speakers, is that if you cannot make yourself understood, I have neither the time nor the ability to read your mind. If you are sending a paper to an English language journal, do make your arguments clearly, do make sure that your sentence structure is correct, and do use constructions that I will understand. It is not that I do not want to read foreign studies, no. In fact, you have no idea just how important it is to have data from geopolitically diverse areas. No, what I am saying is that I volunteer my time to be on Editorial Boards and as a peer reviewer, and I just do not have the leisure to spend hours unraveling the hidden meaning of a linguistically encrypted paper. And even if I did, I assure you, you are leaving a lot to the reviewer's idiosyncratic interpretation. So, please, if you do not write in English well, give your data a chance by having an editor take a look at your manuscript BEFORE you hit the submit button.

Friday, August 26, 2011

Botox and empathy: Less is more

I am kind of stuck on this whole Botox-empathy thing. A recent study from researchers at Duke and UCLA implied that people who get Botox to attenuate their wrinkles also seem to attenuate their empathic ability. Somehow their inability to mimic others' facial expressions impairs the firing of their mirror neurons and they stop feeling empathy. Wow!

But think of it -- Botulinum toxin, arguably one of the most potent poisons known to humans, is being used essentially recreationally as a drug, quite possibly an addictive one. Who thought this was a good idea? OK, don't answer that.

To be sure, the same toxin in a therapeutic preparation can help people with paralysis release painful contractures, and this is a wonderful advance. Just as morphine is a terrific pain reliever under the right circumstances. But used recreationally? Everyone is aware of the havoc it can wreak, both personally and societally. So, how did we justify allowing this most potent of all poisons to be injected into perfectly healthy (and beautiful, I might add) aging faces?

File this under "Go figure." Another opportunity for "less is more."

Thursday, August 25, 2011

Side effects: The subject must become the scientist

A few weeks ago someone I know, a normally robust and energetic woman, began to feel fatigued and listless, and had some strange sensations in her chest. She presented to her primary care MD, who obtained an EKG and a full panel of blood tests. The former showed some non-specific changes, while the latter was entirely normal. Although reassured, she continued to experience malaise. When she fetched her EKG, she received a copy with the computer interpretation indicating that, in its wisdom, the program could not rule out a heart attack. Given that her symptoms continued, and now anxiety was piled on top, she presented to the ED, where a heart attack was excluded, and she was scheduled for a stress test. In the subsequent weeks the symptoms continued off and on, and the stress test turned out to be negative for coronary disease. Great, mazel tov!

What I failed to mention was that just prior to the onset of her symptoms, she had been started on 5-fluorouracil cream for a basal cell skin cancer. And while she did not commit my current device of omission with her doctors (including the dermatologist who prescribed the drug), all dismissed her constellation of symptoms as a potential side effect. And granted, when I looked it up, there was no mention of anything like fatigue and listlessness. So, does it mean that it is not within the realm of the possible that this drug was responsible?

Not at all. And here is why. Our adverse event reporting is essentially a discretionary system. Here is what the FDA says about their Adverse Event Reporting System (AERS):
Reporting of adverse events from the point of care is voluntary in the United States. FDA receives some adverse event and medication error reports directly from health care professionals (such as physicians, pharmacists, nurses and others) and consumers (such as patients, family members, lawyers and others). Healthcare professionals and consumers may also report these events to the products’ manufacturers. If a manufacturer receives an adverse event report, it is required to send the report to FDA as specified by regulations. 
What this means is that, when a patient complains to a doctor of a symptom, even when its onset is in obvious proximity to a particular medication, the doctor is not compelled to report it. The most an average physician will do is look up the known AE profile of the drug and at best check its interactions with other medications. But one is not generally inclined to use one's imagination (and the constraints of shrinking appointment times spread across exponentially growing cognitive loads conspire against it too) to entertain the possibility that the current problem is related. And yet since many AEs are quite rare, the knowledge about them must necessarily rely on scrupulous reporting by the prescribers into a central repository. This is what is missing: not the repository, but the impetus to report.

So, when we go looking up side effects of a given medication, we must take the information for what it is: a woefully incomplete list of what has been experienced by other patients. And when someone asks "Do statins make you stupid," instead of denying the possibility, we should just admit that we don't know. Because once drugs are released by the FDA into the wild of our modern healthcare, by relying on others' reports of AEs we become inadvertent enablers of our ignorance about them.

My friend's symptoms abated after she finished the course of the 5-FU cream. None of the MDs bothered to report her symptoms to the AERS, nor did she. I am not even sure that any of the players were aware of the possibility. Oh, well, an opportunity lost. We need to feel responsible for gathering this knowledge. The subject must be empowered to become the scientist; this is the only way we can get the full picture of the harm-benefit balance of our considerable and unruly pharmacopeia.

If you want to report a possible side effect of a medication, this FDA web page will guide you through the process.

Wednesday, August 17, 2011

Counterfactuals: I know you are, but what am I?

It occurs to me that as we talk more and more about personalized medicine, the tension between the need for individual vs. group data is likely to intensify. And with it, it is important to have the vocabulary to articulate the role for each.

The scientific method, in order to disprove the null hypothesis, demands highly controlled experimental conditions, where only a single exposure is altered. While this is feasible when dealing with chemical reactions in a beaker, and even, to a great extent, with bacteria and single cells in a petri dish, the proposition becomes a whole lot more complicated in higher order biology. Here the phrase "all things being equal" must really apply to the individuals or groups under study.

We call this formulation "the theory of counterfactuals," and it is defined in the following way by the researchers at the University of North Carolina (see slide #3 in the presentation):
Theory of Counterfactuals
The fact is that some people receive treatment.
The counterfactual question is: “What would have happened to those who, in fact, did receive treatment, if they had not received treatment (or the converse)?”
Counterfactuals cannot be seen or heard—we can only create an estimate of them.
Take care to utilize appropriate counterfactual
So, essentially what it means is figuring out what would have happened to, for example, Uncle Joe if he had not smoked 2 packs of cigarettes per day for 30 years. Now, the complexity of the human organism makes it impossible (so far) to replicate Uncle Joe precisely in the laboratory, so we must settle for individuals or groups of individuals that resemble Uncle Joe in most if not all identifiable ways in order to understand the isolated effect of heavy smoking on his health outcomes.
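The "settle for look-alikes" logic above can be sketched in a few lines. This is a deliberately crude illustration with invented data: for each smoker we borrow the outcome of a non-smoker with the same age and sex, and treat that as the estimate of the smoker's unobservable counterfactual. (Real matching estimators handle many covariates, ties, and unmatched subjects; none of that is attempted here.)

```python
# Invented subjects: 1 = bad health outcome, 0 = no bad outcome.
smokers = [
    {"age": 60, "sex": "M", "bad_outcome": 1},
    {"age": 55, "sex": "F", "bad_outcome": 1},
]
nonsmokers = [
    {"age": 60, "sex": "M", "bad_outcome": 0},
    {"age": 55, "sex": "F", "bad_outcome": 1},
    {"age": 40, "sex": "M", "bad_outcome": 0},
]

def match(person, pool):
    """Return the first pool member with the same age and sex, if any."""
    for candidate in pool:
        if candidate["age"] == person["age"] and candidate["sex"] == person["sex"]:
            return candidate
    return None

# Each matched non-smoker stands in for a smoker's counterfactual self.
pairs = [(s, match(s, nonsmokers)) for s in smokers]
diffs = [s["bad_outcome"] - m["bad_outcome"] for s, m in pairs if m is not None]
effect = sum(diffs) / len(diffs)  # average outcome difference: (1 + 0) / 2 = 0.5
```

The estimate is only as good as the resemblance: anything that differs between Uncle Joe and his stand-in, and that we failed to match on, contaminates the comparison.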

So, you see the challenge? This is why we argue about the validity of study designs to answer clinical questions. This is why a randomized controlled trial is viewed as the pinnacle of validity, since in it, just by the sheer force of randomness in the Universe, we expect to get two groups that match in every way except the exposure in question, such as a drug or another therapy. This is why we work so hard statistically in observational studies to assure that the outcome under examination is really due to the exposure of interest (e.g., smoking), "all other things being equal."

But no matter how we slice this pie, this equality can only be approached, but never truly reached. And this asymptotic relationship of our experimental design to reality may be OK in some instances, yet not nearly precise enough in others. We just cannot know the complete picture, since we only have partial information on how the human animal really works. And this is precisely what makes our struggle to infer causality problematic, and precisely what introduces uncertainty into our conclusions.

What is the answer? Is it better to rely on individual experience or group data? As always, I find myself leaning toward the middle. Because an individual's experience is prone to many influences, both internal, such as cognitive biases, and external, such as variations in response under different circumstances, it is not valid to extrapolate this experience to a group. In the same vein, because groups represent a conglomeration of individual experiences, smoothing out the inherent variabilities which ultimately determine the individual results, study data are also difficult to apply to individuals. For this reason medicine should be a hybrid of the two: the make-up of the patient can partly fit into the larger set of persons with similar characteristics, yet also jut out into the perilous territory of idiosyncratic individuality. This is precisely what makes medicine so imprecise. This is precisely the tension between the science and the art of medicine. Because "counterfactuals cannot be seen or heard," Uncle Joe!