The P&T committee members shuffled into the 7:30 a.m. meeting, eagerly sniffing the air for familiar signs of coffee and bagels. The agenda began, as they all begin, with approval of the previous meeting’s minutes, which virtually nobody had read; an announcement that we would have to move at a clip, since there was much to cover; and the list of the bimonthly new formulary contestants—most with unpronounceable generic names.
The tempo was maintained at an even keel for the first 15 minutes, with bleats of “all-those-in-favor-approved” occurring at regular intervals. And then it was time to introduce one of the staff physicians, making his case for approval of an experimental drug therapy. He greeted the audience most graciously, the way I do with ticket agents when trying to sneak an oversized piece of carry-on luggage onto an airplane.
Before describing the proposed treatment, the physician said that the particular target subgroup of patients needing the experimental drug was often neglected, that heart disease was prevalent, and that there weren’t too many existing therapies. With the therapeutic “worthiness” of the patient population established, the physician then described the intervention and calmly asked for approval to implement a new intravenous (IV) drug protocol.
The staff pharmacists, who had previewed the meeting agenda, sat erect in their chairs, poised to pounce. The attending physicians looked conflicted.
“What is the evidence for safety?” somebody ventured. “What is the evidence for efficacy? Why should we approve an intravenous, risky medication when there is a paucity of data?”
“Show me the evidence,” the crowd began to growl.
The speaker responded that he had access to a multicenter database of patients who had received the experimental therapy and said that he would be delighted to share this with the group.
“Is the information published?” someone inquired querulously.
“No, but it’s solid data,” the physician responded, “and you can feel free to look through all the files.” He added, “Several other prominent institutions are using this drug around the country.”
A few pharmacists looked at me, the supposed evidence-based medicine “expert” who perhaps should have given some sage advice about what constituted enough evidence. But I sank perceptibly lower in my chair, just as helpless as my colleagues, in fact perhaps more so, because I had often asked myself these questions: When is there enough evidence to support the use of a new treatment? What type of evidence is acceptable? When is there enough evidence to dictate a change in therapy? Whose threshold should we use?
The physician-presenter clearly felt attacked and didn’t understand why he was bearing the brunt of his peers’ skepticism—why other drugs and therapies seemed to have been approved so much more easily. Was he merely subject to the random nature of P&T meetings? Five drugs had been approved for the formulary before he spoke; did the committee perhaps think it was time to reject? Was there any rhyme or reason for the members’ criticism?
Once more, he offered his database to the tribunal. “We don’t want observational data,” proclaimed one clinician. “Come back when you can show us some randomized, controlled trials. We want to see the rigorous evidence before we use a potentially harmful IV medication.”
This scenario is more straightforward than most: the advocate’s database had not been published and had not even been provided to the P&T committee at the time of the physician’s presentation; the physician had not sought institutional review board (IRB) approval; there were signs of disagreement among his fellow cardiologists at the P&T meeting; and the drug itself was classified as experimental.
In contrast, most cases are rarely as clear-cut as this one. Often, we P&T committee members are presented with apparently robust evidence from manufacturers, but just how rigorous are the study designs? Can we rely upon the potentially biased observations of the pharmaceutical industry in deciding which new drugs to approve or reject? Should we embrace therapies that have been requested (for formulary consideration) by physicians who have “personal preferences” for use of drugs? What should we do when P&T committee members have potential conflicts of interest—does serving as a speaker or a panel member on behalf of a pharmaceutical company disqualify one as a judge for P&T formulary approval?
The physician, who had solicited our approval so politely, departed from the room after being denied, likely hot under the collar and confused, perhaps feeling like the victim of an arbitrary committee ruling. I, too, experienced a bit of a philosophical conundrum at the end of the meeting, knowing that we had made the right decision but wondering whether we had really understood our own rationale. We asked for evidence, but what did we mean? Would one randomized trial be enough? How about four or five articles? When is there enough evidence to justify adding drugs to the formulary or approving them for use in our patients?
Examples of such dilemmas abound: When was there enough evidence to indicate that beta-blockers, long contraindicated for congestive heart failure, should be used in a therapeutic manner for this condition? Did primary care practitioners need to wait for cardiologists to pave the way—and only then jump on the bandwagon? When was there enough evidence to indicate that radical mastectomy should be abandoned as a mainstay therapy for breast cancer and that a modified radical mastectomy or a simple mastectomy or lumpectomy could be substituted? How many women needed to undergo removal of their pectoralis major muscles before we said “enough”? When was there sufficient evidence to supplant beta agonists with inhaled corticosteroids as maintenance therapy for patients with asthma? When was the evidence adequate to change recommendations about the use of hormone replacement therapy in postmenopausal women?
These are the relatively obvious examples—we have had randomized, controlled trials to guide us in these situations. In medicine today, far more controversy abounds. For instance: At what age should we begin general screening for breast cancer? When should we order prostate-specific antigen (PSA) tests for prostate cancer? When should we offer surgical therapy for squamous cell lung cancer? When should we begin to treat high cholesterol levels?
Some people can render their verdicts on medical issues with confidence when the scientific jury is out, but will “average” physicians, pharmacists, nurses, and patients agree with one another? Whose value system do we use? Why does evidence-based medicine leave us so bereft at times like this?
Although medicine strives to be completely scientific, it is riddled with art and anecdote. Knowing when there is enough evidence is like knowing a comfortable room temperature when one feels it; it’s all in the skin of the beholder.
We have guidelines on how to critically appraise literature, but we have no irrefutable rules about how to reconcile the frequently disparate findings from studies. Understanding the limitations of evidence-based medicine is therefore a crucial skill, and this should help guide our application of it. Deciding which drugs should be added to formularies is often at best a subjective activity, even if we exclude the issue of cost-effectiveness and affordability, which is increasingly entertained in P&T meetings across the country.
One of the favorite tactics of P&T committees is to rely upon other groups to decide for us first—we wait for the Food and Drug Administration to approve, for the IRB to endorse, or for another committee in our hospital to give its blessing. Yet, are we not then merely rubber-stamping the decisions of others without carefully reviewing the evidence for ourselves?
I think that we were right to send the speaker back to the laboratory to gather more supportive data and then to have the data examined in a peer-reviewed manner before being asked for our approval. However, we did not adequately explain to him and to the well-intentioned knights of our round table what we were seeking: why some drugs meet with quick approval and others fall to the cutting-room floor. Occasionally at P&T committee meetings, we do accept drugs that have a strong political groundswell behind them and not much evidence; sometimes we try to keep up with the Joneses at other institutions, or there might be fiscal agendas, or we worry that we might be depriving our patients of something useful.
Serving on P&T committees is not for the faint-hearted. If one has a conscience, one is always asking: Did I do the right thing? Did I apply my review criteria correctly and as objectively as possible? Did I read the thick handout before the meeting and examine all the relevant facts, or did I just nod my head in agreement, without subjecting the material to serious scientific scrutiny? Did I and others allow a single vociferous physician to derail a new institution-wide policy on avoiding overuse of a certain antibiotic, increasingly associated with pathogen resistance, and approve the use of it on a “case-by-case basis” merely because we were striving for compromise and goodwill, and because it was too difficult to say no?
It’s easy to chant “show me the evidence” to the proponents of new, unproven therapies. It’s a bit more difficult to remove therapies already on the formulary when there are clashing institutional and individual priorities and when the evidence might be conflicting or not crystal-clear. Understanding the nuances inherent in medical evidence, what constitutes an evidence-based review, and to what standard we should hold ourselves in making formulary decisions is quite a different kettle of fish.
I think that it would be helpful for us to concede openly that different levels of evidence exist, that observational studies might be the best form of evidence available for some diagnostic and therapeutic issues, and that case reports might be the only evidence in existence at a given point in time. We would do well to stay mindful of the phenomenon of “publication bias” as we assess the medical literature and do our best to weigh the traditionally more abundant evidence on benefits along with the scarcer evidence on harm, if both types can be found, before we embrace new therapeutics.
It would also be wise to acknowledge the huge role that consensus has played, and still plays, in medical and formulary decision-making—consensus that is largely based on so-called expert opinion and personal experience, often because there really is no other evidence.
Don’t get me wrong; evidence-based medicine can be a great teaching tool and a valuable instrument in our little black bags. This approach has helped us to comprehend the subtleties of study designs, has encouraged us to apply finer methodologies in research, has helped us understand how to assess the value of clinical trials, and has made us more cautious in our application of research. Yet many people are using the label of “evidence-based medicine” in capricious ways; they are applying it when they do a partial review of the literature and find material to support a particular position, or they are minimizing the importance of controversial new research results when the findings challenge historical notions of how to treat patients, with the justification that the evidence is too anecdotal or limited.
Evidence-based medicine is an imperfect science, and we should develop a greater appreciation for its promises and pitfalls. The “evidence” is a moving target, and we need to be open to revisiting decisions based upon the changing literature landscape. Moreover, when we demand certain strengths of evidence for some diagnostic tests and therapies and not for others, and when we accept a new drug because physicians on the committee “want access to it” regardless of existing evidence, then we are misapplying the evidence-based tool.
I doubt that we can or should develop a one-size-fits-all approach to P&T committee decision-making, just as we cannot or should not do so with regard to clinical decision-making. Even though the knowledge base of evidence is ever-increasing and even though some of the most rigorous evidence exists for medication issues, we do not have evidence to answer all relevant questions that are raised at P&T committee meetings. Yet becoming more self-aware about our decisions, and acknowledging that they are generally not based on an even application of the evidence, might help us to refine our review technique so that we might make more internally consistent and honest decisions. And for those who occasionally lose sleep after a P&T meeting, such reflection might help assuage a bit of guilt.