Intratumor Heterogeneity (aka, Things Have Many Parts)

To say that personalized medicine has hit a “bump in the road” with a study published this week in the New England Journal of Medicine reflects the expectation that targeted drugs for cancer (or other diseases) ought to be just around the corner. After all, now that we know it’s possible to target the abnormal cellular mechanisms triggered by genetic mutations with laboratory-synthesized chemicals (Gleevec for CML, the recently approved Xalkori (generic name, crizotinib) for lung cancer), shouldn’t we just figure out the abnormalities in each kind of tumor and then create drugs to address whatever haywire situation results? Calling this study a “bump,” as the Wall Street Journal does, makes it seem as if this weren’t an entirely new field. No – this new study simply adds more to the story about genes and cancer.

Researchers from the UK’s Medical Research Council, Cancer Research UK, and other UK institutions examined “intratumor heterogeneity.” That is, they looked at the amount of genetic variation within a single tumor. They found that more than 60% of the known somatic (i.e., acquired rather than inherited) genetic mutations were not present in every sample from a tumor. In other words, there is a lot of variety going on inside tumors.

That’s an enormous amount of variation. It means the odds that a tumor biopsy will contain the essential genetic information may be quite slim. It also points to the extraordinary capacity that tumors have to evolve rapidly, in part explaining why cancer often becomes resistant to whatever drug is attacking it.

The concern here is that if a researcher has a biopsy from only one region of a tumor, mutations present elsewhere in the tumor could be missed. The fact that a mutation can be seen in one sample and not another adds another layer of difficulty to the task of sorting out which are the “driver” mutations—that is, which genetic mutations are triggering the cancer—and which mutations are merely passive.

If you have tens of thousands of genes to work with in a particular tumor sample, sorting the causative variations from the changes that accumulate as the tumor evolves is a long, arduous task. It’s for this reason that the idea of personalized medicine—and here we are talking specifically about drugs targeted against the genetic make-up of an individual cancer, not about a whole-person regimen for life based on your personal DNA quirks—is one that calls for a long view. It took decades for the first useful chemotherapy drug to be discovered. If we absorb the notion that targeted therapy is still in its nascent stage, then this new study isn’t a bump in the road, but rather another description of the scenery.


The Story of Peyton Rous and Chicken Cancer

“Tumors destroy man in a unique and appalling way, as flesh of his own flesh which has somehow been rendered proliferative, rampant, predatory and ungovernable.”


How Much Money Do Drug Companies Pay the FDA?

The Prescription Drug User Fee Act became law in 1992. PDUFA (or, as most people say it, “padoofa”) allows the FDA to collect fees from pharmaceutical companies filing new drug applications. A new drug application, or NDA, is the submission through which a company asks the FDA to review and approve a new drug. Updated performance goals for 2013 through 2017—such goals have been part of PDUFA from the start—were issued this past September.

PDUFA was created mainly in response to complaints from consumers, the pharmaceutical industry, and the FDA itself that drug approvals were taking too long. The FDA said that without more money, it would never have enough staff to review applications at a pace that satisfied the public and the industry, and that met the needs of patients awaiting better treatments. In that light, the arrangement seems reasonable: drug companies are private, for-profit businesses that require regulatory approval of products they want to bring to market, so it seems right that they should pay for the time a government agency spends reviewing the compound.

On the other hand, another point of view holds that PDUFA spells not just padoofa but trouble, because it puts the FDA in the pockets of the drug industry. In the same way that doctors are accused of opening themselves to bias when they accept consulting or speaker fees from a drug company, the FDA has been accused of kowtowing to the pharmaceutical industry, approving drugs that perhaps shouldn’t be approved for one reason or another, and allowing the committees assembled to review NDAs to be stacked with conflicts of interest.

All of the fees paid to the FDA are a matter of public record. But few people who have tried to navigate the FDA’s labyrinthine website have lived to tell the tale. So, apropos of nothing, here are some of the relevant numbers.

In FY 2012, the fee for filing an NDA that requires clinical data is $1,841,500. For an application that does not require clinical data, the fee is $920,750.

The most recent year for which payment amounts are available is 2010. In FY 2010, the total paid to the FDA in application fees was $172,238,150. Establishment fees (another, smaller component of PDUFA) totaled $183,328,513. Product fees (yet another, still smaller component) came to $173,709,880. (Fun task: see if you can make heads or tails of the definitions of establishment fees and product fees in this FAQ on PDUFA.) That brings the grand total of PDUFA fees collected in FY 2010 to $529,276,543.

According to the FDA, this total almost covered all of the expenses associated with NDA reviews, which are: personnel compensation and benefits; travel and transportation; rent; communications; contract services; equipment and supplies; and other. In FY 2010, that total came to $573,258,400. (In 2009, the total was $512,051,400. You can see the two breakdowns here.)
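For anyone who likes to see the arithmetic, here is a minimal sketch using only the figures quoted above (nothing from the FDA’s own accounting) that adds up the FY 2010 fee components and compares them with the reported review expenses:

```python
# PDUFA fees collected in FY 2010, as quoted above (dollars)
application_fees   = 172_238_150
establishment_fees = 183_328_513
product_fees       = 173_709_880

total_fees = application_fees + establishment_fees + product_fees
print(f"Total PDUFA fees, FY 2010: ${total_fees:,}")                      # $529,276,543

# FDA's reported cost of the NDA review process in FY 2010
review_expenses = 573_258_400
print(f"Shortfall covered elsewhere: ${review_expenses - total_fees:,}")  # $43,981,857
```

Which squares with the FDA’s statement that the fees almost covered the review budget—a bit over 92% of it that year.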

Interestingly, the total number of NDAs filed in 2010 was 86, the lowest in the past five years. However, the number of priority NDAs — applications for more urgently needed drugs, such as those to treat rare diseases with highly limited treatment options — remained steady. The approval time for priority applications was about 9 months in 2009, the most recent year with meaningful data. In 2010, the percentage of approvals made during the first cycle of review decreased for the third straight year, which might indicate a more stringent review process. (According to the FDA, about 80% of all filed applications will eventually be approved.)

In one sense, it could be said that if the branch of the FDA involved in reviewing NDAs has its budget covered mainly by PDUFA fees rather than taxpayer dollars, then that part of the agency operates almost like a private company. At the same time, it would seem strange if taxpayers were covering the NDA expense and then being charged again when they purchase the drugs. Yet another tangled web woven.



“A Spot on His Lung”

My father-in-law is from a small Greek island called Chios. Ocean breezes permeated his youth, which was spent making toy boats out of sticks and whatever else he and his friends could find, making whatever mischief he could, and fully enjoying being alive.


Avastin for Ovarian Cancer

You’ve probably already seen the headlines noting the disappointing outcomes of two phase III studies of bevacizumab (brand name Avastin, made by Genentech) for ovarian cancer. I’ve been trying to familiarize myself with the data this morning—I am pretty certain that practicing interpreting study reports can add years to my brain functioning—and figured I’d post a quick summary here.

There were two phase III studies published in the New England Journal of Medicine on December 29. (And don’t you want to know how that came about? Were the studies planned from the get-go to end at around the same time? Or were the analyses specifically timed to be reported simultaneously? And were there women with ovarian cancer being treated with Avastin off-label in the meantime, while these studies, funded in part by public dollars, were being prepared for publication?) One study looked at a couple of different ways of incorporating the drug into primary treatment. The other study was a direct comparison of two regimens.

First, the more complicated trial. Here, 1,873 women with stage III or IV ovarian cancer were randomly assigned (double-blind) to one of three groups:

Group 1: Chemotherapy in cycles 1–6, with placebo added in cycles 2–22.
Group 2: Chemotherapy in cycles 1–6, with bevacizumab added in cycles 2–6 and placebo in cycles 7–22.
Group 3: Chemotherapy in cycles 1–6, with bevacizumab added in cycles 2–22.

The primary endpoint was progression-free survival. Maybe this is common knowledge by now, but in case not, let’s remember that PFS is not a measure of real benefit. Rather, it’s a surrogate marker, indicating a potential for actual benefit. The real benefit would be if the addition of bevacizumab improved overall survival by a clinically meaningful amount of time. PFS does not necessarily indicate (in fact, it rarely indicates) an extension in actual survival time.

But, the primary endpoint was PFS, so here are those numbers. For Group 1, the median PFS was 10.3 months; for Group 2, 11.2 months; for Group 3, 14.1 months. Bevacizumab (or placebo) was continued for up to 10 months after chemotherapy ended. The authors note that there was no significant difference in overall survival at the time the data were analyzed, although it’s important to remember that OS wasn’t the primary endpoint, so even a negative outcome there isn’t necessarily meaningful. And the rate of hypertension requiring medical intervention was at least double among women receiving bevacizumab (7.2% for Group 1, 16.5% for Group 2, and 22.9% for Group 3).

The conclusion in the study’s abstract is interesting. It reads:

The use of bevacizumab during and up to 10 months after carboplatin and paclitaxel chemotherapy prolongs the median progression-free survival by about 4 months in patients with advanced epithelial ovarian cancer.

Would you agree with that statement, based on the numbers above?
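For what it’s worth, here is the arithmetic behind that “about 4 months,” in a few lines of Python. It uses only the medians quoted above—a sketch, not a re-analysis of the trial data:

```python
median_pfs_months = {
    "Group 1 (chemo + placebo)":            10.3,
    "Group 2 (bev during chemo only)":      11.2,
    "Group 3 (bev during and after chemo)": 14.1,
}

baseline = median_pfs_months["Group 1 (chemo + placebo)"]
for group, pfs in median_pfs_months.items():
    print(f"{group}: {pfs} months (gain over chemo alone: {pfs - baseline:.1f} months)")
# Group 3's gain works out to 3.8 months -- the "about 4 months" in the abstract.
```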

Now, the other study. Here, 1,528 women from 11 countries were treated with either carboplatin + paclitaxel, or this chemotherapy combination plus bevacizumab. I haven’t read the full study, only the abstract, so I don’t know what the primary endpoint, determined before the study launched, was intended to be. However, the abstract states that the outcome measures were PFS analyzed per protocol, then PFS updated, and interim overall survival. From this, I’d infer that the primary endpoint was PFS at 36 months (the updated PFS was at 42 months).

So, the results. PFS at 36 months for the control arm (standard chemotherapy) was 20.3 months, meaning that women receiving the chemotherapy combination went a median of 20.3 months without their cancer progressing. Women who received chemotherapy plus bevacizumab experienced disease progression at a median of 21.8 months. At 42 months, the PFS figures were 22.4 months and 24.1 months, respectively.

The authors also provide a subset analysis for women who were at high risk for progression when they enrolled in the study; that was about 30% of the study population. Among this group, PFS at 42 months was 14.5 and 18.1 months, respectively. Overall survival for these women was 28.8 months and 36.6 months, respectively.

The abstract conclusion for this study reads:

Bevacizumab improved progression-free survival in women with ovarian cancer. The benefits with respect to both progression-free and overall survival were greater among those at high risk for disease progression.

Again, would you agree with that statement? Here is a great example of distinguishing between numbers that are statistically significant and numbers that are clinically meaningful.
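Again, a few lines of Python make the absolute differences easy to see. These are just the medians quoted above, nothing more:

```python
# (control, bevacizumab) median PFS or OS in months, as reported above
comparisons = {
    "PFS at 36 months (all women)":        (20.3, 21.8),
    "PFS at 42 months (all women)":        (22.4, 24.1),
    "PFS at 42 months (high-risk subset)": (14.5, 18.1),
    "OS (high-risk subset)":               (28.8, 36.6),
}

for label, (control, bev) in comparisons.items():
    print(f"{label}: difference = {bev - control:.1f} months")
# In the overall population the gain is 1.5 to 1.7 months; the larger differences
# show up only in the high-risk subset analysis.
```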

Many news outlets reported on these studies. One interesting article by the Los Angeles Times provided a look at the cost of this treatment. This article (a blog post, actually) draws on a study from the Journal of Clinical Oncology (JCO) analyzing the cost-effectiveness of bevacizumab in ovarian cancer. This study was published back in April 2011, six months before the NEJM reports of these two phase 3 trials.

The JCO cost-effectiveness analysis was based on preliminary data from the three-group study discussed above (the trial is also known as Gynecologic Oncology Group, or GOG, 218). The analysis estimated the cost of treating 600 patients on each arm of that study, using the baseline estimates of PFS and of bowel perforation, a side effect of bevacizumab. According to the analysis, the cost of treating the women who received chemotherapy alone was $2.5 million. The cost for Group 2 was $21.4 million. The cost for Group 3 was $78.3 million. Based on these numbers, the “incremental cost-effectiveness ratio”—which measures the cost per year of progression-free life gained—was over $400,000 for both bevacizumab-containing groups. Reducing the cost of the drug by 25% brought that number down to $100,000 for Group 3. But treatment of bowel perforation adds costs that drive the figure back up.
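To get a feel for where a number like that comes from, here is a rough back-of-the-envelope sketch in Python. It uses only figures already quoted in this post (the arm costs, 600 patients per arm, and the roughly 3.8-month PFS gain from GOG-218); the published JCO model is more sophisticated and also accounts for bowel perforation, so treat this strictly as an illustration of the idea of an incremental cost-effectiveness ratio:

```python
patients_per_arm = 600

cost_chemo_alone = 2_500_000    # Group 1 total cost (dollars)
cost_group3      = 78_300_000   # Group 3 total cost (dollars)

extra_cost_per_patient = (cost_group3 - cost_chemo_alone) / patients_per_arm  # ~$126,000

pfs_gain_years = 3.8 / 12   # the ~4-month median PFS gain, expressed in years

icer = extra_cost_per_patient / pfs_gain_years
print(f"Rough incremental cost per progression-free year: ${icer:,.0f}")  # ~$399,000
```

Despite leaving most of the model out, that lands in the same ballpark as the published figure.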

That cost-effectiveness analysis concluded:

The addition of bevacizumab to standard chemotherapy in patients with advanced ovarian cancer is not cost effective. Treatment with maintenance bevacizumab leads to improved PFS but is associated with both direct and indirect costs. The cost effectiveness of bevacizumab in the adjuvant treatment of ovarian cancer is primarily dependent on drug costs.

You can have nothing but compassion for anyone facing ovarian cancer, and women with this disease are definitely in need of improved treatments. It appears that bevacizumab probably isn’t the next big advance on the horizon, though.

All that being said, you have to wonder if the search for biomarkers that might indicate the likelihood of someone benefiting from Avastin will yield fruit. Time will tell. But it might be a long, long time.


Sharing News

Hello out there, and happy 2012.

Work in Progress has gone a little quiet lately, and that will change soon. Though not today. In the meantime, I wanted to share some nice news, which is that my book-in-progress is now official. “The Philadelphia Chromosome – The Epic Quest to Tame a Single Deadly Gene” will be published in Spring 2013 by The Experiment, a NY-based independent publishing company.

This story will chart the forty years of pathbreaking discoveries that first unraveled the link between DNA and cancer, and the subsequent struggle to bring to market the drug that cures one particularly deadly cancer (chronic myelogenous leukemia).

Many readers of this here blog will be familiar with some parts of this story; in particular, the development of Gleevec and some of the tales of woe and heroism that led to this drug’s success and blockbuster status. But there is so much more to tell. So, so, so much more – from the first spotting of the Ph mutation in 1959, back in time to how science figured out that humans have 46 chromosomes (a fascinating story in its own right that involves castrated patients from psychiatric institutes among other strange facts, twists, and turns), and into the advent of personalized medicine, or at least our fixation with its potential. Most importantly, this story provides a clear-eyed, behind-the-scenes look at how drugs are made: the science, the medicine, the money, the bureaucracy, and all of the other very human aspects that guide the bringing of a new medicine into the world.

I am totally jazzed, thrilled to the gills, and over the moon to have a chance to put it all down on paper. After living with a book proposal for almost five years, I’m still pinching myself. Now the task is to do the story justice. It’s an incredible tale that has so much relevance to our world today.

Thanks for reading this little update. I’ll still be posting here regularly, hopefully even more frequently, during the next few months, so – more soon.


In the Headlines: Use Less Medicine

Just a quick post, because I hadn’t planned to write here today, but these two headlines jumped out and screamed to be linked to.

First, there’s this one, from Reuters, an impressively lengthy article about cancer screening and the difficulty of conveying the message that more screening is not always better. Beginning with a recap of the 2009 recommendation by the US Preventive Services Task Force that women under 50 could skip routine mammograms (Dr. Ned Calonge, who presided over that Task Force, received two death threats), the article looks at where things now stand with regard to informing the public about the need to dial back screening for some conditions.

I won’t summarize the entire article here because my daughter is going to wake up from her nap any minute and we have another headline to get to. But what’s interesting about this article — aside from its length, which really is impressive considering the short shrift so often given to anything without the word “breakthrough” in it — is that it’s focused on a movement toward reducing the amount of health care we receive, specifically mammograms and PSA tests. It also shows how closely tied screening recommendations are to political issues.

Next up is this article from US News & World Report on new concerns that targeted radiation (also known as brachytherapy) is being overused. The article is based on a new study in the December 16 online issue of the Journal of the National Cancer Institute (which, by the way, is run independently of the National Cancer Institute) documenting the dramatic increase in targeted radiation treatment for breast cancer. The main point of the article is that this treatment modality is being used in women considered unsuitable according to the latest guidelines from ASTRO (the American Society for Radiation Oncology). Importantly, the increase in use was happening before ASTRO released those guidelines. But that doesn’t justify the use – in fact, it highlights a key issue in healthcare, and especially in cancer treatment: the integration of new treatments into care before there is concrete evidence of true benefit.

According to the JNCI study, among 138,815 U.S. women diagnosed with breast cancer from 2000 to 2007, about 2.6% received brachytherapy. Of that 2.6%, 29.6% were considered “cautionary” candidates for the treatment, and 36.2% were considered “unsuitable.” Again, these classifications were made after the fact—there were no guidelines classifying women as suitable, cautionary, or unsuitable candidates for brachytherapy during the years from which the data were collected. But the point that targeted radiation may be overused (or was overused in recent years) is clear regardless of the timing. There are more details in the news article and the published study.
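To translate those percentages-of-a-percentage into people, here is a quick, hedged calculation; the rounding is mine, and the study itself reports the exact counts:

```python
total_diagnosed   = 138_815
brachytherapy_pct = 0.026    # ~2.6% received brachytherapy
cautionary_pct    = 0.296    # of those, ~29.6% were "cautionary" candidates
unsuitable_pct    = 0.362    # and ~36.2% were "unsuitable"

received = total_diagnosed * brachytherapy_pct
print(f"Received brachytherapy: ~{received:,.0f} women")              # about 3,609
print(f"  of whom 'cautionary': ~{received * cautionary_pct:,.0f}")   # about 1,068
print(f"  of whom 'unsuitable': ~{received * unsuitable_pct:,.0f}")   # about 1,307
```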

And then there’s this press release from UCSF about widespread mammogram use detecting lower-risk breast cancer, based on this new study (PDF) in Breast Cancer Research and Treatment. The message and purpose of this particular study are a little confusing, but I think they are trying to show that screening can help separate high-risk and low-risk tumors, so that women with low-risk breast cancer do not receive unnecessary treatment. Here is the last paragraph of the study’s discussion section:

The observation that a substantial fraction of screen-detected cancers have low and ultralow risk is valuable information. These types of cancers may account for the cases that others consider “overdiagnosis” [24]. However, when we initiate screening, we do not know which women are likely to develop ultralow risk or IDLE tumors. We can, however, recognize that such tumors are commonly identified today, discuss this with our patients, and perform tests that elucidate the underlying biology of the tumors detected. We can use this information to guide treatment recommendations and as the basis for the development of clinical trials that test the safety of less aggressive treatments for patients with the lowest risk tumors.

So, two headlines and one sort-of newsy item focused on the idea of overdiagnosis and overtreatment. Because the mix of stories on any given day usually isn’t themed along those lines, it seemed noteworthy to see them back to back.

But lest you think you have entered some kind of alternative medical universe, fear not! There’s also:
this
(“…generating buzz about a potential breakthrough that could transform cancer treatment.”)
and this
(Avastin for nasopharyngeal cancer?)
and this
(disputing the alleged risks of brachytherapy for breast cancer discussed at a recent presentation at the San Antonio Breast Cancer Symposium, having nothing to do with the JNCI study above)
and this
(it may help you now, but it will hurt you later)

All right, I lied, not such a short post after all. Who knew the nap would be this long? Maybe I am the one in the alternative universe.


Smoke-Screening

My past couple of posts have included a lot of praise for others—the speakers who’ve been presenting sessions at the Medical Evidence Boot Camp this week—and I hope you’re not sick of the praise yet, because today was no different. Barnett (Barry) Kramer, MD, MPH, Director of the Division of Cancer Prevention at the National Cancer Institute, completely pulled back the curtain on cancer screening, with evidence and insights that revealed the field to be rife with smokescreens.

I hope to write about this area of cancer care more extensively some time, but a few snapshots for now. First, a simple illustration of how screening can improve survival rates without changing anything at all. Many readers might know about this already, but – it’s called lead time bias. The simplified explanation that Barry Kramer gave today was this: say 100 people are diagnosed with a type of cancer that leads to death four years later for all of them, without exception. What is the 5-year survival rate? Zero. Now say you detect the cancer two years earlier. Those 100 people still live exactly as long. But now the 5-year survival rate is 100%. Nothing has changed. The cancer hasn’t been treated any better, and the people haven’t actually lived any longer. They’ve only lived for a longer amount of time with the diagnosis.
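Here is that example reduced to a few lines of Python — a toy calculation using exactly the numbers above, nothing more:

```python
def five_year_survival(years_lived_after_diagnosis):
    """Fraction of patients still alive 5 years after diagnosis."""
    alive = sum(1 for y in years_lived_after_diagnosis if y > 5)
    return alive / len(years_lived_after_diagnosis)

# 100 patients who all die 4 years after their cancer is diagnosed the usual way
usual_diagnosis = [4] * 100

# Same 100 patients, same date of death, but screening finds the cancer 2 years
# sooner, so each one "survives" 6 years after diagnosis
earlier_diagnosis = [4 + 2] * 100

print(five_year_survival(usual_diagnosis))    # 0.0
print(five_year_survival(earlier_diagnosis))  # 1.0
```

The statistic jumps from 0% to 100% even though not a single patient lives a day longer.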

Dr. Kramer spent a few hours discussing screening with us, and to be clear, he’s not “anti-screening.” In fact, he talked about which screening tests he thinks are good or worth considering, and for which ones data are emerging that will help define their potential usefulness. But he raised several red flags. Selection bias is another problem – the population of people who get screened may not represent the larger population of people who get a particular type of cancer. (By analogy: Why do Volvos have such a low accident rate? Because people who tend to drive safely think, “I’m a safe driver! I’m going to buy the safe car.” And they don’t get into accidents because they drive safely, and so Volvos keep on having the lowest rate of accidents.) Then there is length bias in screening, which has to do with the fact that some cancers are very slow growing. This is oversimplified, but basically, at any given point in time there are more people living with slow-growing cancers than with fast-growing ones.

The slow-growing ones may not be harmful at all—many will not grow, or will disappear on their own, or will grow so slowly that the individual dies of something else before the cancer does any harm—but more of them are detected by screening simply because more of them exist at any given time. Rapidly progressing cancers, which tend to be the more dangerous ones, can escape detection by screening precisely because they grow quickly; a tumor can emerge, grow, and cause symptoms between screenings. Length bias leads to overdiagnosis, because tumors that never would have caused harm end up being subjected to treatment that can leave a person worse off than they were or ever would have been. As Dr. Kramer put it (in my paraphrase), treating people who are healthy rarely leads to anything good.
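A toy simulation makes length bias easy to see. The numbers here are entirely invented (an 8-year detectable window for slow-growing tumors, a 6-month window for fast-growing ones, screening every 2 years); the only point is the direction of the effect:

```python
import random

random.seed(0)

SCREEN_INTERVAL = 2.0                        # years between screening rounds (invented)
window_years = {"slow": 8.0, "fast": 0.5}    # invented: how long each tumor type is
                                             # detectable before it causes symptoms

def screen_detected(kind):
    # The detectable window opens at a random moment between two screens; the tumor
    # is screen-detected only if the window is still open when the next screen arrives.
    time_until_next_screen = random.uniform(0, SCREEN_INTERVAL)
    return window_years[kind] >= time_until_next_screen

tumors = ["slow"] * 500 + ["fast"] * 500     # equal numbers of each type arise
detected = [kind for kind in tumors if screen_detected(kind)]

print("slow-growing share of all tumors:            50%")
print(f"slow-growing share of screen-detected ones: {detected.count('slow') / len(detected):.0%}")
# Screening over-samples the slow, often indolent tumors -- the raw material of overdiagnosis.
```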

There was so much more to think about, but one of the big messages that finally started sinking in today is that there are things in healthcare that do more harm than good. We all know this, right? It’s something I’ve heard again and again and again. Even in the course of these few days, we’ve been hearing the question: does the benefit outweigh the risk? But by the end of today it was finally taking hold of my little brain: there are interventions that do more harm than good. That’s something to just stop and think about. It’s like we have a default setting that believes interventions are good, and so even when we hear the words “this may do more harm than good,” the message is sort of watered down by this underlying adherence to the belief that no, no, no, really there must be some good in it, just maybe not that much good. But that just isn’t true. I know screening is a hugely contentious issue, and personally I find it very difficult to believe that certain screening tests are not helpful, and could be, or maybe will most definitely be, harmful. And yet.

Early detection can lead to unnecessary treatments, which can have severe, negative consequences for a person’s health and well-being. Just being labeled a “cancer patient” can have terrible fallout, especially considering that the cancer itself might never have harmed the person’s health at all.

Again, this is not to say that all screening is bad, or that it should not be done. But clearly there is a lot more room for thoughtful discussion, and for having the courage to sometimes just say no.


We’ve Been Framed

Today’s sessions at the “boot camp” were gripping in so many ways. First, I have to recommend looking up any and all research done by Steve Woloshin and Lisa Schwartz. This dynamic duo are co-directors of the Center for Medicine and the Media at The Dartmouth Institute for Health Policy and Clinical Practice, and co-directors of the VA Outcomes Group, VAMC, in White River Junction, VT. They are wonderful speakers—plain-spoken, fact-based, and charismatic. It doesn’t hurt that they’re also very funny. I wrote here several months ago about Overdiagnosed, a book they co-authored with Dr. Gil Welch, and today’s sessions with them were along the same lines as that book: facts and insights that are revelatory because of how different they are from the messages we normally hear, and from the way of thinking we’ve allowed ourselves to be trained into.

One of the key points made today was the importance of framing, something advertisers know and study but which we regular folks might not think about too much. Or at least not this regular folk. One example was the Evista ad that claimed a 68% reduction in vertebral fractures over one year versus placebo. But 68% of what? In whom? When you parse the data of the study from which that 68% figure was derived, it turns out that over one year, the risk of vertebral fracture with Evista was about one third that seen with placebo. But here’s the thing: the actual risks were 0.83% versus 0.27%. Those small numbers strike quite a different chord than the large-font, bold-faced 68%. Data can also be framed visually in ways that distort. An effect can look very dramatic if you scale it to look dramatic—say, with a Y axis that runs only from 0 to 1, where small differences end up looking enormous.

They also gave a wonderful analogy for understanding terms like “absolute risk” and “relative risk.” The analogy is shopping. The absolute risk in the control group can be thought of as the regular price of something. The absolute risk in the intervention group is the sale price. Relative risk—what we’re talking about when we say that something carries X times the risk of something else—is a fraction with the sale price as the numerator and the regular price as the denominator. Relative risk reduction is the “percent off”: how big a discount the sale price is from the regular price. And absolute risk reduction is the actual savings—the regular price minus the sale price. I love this analogy, and how it lets us understand the data presented in an abstract or a full study in a way that, well, maybe doesn’t make our brains hurt so much, and definitely clarifies exactly what is being said and what really happened in the study. (As you can probably guess, what *actually* happened in a study is often far more modest than any subsequent presentation of the data—in an ad, a press release, a scientific meeting, or a newspaper article—might lead us to believe.)
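Putting the Evista numbers through that analogy takes only a few lines. I’m assuming the 0.83% is the placebo (regular-price) risk and the 0.27% is the Evista (sale-price) risk, since that’s the only assignment consistent with the advertised 68% reduction:

```python
regular_price = 0.83 / 100   # absolute risk of vertebral fracture on placebo (assumed)
sale_price    = 0.27 / 100   # absolute risk on Evista (assumed)

relative_risk           = sale_price / regular_price   # ~0.33: "one third the risk"
relative_risk_reduction = 1 - relative_risk            # ~0.67: the ad's big, bold "68%"
                                                       # (68 presumably comes from the unrounded trial data)
absolute_risk_reduction = regular_price - sale_price   # 0.0056: about half a percentage point

print(f"relative risk reduction: {relative_risk_reduction:.0%}")
print(f"absolute risk reduction: {absolute_risk_reduction:.2%}")
```

Same study, same result: a 67–68% discount, or savings of about half a percentage point, depending on how you frame it.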

There was much, much, much more, and all I can say is if you ever have a chance to attend a course with Steve Woloshin and Lisa Schwartz, run, don’t walk. In the meantime, here’s a link to their latest book.

Equally vital was our session with Sidney Wolfe, director of Public Citizen’s Health Research Group. This session was a real Matrix moment for me, filled with information that upended some of the ways I had been thinking about drug development and the pharmaceutical industry. I was really holding out hope that things weren’t as bad as many people think, and now, well, I have to reexamine my thinking, particularly in light of the many examples Dr. Wolfe gave today of how slow the FDA has been to recall unsafe drugs. Maybe I can post more about that another day, but for now I wanted to make sure to link to HRG’s website, www.citizen.org/hrg. I really recommend checking it out when you have the time.


What Is a P Value?

This week I am incredibly fortunate to be attending the Knight Science Journalism program’s Medical Evidence Boot Camp. Today was Day One, with Jennifer Croswell, MD, MPH, of the Agency for Healthcare Research and Quality, and it was thoroughly compelling. She broke down, piece by piece, how medical studies are designed, how risk is quantified—for example, the very critical difference between absolute risk and relative risk—what various statistics mean, and how clinical guidelines are created.

I would like to put all of it down in a blog post, because the information is so critical to understanding medicine and being able to parse the constant influx of results clamoring for our attention. But alas, as is so often the case, the spirit is willing but the flesh is weak. In other words, it’s almost time for dinner. So, in lieu of a full recap, here’s a tidbit that I loved (among the many tidbits I loved equally as much).

The P value. The statistic to which we’ve trained our eyes to run when presented with an abstract. Here is something I didn’t know: what the P stands for. Simply, probability. Speaking of probability, probably many of you already knew this. But I didn’t. I always thought, “P means statistically significant” but somehow never managed to connect P with probability. A small matter, but still.

Also, .05—you know, the number we always treat as the barometer of significance? It’s an arbitrary cutoff, an agreed-upon threshold for the probability that a result at least as extreme as the one observed would arise by chance alone.

And here is my favorite gem: a lower P value does not mean that the findings are more dramatic. Here is where I risk looking foolish! I always thought that if the P value was really, really, really low—like .00000001—the study findings were somehow better than if the P value was simply .04. Nope! It doesn’t mean that at all. Both of those P values tell you the same basic thing: that the observed effect, whatever its size, would be unlikely to arise by chance alone. (As an example of how P values can be misused, the excellent Jennifer Croswell showed us an advertisement in which a company touts a P value of .000009 for its recent study.)

Lastly, P values are NOT a measure of the quality of a study. A study could have been put together in a terrible way and still produce a statistically significant P value. P values are not a mark of trial quality, and they are not a measure of the importance of the finding. Croswell gave us a (I think invented) example of a drug that lowers fever in children by 0.1 degree. The result was statistically significant, but is lowering a fever from 103.5 to 103.4 meaningful? No.
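That fever example is easy to reproduce. Below is a hedged sketch—simulated data with made-up sample sizes, not anything from an actual trial—showing how a clinically meaningless 0.1-degree difference earns an astronomically small P value once the groups are large enough:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical temperatures (deg F) in two very large groups of feverish children.
# The "drug" lowers temperature by a trivial 0.1 degrees on average.
placebo = rng.normal(loc=103.5, scale=1.0, size=50_000)
drug    = rng.normal(loc=103.4, scale=1.0, size=50_000)

t_stat, p_value = stats.ttest_ind(drug, placebo)
print(f"difference in means: {placebo.mean() - drug.mean():.2f} degrees")
print(f"P value: {p_value:.1e}")   # on the order of 1e-50: "significant," but meaningless
```

The effect didn’t get any bigger; the sample did.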

So, hopefully there will be more to come this week, but that’s a little bit from today’s incredible wealth of information and the accompanying rich and valuable discussion.
