Technical question (for Lane or anyone) about "P<" jargon in clinical trial results
onward
Posted: Sunday, April 14, 2013 9:00 AM
Joined: 12/20/2011
Posts: 217


Lane (or anyone), can you help me understand something?  (I suspect you may have already explained this somewhere but I've forgotten.)

In papers about clinical trials, I'm always seeing results reported as something like, for example, "P<0.01" or "P<0.05."

Can you explain in simple language what the heck that means?

And in these examples - "P<0.01" or "P<0.05" - which result indicates greater improvement?

Sometimes reports of clinical trial results for nutritional supplements may indicate improvement, but if the improvement is minuscule, it's hardly worth even bothering to try the supplement.

What sort of "P" result should we be looking for to indicate a really significant improvement, one that's likely to noticeably help the "patient" and be visible to a caregiver?  Thanks.

 


Lane Simonian
Posted: Sunday, April 14, 2013 9:31 AM
Joined: 12/12/2011
Posts: 4863


Years ago I took a statistics class and did very poorly in it, so I am probably (no pun intended) not the best person to answer this question (and perhaps someone like SunnyCA will jump in with a more thorough and better explanation). 

 

A p value is the probability of getting a result at least as large as the one observed purely by chance (that is, if the treatment actually had no effect).  The lower the p value, the more confidence that the result was not just chance.

 

 

According to one website, "In most sciences, results yielding a p-value of .05 are considered on the borderline of statistical significance. If the p-value is under .01, results are considered statistically significant and if it's below .005 they are considered highly statistically significant."

 

So a p value of .01 (one in a hundred) shows a lower probability that something happened by chance than a p value of .05 (five in a hundred).
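To make the "by chance" idea concrete, here is a rough sketch in Python (all of the numbers are made up, and real papers get their p values from standard formulas rather than a simulation like this).  It pretends the supplement does nothing at all and counts how often a difference as big as the one "observed" still shows up:

# Hypothetical illustration: if the supplement truly does nothing, how often
# does a difference this large appear by luck alone?
import numpy as np

rng = np.random.default_rng(0)

observed_difference = 2.0   # made-up improvement "seen" in the trial
n_per_group = 30            # made-up number of patients per group
n_simulations = 10_000

hits = 0
for _ in range(n_simulations):
    # Both groups drawn from the same distribution: no real treatment effect.
    placebo = rng.normal(loc=0.0, scale=5.0, size=n_per_group)
    treated = rng.normal(loc=0.0, scale=5.0, size=n_per_group)
    if treated.mean() - placebo.mean() >= observed_difference:
        hits += 1

print("Fraction of useless-treatment trials with a difference this big:",
      hits / n_simulations)
# If that fraction comes out below 0.05, the result would be reported as "P < 0.05".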

 

The p value is calculated from the distribution of results you would expect by chance.  The larger the sample size, the narrower that distribution, so a genuine effect is more likely to show up as a small p value.
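Here is another small, made-up sketch (Python again, with invented numbers) showing the sample-size point: the same average improvement gives a much smaller p value when more patients are enrolled.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

for n in (20, 200):
    # Same true improvement (2 points on a made-up scale) in both cases;
    # only the number of patients per group changes.
    placebo = rng.normal(loc=0.0, scale=5.0, size=n)
    treated = rng.normal(loc=2.0, scale=5.0, size=n)
    t_stat, p_value = stats.ttest_ind(treated, placebo)
    print(n, "patients per group: p =", round(p_value, 4))

# The small trial may not reach p < .05 at all, while the large one usually
# does, even though the underlying improvement is the same size.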

 

The degree of improvement would be based on something else: tests to measure cognition, psychiatric behavior, functions of daily living, etc.

  

 

 


onward
Posted: Sunday, April 14, 2013 10:29 AM
Joined: 12/20/2011
Posts: 217


Thanks, Lane.  That helps. 

If I'm understanding right, it makes sense that, in general, lower P scores should correlate with greater clinical improvement, because the greater the clinical improvement, the less likely that it happened by chance...??  (Does that make sense?)
 

 

I guess what I'm really trying to figure out is... 

 

when I read a paper giving the results of a "successful" clinical trial, is there a simple way to tell whether the improvement was big enough to make a really significant, noticeable, noteworthy change in a person's condition? 


Lane Simonian
Posted: Sunday, April 14, 2013 11:54 AM
Joined: 12/12/2011
Posts: 4863


That's right, Onward--a lower p score would indicate a lower probability that the improvement happened by chance.  The hard part is judging how meaningful the result on the test used to measure the improvement really is.  I read that with solanezumab there was about a forty percent improvement in a subgroup with mild Alzheimer's disease, but someone commented that this amounted to the difference of remembering a word or two (I just located the article; link below).  And the drug seemed to have no effect on functions of daily living.

 

http://www.huffingtonpost.com/2012/10/08/solanezumab-alzheimers-drug_n_1948969.html 

 

So first you have to see whether the treatment produced statistically significant results (i.e., results unlikely to be due to chance), and then you have to assess whether the change measured by the test was large enough to be meaningful.
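A last made-up sketch of that point (the numbers are invented, not solanezumab's): a tiny improvement can still come out "statistically significant" if the trial is big enough, so the p value by itself can't tell you whether the change would be noticeable to the person or the caregiver.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Imagine a cognitive scale where a change of about 4 points is noticeable.
trials = [
    ("tiny gain, huge trial", 0.5, 5000),   # 0.5-point average gain, 5000 per group
    ("larger gain, small trial", 4.0, 60),  # 4-point average gain, 60 per group
]

for label, true_gain, n in trials:
    placebo = rng.normal(loc=0.0, scale=5.0, size=n)
    treated = rng.normal(loc=true_gain, scale=5.0, size=n)
    _, p = stats.ttest_ind(treated, placebo)
    observed_gain = treated.mean() - placebo.mean()
    print(label, ": gain =", round(observed_gain, 1), "points, p =", round(p, 4))

# Both can show p < .05, but only the second gain is big enough to matter
# day to day, which is why the size of the change has to be judged
# separately from the p value.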