Here's a great piece in Nature on why p-values are a poor foundation for scientific conclusions: http://www.nature.com/news/scientific-method-statistical-errors-1.14700.

I particularly enjoyed the part on effect sizes: "Critics also bemoan the way that P values can encourage muddled thinking. A prime example is their tendency to deflect attention from the actual size of an effect." I've been ranting at my colleagues about this for years: you may have measured so many cells that you can detect a 1% change in some signal with p < 0.01, but does that mean anything for the actual biological system?
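The "many cells" problem is easy to demonstrate with a quick simulation (the numbers below are invented for illustration, not from the article): with a large enough sample, even a 1% shift in mean signal produces a p-value far below 0.01.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000                            # "so many cells"
control = rng.normal(100.0, 5.0, n)   # baseline signal
treated = rng.normal(101.0, 5.0, n)   # a mere 1% increase in the mean

t, p = stats.ttest_ind(control, treated)
print(f"p = {p:.2e}")                 # far below 0.01, yet the effect is tiny
```

The p-value is minuscule, but nothing about it tells you whether a 1% change matters to the biology.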

Instead of demanding a particular p-value, first do experiments to determine what effect size would actually matter for your system, and then use (more appropriate) statistics to figure out whether your measured effect is at least that large.
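One way to do this, sketched below with made-up data and a made-up threshold, is to move the null hypothesis: instead of testing against zero effect, test whether the difference exceeds your minimum biologically meaningful effect (here assumed to be a shift of 10 signal units). Shifting one group by that threshold and running a one-sided t-test is a simple version of this idea.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(100.0, 5.0, 50)
treated = rng.normal(115.0, 5.0, 50)   # ~15% increase (simulated)

delta_min = 10.0   # smallest change that matters biologically (an assumption)

# H0: mean(treated) - mean(control) <= delta_min
# Shift the treated group down by delta_min, then test one-sided.
t, p = stats.ttest_ind(treated - delta_min, control, alternative="greater")
print(f"p = {p:.3g}")   # small p => effect is credibly larger than delta_min
```

A small p-value here supports a claim worth making: not merely "the effect is nonzero", but "the effect is at least as large as the one we decided matters".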