It seems simple enough to answer the question of whether something poses a risk.  The answers can only be "yes", "no", or "we don't know".  A "yes" would then be qualified by the probability or likelihood of the risk, as well as the context in which it exists.  A "no" should be definitive, with no exceptions, while a "we don't know" is ambiguous enough to suggest that there is, as yet, no definite answer.

A recent article on the risks of cell phones illustrates one of the reasons why the public tends to be distrustful of many of these findings.  It is clear that games are being played and agendas being driven.

Let me say that, at present, it appears that the only truthful answer to the question of cell phone risk is in the direction of "we don't know", although there are indications that it isn't likely to be an unequivocal "no".

In the first place we find that some of the more positive research results were funded by the industry itself and appear to have been less than rigorous.
"But a subgroup of studies that employed more rigorous methodology -- most conducted by the same research team in Sweden -- reported a harmful effect, whereas a set of less rigorous studies -- most funded by an industry consortium -- found a protective effect."
Simply being funded by an interested party isn't sufficient grounds to ignore the results or assume bias; however, what is more disconcerting is the prepared statement issued by the industry regarding these studies.
"In addition, there is no known mechanism for microwave energy within the limits established by the [U.S. Federal Communications Commission] to cause any adverse health effects," he said. "That is why the leading global health organizations such as the American Cancer Society, [U.S.] National Cancer Institute, World Health Organization and the U.S. Food and Drug Administration all have concurred that wireless devices are not a public health risk."
What makes people suspicious are the carefully chosen phrases such as "within the limits established" and "wireless devices".  In short, if studies have been published, why resort to an argument from authority to establish a risk assessment?  None of the organizations mentioned has provided conclusive research and results on cell phone use, so to invoke them is misleading at best.  In truth, the industry knows that it cannot make the statement that "cell phones pose zero risk to users".

The senior author of the study indicated that "clearly there is a risk", but even with that statement there is no indication of the probability of harm, nor the context in which it could occur.

This raises the other issue of trustworthiness, because many of the risks being evaluated are based on statistical analysis, which can be useful in general population studies but often doesn't convey much useful information to the individual.  As an example, consider the risk of being struck by lightning.  The National Safety Council (NSC) website gives a lifetime risk of 1 in 79,399.  However, it would be hard to argue that the risk is the same for a two-month-old infant in a crib and a golfer on the course in a thunderstorm.  In addition, it appears that you have a greater likelihood of being legally executed (70,577:1) than of being struck by lightning [1].

While there is nothing wrong with these statistics as a general statement of expectations for a population, it would be erroneous to suggest that they convey any real information to an individual unless a context is clearly established.  The risk of being bitten by a venomous reptile is obviously proportional to your likelihood of exposure to such reptiles.  Therefore, to discuss lifetime risks of such events is disingenuous without more clarification [2].  Similarly, when statistics are used to assess other types of risks, it becomes increasingly important to understand what the conditions and exceptions are.
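To make the point concrete, here is a small sketch of how a single population-wide rate hides wildly different individual risks.  The numbers are made up for illustration; they are not the NSC's figures or anyone's actual exposure data.

```python
# Illustrative only: hypothetical numbers, not real NSC data.
# Two subgroups with very different exposure to lightning.
p_high = 1 / 1_000        # hypothetical lifetime risk for frequent storm-season golfers
p_low = 1 / 1_000_000     # hypothetical risk for someone rarely outdoors
share_high = 0.01         # assume 1% of the population is in the high-exposure group

# A published "population" rate is just the exposure-weighted average.
p_population = share_high * p_high + (1 - share_high) * p_low

print(f"population rate: 1 in {1 / p_population:,.0f}")
print(f"high-exposure risk vs low-exposure risk: {p_high / p_low:,.0f}x")
```

The averaged figure sits between the two groups and describes neither of them: under these assumptions the golfer's risk is a thousand times the homebody's, yet both would be handed the same "lifetime odds".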

What it all comes down to is that the public is becoming increasingly suspicious of parsed statements, legal loopholes, and the misapplication of statistics to make decisions about what is "statistically significant" in assessing risks.  It makes little difference whether we think someone is paranoid or foolish; the only person who should be making decisions about what constitutes an acceptable risk is the individual who is exposed to it.

[1] Often risks are minimized by suggesting that driving a car is such a high-risk activity that almost everything else pales in comparison, and yet we continue to do it.  However, it is interesting that according to the NSC statistics, an even higher risk comes from yourself.  You are almost twice as likely to kill yourself (117:1) as to be assaulted (210:1) or to die in a car accident (261:1).

[2] This specifically needs to be considered when examining the risks of drugs like Gardasil.  While I'm neither indicating nor suggesting that there is any risk, it is incorrect to use "lifetime" statistics when the first decade of life carries zero risk, because including that decade dilutes the overall assessment of risk.  Especially when this is coupled with short-term studies, it is completely misleading to suggest that the risk can be projected when one hasn't even followed a patient for more than a few years.  It would be like projecting a driver's risk of getting a traffic ticket or having an accident when one is 8 years old.
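The dilution described in this footnote is simple arithmetic to sketch.  The rates below are invented for illustration and say nothing about any actual drug; the point is only the mechanism by which zero-risk years pull down a "lifetime" average.

```python
# Illustrative only: hypothetical rates, not data about any actual drug.
years_zero = 10      # first decade: zero risk by definition in this sketch
years_exposed = 60   # remaining years carrying some annual risk
p_annual = 0.002     # hypothetical annual risk during the exposed period

lifetime_years = years_zero + years_exposed
# A naive "lifetime" average spreads the exposed-period risk over all years,
# including the years that contribute nothing but dilution.
p_lifetime_avg = (years_exposed * p_annual) / lifetime_years

print(f"risk per year while actually at risk: {p_annual:.4f}")
print(f"diluted lifetime average per year:    {p_lifetime_avg:.4f}")
```

The averaged figure is always lower than the risk during the period when the risk actually exists, and the more zero-risk time is folded in, the lower it gets.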