This routine-sounding phrase is of potentially enormous significance to the way that risk assessment and chemical regulation are conducted in this country. The referenced document is Science and Decisions: Advancing Risk Assessment, a 2009 National Research Council report that made controversial recommendations on how dose-response curves should be formulated. The NRC committee that prepared the report wrote that:
Non-cancer effects do not necessarily have a threshold, or low-dose nonlinearity…. Scientific and risk-management considerations both support unification of cancer and non-cancer dose-response assessment approaches. (Summary, p. 8)
The idea of “linearity,” that an extrapolation of data points along the dose-response curve should pass through the origin (the point of zero dose and zero response), makes sense from a statistical standpoint. Indeed, statisticians were well represented on the committee.
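To make the statistical intuition concrete, here is a minimal sketch (in Python, with made-up dose-response numbers rather than anything from the report) of a least-squares line constrained through the origin. The point to notice is that such a model assigns a nonzero predicted response to every nonzero dose, no matter how small.

```python
# Least-squares fit of a no-intercept line, y = slope * x.
# The doses and responses below are hypothetical, invented for illustration.

doses = [10.0, 20.0, 40.0, 80.0]      # hypothetical dose levels (arbitrary units)
responses = [0.8, 2.1, 4.2, 7.9]      # hypothetical observed responses

# With the intercept forced to zero, the least-squares slope is sum(x*y) / sum(x*x).
slope = sum(d * r for d, r in zip(doses, responses)) / sum(d * d for d in doses)

print(f"fitted slope: {slope:.4f}")
# A line through the origin never predicts zero response for a nonzero dose.
print(f"predicted response at dose 0.001: {slope * 0.001:.6f}")
```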
However, linearity of non-cancer effects departs from decades of accepted understanding of biological mechanisms and of how those mechanisms shape the dose-response relationship. Through the processes of detoxification and cell repair, organisms respond to lower levels of toxic exposure without suffering adverse health effects.
This capacity gives rise to the concept of a threshold of effect. Naturally occurring levels of numerous toxicants provide evidence of the threshold at work: without the ability to respond to these exposures through detoxification and repair, human life would long ago have been extinguished on this planet.
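A threshold model captures this biology. The piecewise “hockey-stick” form below is one common textbook way to express the idea (the threshold and slope values are invented for illustration, not drawn from the report): excess response is held at zero until dose exceeds the organism’s capacity for detoxification and repair.

```python
# Simple threshold ("hockey-stick") dose-response model: excess response is zero
# below the threshold and rises linearly above it. Threshold and slope values
# are hypothetical, not taken from the NRC report.

def threshold_response(dose: float, threshold: float = 5.0, slope: float = 0.1) -> float:
    """Return excess response: zero up to the threshold, linear above it."""
    return max(0.0, slope * (dose - threshold))

for dose in (0.0, 1.0, 5.0, 10.0, 50.0):
    print(f"dose {dose:5.1f} -> excess response {threshold_response(dose):.2f}")
```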
The threshold concept is also what makes practical regulation possible. It allows regulators to establish and enforce protective yet workable exposure limits for chemicals in food, consumer products, occupational settings, and environmental media such as remediated soil.
Discounting the existence of a threshold (what the committee calls low-dose “nonlinearity”) would introduce theoretical confusion into the field of risk assessment and practical difficulties into the work of regulators. Linearity implies that each incremental increase in dose above zero can be expected to produce some increment of risk. If moving from zero to one part-per-trillion of a particular chemical carries a statistically calculated, theoretical risk, does that obligate a regulatory agency to address that level of exposure?
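The arithmetic behind that question is simple, which is part of the problem. A hedged illustration follows (the slope factor is a made-up number, not taken from any agency assessment): under linearity, even one part-per-trillion maps to a computable, nonzero theoretical risk.

```python
# Illustrative arithmetic only: the slope factor is invented for this sketch,
# not from any agency assessment or the NRC report.

slope_factor = 1e-2           # hypothetical excess risk per ppm of exposure
dose_ppt = 1.0                # one part-per-trillion
dose_ppm = dose_ppt * 1e-6    # 1 ppt equals 1e-6 ppm

theoretical_risk = slope_factor * dose_ppm
print(f"theoretical excess risk at 1 ppt: {theoretical_risk:.0e}")  # tiny, but never zero
```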
This question is especially pertinent for statutory frameworks that lack other standard-setting criteria, such as technical feasibility or a balancing of costs and benefits. The National Ambient Air Quality Standards (NAAQS) provisions of the Clean Air Act, for example, require EPA to set allowable levels for particulate matter, ozone, and other pollutants that are “requisite to protect the public health,” allowing “an adequate margin of safety.” In such a context, the question arises whether the legally required level is “zero” and, if so, how EPA, and American society, could ever get there.
2 comments:
I like this post a lot. TSCA detail is always good to hear about - please continue. If you don't mind, I might link a blog story to your post.
- Chris
Thanks for your comment, Chris. Please feel free to link.