Friday, December 4, 2009

Environmental Clean-Up with Chain Saws and Birthday Cakes

I wrote the following after reading the article "The General Electric Superfraud: Why the Hudson River Will Never Run Clean" in Harper's Magazine, December 2009 (link provided below). It's a depressing account of the environmental cluster-fuck that is the Hudson River, a place with a long history of abuse and exploitation, both ignorant and intentional. The article highlights, in part, the struggle with elemental questions in contemporary environmental clean-up: How clean is clean? What is an acceptable risk? How much confidence is there in the current characterization of the contamination?

I have, since 1995, moved in the world of environmental science, clean-up, regulation, Superfund, the National Priorities List (NPL) and the like, serving as a mediator/facilitator and public involvement specialist. I have been privy to, and often facilitated, discussions on "how clean is clean?", on technological and financial feasibility, and on deciding what constitutes an acceptable human health risk based on human health risk assessments. Ultimately, the experience has left me with far more questions than answers, and frankly, an ever-waning confidence in much of modern environmental science. And this is not primarily because of any lack of competency or good intentions among regulators and scientists, but rather because of the complexity of the problems and the limitations of our current technology and understanding.

How Clean Is Clean? What Risk Is Acceptable?
These questions are fundamental in deciding how, and to what degree, a contaminated site will be cleaned. At first glance, they may seem like no-brainers. How clean is clean? Totally fucking clean is clean, right? What is an acceptable risk to human health and the environment? Zero risk is the knee-jerk answer. But things aren't quite that simple. There are many confounding factors in considering the question of "how clean is clean?" First, there are technological limits for detecting many constituents deemed harmful to human health and the environment. Every contemporary detection technology has a threshold, and depending on the particular chemical constituent, that threshold may be above or below the levels believed to pose a significant risk to humans and/or the environment. Some constituents, such as radioactive isotopes, are considered by many scientists to present a no-threshold risk. In other words, no exposure is considered safe; all exposure is thought to increase, to some degree, an individual's cancer risk. Yet our technology is limited in its ability to thoroughly detect the presence of many such constituents. So how much dirt should be dug up and hauled to a landfill? The answers to these questions can have huge financial impacts...and in the case of NPL sites, can mean hundreds of millions of taxpayer dollars....a resource that is finite. These questions are far from easy to answer.

The Cosmos Are Naturally Dirty and We Helped Fuck it Up Some More
Another confounding factor is that there are many naturally occurring substances, such as radioactive materials, arsenic, etc., that pose a risk to human health and the environment. In the case of radioactivity, there is also ubiquitous man-made radioactive material resulting, mainly, from decades of above-ground nuclear testing. Other naturally occurring substances have been mined, concentrated, and accumulated by humans and now pose risks. Scientists and regulators are challenged to determine what is naturally occurring and what has been caused by human actions. Suddenly, the question of "how clean is clean?" becomes much more complicated. And concomitantly, the question of what is an acceptable risk becomes, although uncomfortable to most, very relevant. I feel for the regulators and scientists who must answer these never thoroughly answerable questions.

Human Health Risk Assessments (for Cancer)
In the paragraphs above I casually tossed out references to "human health risk assessment" with no explanation, which is misleading, as there is nothing casual about them. HHRAs are a primary tool in deciding "how clean is clean?", and yet they are quite limited in many ways. HHRAs are probabilistic, statistical models based on existing information about known carcinogens, with very conservative assumptions about exposure pathways built in. These models are not predictive, a very important point. They simply provide relative information on risk based on the model parameters. They do not predict whether people will develop cancer. That distinction is often difficult for people to grasp, and thus HHRA results can scare the shit outta people. Perfectly clear, right? I'll try to break it down a little more.

Clear as Mud
Some contaminants have a lot of data about their carcinogenic effects on humans while others do not; many have only data from exposure of rats or similar lab animals. The data going into an HHRA can vary greatly in its robustness depending on the contaminant. The second key factor plugged into the model is the assumed land use of the contaminated site. If it is residential use, the model makes conservative assumptions: a person will live on the site for thirty years, spend the majority of their time at home, eat vegetables grown in their yard, their children will incidentally ingest X pounds of soil a year, and so on. The logic is to assume worst-case exposure to offset some of the uncertainty in the modeling. For industrial and recreational land use scenarios, the assumptions are less conservative, such as assuming people will not be on the site day and night and children will not be playing in the dirt. Then a calculation is made and a cancer risk number is popped out. Remember, some contaminants are naturally occurring and pose a risk at any exposure. And there is a baseline cancer risk for all human beings just by being alive in this world.
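The arithmetic behind that "popped out" number is actually simple multiplication; it's the assumptions feeding it that carry all the weight. Here is a minimal sketch, not any agency's actual implementation, of a single-pathway chronic-intake calculation (incidental soil ingestion under a residential scenario). All parameter values are illustrative placeholders, not regulatory defaults for any real site:

```python
# Illustrative sketch of a one-pathway cancer risk calculation
# (incidental soil ingestion, residential scenario). Every parameter
# value here is a hypothetical placeholder.

def soil_ingestion_risk(
    conc_mg_per_kg,           # contaminant concentration in soil (mg/kg)
    slope_factor,             # cancer slope factor ((mg/kg-day)^-1)
    ingestion_mg_per_day=100,   # soil incidentally ingested per day (mg)
    exposure_days_per_year=350, # days per year spent on the site
    exposure_years=30,          # the conservative "lives there 30 years" assumption
    body_weight_kg=70,
    averaging_days=70 * 365,    # risk averaged over an assumed 70-year lifetime
):
    """Chronic daily intake (CDI) times a cancer slope factor yields an
    incremental lifetime cancer risk -- a relative, probabilistic number,
    not a prediction that any particular person will get cancer."""
    cdi = (conc_mg_per_kg * ingestion_mg_per_day * 1e-6  # mg of soil -> kg of soil
           * exposure_days_per_year * exposure_years) / (body_weight_kg * averaging_days)
    return cdi * slope_factor

# e.g. soil at 50 mg/kg, contaminant with a slope factor of 1.5 (mg/kg-day)^-1
risk = soil_ingestion_risk(50, 1.5)
print(f"incremental lifetime cancer risk: {risk:.1e}")
# a result like 1e-6 would read as "one extra cancer per million people exposed"
```

Notice that making any one assumption more conservative (more years on site, more dirt eaten, lower body weight) scales the risk number directly, which is why the choice of land-use scenario matters so much.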

All clear now? Ok, here's another confounding issue in determining the risk at a given location. How the contamination is characterized and quantified impacts the outcome of the HHRA. Does the modeler use a high-concentration sample from a single location, or a composite sample that more evenly distributes the risk? Is it fair to assume the modeled child will eat dirt from only the dirtiest location at the site? What if the contamination is extremely heterogeneous, with some locations showing high concentrations and others non-detect when sampled? And if scientists are dealing with a large site with multiple contaminants, do they parcel the areas and calculate risks separately, or combine the entire area into one risk assessment? These are not simple questions and they do not lend themselves to simple answers. The real bummer is that there seems to be no single right answer. Judgments have to be made, compromises are inevitable, and no one is sure what the ultimate effect of these decisions will be.
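To make the "which number do you plug in?" problem concrete, here is a toy illustration with invented sample values (not real site data) of a heterogeneous site with one hot spot. Because the risk calculation is linear in concentration, the risk number swings by exactly the same factor as the concentration you choose:

```python
# Toy illustration: how the choice of exposure point concentration
# changes the outcome on a patchy site. Values in mg/kg are invented;
# zeros stand in for non-detect samples.
import statistics

samples = [0.0, 0.0, 2.0, 3.0, 1.0, 0.0, 45.0, 0.0, 4.0, 0.0]

max_conc = max(samples)               # "the child eats dirt only at the hot spot"
mean_conc = statistics.mean(samples)  # "exposure averages out across the site"

# Modeled risk is proportional to concentration, so these two defensible
# choices produce risk estimates differing by the same factor.
print(f"hot-spot concentration: {max_conc} mg/kg")
print(f"site-average concentration: {mean_conc} mg/kg")
print(f"risk ratio (hot spot vs. average): {max_conc / mean_conc:.1f}x")
```

Neither choice is simply "wrong" here, which is the point: two careful modelers can disagree by nearly an order of magnitude before anyone has made an arithmetic error.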

Chain Saws and Birthday Cakes
When I first started working in the environmental field I was eager to learn about everything, as I was lost beyond belief in this complex world of science, regulations, and a new vocabulary that appeared to have no words, only acronyms. So one day I cornered a toxicologist, a woman with extensive experience in conducting HHRAs, and I asked her to explain it to me. She patiently went through the processes in layman's terms and answered my questions. After about an hour, I cocked my head and said, "Well, I gotta tell ya, this all does not sound very certain or clear cut." My colleague candidly responded with, "It isn't. It's very crude. I liken doing an HHRA to cutting a birthday cake with a chain saw....but it's the best tool we have." I have quoted this clever woman many times through the years...her candor and use of metaphor made a huge impression on me.

So how do people answer these fundamental questions regarding environmental cleanup? Well, some of the answer lies in regulatory standards that have been developed through protracted and complicated processes and then established either through regulation or precedent. There are some benchmarks for decision makers to use, but they are far from comprehensive. Even with these benchmarks and regulations, the kinds of questions I have described above are often still extremely difficult to answer. They are fraught with all the social factors one could conceive of....understandable fear from those living on or near a site, degrees of financial impact and feasibility, political posturing and advocating, and the sometimes talented and informed (and sometimes not) scrutiny of environmental advocates and watchdog groups. The decision making often involves all of these stakeholders participating in, and/or contributing to, the process (and I haven't even touched on the fact that there is often great diversity of opinion on these issues within the scientific community). And this is where I join the fray.

Born to Help....So I Like to Think
In my work I try my hardest to help these various stakeholders have productive yet difficult conversations and make the difficult decisions. My job is not to make any technical or policy decisions, and I never opine on content, only process. I have often explained my job as such: "I help other people make difficult decisions." For most of my projects, past and present, these conversations are almost always messy, often unwieldy, and inherently complicated. But the folks at the table show up, start sorting through the complexity, and slowly move through the various options and required decisions. Some sites are forever starting and stopping and re-evaluating, some blow up politically, and others are more straightforward. Almost all major environmental clean-ups take decades.

Am I a Big Wimp? Maybe.
I have often considered that maybe I am a big wimp in that I have chosen a profession that doesn't put me in the decision-making seat. I don't advocate for anything but a productive process and movement toward a stated goal of consensus among some or all parties. But I love my work. I absolutely get off on helping these folks make progress on issues and questions that can't and should not be ignored. And I hope to do it in such a way that at the end of the day they can all shake hands and go home and not kick their dogs and yell at their kids because their work is so frustrating. I think I succeed in many instances and I am certain I fail in some.

A few years ago an article in the respected journal Conflict Resolution Quarterly (CRQ) presented an analysis and literature review of research on multi-stakeholder decision-making processes and the efficacy of ADR professionals like myself who provide facilitation and mediation support. The upshot was that there is no way to conclusively assess whether the contributions of folks like me actually help produce better outcomes. The question, they concluded, is currently unanswerable, as there are too many variables and the processes are too complex....they likened it to the weather. The only data on the efficacy of facilitators/mediators in complex multi-stakeholder processes come from stakeholders' self-reporting, an inherently problematic method as it lacks any controlled reference.

In the early days of my career I privately set a goal to at least do more good than harm in my contributions to a proceeding or meeting. I have always felt that I achieved that by whatever margin...and I think, overall, my ratio has improved through the years. But I got nothing to back that up...except, generally, more satisfied stakeholders than pissed off ones. But trust me, there are ALWAYS pissed off stakeholders.
