
For this week's TWIST (This week in infrastructure systems) post, I want to do things just a bit differently and focus on a topic that is crucial for any infrastructure system: uncertainty framing.

Of course, it is very difficult to agree on how to define uncertainty, and once it's defined, it can be difficult to select robust tools for managing the types of uncertainties we see in infrastructure systems. Since infrastructures are characterized by long life cycles, large geographic and demographic scope, and substantial interconnections within and between lifeline systems, one wonders how any problems get selected for analysis at all. The web of intricacies faced by analysts and policy makers can be intractable, and the ways that the unknowns influence the likelihoods of the possible consequences make every choice high-stakes. Some professionals call these problems "wicked," and prefer to "muddle through" them, take a garbage-can approach, or simply admit that optimal solutions are probably not possible and accept the best feasible option--to our knowledge--at the time. Others call these "deep uncertainties" and even wonder whether resilience analysis is more appropriate than risk analysis for infrastructure systems.

However you choose to sort all that out, this issue is of critical importance to infrastructure enthusiasts today. In the US, we face a crisis of governance: the public trusts neither government nor experts; the center no longer holds, leaving no legislative or political stability for public engagement with scientific debates; and our most important issues are fraught with uncertainties that rule out any unequivocally recommended course of action. Of course, infrastructure is impossible without both strong governance and strong science (or trans-science, if you prefer). With that in mind, two articles stood out from Water Resources Research this week:

  • Rival Framings: A Framework for Discovering how Problem Formulation Uncertainties Shape Risk Management Tradeoffs in Water Resources Systems. In this paper, Quinn et al. explore how rival problem (read: uncertainty) framings can lead to unintended consequences as a result of biases inherent in the selected formulation. Such bias is unavoidable for even modest problems in critical infrastructure systems, so they provide guidance for carefully exploring the consequences that can be foreseen under alternative problem formulations.
  • Towards best practice framing of uncertainty in scientific publications: a review of Water Resources Research abstracts. In this paper, Guillaume et al. describe how awareness of uncertainty is addressed within WRR abstracts/papers. They develop an uncertainty framing taxonomy that is responsive to five core questions: "Is the conclusion ready to be used?"; "What limitations are there on how the conclusion can be used?"; "How certain is the author that the conclusion is true?"; "How thoroughly has the issue been examined?"; and, "Is the conclusion consistent with the reader’s prior knowledge?". Of course, as the authors acknowledge, the study of uncertainty framing is inter-disciplinary, and achieving an uncertainty framing that is responsive to these questions is an art in itself.

Uncertainty, to me, is both fearsome and beautiful. I hope these two articles, or some of the other links shared, provide some useful thoughts for managing uncertainty in your own study or management of infrastructure systems.

Herb Simon's book, The Sciences of the Artificial, has instantly become one of the more indispensable books on my shelf. Even though I spent five years across the quad from a building with his name on it, I never really learned what he did or why his work was so important. So it is with a bit of embarrassment that I admit this book was an unexpected pleasure.

I stumbled across Simon's book by accident. One of my students recommended we read Ethiraj and Levinthal's "Modularity and Innovation in Complex Systems" to inform our discussion about information sharing in support of infrastructure system emergency preparedness. One of their references to "The Architecture of Complexity" seemed interesting, and I wanted to learn more about system architecture so I could understand what one of my newest colleagues, David Broniatowski, was saying when he discussed the role of architecture in system flexibility and controllability. So I set out in search of "The Architecture of Complexity," and the librarian instead pointed me to The Sciences of the Artificial. What a blessing!

I truly want you to read the book, so I won't say too much. My most cherished insight from Simon was the following:

A man [An ant], viewed as a behaving system, is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he [it] finds himself [itself].

To me, the simplicity and elegance of this hypothesis characterizes the entire book. Although we may disagree on the specific mechanisms, or on the plausibility of this hypothesis, its influence on the practice of engineering and policy design cannot be doubted. I also see the practical results of exploration of this hypothesis everywhere I look in research and technical literature. This hypothesis and many other insights (e.g., satisficing, hierarchical organization of complex systems, valuing the search vs. valuing the outcome, etc.) immediately resonated with my experiences and pulled me all the way through the book.

Because I was trained as a civil engineer, it took a decade after my undergraduate degree to encounter Simon's work. I believe I can say that it has been worth the wait.

In view of recent events concerning Edward Snowden and some of my more recent thinking about chemicals policy in the United States, I've been trying to understand how we've reached such an uncomfortable situation in protecting Americans' Constitutional rights to privacy.  While I don't know much about privacy law, the Patriot Act, or the Constitution, I do think some ideas in chemicals regulation can help explain what's going on with the privacy debate.

I have been reading and chewing on some passages in Brickman, Jasanoff, and Ilgen's Controlling Chemicals: The Politics of Regulation in Europe and the United States, and the differences between European approaches and American approaches to chemical regulation are so striking I couldn't help but speculate whether those factors contribute to the privacy challenges we are now facing.

Since the book was written in 1985, take these thoughts with a grain of salt. However, the observation Brickman et al. made that got me thinking goes something like this: because of America's laissez-faire approach to business, unparalleled levels of access to legal and administrative proceedings, and commitment to protecting individuals' rights, American chemicals policy has a complexity and cost unmatched in other industrialized democracies. (I try to keep these posts short, so I can't break that down further.)

The key concept in that thought, and what makes Snowden, Manning, and other issues of transparency so ironic, is the commitment we make to transparency and the protection of individual interests. Perhaps we are so upset and appalled by the violations of our Constitution (or at least by NSA staff's admitted lying to Congress) because we are so used to an openness and privileged access to our government afforded few of the world's other citizens. Just a thought.

So, environmental policy wonks, don't get so frustrated in the face of gridlock. Apparently, that's one of the many costs of "freedom". 😉


In the world of chemical and human health risk analysis, several clouds seem to be forming on the horizon: mixture-based toxicology and its interpretation, data-poor extrapolation to human exposure, and extrapolation from high-dose, sub-chronic studies to low-dose, chronic dose-response relationships. These challenges force us to approach risk analysis as an art, and they necessitate the inclusion of decision analysis in chemical screening procedures.

One problem whose urgency is increasing is data-poor extrapolation from animal to human dose-response relationships.  Not only are there tens of thousands of compounds that are not regulated and have no publicly available data, but there are also entirely new types of chemicals produced by technological innovation for which existing toxicological approaches may not be appropriate.

Traditionally, risk scientists make this approximation (and similar ones) by proposing a reference dose.  The reference dose (RfD) is an unenforceable standard postulating a daily oral human exposure below which no appreciable risk of adverse effects attributable to the given compound is likely to exist.  The RfD is obtained by taking a point of departure, either the lowest dose observed to produce adverse effects (LOAEL) or the highest dose at which no effects were observed (NOAEL), and dividing it by uncertainty factors reflecting the uncertainties introduced by extrapolation between species and data-quality contexts. Roger Cooke (and several commentators) discuss the RfD, concluding that the approach needs to be updated to incorporate a probabilistic interpretation of these uncertainties, but there seems to be disagreement on how to update the RfD. In his Risk Analysis article, “Conundrums with Uncertainty Factors,” Cooke argues that this approach is analogous to the engineering design application of safety factors and relies on inappropriate statistical independence assumptions.  By not employing a probabilistic approach, we promulgate uneconomic guidelines at best, while at worst we are overconfident in our risk mitigation.
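To make the arithmetic being critiqued concrete, here is a minimal sketch of the conventional calculation; the point of departure and the specific factors are hypothetical values chosen for illustration, not figures from any real assessment:

```python
# Conventional (deterministic) reference dose calculation.
# All numbers are hypothetical, for illustration only.
import math

NOAEL_MG_PER_KG_DAY = 10.0  # point of departure from a hypothetical animal study

UNCERTAINTY_FACTORS = {
    "interspecies": 10.0,           # animal-to-human extrapolation
    "intraspecies": 10.0,           # variability among humans
    "subchronic_to_chronic": 10.0,  # short study duration
}

# RfD = point of departure / (product of uncertainty factors)
uf_product = math.prod(UNCERTAINTY_FACTORS.values())
rfd = NOAEL_MG_PER_KG_DAY / uf_product

print(f"composite UF = {uf_product:.0f}")
print(f"RfD = {rfd:.4f} mg/kg-day")
```

Stacking the factors multiplicatively like this is exactly the implicit independence assumption, borrowed from safety-factor practice, that Cooke objects to.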

Cooke’s paper illustrates a probabilistic approach to obtaining estimates of dose-response relationships by combining animal and human, data-poor and data-rich results in a chemical toxicity knowledge base founded on Bayesian Belief Networks (in his example, non-parametric, continuous BBNs).  He demonstrates the possibility of employing non-parametric or generalizable statistical methods to obtain a probabilistic understanding of the response of interest in the context of the chemical’s toxicological knowledge base.  This is in contrast to the uncertainty-factor approach, which presupposes that only a limited understanding of the dose-response relationship at relevant human exposures can ever be obtained.  While we are a long way from abandoning the RfD approach, Cooke acknowledges that it may be difficult to rely on dose-response modeling alone.  His approach initializes on current practice, while promising a rapid and simple inference mechanism capable of deriving toxicological indicators and amenable to inclusion in broader decision-making models.
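For a flavor of what a probabilistic reinterpretation can look like, the sketch below replaces each fixed tenfold factor with a lognormal random variable and propagates the uncertainty by Monte Carlo. To be clear, this is a generic probabilistic treatment of uncertainty factors, not Cooke's non-parametric BBN method, and every distributional parameter here is an assumption:

```python
import math
import random

random.seed(42)

NOAEL = 10.0        # hypothetical point of departure, mg/kg-day
N_DRAWS = 100_000

def draw_uf(median=10.0, sigma=0.8):
    """One uncertainty factor as a lognormal draw whose median is the
    conventional factor of 10; sigma is an assumed spread, not data."""
    return median * math.exp(random.gauss(0.0, sigma))

samples = []
for _ in range(N_DRAWS):
    total_uf = draw_uf() * draw_uf() * draw_uf()  # three independent factors
    samples.append(NOAEL / total_uf)

samples.sort()
median_dose = samples[N_DRAWS // 2]
p05 = samples[int(0.05 * N_DRAWS)]  # 5th percentile: a "protective" dose

print(f"median safe-dose estimate: {median_dose:.4f} mg/kg-day")
print(f"5th-percentile estimate:   {p05:.5f} mg/kg-day")
```

Note that even this sketch multiplies its factors as if they were independent; representing the dependence among these uncertainties explicitly is precisely what Cooke's BBN formulation is designed to do.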

The BP Deepwater Horizon oil spill in the Gulf of Mexico this past year brought to light one of the most unfortunate aspects of the socio-technical systems that define our society. Because of the complexity and technical sophistication of our most critical infrastructures and crucial goods and services, the parties responsible for making regulatory decisions are often not in possession of the data required to make the risk mitigation and management decisions that offer the most public protection, especially in the context of disaster response.  This becomes more of a problem when the environment in which these decisions are promulgated is characterized by a lack of trust between the regulator, the regulated, and third-party beneficiaries.

In an environment where trust exists between the regulated and the regulator, opportunities for mutual collaboration toward broader social goals may be more prevalent.  These opportunities may also be more likely to be identified, formulated, and implemented in ways that promote more trust and improve both regulatory and economic efficiency. But when trust is broken, the adversarial nature of the regulatory relationship can bring gridlock.

We are very familiar with the image of gridlock in a transportation network from our time stuck in rush-hour traffic in many North American cities, and 2011 has made us ever more acquainted with partisan gridlock in Congress, but what about regulatory gridlock?  I am still thinking this one through, but am borrowing from the idea of economic gridlock developed by Michael Heller to construct these ideas. In my opinion, regulatory gridlock occurs when, in an adversarial arrangement, the intended consequences of a complex technical system (CTS) are well known and integrated while the undesirable consequences of a CTS’s deployment are unpredictable and fragmentary.  The adversarial relationship makes it nearly impossible to facilitate effective communication between owners of the CTS that has failed and the stakeholders who are affected.  In addition, the adversarial relationship activates a feedback loop between the perceived transparency of the CTS innovation cycle within the CTS ownership and the willingness of stakeholders to accept non-zero risk.  As this feedback loop promotes an increasingly negative perception of transparency and a decreasing willingness to accept risk, risk mitigation becomes less economically effective while the overall costs to society of CTS management and innovation increase.

In 2012, as economic and political pressure to make government more efficient and promote economic recovery increases, will we see the need for navigating this potential gridlock increase?  How will we address this challenge, ensuring that the potential for disasters doesn’t divert our focus from the important work of improving our economic and social welfare through technological innovation within our lifeline infrastructures?

Thank you for contacting the SEED Research Group.  Let us help you engage your interests in environmental and infrastructure decision making!  Here is a short description of our research vision, also reproduced on our “Research Vision” page.  Feel free to contact us by email or phone for more details!

Our overall research vision is SEED—Planted, where SEED means “Sustainable [Urban] Ecologies, Engineering, and Decision-making.”  Our research interests include:

  1. Infrastructure management, including sustainability assessment and risk analysis;
  2. Urban sustainability definition and decision making;
  3. Regulatory risk assessment and policy-focused research, especially for environmental contaminants and infrastructure systems; and,
  4. Statistical/mathematical modeling approaches to decision support.

As you may know, it is difficult to unify these themes under one umbrella.  We tend to think of ourselves as operating under the Earth Systems Engineering and Management (ESEM) paradigm for civil/environmental systems design[1].  ESEM is an approach to engineering research and education that seeks to address the irreducible complexity of tightly coupled environmental, social, and technical systems in design and analysis.

To organize my research mission into tractable parts under the ESEM vision, and due to the tight coupling of infrastructure systems engineering and policy, my research group’s activities will be organized according to the “policy-as-learning” paradigm[2]:

  1. Technical Learning
  2. Conceptual Learning
  3. Social Learning

To understand a little better what these types of policy learning mean, consider the following emerging problems in infrastructure management and policy-focused environmental modeling and analysis:

Technical Learning. Technical learning involves an evolution of engineering tools supporting decisions in the context of well-defined, static policy goals.  In this area, my research group would focus primarily on innovation in the development and interpretation of statistical methods to enable real-time management of drinking water distribution systems. Recently, the National Academies has evaluated drinking water distribution system research and policy needs.  Specifically, the NRC suggests “distribution system integrity is best evaluated using on-line, real-time methods,” but that “research is needed to better understand how to analyze data from online, real-time monitors in a distribution system.”  To address this problem, I plan to employ data mining techniques and probabilistic graphical models to support the development of “intelligent” and adaptive distribution system management, focusing on distribution system rehabilitation planning.  For example, Bayesian Belief Networks may be used in conjunction with supervisory control and data acquisition methods to not only predict unintentional contaminant intrusion events (e.g., pipe breaks), but also minimize population exposure to such contaminants in real-time.  The objective of this research is to develop distribution system design and retrofitting techniques that facilitate real-time risk management.
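To give a feel for the kind of model involved, here is a minimal sketch of a three-node discrete BBN that updates the probability of a pipe break given a SCADA-style alarm. The structure and every probability in it are hypothetical; a real model would be learned from utility monitoring data rather than written by hand:

```python
# Toy Bayesian Belief Network for distribution system monitoring:
#   PipeAge -> Break <- PressureTransient,  Break -> Alarm
# All structure and probabilities are hypothetical, for illustration only.

P_AGE_OLD = 0.30        # P(pipe is old)
P_PRESSURE_HIGH = 0.20  # P(pressure transient is high)

# P(break | age, pressure): hypothetical conditional probability table
P_BREAK = {
    (True, True): 0.200,    # old pipe, high pressure
    (True, False): 0.050,
    (False, True): 0.040,
    (False, False): 0.005,
}

P_ALARM_GIVEN_BREAK = 0.90     # assumed sensor sensitivity
P_ALARM_GIVEN_NO_BREAK = 0.05  # assumed false-alarm rate

# Enumerate all parent states to compute P(break | alarm) by Bayes' rule.
joint_alarm_and_break = 0.0
joint_alarm_and_no_break = 0.0
for old in (True, False):
    for high in (True, False):
        p_state = ((P_AGE_OLD if old else 1 - P_AGE_OLD)
                   * (P_PRESSURE_HIGH if high else 1 - P_PRESSURE_HIGH))
        p_break = P_BREAK[(old, high)]
        joint_alarm_and_break += p_state * p_break * P_ALARM_GIVEN_BREAK
        joint_alarm_and_no_break += (p_state * (1 - p_break)
                                     * P_ALARM_GIVEN_NO_BREAK)

posterior = joint_alarm_and_break / (joint_alarm_and_break
                                     + joint_alarm_and_no_break)
print(f"P(break | alarm) = {posterior:.3f}")  # ~0.38 with these numbers
```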

Conceptual Learning. Conceptual learning involves the evolution of engineering tools supporting decisions in the context of changing policy goals.  Conceptual learning also involves the definition of new concepts representing the challenges of unsolved problems for which technical learning is inadequate.  The concept of “sustainability” is hotly debated, and it provides a rich conceptual learning context for my research interests in infrastructure management.  In this area, my research group would focus primarily on developing decision analysis tools for evaluating the sustainability of urban infrastructure systems.  Consider for a moment the sustainability of drinking water systems.  As a postdoctoral fellow, I am currently working with Dr. Seth Guikema to identify infrastructure performance metrics for drinking water networks.  As a new faculty member, I would start by developing software that uses evolutionary optimization algorithms to integrate financial goals, technical requirements, and ecological constraints in the design of drinking water treatment plants and distribution systems, using the metrics Dr. Guikema and I will have developed.  These tools would then be extended to other urban infrastructure systems, including buildings, electricity and energy, and transportation.
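For a flavor of the approach, here is a minimal sketch of an evolutionary algorithm sizing a single hypothetical design variable (a pipe diameter) against a toy objective combining capital cost and failure risk. The objective function, constants, and bounds are invented for illustration and bear no relation to the metrics described above:

```python
import random

random.seed(0)

# Toy objective: choose a pipe diameter (mm) trading capital cost against
# hydraulic failure risk. All constants are hypothetical.
def fitness(diameter_mm):
    capital_cost = 0.5 * diameter_mm      # bigger pipe costs more
    failure_risk = 5000.0 / diameter_mm   # smaller pipe fails more often
    return -(capital_cost + failure_risk)  # maximize fitness = minimize total

POP, GENS, MUT_SD = 30, 50, 10.0
population = [random.uniform(50.0, 500.0) for _ in range(POP)]

for _ in range(GENS):
    # Tournament selection plus Gaussian mutation (no crossover needed
    # for a single design variable), clamped to the design bounds.
    next_gen = []
    for _ in range(POP):
        a, b = random.sample(population, 2)
        parent = a if fitness(a) > fitness(b) else b
        child = min(500.0, max(50.0, parent + random.gauss(0.0, MUT_SD)))
        next_gen.append(child)
    population = next_gen

best = max(population, key=fitness)
print(f"best diameter ~ {best:.1f} mm, total cost = {-fitness(best):.1f}")
```

(For this toy objective the optimum can be found analytically at 100 mm, which makes it easy to check that the search converges.)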

Social Learning. Social learning involves the evolution of engineering tools supporting decisions in the context of not only changing policy goals, but also changing social preferences, perspectives, and capabilities.  Consequently, social learning requires that relationships between stakeholders be explicitly considered as critical components of technical and policy solutions.  In this area, my group will explore the relationships among urban infrastructure network topology, urban ecological space[3], and vulnerability to natural or man-made hazards.  My research group will employ statistical learning methods, decision analysis tools, economic input-output life cycle analysis, and agent-based modeling techniques to answer the question “How will perceptions of global environmental problems change the sustainability of cities, especially as cybernetic[4] (e.g., smart grid) infrastructures are developed?”
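As a minimal illustration of the agent-based piece, the sketch below implements a simple threshold-adoption model (in the spirit of Granovetter's classic threshold model) in which households adopt a smart-grid technology once enough neighbors have, with thresholds shaped by each household's environmental concern. The ring network and every parameter are hypothetical:

```python
import random

random.seed(1)

# Toy agent-based sketch: households adopt a smart-grid technology once
# the adoption rate among their neighbors crosses a personal threshold
# shaped by their environmental concern. All parameters are hypothetical.
N, STEPS = 200, 30
concern = [random.random() for _ in range(N)]   # environmental concern
threshold = [1.0 - c for c in concern]          # more concerned adopt earlier
adopted = [c > 0.95 for c in concern]           # a few early adopters

# Ring "network": each agent watches its four nearest neighbors.
def neighbors(i):
    return [(i + d) % N for d in (-2, -1, 1, 2)]

for _ in range(STEPS):
    # Synchronous update: everyone reacts to last step's adoption pattern.
    new_state = []
    for i in range(N):
        rate = sum(adopted[j] for j in neighbors(i)) / 4
        new_state.append(adopted[i] or rate >= threshold[i])
    adopted = new_state

print(f"adoption after {STEPS} steps: {sum(adopted)}/{N}")
```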


[1] Allenby, B. (2005). “Earth Systems Engineering and Management.” Environmental Science and Technology.

[2] Fiorino, D. (2005). The New Environmental Policy. MIT Press, Cambridge, MA.

[3] Alberti, M. (1996). “Measuring urban sustainability.” Environmental Impact Assessment Review.

[4] de Rosnay, J. (2000). The Symbiotic Man: A New Understanding of the Organization of Life and a Vision of the Future. McGraw Hill.