
For this week's TWIST (This Week in Infrastructure Systems) post, I want to do things just a bit differently and focus on a topic that is crucial for any infrastructure system: uncertainty framing.

Of course, it is very difficult to agree on how to define uncertainty, and once it's defined, it can be difficult to select robust tools for managing the types of uncertainty we see in infrastructure systems. Since infrastructures are characterized by long life cycles, large geographic and demographic scope, and substantial interconnections within and between lifeline systems, one wonders how any problems get selected for analysis at all. The web of intricacies faced by analysts and policy makers can be intractable, and the ways the unknowns influence the likelihoods of the possible consequences make every choice high-stakes. Some professionals call these problems "wicked," and prefer to "muddle through" them, take a garbage-can approach, or simply admit that optimal solutions are probably not possible and accept the best feasible option--to our knowledge--at the time. Others call these "deep uncertainties" and even wonder whether resilience analysis is more appropriate than risk analysis for infrastructure systems.

However you choose to sort all that out, this issue is of critical importance to infrastructure enthusiasts today. In the US, we face a crisis of governance: the public trusts neither government nor experts; the center no longer holds, making it impossible to provide the legislative and political stability needed for public engagement with scientific debates; and our most important issues are fraught with uncertainties that preclude any unequivocally recommended course of action. Of course, infrastructure is impossible without both strong governance and strong science (or trans-science, if you prefer). With that in mind, two articles stood out from Water Resources Research this week:

  • Rival Framings: A Framework for Discovering how Problem Formulation Uncertainties Shape Risk Management Tradeoffs in Water Resources Systems. In this paper, Quinn et al. explore how rival problem framings (read: uncertainty framings) can lead to unintended consequences as a result of inherent biases in the selected formulation. Such biases are unavoidable for even modest problems in critical infrastructure systems, so the authors provide guidance for carefully exploring the consequences that can be foreseen under alternative problem formulations.
  • Towards best practice framing of uncertainty in scientific publications: a review of Water Resources Research abstracts. In this paper, Guillaume et al. describe how awareness of uncertainty is addressed within WRR abstracts/papers. They develop an uncertainty framing taxonomy that is responsive to five core questions: "Is the conclusion ready to be used?"; "What limitations are there on how the conclusion can be used?"; "How certain is the author that the conclusion is true?"; "How thoroughly has the issue been examined?"; and "Is the conclusion consistent with the reader’s prior knowledge?". As the authors acknowledge, the study of uncertainty framing is interdisciplinary, and achieving an uncertainty framing that is responsive to these questions is an art in itself.

Uncertainty, to me, is both fearsome and beautiful. I hope these two articles, or some of the other links shared, provide some useful thoughts for managing uncertainty in your own study or management of infrastructure systems.

At this year's European Safety and Reliability Association (ESRA) annual meeting, ESREL 2013, Dr. Francis presented a discussion paper co-authored with GW EMSE Ph.D. student Behailu Bekera on the potential for using an entropy-weighted resilience metric for prioritizing risk mitigation investments under deep uncertainty.

The goal of this metric is to build upon tools such as info-gap decision theory, robust decision making approaches, scenario and portfolio analysis, and modeling to generate alternatives in order to develop resilience metrics that account for a couple of ideas. First, we felt that if an event is very extreme, it would be quite difficult to prepare for that event whether or not it was correctly predicted; thus, resilience should be "discounted" by the extremeness of the event. Second, we felt that if an event is characterized by an uncertainty distribution obtained through expert judgment, the degree to which the experts disagree should also "discount" the resilience score. The goal of the paper was to present the entropy-weighted metric we developed in our RESS article to the ESREL audience in order to engender some discussion about how to evaluate resilience under these conditions. This work was inspired by a talk by Woody Epstein of Scandpower Japan that Dr. Francis attended at PSAM11/ESREL12 in Helsinki.
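To make those two discounts concrete, here is a minimal sketch in Python. It is not the metric from the RESS paper, just an illustration of the logic; the function name, the [0, 1] scales, and the multiplicative form are all assumptions of mine.

```python
import numpy as np

def entropy_weighted_resilience(base_resilience, extremeness, expert_probs):
    """Discount a base resilience score for (1) event extremeness and
    (2) expert disagreement, measured as normalized Shannon entropy.

    Illustrative only -- not the published metric. Assumed inputs:
      base_resilience : score in [0, 1] from a standard assessment
      extremeness     : score in [0, 1]; 1 = most extreme conceivable event
      expert_probs    : expert-elicited probabilities over discrete scenarios
    """
    p = np.asarray(expert_probs, dtype=float)
    p = p / p.sum()                                   # normalize elicited weights
    h = -sum(pi * np.log(pi) for pi in p if pi > 0)   # Shannon entropy (nats)
    h_max = np.log(p.size)                            # entropy under total disagreement
    disagreement = h / h_max if h_max > 0 else 0.0    # 0 = consensus, 1 = max spread
    # both discounts pull the score toward zero as conditions worsen
    return base_resilience * (1 - extremeness) * (1 - disagreement)

# e.g. a system scoring 0.8, facing a moderately extreme event,
# with expert weight spread across four scenarios
print(entropy_weighted_resilience(0.8, 0.4, [0.4, 0.3, 0.2, 0.1]))
```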

The paper and short slides now appear on the publications page of the SEED research blog.

I have been away from writing on the blog--even my personal opinions on current research topics (OK, that's what almost all of this writing is)--due to travel, deadlines, and other obligations. I do want to take an opportunity to announce that a new paper from the SEED research group, co-authored by Dr. Francis and Behailu Bekera, has just been accepted for publication in the journal Reliability Engineering and System Safety. I am very excited about this, because I enjoy reading articles from this journal, and have found this research community engaging and interesting in person as well as on paper. I'll write a more "reflective" entry about this sometime later, but if you'd like to take a look at the paper, please find it here. We will be presenting an earlier version of this work as a thought piece at ESREL 2013. More on the conference paper closer to the date of the conference in October.

this post is about measuring ignorance.

yes, that sounds weird, but in systems engineering it's a very big deal right now.  as our complex engineered systems progress in their reliability, we are trying as hard as possible to make sure that we can anticipate all the ways they can possibly fail.  essentially, every day we try to imagine the most outlandish environmental/geopolitical/economic/technical disasters that could happen, put them all together at the same time, and determine whether our systems can withstand that.

naturally, one expects our military to do this as a matter of course: but your humble civil engineer? not sure that's crossed too many folks' minds.  at least not until Fukushima, Deepwater Horizon, and now Hurricane Sandy occurred. now everyone is wondering how in the world we allowed our systems to perform so poorly under those circumstances.

the problem is not necessarily the safety or reliability of the systems: how often do you and i in the US plan our day around the scheduled load shedding at just about coffee hour? or purchase bottled water to shower in because the water's too dirty to touch? even on the DC Metro or MARC trains, the [relatively] frequent delays are not an important consideration in my daily planning.

the problem is generally that we couldn't possibly anticipate the things that cause our systems to deviate from intended performance.  and short of prophetic revelation, there's not a good way to do that.

there are, however, cool ways to explore possibilities at the edge of the realm of possibility.

i have in mind things like robust decision making, info-gap analysis, portfolio evaluation, and modeling to generate alternatives.  some of these tools are older than i am (modeling to generate alternatives) but are only recently finding wider application via the explosion of bio-inspired computing (e.g., genetic algorithms, particle swarm optimization, ant colony optimization, genetic programming, etc.), while others are becoming established tools in the lexicon of risk and systems analysis even as we speak.
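to give a flavor of modeling to generate alternatives, here is a toy sketch of the hop-skip-jump idea: find the cost-optimal solution, then repeatedly search for maximally different solutions whose cost stays within a relaxation of the optimum.  the cost surface, the 10% slack, and all the numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# toy cost surface for a two-variable planning problem (entirely made up)
def cost(x):
    return (x[0] - 1) ** 2 + (x[1] - 2) ** 2 + 1

# step 1: find the cost-optimal solution
opt = minimize(cost, x0=[0.0, 0.0])
alternatives = [opt.x]
budget = opt.fun * 1.10          # accept anything within 10% of optimal cost

# step 2: hop-skip-jump -- seek maximally different, near-optimal solutions
rng = np.random.default_rng(0)
for _ in range(3):
    def neg_difference(x, prior=[a.copy() for a in alternatives]):
        # maximize distance to all previous solutions (minimize its negative)
        return -min(np.linalg.norm(x - a) for a in prior)
    res = minimize(neg_difference, x0=rng.uniform(-2, 4, size=2),
                   constraints=[{"type": "ineq", "fun": lambda x: budget - cost(x)}])
    alternatives.append(res.x)

for a in alternatives:
    print(np.round(a, 2), "cost:", round(cost(a), 2))
```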

for example, info-gap analysis avoids restricting ourselves to decision contexts in which externally valid models can be identified to predict the future.  instead, info-gap computes the robustness and opportuneness of a strategy in light of one's wildest dreams [or nightmares] about the future.  in this way, one can be surprised not only by how bad things might turn out, but also by how good they might be.
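here is a small sketch of those two quantities under a simple fractional-error uncertainty model around a nominal estimate.  the payoff function, the thresholds, and the grid of horizons are all invented for illustration.

```python
import numpy as np

def robustness(perf, u_nominal, requirement, alphas):
    """info-gap robustness: the largest horizon of uncertainty alpha such that
    performance meets the requirement for EVERY u within alpha of nominal."""
    best = 0.0
    for alpha in alphas:
        us = np.linspace(u_nominal * (1 - alpha), u_nominal * (1 + alpha), 201)
        if min(perf(u) for u in us) >= requirement:
            best = alpha
        else:
            break
    return best

def opportuneness(perf, u_nominal, windfall, alphas):
    """info-gap opportuneness: the smallest alpha at which SOME u within
    alpha of nominal already yields a windfall outcome."""
    for alpha in alphas:
        us = np.linspace(u_nominal * (1 - alpha), u_nominal * (1 + alpha), 201)
        if max(perf(u) for u in us) >= windfall:
            return alpha
    return np.inf

# toy payoff: net benefit grows with an uncertain demand u (invented numbers)
perf = lambda u: 1.2 * u - 10
alphas = np.linspace(0, 1, 101)
print("robustness:   ", robustness(perf, u_nominal=50, requirement=37.9, alphas=alphas))
print("opportuneness:", opportuneness(perf, u_nominal=50, windfall=62.1, alphas=alphas))
```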

i personally am more partial to robust decision making, since it uses mathematics and terminology i am a bit more familiar with.  robust decision making lets us evaluate circumstances in which either we can agree on an externally valid model of the future, or we would like to entertain a range of competing interpretations and assumptions. one starts with a set of alternatives and iterates through the futures that the strategies under consideration might suggest.  after a set of futures has been identified, the areas of future vulnerability are identified for the strategies being evaluated.  portfolio evaluation shares many similarities with robust decision making, in that the stakeholders project differing interpretations that may imply competing priorities for the decision context.
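a crude sketch of that iteration, assuming a made-up capacity-planning problem: score each strategy across a cloud of sampled futures, compute its regret in each one, and then look at the futures where each strategy does worst.  real robust decision making uses scenario-discovery algorithms (e.g., PRIM) for that last step; the percentile filter below is just a stand-in.

```python
import numpy as np

rng = np.random.default_rng(42)

# sample plausible futures: (annual demand growth, unit cost factor) -- invented
futures = rng.uniform(low=[0.0, 1.0], high=[0.05, 2.0], size=(1000, 2))

def score(capacity, future):
    growth, cost_factor = future
    demand = 100 * (1 + growth) ** 30                    # demand after 30 years
    shortfall = max(demand - capacity, 0.0)
    return -(capacity * cost_factor + 10 * shortfall)    # negative total cost

strategies = {"build_small": 120, "build_medium": 250, "build_large": 450}
scores = {name: np.array([score(c, f) for f in futures])
          for name, c in strategies.items()}

# regret: gap between each strategy and the best strategy in that same future
best = np.max(np.column_stack(list(scores.values())), axis=1)
for name, s in scores.items():
    regret = best - s
    vulnerable = futures[regret > np.percentile(regret, 90)]   # worst decile
    print(f"{name}: mean demand growth in its most vulnerable futures = "
          f"{vulnerable[:, 0].mean():.3f}")
```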

while proponents of all of these techniques generally agree that one of the major weaknesses of existing risk and decision theory is its reliance on probability models to represent possible futures, i can't give up my addiction to Bayesian statistics.  at least when i decide i need rehab, there seems to be a sound selection of medications to choose from.

In this post, Behailu Bekera, a second-year PhD student in the SEED group, discusses the role of robust decision making under deep uncertainty.  This post was inspired by a reading of Louis Anthony Cox's "Confronting Deep Uncertainties in Risk Analysis" in a recent issue of Risk Analysis.

There is no single good model that can be used to assess deep uncertainties, so our decisions about complex systems are typically made with insufficient knowledge about the situation. Deep uncertainties are characterized by a multiplicity of possible events and an unknown future: a context in which we cannot precisely anticipate undesired future events, and therefore cannot make the necessary preparations, is decision making under deep uncertainty. In this article, Cox highlights ideas from robust optimization, adaptive control, and machine learning that seem promising for dealing with deep uncertainties in risk analysis.

Using multiple models and relevant data to improve decisions, forecast averaging, resampling methods that allow robust statistical inference despite model uncertainty, adaptive sampling and modeling, and Bayesian model averaging for statistical estimation are some of the tools that can assist in robust risk analysis involving deep uncertainties.
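As one concrete example, here is a minimal sketch of BIC-based Bayesian model averaging: fit a few rival models, convert BIC differences into approximate posterior weights, and average the predictions rather than betting on a single "best" model.  The synthetic data and the polynomial models are my own illustration, not from the paper.

```python
import numpy as np

# synthetic data: the truth is linear, but we entertain rival polynomial models
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + rng.normal(0, 2.0, size=x.size)

def fit_poly(deg):
    """Least-squares polynomial fit plus its BIC under Gaussian errors."""
    coef = np.polyfit(x, y, deg)
    resid = y - np.polyval(coef, x)
    sigma2 = resid.var()                                   # MLE noise variance
    loglik = -0.5 * x.size * (np.log(2 * np.pi * sigma2) + 1)
    k = deg + 2                                            # coefficients + variance
    return coef, k * np.log(x.size) - 2 * loglik

models = {deg: fit_poly(deg) for deg in (1, 2, 3)}

# approximate posterior model weights from BIC differences
bics = np.array([bic for _, bic in models.values()])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()

# model-averaged prediction at a new point
x_new = 12.0
preds = np.array([np.polyval(coef, x_new) for coef, _ in models.values()])
print("weights:", np.round(weights, 3), "| averaged prediction:", round(preds @ weights, 2))
```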

Robust risk analysis techniques shift the focus of risk analysis from its passive aspects, such as identifying likely events (disruptions) and their associated [estimated] consequences, toward more action-oriented questions: how should we act now to effectively mitigate the occurrence or consequences of events with highly undesirable effects in the future?

Robust decision making, for instance, has been used by developing countries to identify potential large-scale infrastructure development projects and to investigate possible vulnerabilities that require the close attention of all stakeholders.  Additionally, adaptive risk management may be used to maintain a reliable network structure that ensures service continuity despite failures. These sorts of techniques can be especially important in the areas of critical infrastructure protection and resilience.

Through these emerging methods, Dr. Cox makes important suggestions for making robust decisions in the face of extreme uncertainty, in spite of our incomplete or inadequate knowledge.  This will be an important paper for those looking to advance the science of robust decision making and risk analysis.