
Today, I'm presenting a guest post from Behailu Bekera, a first-year EMSE PhD student working in the SEED Group.  He is studying the relationship between risk-based and resilience-based approaches to systems analysis.

Resilience is defined as the capability of a system, before, during, and after a disruption, to absorb the disruption, recover to an acceptable level of performance, and sustain that level for an acceptable period of time. Resilience is an emerging approach to safety. Conventional risk assessment methods are typically used to determine the negative consequences of potential undesired events, to understand the nature of the risk involved, and to reduce its level. In contrast, the resilience approach emphasizes anticipating potential disruptions, giving appropriate attention to perceived danger, and establishing response behaviors aimed at either building the capacity to withstand a disruption or recovering as quickly as possible after an impact. Anticipation refers to the ability of a system to know what to expect and to prepare itself accordingly so that it can effectively withstand disruptions. The ability to detect the signals of an imminent disruption is captured by the attentive property of resilience. Once an impact occurs, the system must know how to respond efficiently with the aim of a quick rebound.
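The absorb/recover/sustain framing above is often quantified by tracking system performance through a disruption. As a minimal sketch (the function name, performance profile, and time grid below are all illustrative assumptions, not anything from this post), one common resilience index is the area under the observed performance curve divided by the area under the nominal target performance:

```python
# Illustrative sketch: resilience as delivered performance relative to
# target performance over a disruption window. All values hypothetical.

def resilience_index(performance, target=1.0):
    """Ratio of the area under the observed performance curve to the
    area under the nominal (target) performance over the same period."""
    delivered = sum(performance)
    expected = target * len(performance)
    return delivered / expected

# Nominal operation, a disruption at t=3, absorption, then recovery
# back to the acceptable level, which is then sustained.
profile = [1.0, 1.0, 1.0, 0.4, 0.5, 0.7, 0.9, 1.0, 1.0, 1.0]
print(round(resilience_index(profile), 3))  # → 0.85
```

A system that absorbs the same shock more gently, or rebounds faster, traces a higher curve and scores closer to 1.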

Safety, as we have traditionally known it, is usually considered something a system or an organization possesses, as evidenced by measurements of failure probability, risk, and so on. Under the new approach, Hollnagel and Woods argue that safety is something an organization or a system does. Seen from a resilience point of view, safety is a characteristic of how a system performs in the face of disruptions: how it absorbs or dampens impacts, and how quickly it reinstates itself after suffering a perturbation.

Resilience may allow for a more proactive approach to handling risk. It puts the system on a path of continuous performance evaluation to ensure safety at all times. Resilient systems are flexible enough to accommodate the different safety issues that may arise across multiple dimensions, and robust enough to maintain acceptable performance.

In the world of chemical and human-health risk analysis, several clouds seem to be forming on the horizon: mixture-based toxicology and its interpretation, data-poor extrapolation to human exposure, and extrapolation from high-dose chronic studies to sub-chronic, low-dose dose-response relationships.  These challenges force us to approach risk analysis as an art, and they necessitate the inclusion of decision analysis in chemical screening procedures.

One problem whose urgency is increasing is data-poor extrapolation from animal to human dose-response relationships.  Not only are there tens of thousands of compounds that are not regulated and have no publicly available data, but there are also entirely new types of chemicals produced by technological innovation for which existing toxicological approaches may not be appropriate.

Traditionally, risk scientists make this approximation (and similar ones) by proposing a reference dose.  The reference dose (RfD) is an unenforceable standard postulating a daily oral human exposure at which no appreciable risk of adverse effects attributable to the given compound is likely to exist.  The RfD is obtained from a point of departure, either the lowest dose observed to produce effects or the highest dose at which no effects have been observed (the LOAEL or NOAEL, respectively), divided by uncertainty factors reflecting the uncertainties introduced by extrapolating between species and between data-quality contexts. Roger Cooke (and several commentators) discuss the RfD, concluding that the approach needs to be updated to incorporate a probabilistic interpretation of these uncertainties, though there seems to be disagreement on how to do so. In his Risk Analysis article, “Conundrums with Uncertainty Factors,” Cooke argues that this approach not only relies on inappropriate statistical independence assumptions, but is also analogous to the engineering-design application of safety factors.  By not employing a probabilistic approach, we promulgate uneconomic guidelines at best, while at worst we are overconfident in our risk mitigation.
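The deterministic calculation described above is simple arithmetic: a point of departure divided by a product of (typically tenfold) uncertainty factors. A minimal sketch, with a hypothetical NOAEL and the conventional default factors (the numbers here are illustrative, not drawn from any assessment in this post):

```python
# Illustrative sketch of the deterministic RfD calculation: a point of
# departure (NOAEL or LOAEL) divided by the product of uncertainty
# factors. The NOAEL value here is hypothetical.

def reference_dose(point_of_departure, uncertainty_factors):
    """RfD in the same units as the point of departure (mg/kg-day)."""
    product = 1
    for uf in uncertainty_factors:
        product *= uf
    return point_of_departure / product

noael = 50.0  # mg/kg-day, hypothetical animal NOAEL
ufs = [10,    # interspecies (animal-to-human) extrapolation
      10]     # intraspecies (human variability)
print(reference_dose(noael, ufs))  # → 0.5 mg/kg-day
```

Cooke's independence critique bites precisely here: multiplying fixed conservative factors treats each source of uncertainty as if its worst case compounds independently with the others.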

Cooke’s paper illustrates a probabilistic approach to obtaining estimates of dose-response relations from combined animal and human, data-poor and data-rich results in a chemical toxicity knowledge base founded on Bayesian Belief Networks (in his example, non-parametric continuous BBNs).  He demonstrates the possibility of employing non-parametric, generalizable statistical methods to obtain a probabilistic understanding of the response of interest in the context of the chemical’s toxicological knowledge base.  This is in contrast to the uncertainty-factor approach, which presupposes that only a limited understanding of the dose-response relationship at relevant human exposures can be obtained.  While we are a ways away from abandoning the RfD approach, Cooke acknowledges that it may be difficult to rely only on dose-response modeling.  His approach initializes on current practice while promising a rapid and simple inference mechanism, capable of deriving toxicological indicators and amenable to inclusion in broader decision-making models.

The BP oil spill in the Gulf of Mexico this past year brought to light one of the most unfortunate aspects of the socio-technical systems that define our society. Because of the complexity and technical sophistication of our most critical infrastructures and crucial goods and services, the parties responsible for making regulatory decisions often do not possess the data required to make the risk mitigation and management decisions that offer the most public protection, especially in the context of disaster response and risk management.  This becomes more of a problem when the environment in which these decisions are promulgated is characterized by a lack of trust between the regulator, the regulated, and third-party beneficiaries.

In an environment where trust exists between the regulated and the regulator, opportunities for mutual collaboration toward broader social goals may be more prevalent.  These opportunities may also be more likely to be identified, formulated, and implemented in ways that promote further trust and improve overall efficiency, both regulatory and economic. But when trust is broken, the adversarial nature of the regulatory relationship can bring gridlock.

We are very familiar with the image of gridlock in a transportation network from our time stuck in rush-hour traffic in many of our North American cities, and 2011 made us more and more acquainted with partisan gridlock in Congress, but what about regulatory gridlock?  I am still thinking this one through, but I am borrowing from the idea of economic gridlock developed by Daniel Heller to construct these ideas. In my opinion, regulatory gridlock occurs when, in an adversarial arrangement, the intended consequences of a complex technical system (CTS) are well known and integrated, while the undesirable consequences of its deployment are unpredictable and fragmentary.  The adversarial relationship makes it nearly impossible to facilitate effective communication between the owners of a CTS that has failed and the stakeholders who are affected.  In addition, the adversarial relationship activates a feedback loop between the perceived transparency of the CTS innovation cycle and stakeholders’ willingness to accept non-zero risk.  As this feedback loop promotes an increasingly negative perception of transparency and a decreased willingness to accept risk, risk mitigation becomes less economically effective, while the overall costs to society of CTS management and innovation increase.
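The feedback loop between perceived transparency and risk acceptance can be caricatured as a pair of coupled difference equations. This is a toy model with made-up coupling coefficients, not calibrated to anything, but it shows the qualitative behavior: a shock to either variable drags both toward a common level.

```python
# Toy sketch of the transparency / risk-acceptance feedback loop.
# The coupling coefficient and starting values are purely
# illustrative assumptions.

def simulate(transparency=0.5, risk_acceptance=0.5,
             coupling=0.3, steps=10):
    """Each period, each variable drifts toward the other, so a
    negative shock to one pulls the other down with it."""
    history = [(transparency, risk_acceptance)]
    for _ in range(steps):
        transparency += coupling * (risk_acceptance - transparency)
        risk_acceptance += coupling * (transparency - risk_acceptance)
        history.append((transparency, risk_acceptance))
    return history

# A transparency shock (0.2) erodes an initially high willingness
# to accept risk (0.8); the two converge well below that level.
path = simulate(transparency=0.2, risk_acceptance=0.8)
print(path[-1])
```

In the adversarial regime the post describes, the coupling runs downward: worse perceived transparency lowers risk acceptance, which in turn makes every mitigation effort costlier to defend.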

In 2012, as economic and political pressure to make government more efficient and promote economic recovery increases, will we see the need for navigating this potential gridlock increase?  How will we address this challenge, ensuring that the potential for disasters doesn’t divert our focus from the important work of improving our economic and social welfare through technological innovation within our lifeline infrastructures?