

Annotated Risk Management Bibliography

In 2006 I started maintaining this annotated bibliography on risk management. It contains articles, books and websites on a wide variety of risk-related topics. Each entry is condensed into its bare essence.

Risk Management Theory & Concepts
Social Psychology and Psychometric Risk
Cultural Risk
Risk Communication
Decision Making
Risk Analysis Methodologies
Applied Risk Assessment
Intelligence and Risk
Political Risk Management
Terrorism and Probabilistic Risk Assessment


Risk Management Theory & Concepts [back]

Altenbach, J. (1995) A Comparison of Risk Assessment Techniques from Qualitative to Quantitative. Proceedings of the ASME Pressure Vessels and Piping Conference, July 23-27, 1995. Honolulu, Hawaii.

This text introduces the reader to qualitative and quantitative risk analysis. It reviews methods to represent the result of such analysis in risk matrices and how screens can be applied to filter out risks of low significance. Concluding that grouping of risk levels is best done after a thorough evaluation of the accident scenarios, instead of by blindly applying predefined screens, the author reviews a multidimensional quantitative screening matrix that introduces a quantitative consequence scale. This allows relative risk to be calculated for the entire matrix. The author then mentions and counters ten common reasons for not quantifying risk assessments.
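
As an illustration of the kind of calculation described here – not Altenbach's actual matrix – a minimal sketch of a quantitative risk matrix, with likelihood and consequence values assumed purely for demonstration:

```python
# Minimal sketch of a quantitative risk matrix: relative risk = likelihood x consequence.
# All likelihood and consequence values below are assumed for illustration only.

likelihoods = [1e-1, 1e-2, 1e-3, 1e-4]   # events per year, one per matrix row
consequences = [1e3, 1e5, 1e7]           # loss per event (e.g. dollars), one per column

matrix = [[freq * cost for cost in consequences] for freq in likelihoods]

for row in matrix:
    print(["{:.1e}".format(cell) for cell in row])

# A screen then filters out cells of low significance, e.g. keep only cells
# whose relative risk exceeds an assumed threshold of 100 loss units per year.
significant = [(i, j) for i, row in enumerate(matrix)
               for j, cell in enumerate(row) if cell > 100]
print(significant)
```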

Apostolakis, G. E. (2004). How Useful is Quantitative Risk Assessment? Risk Analysis, Vol. 24, No. 3 (2004). McLean, VA: Society for Risk Analysis.

The author maps QRA's state in several industries by phase: skepticism, familiarity and confidence. He argues that by first defining undesirable end states, next the disturbances that could cause them, and then quantifying and mapping accident scenarios using event trees, QRA can significantly assist in safety planning. Stressing the use of peer review by independent experts, he claims QRA can increase completeness. He does touch on a number of limitations, including its limited ability to account for human error during accident conditions. He responds to one other major concern regarding QRA, clarifying that it is not merely concerned with quantification, but with understanding failure.
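
To illustrate the event-tree quantification described above, a minimal sketch with assumed branch probabilities (the initiating event frequency and barrier failure rates are invented, not taken from the article):

```python
# Illustrative event tree: an initiating disturbance followed by two safety barriers.
# All probabilities are assumed for this sketch.

p_initiator = 1e-3        # frequency of the initiating disturbance (per year)
p_barrier1_fails = 0.05   # probability the first safety system fails on demand
p_barrier2_fails = 0.10   # probability the second safety system fails on demand

# Each end state's frequency is the product of probabilities along its branch.
scenarios = {
    "both barriers work":  p_initiator * (1 - p_barrier1_fails) * (1 - p_barrier2_fails),
    "barrier 2 fails":     p_initiator * (1 - p_barrier1_fails) * p_barrier2_fails,
    "barrier 1 fails":     p_initiator * p_barrier1_fails * (1 - p_barrier2_fails),
    "both barriers fail":  p_initiator * p_barrier1_fails * p_barrier2_fails,  # undesirable end state
}

for end_state, frequency in scenarios.items():
    print(f"{end_state}: {frequency:.2e} per year")
```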

Covello, V. & Mumpower, J. (1985) Risk Analysis and Risk Management: An Historical Perspective. Risk Analysis, Vol. 5, No. 2 (1985), pp. 103-119. McLean, VA: Society for Risk Analysis.

Covello and Mumpower review the history behind risk management, going back to the Asipu people of around 3200 B.C., widely considered the world's first risk managers. The need for risk management, driven by issues such as piracy on the open seas and health hazards, is clearly identified through colorful examples from the past. Reasons are sought for the move towards quantification of risk, such as the growth of commerce and mathematical methodology in the 17th century. Finally, they identify aspects, such as the prospect of dread, that will become important considerations for future risk analysis and management.

Hansson, S.O. (2004) Fallacies of Risk. Journal of Risk Research, Vol. 7, No. 3, April 2004, pp. 353-360. Oxford: Taylor & Francis.

Hansson analyzes contemporary discussion of risk and identifies ten fallacies specific to such communication: the sheer size fallacy (a risk smaller than another accepted risk should be accepted), the converse sheer size fallacy, the fallacy of naturalness (natural risks should be accepted), the ostrich's fallacy (undetectable risks are acceptable), the proof-seeking fallacy (without scientific proof, no action should be taken), the delay fallacy (more accurate information will become available, so decision-making should be postponed), the technocratic fallacy (scientists should decide the acceptability of risk), the consensus fallacy (experts should be asked for consensus), the fallacy of pricing (risks must be priced) and the infallibility fallacy (when experts and the public disagree, the public is wrong).

Sarewitz, D., Pielke, R. Jr. & Keykhah, M. (2003) Vulnerability and Risk: Some Thoughts from a Political and Policy Perspective. Risk Analysis, Vol. 23, No. 4, 2003. McLean, VA: Society for Risk Analysis.

The authors argue for a disconnect between event risk and outcome risk: the former is the probabilistic risk of an event occurring, while the latter reflects the risk of a certain outcome of that event. They introduce six assertions that are fairly critical of how extreme events are modeled in risk analysis; a recurring pattern in these is that risks need to be viewed from multiple perspectives. Covering the cost of risk, they conclude, does not necessarily depend on vulnerability reduction.

Wilson, R. & Shlyakhter, A. (1997) Uncertainty and Variability in Risk Analysis. Fundamentals of Risk Analysis and Risk Management. Boca Raton, FL: CRC Press.

The authors define the concept of risk and its origins in uncertainty. They identify a number of different types of uncertainty: stochastic (over time) uncertainty of processes, variability – known variation among members of a population that leads to risk differences – and uncertainty – the combination of all other effects leading to such risks. As part of a review of a number of risk calculation methods, they consider the theory of error, or the fact that measurements can be erroneous, and conclude that such error is valuable input to any probabilistic risk analysis.

Social Psychology and Psychometric Risk [back]

Baker, V. (2005) Greenpeace v. Shell: media exploitation and the Social Amplification of Risk Framework (SARF). Journal of Risk Research, Vol. 8, No. 7-8, October-December 2005, pp. 679-691. Oxford: Taylor & Francis.

Baker introduces the reader to some of the criticism of the SARF, such as its static approach to media. He goes on to show that SARF theory does allow for a dynamic interpretation of media usage. He performs an in-depth case study of three powerful risk signals created by Greenpeace around the dumping of the Brent Spar and shows how Shell did not react to these concerns quickly or clearly enough to attenuate the risk signal. One ripple effect, the boycott of Shell petrol stations in Germany, is shown, as is how the amplification of this risk signal finally led to Shell's decision to review its plans.

Kasperson, J., Kasperson, R. E., Pidgeon, N. & Slovic, P. (2003) The social amplification of risk: assessing fifteen years of research and theory. The Social Amplification of Risk, pp. 13-45. Cambridge: Cambridge University Press.

The authors provide an overview of 15 years of SARF research. They describe the theory and its intended applicability to many fields of study. Identifying the core concept of a risk signal, the text moves on to identify many of the studies that have confirmed and expanded on the framework, such as the 128-hazard-events study which measured media coverage compared to expert judgments. Critiques, such as SARF dealing solely with 'amplification', are rebutted. Finally, SARF is identified as a useful tool in policy development and its limitations are pointed out: media relations with other institutions are a prime area for future research.

Murdock, G., Petts, J. & Horlick-Jones, T. (2003) After Amplification: Rethinking the Role of the Media in Risk Communication. The Social Amplification of Risk. Cambridge: Cambridge University Press.

The article describes a content review of a large number of risk-based media stories, combined with qualitative reviews of their messages. Based on this research, the authors argue that SARF in itself is not capable of explaining the full richness of risk interpretation within society: incompleteness is identified in its treatment of media variety and selection and in its use of a dated Weberian definition of power and human capital. They consider SARF to carry a limited definition of how media contribute to amplification and attenuation, and suggest that risk managers look beyond the framework when organizing the format of risk communication.

Rosa, E. A. (2003) The logical structure of the social amplification of risk framework (SARF): Metatheoretical foundations and policy implications. The Social Amplification of Risk, pp. 47-79. Cambridge: Cambridge University Press.

Rosa describes further development of the SARF as rooted in meeting metatheoretical challenges to the theory and in the growing breadth of its applications. He aims to strengthen the SARF by more closely describing the underlying concept of risk as being of both constructionist and positivist nature: uncertain situations where human interests are at stake. He identifies how different cultures hold differing views on which dangers are real threats. SARF is identified as a useful framework to aid policy establishment: he proposes incorporation of his HERO model – Hierarchical Epistemology and Realist Ontology – to allow for better understanding of lay knowledge and hierarchical ordering of knowledge claims.

Sapp, S.G. (2006) Risk Assessment. Sociology 415: The Sociology of Technology. Retrieved September 26th from http://www.soc.iastate.edu/Sapp/Assessment.html. Ames: Iowa State University.

Sapp provides an introduction to seven sociological approaches to technology risk assessment. He considers three technical approaches: the actuarial approach, in which risk is an agreed-upon undesirable event that can be recognized; the epidemiological approach, in which causal mechanisms are sought; and the probabilistic approach, in which models are applied to systems instead of events. Risk can also be identified from the perspectives of economics (objective utility), psychology (subjective utility) and sociology, where risk perceptions are never individual but shared with others. Finally, he reviews the cultural approach to risk, based on clusters of values related to a society.

Cultural Risk [back]

Mars, G. & Frosdick, S. (1997) Operationalising the Theory of Cultural Complexity: A Practical Approach to Risk Perceptions and Workplace Behaviours. International Journal of Risk, Security and Crime Prevention, pp. 115-129.

The text introduces the Theory of Cultural Complexity, also known as Cultural Theory, which was developed by Thompson, Ellis and Wildavsky based on work in the field of anthropology by Douglas (1978). The authors introduce different approaches to cultural analysis, such as that of Johnson and Scholes (1988), after which they introduce a practical, analytical framework to apply cultural theory to the workplace. They identify and describe four types of workplace culture: fatalist, individualist, hierarchies and enclaves. They conclude by using a workplace situation to indicate how cultural theory can assist in risk identification.

Earle, T.C. & Cvetkovich, G. (1997) Culture, Cosmopolitanism, and Risk Management. Risk Analysis, Vol. 17, No. 1, 1997, pp. 55-65.

Earle and Cvetkovich propose a different view of cultural perception of risk. Instead of classifying society into a number of cultural groups, they see people on a sliding scale between cosmopolitanism – those who view the world from multiple cultural narratives and as such make more inclusive risk judgements – and pluralism – those who have one predominant view and are more risk-averse. They investigate their hypothesis through a questionnaire which measures which type of personality respondents entertain, as well as their judgements on contemporary risk matters. Interestingly, they consider cosmopolitanism, which they state is more inclined to problem solving, to be learnable.

Risk Communication [back]

Clarke, L., Chess, C., Holmes, R. & O'Neill, K.M. (2006) Speaking with One Voice: Risk Communication Lessons from the US Anthrax Attacks. Journal of Contingencies and Crisis Management, Vol. 14, No. 3, September 2006, pp. 160-169. Oxford: Blackwell Publishing.

The authors argue that a common recommendation in risk communication, 'to speak with one voice', has never been empirically tested. Through interviews with 50 decision makers involved in the 2001 anthrax attacks, they identify that in many cases the requirement is politically oriented and more related to power struggles than to effective communication. Little agreement exists on what one 'voice' constitutes: is it the spokesperson or the message? They conclude that speaking with one voice makes a good rule of thumb, but that in cases of high social diversity or technical uncertainty there can be good reason for multi-voice communication.

Dunwoody, S. (1992) The media and public perceptions of risk: How Journalists frame risk stories. The Social Response to Environmental Risk

Dunwoody identifies a general lack of understanding as to how risk messages are selected and structured by journalists, or how their constituency interprets the resulting articles. She claims journalists do not generally see stories as being about 'risk'; they view them through frames that are attuned to their readership's interests. She identifies norms in journalism, such as a dominant focus on events instead of process and the requirement to inform, not educate. Based on the latter, she describes how mass media have the ability to signal a story, while the actual risk communication generally takes place through other means.

ENISA (2006) Risk Management: Implementation principles and Inventories for Risk Management/Risk Assessment methods and tools. Heraklion: European Network and Information Security Agency.

ENISA observes that while a large number of risk management methodologies exist, there is very little 'common language' between approaches, and little aggregated data on the situations in which certain methods are applicable. Describing the risk management process and how it is rooted in the ISO 17799 Information Security Management System, they argue that risk assessment is commonly given too little attention. To resolve some of these issues, they provide "identity cards" for each of 24 methods, containing basic information on their constitution: input information, which risk management methods are supported, and output (qualitative or quantitative).

Fessenden-Raden, J., Fitchen, J.M. & Heath, J.S. (1987). Providing Risk Information in Communities: Factors Influencing What is Heard and Accepted. Science, Technology & Human Values, Vol. 12, No. 3/4 (Summer-Autumn 1987), pp. 94-101.

In their research, the authors aimed to establish the factors that influence how well risk-related information is accepted by the recipient. They consider risk communication a bidirectional process in which the value of risk is established over time. Through analysis of communication about health-related risks, they establish that reception varies strongly between communities, that trust in the message and in its messenger are closely linked, and that risk communication consists of many messengers and messages, often unofficial. When an official message does not provide sufficient information, recipients will actively seek unofficial messages to complete their understanding.

Needleman, C. (1987) Ritualism in Communicating Risk Information. Science, Technology & Human Values, Vol. 12, No. 3/4 (Summer-Autumn, 1987), pp. 20-25

Needleman argues that ritualism is a significant concern in risk communication. She implies that too much attention is paid to form, such as how the message is sent, and too little to the actual rationale for the communication: enabling those affected to make informed decisions regarding the risk matter. One example is the amount of attention given to overreaction, while underreaction, or the failure of a recipient to use the information, has been studied much less. Successful risk communication, according to the author, should be rooted in follow-up interventions with local support programs.

Peters, R.G., Covello, V.T., McCallum, D.B. (1997) The Determinants of Trust and Credibility in Environmental Risk Communication: An Empirical Study. Risk Analysis, Vol. 17, No. 1, 1997. McLean, VA: Society for Risk Analysis

The authors review the roots of trust and credibility in risk communication. They tested the hypothesis that these values are based on three determinants: knowledge and expertise, openness and honesty, and concern and care. All three were found to influence the perception of trust and credibility, but most importantly, each applied differently to each of three categories of communicators. For industry, concern and care were rated most important; for government, trust and credibility; while citizen groups needed to show more knowledge and expertise. The authors conclude that a communicator's prime task is to defy pre-existing negative stereotypes.

Schapira, M. (2001) Frequency or Probability? A Qualitative Study of Risk Communication Formats Used in Health Care. Medical Decision Making, November-December 2001. pp. 459-466.

This article investigates the effect of both frequency- and probability-based representations of risk information on the perception of risk by a recipient. It does so through qualitative analysis: focus groups organized within the community at risk. Interesting findings include that larger denominators tend to be correlated with a perception of lower risk. Frequency-based formats were also considered less complex to interpret, but felt less related to the estimation of personal risk than probability formats. Less educated members of the focus groups also did not feel at ease with scientific uncertainty.

Zimmerman, R. (1987) A Process Framework for Risk Communication. Science, Technology & Human Values, Vol. 12, No. 3/4 (Summer-Autumn, 1987), pp. 131-137

Zimmerman states that risk communication is guided by three goals: educating, building consensus and merely disclosing information. Quoting research by Chauncy (1985) indicating that the public is more accepting of communication when it shows accurate management of the risk than when it purely disseminates data, he argues government should focus more on the institutional implementation of risk communication. By enabling two-way communication through public forums, broadening both mission and jurisdiction, localizing operations and, where possible, siting processes closer to the actual issue (such as waste processing close to the sites of waste generation), public acceptance can be promoted.

Decision Making [back]

Eysenck, M. W. & Keane, M. T. (2005) Judgement and Decision Making. Cognitive Psychology: A Student's Handbook, pp. 475-488.

The text describes how decision making is rooted in uncertainty: one usually does not know in advance how a decision will turn out and as such uses subjective probabilities to decide on a certain path forward. A number of biases are introduced, along with techniques that have been developed over time to prevent them from having significant impact: Bayes' theorem, for example, helps prevent the disregard of base-rate information. The authors conclude that while human decision-making is not perfect, many of these flaws actually make decision making more efficient and applicable in daily life.
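
A short worked example of the base-rate point: Bayes' theorem applied to a diagnostic-style judgement, with all numbers assumed purely for illustration:

```python
# Base-rate example: a test that is fairly accurate for a condition with a 1% base rate.
# All values are assumed for illustration, not taken from the handbook.

p_condition = 0.01                       # base rate of the condition
p_pos_given_condition = 0.90             # sensitivity of the test
p_pos_given_no_condition = 0.10          # false positive rate

p_positive = (p_pos_given_condition * p_condition
              + p_pos_given_no_condition * (1 - p_condition))

# Bayes' theorem: P(condition | positive test)
p_condition_given_pos = p_pos_given_condition * p_condition / p_positive
print(round(p_condition_given_pos, 3))   # ~0.083, far below the intuitive 0.9
```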

Petts, J. (1998) Risk management and communication: decision-making and risk. Safety, Reliability and Risk Management – An Integrated Approach (2nd Edition).

This paper illustrates the importance of communication to the risk management process. The author divides risk management into seven phases, moving from hazard identification to monitoring the solutions put in place. Each of these seven phases requires its own type of communication process: during the hazard identification phase, most information will flow between experts. Quantitative Risk Analysis is praised as an objective, more neutral expression of risk, although residual risk still needs to be addressed. An example of a complex communication process, the siting of hazardous installations in the UK, is provided.

Shrivastava, P. (1995) Ecocentric Management for a Risk Society. The Academy of Management Review, Vol. 20, No. 1 (January 1995), pp.118-137.

The author reviews the shift from industrial to postindustrial society from a risk perspective. He identifies a changed understanding that production necessarily implies risk. Risk has also proven not to be merely a technical issue but to have a distinct social profile; it has become a functional equivalent of power. Shrivastava clarifies the disregard management paradigms generally have for ecology. He proposes two alternatives, industrial ecosystems and ecocentric management. The first considers harmful byproducts of operations as potentially useful inputs to other production processes, while the second focuses on better aligning an organization with its natural environment.

Slovic, P., Finucane, M., Peters, E. & MacGregor, D. G. (2002) The Affect Heuristic. Heuristics and biases: The psychology of intuitive judgment

The authors identify affect as a quality of goodness or badness that expresses itself as a feeling and is linked closely to a certain stimulus. Affect operates as an additional bias in risk decision-making, used similarly to 'bounded rationality' to explain risk decisions incompatible with utilitarian principles. A number of studies are presented that display the relationship between affect and decision-making: in general, items that invoke positive affect, such as a smile, tend to lead to decisions favorable to the communicator's desired outcome. They address some of the situations in daily life, such as advertising, where affect is commonly used.

Vaughan, E. J. (1997) Risk Management Decisions. Risk Management.

The author grounds risk management in our instincts of self-preservation. The complexity of society incited the establishment of basic risk treatment tools such as risk control and risk financing. He identifies that the quality of a risk management decision cannot be judged solely on its outcome. Through the example of insurance he identifies important aspects of any risk decision, such as maximizing utility and innate risk aversion. Decision-making happens under conditions of certainty, risk or uncertainty, differentiated by the availability of probability information. The author stresses the need for planning to ensure sufficient coverage is taken while not incurring unnecessary expense.

Risk Analysis Methodologies [back]

Alberts, C., Dorofee, A., Stevens, J. & Woody, C. (2003) Introduction to the OCTAVE Approach. Retrieved September 26th from http://www.cert.org/octave/approach_intro.pdf. Pittsburgh: Carnegie Mellon Software Engineering Institute.

This paper introduces the OCTAVE approach as an information security risk management method focused on organizational risk and strategic issues. It is asset-driven: threat profiles are generated for a number of critical assets, after which the infrastructure is evaluated for vulnerabilities and a security strategy is generated. The source information is gathered through interviews with key staff members. While it is not an ongoing process but a short-term evaluation, the authors argue that the process should be repeated over time. An alternative approach, OCTAVE-S, which makes briefer, qualitative use of probability, is also included.

CLUSIF (2004). MEHARI V3 Risk Analysis Guide. Retrieved October 3rd from http://www.clusif.asso.fr/fr/production/ouvrages/pdf/Risk_analysis_guide.pdf. Paris: Club de la Sécurité des Systèmes d'Information Français.

MEHARI is a risk assessment method developed by CLUSIF, a French information systems security organization. MEHARI is based on the premise that every risk situation is identified by an intrinsic potentiality and impact, which can then be evaluated. The methodology also claims that security measures can be taken to measurably reduce intrinsic risk, after which a residual risk level can be established. This paper describes the way MEHARI can assist in the process, by providing a knowledge base with scenarios, exposure values and risk reduction factors. It offers a measure of efficiency, on a scale from dissuasive to recuperative measures.

Conrad, J.R. (2005) Analyzing the Risks of Information Security Investments with Monte-Carlo Simulations. IEEE Workshop on the Economies of Information Security, 2005. Piscataway: Institute of Electrical and Electronics Engineers, Inc.

Conrad introduces an approach in which the input of a risk assessment is coupled to a Monte Carlo tool, allowing the uncertainty of occurrence of an event to be defined as a probability distribution instead of an expert-assessed value. This allows the inherent uncertainty of those assessments to be reflected better in the end result of a risk assessment. He provides his own critique – whether experts would be willing to provide a range instead of a single value – and argues that most experts would be relieved not to be held accountable for one option out of many possible values.
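
A minimal sketch of the general idea – sampling annual loss from assumed probability distributions rather than using point estimates; the distribution choices and parameters are illustrative, not Conrad's:

```python
# Minimal sketch: express event frequency and loss per incident as probability
# distributions and sample the resulting annual loss. The lognormal parameters
# below are assumptions made for illustration only.

import random

def sample_annual_loss():
    frequency = random.lognormvariate(0.0, 0.5)           # incidents per year (assumed)
    loss_per_incident = random.lognormvariate(10.0, 1.0)  # loss per incident (assumed)
    return frequency * loss_per_incident

samples = sorted(sample_annual_loss() for _ in range(100_000))

print("median annual loss :", round(samples[len(samples) // 2]))
print("95th percentile    :", round(samples[int(0.95 * len(samples))]))
```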

Dimitrakos, T., Ritchie, B. & Stølen, K. (2002) Model based Security Risk Analysis for Web Applications: The CORAS approach. Euroweb 2002 – The Web and the GRID: from e-science to e-business. Oxon: Council for the Central Laboratory of the Research Councils.

The authors present the CORAS risk analysis framework, developed by a number of European universities and commercial organizations. Based on the AS/NZS 4360 standard, the framework attempts to 'fill in' the different processes of this standard by using existing methods such as HAZOP, FMECA and FTA. The paper presents the integrated framework, which assists in specifying the target of evaluation and the design of threat scenarios. It applies this to a web-based commerce platform, in which the risk of privacy violation is identified. Subsequently it describes the differences between CORAS and existing methodologies such as COBIT and CRAMM.

Di Renzo, B., Hillairet, M., Picard, M., Rifaut, A., Bernard, C., Hagen, D., Maar, P. & Reinard, D. Operational Risk Management in Financial Institutions: Process Assessment in Concordance with Basel II. 5th International SPICE Conference on Process Assessment and Improvement. Lisbon: Klagenfurt University.

Basel II is introduced as new regulation that applies to financial organizations within Europe. This framework places significant demands on operational risk management, but does not offer a comprehensive methodology. The authors review ISO/IEC 15504, a standard from software development, as a flexible process assessment method outside of its regular IT applications. They propose an ISO 15504 Process Reference Model (PRM) and Process Assessment Model (PAM) for operational risk management and validate their appropriateness by applying them to both IT and credit risk management.

Keong, T.H. (n.d.) Risk Analysis Methodologies. Retrieved September 20th from http://home.pacific.net.sg/~thk/risk.html.

Keong discusses 13 separate risk analysis techniques, classified as either qualitative, tree-based or dynamic-systems related. Of the qualitative techniques listed, FMEA (Failure Mode Effect Analysis) is the best documented, used for reliability improvement in systems design. These techniques, however, do not prove up to the task of accounting for event dependencies. The article describes the tree-based methods as useful in identifying how events could link together, leading up to undesired events. They are identified as yet unsuitable for modeling human behaviour. Dynamic system analysis techniques such as the Markov model have strength in identifying system behaviour over time.

Obaidullah, M. (2002). Islamic Risk Management: Towards greater ethics and efficiency. International Journal of Islamic Financial Services, Vol. 3, No. 4 (January-March 2002). Jeddah: International Institute of Islamic Business & Finance.

In financial markets, efficiency usually wins out over ethics where the two conflict. The author argues that in Islamic financial markets, strict adherence to ethics need not significantly impact market efficiency. The basics of Islamic finance are reviewed, culminating in a comparison of some financial hedging products, such as options, within this framework. The author then proceeds to use the al-khiyar framework, the right of one of the parties to cancel a contract, to review the design of new Islam-compatible financial risk management products.

Yazar, Z. (2002). A qualitative risk analysis and management tool – CRAMM. Retrieved October 2nd from http://www.sans.org/reading_room/whitepapers/auditing/83.php?portal=daab9ac611ee1e3f2d0079f2e1d7bcff. Bethesda: SANS Institute.

The author introduces the generic risk assessment processes of identifying assets, threats and vulnerabilities. He then introduces the CRAMM tool, developed by the UK government to allow for simplified qualitative risk analysis. The process of questioning members of the organization and valuating their answers is covered, leading up to the final risk calculation. He clarifies that while CRAMM can provide a set of potential countermeasures, no detailed information is given – final implementation remains up to management. As such, he concludes, CRAMM is a very useful tool for efficient risk assessment, while still requiring significant knowledge on the part of its user.

White, D. (1995). Application of Systems Thinking to Risk Management: a review of the literature. Management Decision, Vol. 33, No. 10, 1995, pp. 35-45. Edinburgh: MCB University Press Ltd.

This paper performs a thorough literature review of risk assessment methodologies and the degree to which they review their subject from a systemic point of view, that is, taking into account emergent properties of systems that may not be present in their individual components. It concludes that most technical methodologies (FMEA, FTA and HAZOP) are reductionist in nature, while theories such as cultural theory are much more holistic. Root cause analysis is found to be holistic, but can only be applied in hindsight. Finally it proposes the failures method as a way of using systems thinking to study failure.

Applied Risk Assessment [back]

Basili, M. & Franzini, M. (2006) Understanding the Risk of an Avian Flu Pandemic: Rational Waiting or Precautionary Failure. Risk Analysis, Vol. 26, No. 3 (2006). McLean, VA: Society for Risk Analysis.

Reviewing preparations for the H5N1 bird flu, the authors state that there has been a failure in the implementation of the precautionary principle (PP). They investigate the value of stockpiling and production capacity of antiviral drugs and vaccines in light of a potential crisis, and conclude that relying on market principles to ensure sufficient production of such crucial material may prove too risky.

Intelligence and Risk [back]

Paté-Cornell, E. (2002) Fusion of Intelligence Information: A Bayesian Approach. Risk Analysis, Vol. 22, No. 3, 2002. McLean, VA: Society for Risk Analysis.

The author argues that governments face two issues in fusing separate intelligence signals into a probability assessment of a certain event: ensuring internal communications through database synchronicity, and merging signals of differing quality into useful information. He proposes the use of a probabilistic Bayesian model to calculate the probability of an event given the quality of a signal, defined by its likelihood of being a false positive or false negative. To estimate these rates, 'testing' of the sensors needs to take place, which the author identifies as a potential issue with regard to human intelligence.
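
A minimal sketch of the kind of Bayesian update described – the prior and the sensors' false positive and false negative rates are assumed values, not taken from the paper:

```python
# Fuse two intelligence signals with Bayes' rule. All rates below are assumed.

def update(prior, signal_present, p_false_positive, p_false_negative):
    """Posterior probability of the event after one sensor report."""
    p_detect = 1.0 - p_false_negative               # P(signal | event)
    if signal_present:
        numerator = p_detect * prior
        evidence = numerator + p_false_positive * (1.0 - prior)
    else:
        numerator = p_false_negative * prior
        evidence = numerator + (1.0 - p_false_positive) * (1.0 - prior)
    return numerator / evidence

p = 0.05                                            # assumed prior belief in the event
p = update(p, True, p_false_positive=0.20, p_false_negative=0.10)  # noisy source reports
p = update(p, True, p_false_positive=0.02, p_false_negative=0.30)  # reliable source reports
print(round(p, 3))
```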

Political Risk Management [back]

Belkin, A. & Schofer, E. (2005) Coup Risk, Counterbalancing and International Conflict. Security Studies, Vol. 14, No. 1 (January-March 2005), pp. 140-177.

Through quantitative study, the authors find counterbalancing, the re-ordering of military forces to obtain two countering structures, to be related to nations where the risk of a coup is high. In addition, they find that nations in such a military configuration practice more low-intensity warfare. The review of whether a regime is highly susceptible to a coup is an example of qualitative risk analysis, for which the authors use two parameters, the strength of civil society and the legitimacy of state institutions, applied to a dataset of 108 countries. After proving their hypothesis correct, they provide a qualitative analysis of Georgia in the mid-90s.

Li, B. (2005) Urban Social Change in Transitional China: A Perspective of Social Exclusion and Vulnerability. Journal of Contingencies and Crisis Management, Vol. 13, No. 2, June 2005, pp. 54-64. Oxford: Blackwell Publishing.

This paper assesses the risk level of Chinese urban society. It identifies that while economic growth has been significant, prosperity has not followed for the majority of the Chinese people. Certain groups have been 'socially excluded', initially on political grounds, which later translated into financial discrepancies. Li states that when Chinese reform started, many different social stratifications became apparent instead of merely the discrepancy between 'urban' and 'rural' residents. Social insurance systems for each of these social groups are only slowly being established. Due to the differences in living standards and opportunities, China faces a high risk of social crisis.

Bunn, D.W. & Mustafaoglu, M. M. (1978) Forecasting political risk. Management Science, Vol. 24, No. 15 (November 1978), pp. 1557-1567.

The authors argue that prediction of political action is rooted in an understanding of its social and political conditions. They define a Political Risk Event as an outcome with a negative effect on business, while a Political Risk Factor is a parameter of society that could influence the risk of a Political Risk Event occurring. A model is proposed which solicits expert opinion and then uses Bayesian probability analysis to assess the likelihood of a risk event occurring. Due to its relative ease of use, the authors feel this technique would be useful not only for strategic, but also for tactical decisions.

Fitzpatrick, M. (1983) The Definition and Assessment of Political Risk in International Business: A Review of the Literature. The Academy of Management Review, Vol. 8, No. 2 (April 1983). pp. 249-254. New York, NY: Academy of Management.

Conducting a literature review, Fitzpatrick defines three categories of what has been dubbed political risk: (1) government action, (2) occurrences of a political nature and (3) discontinuities in the business environment caused by radical change. A fourth defines it more generically as risk generated by the political environment. He shows how assessments of political risk were rooted in the political events of the 1970s. Identifying methodologies as being either quantitative or qualitative, he reviews a small number, used by, for example, the Bank of Montreal, and their output classifications. He identifies the challenge ahead of moving from purely reactive to strategic management of political risk.

Gurr, T.R. & Moore, W.H. (1997). Ethnopolitical Rebellion: A Cross-Sectional Analysis of the 1980s with Risk Assessments for the 1990s. American Journal of Political Science, Vol. 41, No. 4 (October 1997), pp. 1079-1103. Bloomington: Midwest Political Science Association.

This paper reviews ethnopolitical rebellion in the 80s from the perspective of mobilization theory and deprivation. The authors establish a model which aims to predict behaviour based on active rebellion, repression, grievances and mobilization. By using data from the Minorities at Risk project to populate this model, they were able to define the relative activity of a number of ethnic groups and forecast their activity into the future (the 90s). They conclude that while their model cannot be used as an accurate prediction, it can assist in identifying risky cases that require observation.

Sarewitz, D., Pielke, R. & Keykhah, M. (2003). Vulnerability and Risk: Some Thoughts from a Political and Policy Perspective. Risk Analysis, Vol. 23, No. 4 (2003). McLean, VA: Society for Risk Analysis

The authors investigate the distinction between vulnerability and risk. They establish a difference between event risk, the probabilistic risk that a certain event will take place, and outcome risk, the risk of a particular outcome occurring. Outcome risk can sometimes be altered while the event risk is maintained. This is shown as a potential approach to high-cost, extreme situations such as the 9/11 attacks. Such an approach would require accurate information on the probability of the event. The paper also indicates that reduction of vulnerability is more closely linked to humanitarian aspects, while outcome risk deals more with direct quantification.

Simon, J.D. (1982) Political Risk Assessment: Past Trends and Future Prospects. The Columbia Journal of World Business. Fall 1982. pp. 62-71

Writing at the beginning of the 80s, Jeffrey Simon shapes a picture of the state of the art through a literature review and a poll of large international organizations. This clearly shows that the lack of a defined framework is what was holding back the use of political risk assessment. A need for such activities is identified by showing the impact political events in the late 70s had on business. He fills this gap with an early warning system consisting of an organizational model and indicators to accurately assess micro- and macro-risk to an organization.

Terrorism and Probabilistic Risk Assessment [back]

Farley, J.D. (2003) Breaking Al Qaeda Cells: A Mathematical Analysis of Counterterrorism Operations (A Guide for Risk Assessment and Decision Making). Studies in Conflict & Terrorism, Vol. 26, pp. 399-411, 2003.

The author argues that current methods of assessing the effectiveness of a counterterrorism operation are insufficient. When certain members of a terror cell are taken out, success of the operation is usually measured by whether or not the network is divided into separate parts, leaving it unable to communicate. Farley argues that, provided the network is hierarchical, the goal should be to remove communication between the leaders and the foot soldiers who execute the attacks. He presents a mathematical model to assess the likelihood that such disruption is indeed effectuated by an operation.
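
A simplified sketch of the disruption criterion summarized above – checking whether any chain of command from a leader down to a foot soldier survives – using an invented cell structure; Farley's actual model is a probabilistic analysis over ordered sets, which this sketch does not reproduce:

```python
# Hypothetical hierarchical cell: who reports to whom (member -> superior).

hierarchy = {
    "soldier1": "middleman1", "soldier2": "middleman1",
    "soldier3": "middleman2", "middleman1": "leader",
    "middleman2": "leader",
}
leaders = {"leader"}
foot_soldiers = {"soldier1", "soldier2", "soldier3"}

def cell_disrupted(captured):
    """True if every chain of command from a foot soldier up to a leader is broken."""
    for soldier in foot_soldiers - captured:
        member = soldier
        while member in hierarchy:          # walk up the chain of command
            member = hierarchy[member]
            if member in captured:
                break                       # this chain is cut
        else:
            if member in leaders:
                return False                # an intact chain to a leader remains
    return True

print(cell_disrupted({"leader"}))                     # True: the top is removed
print(cell_disrupted({"soldier1", "soldier2"}))       # False: soldier3 still connected
print(cell_disrupted({"middleman1", "middleman2"}))   # True: all chains are cut
```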

Haas, C. (2002) The Role of Risk Analysis in Understanding Bioterrorism. Risk Analysis, Vol. 22, No. 4 (2002). McLean, VA: Society for Risk Analysis.

Haas reviews the risk of bioterrorism using Kaplan and Garrick's risk triplet. He identifies that the risk of bioterrorism is low compared to the risk of exposure to other microorganisms. Nevertheless, he quotes a 1999 medical journal article which indicates that "decontamination of large urban areas or even a building … is not indicated". This reveals that while the risk is low, the impact of such an incident could be unexpectedly high. Within this perspective, he touches on some of the dread aspects that are associated with bioterrorism and their impact on the social perception of its risk.
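
For reference, a minimal representation of the risk triplet idea – scenario, likelihood, consequence – with invented entries rather than figures from Haas:

```python
# Kaplan and Garrick's triplet asks: what can go wrong, how likely is it, and
# what are the consequences? The entries below are invented for illustration.

from collections import namedtuple

RiskTriplet = namedtuple("RiskTriplet", ["scenario", "likelihood", "consequence"])

scenarios = [
    RiskTriplet("foodborne pathogen exposure", 1e-3, "localized illness"),       # illustrative
    RiskTriplet("deliberate release in a building", 1e-6, "mass casualties"),    # illustrative
]

for s in scenarios:
    print(f"{s.scenario}: p={s.likelihood}, consequence={s.consequence}")
```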

Kunreuther, H. & Michel-Kerjan, E. (2004). Dealing with Extreme Events: New Challenges for Terrorism Risk Coverage in the U.S. Philadelphia, PA: Wharton School of the University of Pennsylvania; Center for Risk Management and Decision Processes.

Demonstrating that the 9/11 attacks were the single most costly insurance event to date, the authors argue that a number of factors indicate that terrorism insurance should not be left solely to the commercial market, and that some government involvement is necessary. They put forward a number of premises: most intelligence information useful for risk analysis is kept secret, security is often interdependent between multiple players, and government has a significant influence on the overall risk. They review some of the market's reactions to the 9/11 event, including the 2002 Terrorism Risk Insurance Act that forced insurers to offer coverage. They identify this as a partial solution and review foreign approaches to government-public interaction that offer better coverage.

Taylor, C., Krings, A. & Alves-Foss, J. (2002) Risk Analysis and Probabilistic Survivability Assessment (RAPSA): An assessment approach for power substation hardening. Proceedings of the first workshop on Scientific Aspects of Cyber Terrorism, Washington D.C., November 2002

This paper provides a new framework which combines Survivability Systems Analysis with Probabilistic Risk Assessment to generate an all-encompassing approach to assessing the cyber-security of power substations. It does so by identifying essential and non-essential services within a substation, then defining individual attack scenarios and the affected components. Likelihood of each attack scenario is assessed by experts and can be modeled using decision trees. Benefits of this approach include ease of comparison between separate mitigation strategies and the potential for self-assessment.

Woo, G. (2002) Quantitative Terrorism Risk Assessment. Risk Management Solutions Ltd.

Woo revisits the dilemma between insurance and terrorism. He argues that terrorism cannot be quantified using pure probabilistic measures as is done for natural hazards: terrorism has a distinct human element. Only a limited number of high-impact attacks are required to spread a terrorist's message. Applying a two-state Markov process, he claims large-scale terror attacks are more likely to occur shortly after tactical security measures taken in response to a previous attack are relaxed. Reviewing the initial insurer responses, such as terrorism bonds, he finds such financial products could be targeted at hedge funds selling the market short.
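
A rough sketch of a two-state process in the spirit described above – security posture alternating between heightened and relaxed, with attacks assumed more likely in the relaxed state; all probabilities are invented for illustration and are not Woo's parameters:

```python
# Two-state Markov-style simulation: 'heightened' vs 'relaxed' security posture.
# Transition and attack probabilities are assumed purely for illustration.

import random

transition_to_relaxed = {"heightened": 0.2, "relaxed": 0.9}   # P(next state is relaxed)
p_attack = {"heightened": 0.001, "relaxed": 0.01}             # attack probability per period

def simulate(periods=1000, state="heightened"):
    attacks = 0
    for _ in range(periods):
        if random.random() < p_attack[state]:
            attacks += 1
            state = "heightened"          # an attack triggers tightened security
        elif random.random() < transition_to_relaxed[state]:
            state = "relaxed"             # measures are gradually relaxed
        else:
            state = "heightened"
    return attacks

print(simulate())
```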

Wulf, W., Haimes, Y., Longstaff, T. (2003). Strategic Alternative Responses to Risk of Terrorism. Risk Analysis, Vol. 23, No. 3 (2003). McLean, VA: Society for Risk Analysis

The authors identify that besides tactical responses to terrorism, strategic responses are required to remove some of the root causes that lead terrorists to their actions. They identify a number of factors that have not yet been considered, such as the belittling impression generated by humanitarian aid being delivered in English or French, and propose a model that maps the United States as a complex system. A number of state variables are then examined that increase the vulnerability of the nation. By applying a five-step analysis process introduced by Arquilla and Ronfeldt, they generate a risk model that allows wider identification of the threat of terrorism and its sources.