Friday, September 21, 2012

“Small Miscalculations Are Magnified Very Quickly.”


A war game organized by Kenneth Pollack of the Brookings Institution's Saban Center for Middle East Policy examined how the United States and Iran might react to one another during an escalating confrontation over Iran's nuclear program, as reported by David Ignatius of the Washington Post.

Lessons from an Iranian war game

Of particular interest is what Mr. Ignatius pointed out:

"The game showed how easy it was for each side to misread the other's signals. And these players were separated by a mere corridor in a Washington think tank, rather than half a world away."

This highlights one of the greatest problems we currently face in dealing not only with Iran, but with other countries in the Middle East and Asia: only a vague grasp of how our perceptions and personal biases can distort our reading of the intentions and actions of state actors. While we usually embrace the idea of cultural awareness on a superficial level, this game highlights (in somewhat exaggerated terms) the inability of hypothetical leaders to interpret our opponents' actions correctly.
These "small miscalculations" ended a scenario in a likely war outcome, which could have been avoided had more diplomatic interactions possibly occurred. It begs to question then how much political face leaders in the US would attempt to preserve in a real world situation like this, and attempt to publicly retaliate against Iran, versus attempting to identify the problem and tackle it in diplomatic channels.  

There need to be more scenarios like this, more dry runs, more rehearsals, not only with actual government participants but with other countries as well. This scenario provided an in-depth lesson, one that could keep us from making rash and damaging decisions in a real-world crisis.

4 comments:

  1. Found this from a reddit link.

    What do you think of these papers and this blog post?

    http://globalpolicy.gmu.edu/political-instability-task-force-home/pitf-phase-v-findings-through-2004/

    http://www.krannert.purdue.edu/faculty/cason/papers/crgame-comm.pdf

    http://www.ndu.edu/CTNSP/docUploaded/DTP%2032%20Senturion.pdf

    http://www.palantir.com/2010/03/friction-in-human-computer-symbiosis-kasparov-on-chess/

  2. Also, there have been similar approaches trying to run general simulations using agent-based modeling, but I'm not sure what their current progress is. Agent-based modeling has a hard time managing a lot of detail, and people have a hard time agreeing on how much detail should be cut from a given model while still keeping it relevant.
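To make the idea concrete, here is a minimal agent-based sketch in the spirit of the war game: two agents whose only behavior is reading each other's posture, with a chance of misreading it as more aggressive than it is. Everything here (the misread probability, round count, posture scale) is an invented illustration, not a real model from any of the linked papers.

```python
import random

# Toy agent-based model of two sides misreading each other's signals.
# All parameters are illustrative assumptions, not empirical values.
random.seed(1)

class Agent:
    def __init__(self, side):
        self.side = side
        self.posture = 0  # 0 = calm; higher = more escalated

    def read_signal(self, other_posture, misread_p):
        # With probability misread_p, the agent misreads the other
        # side's posture as one step more aggressive than it really is.
        if random.random() < misread_p:
            return other_posture + 1
        return other_posture

def run(misread_p, rounds=20):
    a, b = Agent("US"), Agent("Iran")
    for _ in range(rounds):
        seen_b = a.read_signal(b.posture, misread_p)
        seen_a = b.read_signal(a.posture, misread_p)
        # Each side matches the posture it *thinks* it observed.
        a.posture = max(a.posture, seen_b)
        b.posture = max(b.posture, seen_a)
    return a.posture, b.posture

print(run(0.0))  # → (0, 0): with perfect reads, nobody escalates
print(run(0.2))  # a small misread rate ratchets both sides upward
```

Even at this cartoonish level of detail, the model shows the "small miscalculations are magnified" dynamic: any nonzero misread rate tends to ratchet both postures upward, because each misread becomes a real posture change the other side then reacts to.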

    Check em:
    http://home.comcast.net/~dshartley3/DIMEPMESIIGroup/ModelingSimulation.htm

  3. I agree (and thanks for the links) that agent-based modeling is difficult in terms of implementation and planning, but just getting low/mid echelons involved in the process of critical thinking is imperative.

    What we determined in hindsight about the 9/11 attacks, as far as the intel community goes (NPR did a great piece on this), was the inability of our analysts at various levels to use critical thinking effectively in problem solving. For example, analysts were good at receiving raw data and turning out products, but the long-range and higher-order effects that data had on the bigger picture were completely lost on a majority of them.

    I believe that scenarios and training like this stimulate them, help figure out where the fault lies, and will allow us to approach problem sets with a broader mindset. Thoughts?

    Replies
    1. Team size and total time spent working directly affect bottom- and mid-level analysts' ability to do good analysis and synthesis. Big-picture thinking is difficult when operating under long hours, or in large groups where information gets simplified and groupthink takes hold to keep everyone on the same page.

      http://lunar.lostgarden.com/Rules%20of%20Productivity.pdf

      I definitely agree that these scenarios are what's needed; good exercises shouldn't have all of the data explicitly mentioned, or sometimes even implicitly included, in their outline.

      Instead of solving for X, participants are forced to make sense of relationships between things that change over time and require exploration, not just analysis.

      Intelligence isn't just about prediction, but also minimizing surprise. It requires exploration, and exploration usually leads you into a lot of dead-ends.

      The CIA put out a good paper on intelligence analysis tradecraft a while back:

      https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/Tradecraft%20Primer-apr09.pdf

      One of the big things that sticks out is that intelligence failures often come from bad assumptions that go unchallenged, either because they are implicit and never analyzed, or because the individual cannot entertain alternative perspectives. Shifting perspectives like this requires adaptability in thinking.

      Why not have an analyst write out all of the implicit and explicit assumptions that go into their analysis, and then try to invert them to see if they still make sense? What kind of divergent-thinking software has been designed for analysts? Stop and think about the ungodly amount of computing power the NSA and CIA have, then think about how to use it to better effect.
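The inversion drill itself is almost mechanical, which is what makes it easy to practice. A hypothetical sketch, with invented example assumptions (they are not taken from the article or the tradecraft paper):

```python
# Hypothetical sketch of the assumption-inversion drill described above.
# The listed assumptions are invented examples for illustration only.
assumptions = [
    "Iran's leadership acts as a unitary rational actor",
    "a covert strike will remain deniable",
    "escalation thresholds stay fixed over time",
]

def invert(assumption):
    # Naive mechanical inversion: negate the premise and force the
    # analyst to argue through the opposite case.
    return f"Suppose it is NOT true that {assumption}. What changes?"

for a in assumptions:
    print(invert(a))
```

The point isn't the trivial string manipulation; it's that once assumptions are written down as data, they can be systematically negated, recombined, and assigned out for red-teaming instead of living unexamined in one analyst's head.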

      The other thing that sticks out is that the CIA tradecraft paper recommends 10-12 people, while I've seen other papers that put the limit where groupthink takes hold at 8-10.

      I wonder if they tested cutting the groups down into smaller brainstorming sessions, and then doing a scrum of scrums, taking the leaders of the smaller groups into another small brainstorming session after the first run.

      Finally, the running strategy for the last 60 years or so for nearly every opfor has been to bleed US forces of cash and morale until they get tired of fighting.

      All of the options mentioned by the blue team involve doing things that play into this. The blue team seemed to be ultimately reactive in this strategic context, mostly dependent on what the opfor decides to do.

      All of their solutions involve doing things that will agitate the Iranian leadership. That can work if you can go one step higher in violence than your opponent is prepared to escalate at that point in time, but they haven't defined any threshold for what the Iranians are willing to tolerate. And that willingness to move to a higher threshold will change over time. Instead, they just create a positive feedback loop.
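That feedback loop can be sketched numerically. In this toy model (all thresholds and drift rates are invented for illustration), each side answers one step above the other, and every exchange hardens both sides by raising their tolerance thresholds. With static thresholds someone eventually backs down; once thresholds rise as fast as the violence does, "one step higher" never terminates.

```python
# Toy model of the escalation dynamic: each side goes one step above the
# other, while willingness to escalate (the threshold) drifts upward.
# All numbers are illustrative assumptions, not estimates of real actors.
def escalate(blue_threshold, red_threshold, drift=0, max_steps=30):
    level = 0
    for _ in range(max_steps):
        level += 1  # Blue goes one step above Red's current level.
        if level > red_threshold:
            return "red backs down", level
        level += 1  # Red answers in kind.
        if level > blue_threshold:
            return "blue backs down", level
        # Each exchange hardens both sides: thresholds drift upward.
        red_threshold += drift
        blue_threshold += drift
    return "runaway feedback loop", level

print(escalate(5, 3, drift=0))  # → ('red backs down', 5)
print(escalate(5, 3, drift=2))  # → ('runaway feedback loop', 60)
```

Which is the blue team's problem in a nutshell: without an estimate of the opponent's threshold, and of how fast it drifts, there is no way to know whether "one step higher" ends the exchange or just feeds the loop.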
