

Post 9v: Collision at Sea-What to do? Pt 4b Commanding Officers

Introduction

The prior post on this topic used the CO’s decision to enter the Singapore Strait Traffic Separation Scheme (TSS) without additional watchstanders to articulate important, but seldom made explicit, Principles of High Reliability Risk Management and Watchstanding. This post continues the identification of such principles through a careful examination of CO Decision 2: modifying the ship’s rudder and engine control configuration (NTSB Report, p. 10) at the moment of greatest impact on ship safety. The analysis goes beyond the two accident investigation reports, seeking to identify what can be learned without mind-reading or second-guessing. Declaring what the CO should have done is useless for learning.

Learning from CO Decision 2

CO Decision 2 was to change the steering control mode while the Bridge watchteam was managing many other things: overtaking the ALNIC MC, entering the TSS at 18 knots, and assessing the risk of many other surface contacts visually and via reports from the Combat Information Center (CIC). There may have been other tasks; these are just the ones we know about from the accident reports. The Navy report noted that this “unplanned shift caused confusion in the watch team” (Sec 2.2, p. 46). The “original plan, per the navigation brief held the day prior to the accident, was to split the duties of the helmsman and a lee helmsman when the Sea and Anchor Detail was set at 0600” (NTSB Report, p. 28). Readers of the reports are on their own to understand why this decision caused confusion and who was confused about what.

CO Decision 2 didn’t cause the collision. It was the trigger that surfaced the latent errors that contributed to the confusion on the Bridge and in After Steering when the crew was trying to manage what they thought was a loss-of-steering casualty. The latent errors combined with the active errors of watchstanders to produce the collision. The collision investigations noted that the CO made Decision 2 to improve the helmsman’s focus on steering the ship. Much more important than the decision itself was its timing and effect on the watch team, which surfaces several important High Reliability (HR) Principles.

HR Risk Management Principle 5: only an emergency justifies making an “in-the-moment change” to what has been briefed. An “in-the-moment change” is one you make in the middle of operations. If not briefed (unless as a possible response to a problem), such changes introduce greater risk, both from the steps necessary to make the change and from anything new that results from the change. For the OOD and Conning Officer at the time, the situation after splitting Helm and Lee Helm ship control functions across two watchstanders could have been transparent, with minimal impact. They could still give engine orders, receive repeats of the orders, and monitor the performance of the orders. It wouldn’t matter that a different watchstander was executing each order. It was the process of getting there that added risk, because they had to find an additional watchstander, manage the process of setting the new watch, and oversee the system configuration change necessary to split the functions while conditions on the Bridge were not stable (i.e., overtaking the ALNIC MC and entering the TSS at 18 knots). It is no wonder that post-collision interviews revealed that no one on the Bridge knew what happened at the Ship Control Console when the steering control mode was changed. Even if it were routine for an officer to monitor this shift (we don’t know from the reports), it would have been very hard for someone to do so given everything else they were managing. What was to be gained from the risk introduced by CO Decision 2? At best, an improvement in Helmsman focus (statement by the CO, NTSB Report, 2.2, p. 25).

HR Risk Management Principle 6: only give the watchteam tasks they have the experience to handle. This means that the same order presents different levels of risk to different watch teams depending on their experience. This is a complex evaluation that depends on many factors. While we don’t know the thought process behind the steering control change ordered by the CO, the fact that he made the decision suggests that either he thought the Bridge watchteam could execute the order or he was desperate because watchstanders were overwhelmed. It doesn’t matter. We do know, from the way things turned out, that the watchteam led by the least experienced OOD on the ship did not manage the order well. HR Risk Management Principle 6 is a statement that a CO can’t judge whether an order presents a high or low risk to a watchteam without considering their experience level.

CO Decision 2, changing steering control mode, also illustrates HR Watchstanding Principle C: make significant changes to the plan or configuration of control systems only when conditions are stable. Very few important changes to ship control configuration are as simple as pushing a button. Any change that involves a new equipment lineup, control system changes, moving control stations, or personnel assuming new watches has a period of instability in the middle of the change. This period of instability is of variable duration and not unique to steering control. It is built into any significant control and watchstanding change, which is why you shouldn’t make those changes at the same time many other ship control variables are changing. As I noted in previous posts, the OOD and Conning Officer had so much to monitor that they would have struggled to monitor the change in steering control mode even if that were normal practice on the JSM (we don’t know if it was).
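Principles 5, 6, and C can be read together as a guard condition on in-the-moment changes. Here is a minimal sketch in Python; the state fields and their values are my illustrative assumptions, not anything specified in the reports or in Navy doctrine:

```python
from dataclasses import dataclass

@dataclass
class BridgeState:
    """Hypothetical snapshot of conditions relevant to a configuration change."""
    is_emergency: bool          # e.g., fire, flooding, an actual loss of steering
    in_traffic_scheme: bool     # e.g., transiting a TSS
    maneuvering: bool           # e.g., overtaking another vessel
    change_was_briefed: bool    # the change appeared in the navigation brief
    team_experienced_with_change: bool  # HR Risk Management Principle 6

def change_is_justified(state: BridgeState) -> bool:
    """HR Principles 5 and C as a guard: make an in-the-moment change to a
    control-system configuration only in an emergency, or when conditions
    are stable, the change was briefed, and the team can handle it."""
    if state.is_emergency:
        return True  # Principle 5: only an emergency justifies the change
    stable = not (state.in_traffic_scheme or state.maneuvering)
    return stable and state.change_was_briefed and state.team_experienced_with_change

# The conditions on the JSM Bridge at the time of CO Decision 2, as described
# in the accident reports:
jsm = BridgeState(is_emergency=False, in_traffic_scheme=True, maneuvering=True,
                  change_was_briefed=False, team_experienced_with_change=False)
print(change_is_justified(jsm))  # False: every guard condition fails
```

Under the conditions described in the reports, every guard condition fails at once, which is the coded version of why the timing of Decision 2 mattered more than its content.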

One of the consequences triggered by CO Decision 2 was the perceived loss of steering. Contrary to the CO’s standing orders, the OOD did not slow to bare steerageway (i.e., just enough speed that the rudders still affect the ship’s heading, typically 3-4 knots), but rather to 10 knots. Her rationale for doing so is beside the point. Since the ship was traveling at 18 knots just before this, slowing to bare steerageway would have required the Conning Officer to order an engine reversal to reduce speed. Otherwise, the ship would have coasted at greater than bare steerageway for some time. This illustrates HR Watchstanding Principle B: some casualty immediate actions are more important than others. That is, some actions in the list of things to do during a casualty are so important that none of the other actions will matter unless you do those higher-priority actions first. To paraphrase George Orwell, all casualty procedure immediate actions are important, but some are more important than others. Thus it is with loss of steering. The single most important immediate action is to slow down, so that you close on danger more slowly while you lack steering control. I am not faulting the most junior OOD for not knowing this. She was too junior to know better.
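A rough coast-down estimate shows why the engine reversal matters. Assume, purely for illustration, that with the engines stopped the ship’s speed decays exponentially under hull drag (a common first-order approximation); the time constant below is my assumption, not a figure from either report:

```python
import math

def coast_time_s(v0_kts: float, v_target_kts: float, tau_s: float) -> float:
    """Seconds to coast from v0 to v_target when speed decays as
    v(t) = v0 * exp(-t / tau) with the engines no longer driving the ship."""
    return tau_s * math.log(v0_kts / v_target_kts)

def coast_distance_nm(v0_kts: float, v_target_kts: float, tau_s: float) -> float:
    """Distance covered during that coast: the integral of v(t), which
    works out to tau * (v0 - v_target), converted from knot-seconds to nm."""
    return tau_s * (v0_kts - v_target_kts) / 3600.0

TAU = 300.0  # assumed time constant (s) for a destroyer-sized hull; illustrative only

print(f"18 -> 10 kts:  {coast_time_s(18, 10, TAU) / 60:.1f} min, "
      f"{coast_distance_nm(18, 10, TAU):.2f} nm")   # ~2.9 min, ~0.67 nm
print(f"18 -> 3.5 kts: {coast_time_s(18, 3.5, TAU) / 60:.1f} min, "
      f"{coast_distance_nm(18, 3.5, TAU):.2f} nm")  # ~8.2 min, ~1.21 nm
```

Even under these assumed numbers, a coasting ship would take several minutes and cover more than a nautical mile of a crowded strait before reaching bare steerageway, which is why backing the engines is the immediate action that makes all the others matter.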

Conclusion

“… the ability to deal with a crisis situation is largely dependent on the structures that have been developed before chaos arrives. The event can in some ways be considered as an abrupt and brutal audit: at a moment’s notice, everything that was left unprepared becomes a complex problem, and every weakness comes rushing to the forefront.” (Lagadec, 1993, p. 54)

* Lagadec, P. (1993). Preventing Chaos in a Crisis: Strategies for Prevention, Control, and Damage Limitation. McGraw Hill Europe.

Without mind-reading or suggesting what the CO of the JSM should have done, I distilled important principles of High Reliability from the investigation reports. These principles are seldom articulated because post-event reports focus on establishing causality and accountability. My purpose was to identify what to do differently that has general application to both risk management and watchstanding:

Principles of High Reliability (HR) Risk Management (in priority order)

  • Principle 1: Don’t accept more risk unless mission failure is at stake. Corollary: any change to the briefed plan adds risk.

  • Principle 2: Empower people to question your risk decisions. Your default response should be to accept their recommendations when they are more conservative or at least ask them to explain why they see things differently.

  • Principle 3: Don’t force subordinates to accept more risk without additional controls. The CO’s willingness to personally supervise doesn’t count.

  • Principle 4: COs exercise their responsibility for safety best when not in the operations “loop.” Only an emergency justifies inserting themselves in operations, and then only to establish safety, because they have no one to back them up.

  • Principle 5: Making an “in-the-moment change” to what has been briefed is only justified in an emergency.

  • Principle 6: Only give the watchteam orders they have the experience to handle. Corollary: evaluate watchteam performance constantly and don’t overload people when it looks like they are already doing all they can handle.

Principles of High Reliability (HR) Watchstanding (in priority order)

  • Principle A: Execute watch relief only when conditions are stable, and will stay stable for as long as you need them to be.

  • Principle B: Some casualty immediate actions are more important than others. If you don’t get them right, the others don’t matter.

  • Principle C: Make significant changes to the plan or configuration of control systems only when stable. Watch relief always involves disruption and risk.

For Commanding Officers

As the CO, if you find yourself willing to accept a risk that others are not, break your thinking down for them and yourself. Make your assumptions transparent and consider that they might not be valid. Most importantly, you don’t really manage the risk, only accept or reject it. Your personnel manage the risk.

When a CO decides to accept more risk than his leadership team recommends, is there a graceful or gradual fallback position for adding more control? CO Decision 1 was all or nothing. Once the Bridge Watch Team became overburdened, the CO made Decision 2, interrupted the watch team, and added still more risk. A different approach would have been to station additional watchstanders or replace inexperienced watchstanders with more experienced peers. Another option would have been to delay entry into the TSS until after setting the Sea and Anchor Detail. Being late to meet the harbor pilot is not a mission failure.

For High Reliability Organizing, it is important to lead with humility and admit that you might be uncertain about how you interpret the situation. This is not highly valued in many organizations, but don’t let that be your organization. To help overcome the bias toward assuming everyone shares YOUR perception of the situation, mission, plan, and risks, you can use a protocol for conveying Commander’s Intent with greater transparency, based on the acronym STICC: Situation, Task, Intent, Concern, and Calibrate (Klein, 2004, pp. 201–207). A minimal template sketch follows the list.

1. Situation: here’s what I think we face.

2. Task: here’s what I think we should do.

3. Intent: here’s why I think we should do that.

4. Concern: here’s what we should watch carefully because, if things deviate from what we expect, we’re in something new entirely.

5. Calibrate: now talk to me.

* Klein, G. (2004). The power of intuition: How to use your gut feelings to make better decisions at work. Currency. Emphasis added.
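For readers who prefer a concrete template, STICC reduces to five fill-in fields. A minimal sketch in Python; the field names and example text are my illustrations, drawn from the options discussed in this post, not from Klein:

```python
from dataclasses import dataclass

@dataclass
class STICCBrief:
    """Klein's STICC protocol as a fill-in template for conveying intent."""
    situation: str   # here's what I think we face
    task: str        # here's what I think we should do
    intent: str      # here's why I think we should do that
    concern: str     # here's what to watch carefully
    calibrate: str   # now talk to me

# Hypothetical example, using an option discussed earlier in this post:
brief = STICCBrief(
    situation="I think we face heavy crossing traffic at the TSS entrance.",
    task="I think we should delay entry until the Sea and Anchor Detail is set.",
    intent="A full watch team gives us margin if anything goes wrong.",
    concern="If contacts close faster than expected, we are in something new.",
    calibrate="What do you see that I don't?",
)
```

Writing the brief down in this form makes the hedged “I think” framing, and the explicit invitation to calibrate, hard to skip.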

Note the emphasis on “I think.” When a leader uses words like that, it is an acknowledgment of uncertainty that can create an opening for revising the assessment. While admitting uncertainty produces some anxiety, that admission can build trust among subordinates (Weick, 2011).

* Weick, K. E. (2011). Organizing for transient reliability: The production of dynamic non-events. Journal of Contingencies and Crisis Management, 19(1), 21–27.

Don’t try to be the smartest person in the room. You might be most days, but surely not every day. Have some skepticism that your understanding of a situation is the only one possible. Better understandings might exist. Be open to having your decisions questioned, but do insist that people doing the questioning provide a rationale for their position based on High-Reliability principles. The hardest part of responding to questions about your risk decisions is being serious about asking yourself “What do they see that I don’t?” and “In what way is their proposed course of action superior to mine?” What if you’re wrong? Does your plan enable graceful recovery? If you have an off day, who will rescue you?
