I am writing Chapter 1 of my dissertation again. I am modifying the original plan of measuring the effect of a safety management system (vPSI) on perceived and actual performance, changing it to the correlation of perceived leadership style with intent or willingness to report near miss incidents. Near misses…those are interesting. An incident where, but for luck, disaster would have resulted. I’ve had a few of those myself, and I learned from them. What is interesting is that near misses represent an opportunity for improvement and learning, but organizations oftentimes cannot take advantage of these opportunities. If the event happens and nobody reports it, learning from it is limited. In industry, people are typically reluctant to report near miss incidents for numerous reasons: they are afraid they will be blamed, they think their managers don’t care, they think reporting will mean extra work for them, or they think they will be punished. In fact, many workers are blamed and punished after they report near misses. It has happened in the company I work for.
The Columbia Space Shuttle disaster is a prime example of the phenomenon. Foam insulation broke off the spacecraft on at least 30 prior missions and nothing bad happened. It became a routine variance: it always happened and nothing bad came of it; people expected it. Then the foam broke off, struck the leading edge of the left wing, and breached the thermal protection system, resulting in disaster. Nobody really recognized the risk until it was too late.
Thanks to Sarbanes-Oxley, I can no longer move changes to production myself. I must write detailed instructions for somebody else (who knows nothing about my applications) to implement changes. One of the apps I support involves databases at 7 locations; they are structured differently but have some commonalities. Long story short: for a rather significant change, the move-to-production coordinator ran my scripts on the wrong server, and parts of the scripts succeeded. He notified me the change was implemented, and I began verifying the change only to find nothing worked. THEN…I checked HIS results (he is required to attach these to the change request ticket), saw a mixture of successes and failures, and realized he had run the scripts on the wrong server. SICK FEELING. OH NO!! The blunder was actually not nearly as bad as it could have been. I quickly wrote scripts to back out the mess created by the error. I learned a lesson: always write the scripts to fail if run on the wrong server. I assumed the MTP person would be as diligent as I am about verifying the server name, but never again! This is technically not a near miss incident because harm happened. It would likely equate to a first aid incident: it took an hour of my time (start to finish) to undo the harm and accomplish what was supposed to happen, and my time is charged to the client at $100 per hour…a band-aid.
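That lesson can be sketched in a few lines. This is a minimal, hypothetical illustration (the actual change scripts here were database scripts, and the server name `prod-db-03` is made up): a guard that compares the machine's hostname to the server the change was written for, and aborts before anything runs if they don't match.

```python
import socket
import sys

# Hypothetical name of the server this change was written for.
EXPECTED_SERVER = "prod-db-03"

def assert_correct_server(expected: str) -> None:
    """Abort, before any change is applied, if this is the wrong server."""
    actual = socket.gethostname()
    if actual != expected:
        sys.exit(f"ABORT: change scripts expect server {expected!r}, "
                 f"but this machine is {actual!r}. No changes were made.")

# Passes when the expected name matches the machine we are on:
assert_correct_server(socket.gethostname())

# Fails (and would halt a real deployment) on a name mismatch:
try:
    assert_correct_server(EXPECTED_SERVER)
    print("running on the intended server")
except SystemExit as err:
    print(err)
```

The point is that the script itself refuses to proceed, so its safety no longer depends on the person running it double-checking the server name.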
Chapter 1 through Chapter 3 represent the proposal for the study. Chapter 1 is a big overview and includes such things as what the study is about, why it’s important, who it involves, what problem it addresses, how it will be done, and a summary of the theoretical framework supporting the study’s methodology. Chapter 2 is a review of the literature. Chapter 3 includes all the details of the methodology. Chapter 2 is the longest but probably the easiest to write. It’s a review of what is relevant [DAMN IT] to this topic.
Excursus: I cannot ever spell relevant right the first time. WHY IS THAT?
Health, Environmental, and Safety (HES) management systems are implemented in layers. A problem is recognized and addressed with a management plan that usually includes an IT solution. Managing HES is a very dynamic set of processes. Many variables are involved: people, place, time, scope…and what else? Season, weather, production variables, leadership tone, government regulations, current events, and even emotion affect what happens in a refinery.