Sometimes A Great Notion is the title of Ken Kesey's best, and my favorite, work, and his entry into the Great American Novel Sweepstakes. There are other ponies in the race. I'd include Barbara Kingsolver's The Poisonwood Bible, John Dos Passos' USA trilogy, and William Attaway's Blood on the Forge in my starting field, final four, top of the pops. Kesey's work might have the pole position, but I'd keep an eye on the long-shot, the relatively unheralded unsung horse with the heart to make and sustain the crowd-thrilling dramatic late move, Blood on the Forge.
Anyway....
Anyway, sometimes I get a great notion, and it's not to jump in the river and drown. For example, only yesterday, I had this great notion to review the information that FRA, and others, make available on the official close-call website.
I thought, "Wouldn't it be interesting to see now what the official published logic is for supporting the push for C3RS? Wouldn't it be good to know the data being used in assessing the viability and impact of these programs? Wouldn't it be great to compare the anticipated, and advertised outcomes, with reality?" Damn, if I weren't so naturally humble sometimes I'd knock myself out with my great notions.
In truth, great notions are a dime a dozen. That's allowing for inflation, too. In these deflationary times? Not so much.
Doing the work required to make sense of a great notion doesn't come quite that cheap or that easy.
OK, we begin. I go to the website and I look at the publications offered, and I see something that says "Whitepaper." Because I always have a dictionary with me, I know a whitepaper has British roots and originally designated "a corrected and revised version of an Order Paper of the House of Commons issued earlier the same day." The Order Paper contained the order of business for the Commons, and the orders that the parliamentary body will issue that day.
Since then, and over here, whitepaper has become the term referring to a document outlining the reasons and basis for official positions. Sounds promising, doesn't it, a whitepaper on the official website?
This particular whitepaper is entitled "Improving Rail Safety Through Understanding Close Calls," and that sounds good too. But the reading? Again not so much.
The paper does not give its date of publication. I think it was produced sometime in 2008, and I believe the author is Jane Saks, then of the Volpe Institute.
Ms. Saks introduces her whitepaper with the parable of the Concorde SST. On July 25, 2000, a Concorde SST in passenger service crashed shortly after takeoff, killing all passengers and crew and four people on the ground. The author notes that, prior to that catastrophe, the Concorde had experienced a significantly higher rate of tire failures, with pieces of tires penetrating the wings and fuselage of the aircraft. No deaths or injuries resulted from any of these previous incidents.
You might think that the July 25 accident was due to the spontaneous rupture of one or more of the tires of the aircraft upon takeoff. The whitepaper doesn't claim that's what happened, but the whitepaper also does not supply the root cause for the Concorde accident.
That particular incident was determined to have been caused when a Continental Airlines DC-10, taking off prior to the Concorde on the same runway, "shed" a metal strip. The Concorde, upon takeoff, struck the metal, causing a tire to rupture and pierce the wing fuel tank.
Concordes indeed had a dramatically higher rate of tire failure upon takeoff. All failures, however, had been reported and the reports were maintained in regulatory agency data archives. A judicial inquiry determined that the plane's fuel tanks lacked sufficient protection from shock and that officials had known about the problem for more than 20 years.
Continental Airlines, in presenting its defense, proved that the Concorde had suffered a rate of tire failure above the "average" during its 27 years of service prior to the disaster.
Close-call confidential reporting is supposed to facilitate employee reporting of incidents and accidents that would not, without the protection of the system, have been filed. In the Concorde's case all previous incidents had been reported and documented. That the aircraft operating companies, and its regulators, did not act says a lot about, and against, those companies and regulators, but it does not say anything for close-call confidential reporting.
So why is the Concorde accident the "lead" for this whitepaper? Well, it's the lead for the same reason it was the lead for any and every newspaper headline. Nothing is more dramatic than life and death, except the moments between the life and death of hundreds. And nothing like drama distracts from the data, or lack thereof; from the science, or the lack of science in claims made for methods of improving safety.
The author of the whitepaper then moves on to register significant dissatisfaction with the common usage for, and understanding of, "close call." We all know close call to indicate "an event that could have resulted in personal injury, property damage, or environmental damage, but did not."
The author of the whitepaper claims that common usage is inadequate and too narrow, arguing that events that do in fact cause injury or property damage but do not reach a certain threshold for reporting can still provide information about system safety.
Huh? Note that the author has introduced a condition, a qualifier, that has nothing to do with what distinguishes a close-call from an accident-- a certain threshold. Thresholds are manufactured, secondary criteria applied to accidents and injuries and are not determining characteristics in distinguishing a close call from an accident.
The determining characteristic that distinguishes a close call from the actual accident is the material reality of an actual accident. It cannot be a "close call"-- an "almost accident"-- if it is an actual accident or injury.
With the Concorde accident, the author introduced irrelevant and incomplete material into the analysis. Following that, we are given an illogical, oxymoronic argument for rejecting the meaning of "close call." (I'll add here that in a railroad environment there is no accident, and no injury, that is exempt from being reported to the proper railroad officers. FRA establishes thresholds for the railroads in reporting accidents and incidents. Railroads do not establish thresholds releasing employees from the responsibility for making those reports.)
We are then given the following alternate definition for "close call": an opportunity to improve safety practices based on a condition or incident with a potential for more serious consequences. Huh? Again.
What? More precisely WTF? That "definition" is so broad, so amorphous, so without determining characteristics as to be meaningless.
Everything is an opportunity to improve safety practices based on a condition or incident with potential for more serious consequences.
A train on time, at the authorized speeds, over track with a working train control system, complying with all speed limitations, is an opportunity to improve safety practices based on conditions with potential for more serious consequences. That's why we examine the processes that we employ when everything is operating as intended.
A train proceeding by a stop signal, accelerating to 47 mph, and slamming head-on into a freight train that has been given authority on the track, killing 26 people and injuring hundreds, is a "close call" for those hundreds injured but not killed. Those in the rear cars of the passenger train who are shaken up, but not injured, had a close call. They could have been killed. Those in the head car, killed when the freight train locomotives telescoped into the car body, didn't have a close call. So which is it? Close call or catastrophe? It is precisely the lack of distinction that converts illogic into nonsense and nonsense into disaster.
It gets better. Whereas just a few paragraphs earlier the author identified the use of thresholds as an obstacle to obtaining adequate information, the author now tells us: "Using this definition, a threshold must be set to decide what events qualify as close calls. This definition could be used broadly to include many cases, or narrowly to only include a few cases."
Huh? Again? WTF? Again? Clearly, I don't understand. The author wants me to understand, so we get this: "Ultimately, what events are considered close calls depend on how these events are used in the safety management process."
Get that? The characteristics, the details of the incidents don't determine how we identify, define, categorize, and respond to them. How we desire to use, respond to, categorize, or identify the events determines what they are. See the previous statement about, and rejection of, characterizing anything and everything we like as close calls.
Now the author's enshrinement of subjectivity might qualify as science if we were dealing with things, relations, and forces on a sub-atomic level. Then, we could play dice with our teeny-weeny universe, and call the sub-atomic craps game "science," more specifically quantum mechanics.
In quantum mechanics, how we observe, and what we want to observe, determines what we actually observe and what the "things" or relations are, aren't, or might be. But the last time I checked, railroads are not sub-atomic particles, and quantum probability is not an adequate principle to guide safe train separation. Safe train operations require knowing the momentum, velocity, and location of our "particles"-- trains-- throughout their entire periods of movement.
So we're not allowed to play dice. For our purposes, not only is Newton adequate and Heisenberg inapplicable, Heisenberg is a menace, and Newton is a life saver. Check back with me if ever a unified field theory is determined and proven.
We're not allowed to change the physical reality and recategorize events based on our wish for a different outcome.
I anticipate that the partisans of C3RS will argue that if we can't recategorize events according to desire, if we can't reclassify a collision as a close call, then by that same token we should not respond to a close-call as we would respond to an actual accident-- to be more precise: we should not respond to a violation of a stop signal with no resulting collision with the same energy, vehemence, and penalties that we display when such a violation does result in an actual accident.
I could argue that, of course, we don't. Actual collision brings a much stronger penalty, but that's quibbling. We really do regard the violation, with or without an accident, as a vital threat. We must regard the violation itself as a vital threat, as an emergency.
Why is that? Because when such an event occurs, we do not know if the locomotive engineer is at fault, whether this is an individual error, or whether he or she is in fit condition to operate a train. The train must be stopped immediately and the investigation must begin.
We do not know if we have a human error, or a system failure. We do not know if the cab signal display corresponded to the fixed signal displayed at the interlocking. We do not know if the train brakes worked properly. We do not know if the signal itself operated properly, or if it displayed a "false proceed."
We have to account for and eliminate every possible source of the violation in order to verify the level of system safety. We cannot determine the system safety without immediately identifying and stopping the train; without immediately identifying the locomotive engineer; without immediately interviewing the engineer; without determining if the engineer viewed the signal and attempted to stop the train; without determining if the train brakes worked properly; without checking the sight-distance, the preview the engineer has of the signal; without checking and maintaining a "watch" on the signal involved to ensure it is conveying proper signal indications.
We cannot verify the system safety if information conceals the identity of the train, the time, the date, the crew, the location, the signal.
We don't do this-- we don't require a detailed and thorough investigation-- so we can "punish" someone, so we can assess discipline. We do this to ensure the safety of all trains.
When we do assess discipline, it is because the vital process of the railroad, the safe separation of trains, has been jeopardized.
Now those who are petitioning for, advocating, or instituting C3RS reporting that protects the confidentiality of those involved in violations of these vital principles are assuming that 1) the violation is always a case of individual error and not of individual unfitness to continue in service, and 2) there are no mechanical, electronic, or signal malfunctions that account for the violation.
Those advocates propose the withholding of information critical to preventing subsequent failures.
The whitepaper has now established irrelevant examples, an illogical definition, self-contradictory criteria, and magical or reverse thinking as key elements of its official position. What's left? For one, and a thousand, the claims that are made for the benefits of a C3RS program constructed on those irrelevant examples, that illogical definition, those self-contradictory criteria, and that reverse thinking.
To assess those benefits we need.... of course, sufficient data; sufficient empirical data, not anecdotal reports of "improved feelings," "better communication," "higher levels of cooperation," but tangible, significant data.
Now FRA initiated its pilot programs in 2007-2008, so we have six or seven years of experience with the system, right? So we should have data that shows significant deviations between operations using C3RS and those not using it. Does that data exist, and if so, what does it show? That's for next time.
November 23, 2014
You need a lean horse for a long race.