International Civil Aviation Organization

Approved by the Secretary General and published under his authority

Line Operations Safety Audit (LOSA)

First Edition — 2002

Doc 9803
AN/761

AMENDMENTS

The issue of amendments is announced regularly in the ICAO Journal and in the monthly Supplement to the Catalogue of ICAO Publications and Audio-visual Training Aids, which holders of this publication should consult. The space below is provided to keep a record of such amendments.

RECORD OF AMENDMENTS AND CORRIGENDA

AMENDMENTS
No. / Date applicable / Date entered / Entered by

CORRIGENDA
No. / Date of issue / Date entered / Entered by

TABLE OF CONTENTS

Foreword
Acronyms and Abbreviations
Introduction
Chapter 1. Basic error management concepts
1.1 Introduction

FOREWORD

The safety of civil aviation is the major objective of the International Civil Aviation Organization (ICAO). Considerable progress has been made in increasing safety, but additional improvements are needed and can be achieved. …

ACRONYMS AND ABBREVIATIONS

ADS      Automatic Dependent Surveillance
ATC      Air Traffic Control
CFIT     Controlled Flight Into Terrain
CNS/ATM  Communications, Navigation and Surveillance/Air Traffic Management
CPDLC    Controller-Pilot Data Link Communications
CRM      Crew Resource Management
DFDR     Digital Flight Data Recorder
ETOPS    Extended Range Operations by Twin-engined Aeroplanes
FAA      Federal Aviation Administration
FDA      Flight Data Analysis
FMS      Flight Management System
FOQA     Flight Operations Quality Assurance
ICAO     International Civil Aviation Organization
LOSA     Line Operations Safety Audit
MCP      Mode Control Panel
QAR      Quick Access Recorder
RTO      Rejected Take-Off
SCP      Safety Change Process
SOPs     Standard Operating Procedures
TEM      Threat and Error Management
UTTEM    University of Texas Threat and Error Management
INTRODUCTION

1. This manual describes a programme for the management of human error in aviation operations known as Line Operations Safety Audit (LOSA). LOSA is proposed as a critical organizational strategy aimed at developing countermeasures to operational errors. It is an organizational tool used to identify threats to aviation safety, minimize the risks such threats may generate and implement measures to manage human error in operational contexts. LOSA enables operators to assess their level of resilience to systemic …

… following reasons. First, the forms presented in Appendix A are for illustration purposes exclusively, since they are periodically amended on the basis of experience gained and feedback obtained from continuing audits. Second, formal training in the methodology, in the use of LOSA tools and, most important, in the handling of the highly sensitive data collected by the audits is absolutely essential. Third, the proper structuring of the data obtained from the audits is of paramount importance.

9. Therefore, until extensive airline experience is accumulated, it is highly desirable that LOSA training be coordinated through ICAO or the founding partners of the LOSA project. As the methodology evolves and reaches full maturity and broader industry partnerships are developed, LOSA will be available without restrictions to the international civil aviation community.

10. This manual is designed as follows:

Chapter 1 includes an overview of safety and of human error and its management in aviation operations. It provides the necessary background information to understand the rationale for LOSA.

Chapter 2 discusses the LOSA methodology and provides a guide to the implementation of LOSA within an airline. It also introduces a model of crew error management and proposes the error classification utilized by LOSA, which is essentially operational and practical.
Chapter 3 discusses the safety change process that should take place following the implementation of LOSA.

Chapter 4 introduces the example of one operator's experience in starting a LOSA.

Appendix A provides examples of the various forms utilized by LOSA.

Appendix B provides an example of an introductory letter by an airline to its flight crews.

Appendix C provides a list of recommended reading and reference material.

11. This manual is a companion document to the Human Factors Training Manual (Doc 9683). The cooperation of the following organizations in the production of this manual is acknowledged: The University of Texas at Austin Human Factors Research Project, Continental Airlines, US Airways and ALPA, International. Special recognition is given to Professor Robert L. Helmreich, James Klinect and John Wilhelm of The University of Texas at Austin Human Factors Research Project; Captains Bruce Tesmer and Donald Gunther of Continental Airlines; Captains Ron Thomas and Corkey Romeo of US Airways; and Captain Robert L. Sumwalt III of US Airways and of ALPA, International.

Chapter 1
BASIC ERROR MANAGEMENT CONCEPTS

1.1 INTRODUCTION

1.1.1 Historically, the way the aviation industry has investigated the impact of human performance on aviation safety has been through the retrospective analyses of those actions by operational personnel which led to rare and …

(Figure: Safety / Production)

… decisions to achieve theoretical safety demands. All production systems — and aviation is no exception — generate a migration of behaviours: due to the need for economy and efficiency, people are forced to operate at the limits of the system's safety space. Human decision making in operational contexts lies at the intersection of production and safety and is therefore a compromise. In fact, it might be argued that the trademark of experts is not years of experience and exposure to aviation operations, but rather
how effectively they have mastered the necessary skills to manage the compromise between production and safety. Operational errors are not inherent in a person, although this is what conventional safety knowledge would have the aviation industry believe. Operational errors occur as a result of mismanaging or incorrectly assessing task and/or situational factors in a specific context and thus cause a failed compromise between production and safety goals.

1.1.3 The compromise between production and safety is a complex and delicate balance. Humans are generally very effective in applying the right mechanisms to successfully achieve this balance, hence the extraordinary safety record of aviation. Humans do, however, occasionally mismanage or incorrectly assess task and/or situational factors and fail in balancing the compromise, thus contributing to safety breakdowns. Successful compromises far outnumber failed ones; therefore, in order to understand human performance in context, the industry needs to systematically capture the mechanisms underlying successful compromises when operating at the limits of the system, rather than those that failed. It is suggested that understanding the human contribution to successes and failures in aviation can be better achieved by monitoring normal operations, rather than accidents and incidents. The Line Operations Safety Audit (LOSA) is the vehicle endorsed by ICAO to monitor normal operations.

1.2 BACKGROUND

Reactive strategies

Accident investigation

1.2.1 The tool most often used in aviation to document and understand human performance and define remedial strategies is the investigation of accidents. However, in terms of human performance, accidents yield data that are mostly about actions and decisions that failed to achieve the successful compromise between production and safety discussed earlier in this chapter.

1.2.2 There are limitations to the lessons learned
from accidents that might be applied to remedial strategies vis-à-vis human performance. For example, it might be possible to identify generic accident-inducing scenarios such as Controlled Flight Into Terrain (CFIT), Rejected Take-Off (RTO), runway incursions and approach-and-landing accidents. Also, it might be possible to identify the type and frequency of external manifestations of errors in these generic accident-inducing scenarios or discover specific training deficiencies that are particularly related to identified errors. This, however, provides only a tip-of-the-iceberg perspective. Accident investigation, by definition, concentrates on failures, and in following the rationale advocated by LOSA, it is necessary to better understand the success stories to see if they can be incorporated as part of remedial strategies.

1.2.3 This is not to say that there is no clear role for accident investigation within the safety process. Accident investigation remains the vehicle to uncover unanticipated failures in technology or bizarre events, rare as they may be. Accident investigation also provides a framework: if only normal operations were monitored, defining unsafe behaviours would be a task without a frame of reference. Therefore, properly focused accident investigation can reveal how specific behaviours can combine with specific circumstances to generate unstable and likely catastrophic scenarios. This requires a contemporary approach to the investigation: should accident investigation be restricted to the retrospective analyses discussed earlier, its contribution in terms of human error would be to increase existing industry databases, but its usefulness in regard to safety would be dubious. In addition, the information could possibly provide the foundations for legal action and the allocation of blame and punishment.

Combined reactive/proactive strategies

Incident investigation
1.2.4 A tool that the aviation industry has increasingly used to obtain information on operational human performance is incident reporting. Incidents tell a more complete story about system safety than accidents do because they signal weaknesses within the overall system before the system breaks down. In addition, it is accepted that incidents are precursors of accidents and that a number of incidents of one kind take place before an accident of the same kind eventually occurs. The basis for this can be traced back almost 30 years to research on accidents from different industries, and there is ample practical evidence that supports this research. There are, nevertheless, limitations on the value of the information on operational human performance obtained from incident reporting.

1.2.5 First, reports of incidents are submitted in the jargon of aviation and, therefore, capture only the external manifestations of errors (for example, "misunderstood a frequency", "busted an altitude", and "misinterpreted a clearance"). Furthermore, incidents are reported by the individuals involved, and because of biases, the reported processes or mechanisms underlying errors may or may not reflect reality. This means that incident-reporting systems take human error at face value, and, therefore, analysts are left with two tasks. First, they must examine the reported processes or mechanisms leading up to the errors and establish whether such processes or mechanisms did indeed underlie the manifested errors. Then, based on this relatively weak basis, they must evaluate whether the error management techniques reportedly used by operational personnel did indeed prevent the escalation of errors into a system breakdown.

1.2.6 Second, and most important, incident reporting is vulnerable to what has been called "normalization of deviance". Over time, operational
personnel develop informal and spontaneous group practices and shortcuts to circumvent deficiencies in equipment design, clumsy procedures or policies that are incompatible with the realities of daily operations, all of which complicate operational tasks. These informal practices are the product of the collective know-how and hands-on expertise of a group, and they eventually become normal practices. This does not, however, negate the fact that they are deviations from procedures that are established and sanctioned by the organization, hence the term "normalization of deviance". In most cases normalized deviance is effective, at least temporarily. However, it runs counter to the practices upon which system operation is predicated. In this sense, like any shortcut to standard procedures, normalized deviance carries the potential for unanticipated "downsides" that might unexpectedly trigger unsafe situations. However, since they are "normal", it stands to reason that neither these practices nor their downsides will be recorded in incident reports.

1.2.7 Normalized deviance is further compounded by the fact that even the most willing reporters may not be able to fully appreciate what are indeed reportable events. If operational personnel are continuously exposed to substandard managerial practices, poor working conditions and/or flawed equipment, how could they recognize such factors as reportable problems?

1.2.8 Thus, incident reporting cannot completely reveal the human contribution to successes or failures in aviation and how remedial strategies can be improved to enhance human performance. Incident reporting systems are certainly better than accident investigations in understanding system performance, but the real challenge lies in taking the next step — understanding the processes underlying human error rather than taking errors at face value. It is essential to move beyond the visible manifestations of error when
designing remedial strategies. If the aviation industry is to be successful in modifying system and individual performance, errors must be considered as symptoms that suggest where to look further. In order to understand the mechanisms underlying errors in operational environments, flaws in system performance captured through incident reporting should be considered as symptoms of mismatches at deeper layers of the system. These mismatches might be deficiencies in training systems, flawed person/technology interfaces, poorly designed procedures, corporate pressures, poor safety culture, etc. The value of the data generated by incident reporting systems lies in the early warning about areas of concern, but such data do not capture the concerns themselves.

Training

1.2.9 The observation of training behaviours (during flight crew simulator training, for example) is another tool that is highly valued by the aviation industry to understand operational human performance. However, the "production" component of operational decision making does not exist under training conditions. While operational behaviours during line operations are a compromise between production and safety objectives, training behaviours are absolutely biased towards safety. In simpler terms, the compromise between production and safety is not a factor in decision making during training (see Figure 1-2). Training behaviours are "by the book".

1.2.10 Therefore, behaviours under monitored conditions, such as during training or line checks, may provide an approximation to the way operational personnel behave when unmonitored. These observations may contribute to flesh out major operational questions such as significant procedural problems. However, it would be incorrect and perhaps risky to assume that observing personnel during training would provide the key to understanding human error and decision making in unmonitored operational contexts.
Surveys

1.2.11 Surveys completed by operational personnel can also provide important diagnostic information about daily operations and, therefore, human error. Surveys provide an inexpensive mechanism to obtain significant information regarding many aspects of the organization, including the perceptions and opinions of operational personnel; the relevance of training to line operations; the level of teamwork and cooperation among various employee groups; problem areas or bottlenecks in daily operations; and eventual areas of dissatisfaction. Surveys can also probe the safety culture; for example, do personnel know the proper channels for reporting safety concerns and are they confident that the organization will act on expressed concerns? Finally, surveys can identify areas of dissent or confusion, for example, diversity in beliefs among particular groups from the same organization regarding the appropriate use of procedures or tools. On the minus side, surveys largely reflect perceptions. Surveys can be likened to incident reporting and are therefore subject to the shortcomings inherent to reporting systems in terms of understanding operational human performance and error.

Flight data recording

1.2.12 Digital Flight Data Recorder (DFDR) and Quick Access Recorder (QAR) information from normal flights is also a valuable diagnostic tool. There are, however, some limitations about the data acquired through these systems. DFDR/QAR readouts provide information on the frequency of exceedences and the locations where they occur, but the readouts do not provide information on the human behaviours that were precursors of the events. While DFDR/QAR data track potential systemic problems, pilot reports are still necessary to provide the context within which the problems can be fully diagnosed.

1.2.13 Nevertheless, DFDR/QAR data hold high cost/efficiency ratio potential.
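The exceedance logic behind DFDR/QAR analysis described in 1.2.12 can be sketched in a few lines of code. This is an illustrative sketch only: the parameter, the limit and the figures are invented for the example and are not drawn from any actual flight data analysis programme.

```python
# Illustrative sketch of DFDR/QAR exceedance detection: scan recorded
# samples of one flight parameter and report where a limit was exceeded.
# The parameter name and limit below are assumptions for the example.

def find_exceedances(samples, limit):
    """Return (sample_index, value) pairs where a value exceeds the limit."""
    return [(i, v) for i, v in enumerate(samples) if v > limit]

# Assumed example: descent-rate samples (ft/min) on final approach,
# checked against a notional 1 000 ft/min stabilized-approach limit.
descent_rate = [650, 700, 1150, 900, 1300, 800]
print(find_exceedances(descent_rate, 1000))  # -> [(2, 1150), (4, 1300)]
```

As the paragraph notes, such a readout shows how often and where exceedences occur, but says nothing about the crew behaviours that preceded them; that context must come from pilot reports or observation.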
Although probably under-utilized because of cost considerations as well as cultural and legal reasons, DFDR/QAR data can assist in identifying operational contexts within which migration of behaviours towards the limits of the system takes place.

Proactive strategies

Normal line operations monitoring

1.2.14 The approach proposed in this manual to identify the successful human performance mechanisms that contribute to aviation safety and, therefore, to the design of countermeasures against human error focuses on the monitoring of normal line operations.

Figure 1-2. Training Behaviours — Accomplishing training goals (axes: Safety / Production)

1.2.15 Any typical routine flight — a normal process — involves inevitable, yet mostly inconsequential errors (selecting wrong frequencies, dialling wrong altitudes, acknowledging incorrect read-backs, mishandling switches and levers, etc.). Some errors are due to flaws in human performance while others are fostered by systemic shortcomings; most are a combination of both. The majority of these errors have no negative consequences because operational personnel employ successful coping strategies and system defences act as a containment net. In order to design remedial strategies, the aviation industry must learn about these successful strategies and defences, rather than continue to focus on failures, as it has historically done.

1.2.16 A medical analogy may be helpful in illustrating the rationale behind LOSA. Human error could be compared to a fever: an indication of an illness but not its cause. It marks the beginning rather than the end of the diagnostic process. Periodic monitoring of routine flights is therefore like an annual physical: proactively checking health status in an attempt to avoid getting sick. Periodic monitoring of routine flights indirectly involves measurement of all aspects of the system, allowing
identification of areas of strength and areas of potential risk. On the other hand, incident investigation is like going to the doctor to fix symptoms of problems; possibly serious, possibly not. For example, a broken bone sends a person to the doctor; the doctor sets the bone but may not consider the root cause(s) — weak bones, poor diet, high-risk lifestyle, etc. Therefore, setting the bone is no guarantee that the person will not turn up again the following month with another symptom of the same root cause. Lastly, accident investigation is like a post-mortem: the examination made after death to determine its cause. The autopsy reveals the nature of a particular pathology but does not provide an indication of the prevalence of the precipitating circumstances. Unfortunately, many accident investigations also look for a primary cause, most often "pilot error", and fail to examine organizational and system factors that set the stage for the breakdown. Accident investigations are autopsies of the system, conducted after the point of no return of the system's health has been passed.

1.2.17 There is emerging consensus within the aviation industry about the need to adopt a positive stance and anticipate, rather than regret, the negative consequences of human error in system safety. This is a sensible objective. The way to achieve it is by pursuing innovative approaches rather than updating or optimizing methods from the past. After more than 50 years of investigating failures and monitoring accident statistics, the relentless prevalence of human error in aviation safety would seem to indicate a somewhat misplaced emphasis in regard to safety, human performance and human error, unless it is believed that the human condition is beyond hope.

1.3 A CONTEMPORARY APPROACH TO OPERATIONAL HUMAN PERFORMANCE AND ERROR

1.3.1 The implementation of normal operations monitoring requires an adjustment of prevailing views of human error.
In the past, safety analyses in aviation have viewed human error as an undesirable and wrongful manifestation of human behaviour. More recently, a considerable amount of operationally oriented research, based on cognitive psychology, has provided a very different perspective on operational errors. This research has proven, in practical terms, a fundamental concept of cognitive psychology: error is a normal component of human behaviour. Regardless of the quantity and quality of regulations the industry might promulgate, the technology it might design, or the training people might receive, error will continue to be a factor in operational environments because it simply is the downside of human cognition. Error is the inevitable downside of human intelligence; it is the price human beings pay for being able to "think on our feet". Practically speaking, making errors is a conservation mechanism afforded by human cognition to allow humans the flexibility to operate under demanding conditions for prolonged periods without draining their mental "batteries".

1.3.2 There is nothing inherently wrong or troublesome with error itself as a manifestation of human behaviour. The trouble with error in aviation is the fact that negative consequences may be generated in operational contexts. This is a fundamental point in aviation: if the negative consequences of an error are caught before they produce damage, then the error is inconsequential. In operational contexts, errors that are caught in time do not produce negative consequences and therefore, for practical purposes, do not exist. Countermeasures to error, including training interventions, should not be restricted to avoiding errors, but rather to making them visible and trapping them before they produce negative consequences. This is the essence of error management: human error is unavoidable but manageable.

1.3.3 Error management is at the heart of LOSA and reflects the previous argument.
Under LOSA, flaws in human performance and the ubiquity of error are taken for granted, and rather than attempting to improve human performance, the objective becomes to improve the context within which humans perform. LOSA ultimately aims — through changes in design, certification, training, procedures, management …

… developing analytic methods to integrate multiple and diverse data sources. However, most importantly, the real challenge for the large-scale implementation of LOSA will be overcoming the obstacles, presented by a blame-oriented industry, that will demand continued effort over time before normal operations monitoring is fully accepted by the operational personnel, whose support is essential.

1.5.2 Despite the challenges and barriers, the aviation system has more to gain by moving forward to system-wide implementation of LOSA than by denying progress because that is not the way business has been done in the past or by decrying the difficulties involved. The following chapters present an overview of how to tackle these challenges and barriers.

Chapter 2
IMPLEMENTING LOSA

2.1 HISTORY OF LOSA

2.1.1 In 1991, The University of Texas at Austin Human Factors Research Project, with funding from the FAA (Human Factors Division, AAR-100), developed LOSA to monitor normal line operations. In its early form, LOSA mostly focused on CRM performance. The reason for this was that researchers and airlines alike wanted to know more about the actual practice of CRM rather than just …

* Guidance on Threat and Error Management (TEM) training can be found in the Human Factors Training Manual (Doc 9683).

Figure 2-1. The Threat and Error Management Model (elements: Threats; Threat management; Crew error; Crew error responses; Undesired Aircraft State; Crew Undesired Aircraft State responses; Inconsequential)
… the flight and pose a safety risk to the flight at some level. Threats may be expected or anticipated and, therefore, the crew may brief in advance. Threats may also be unexpected: as they occur suddenly and without any warning, there is no possibility for the crew to brief in advance. External threats may be relatively minor or major. Observers should record all external threats that are on the code sheet or any others that may be considered significant.

2.2.5 Errors originated by non-cockpit personnel are considered external threats. For example, if the cockpit crew detects a fuel loading error made by ground staff, it would be entered as an external threat, not an error. The crew was not the source of the error (although they must manage it, as they would any other external threat). Other examples of non-cockpit crew errors that would be entered as external threats are errors in Air Traffic Control (ATC) clearances discovered by the crew, dispatch paperwork errors and discrepancies in passenger boarding counts by cabin attendants.

Errors

2.2.6 Cockpit crew error is defined as an action or inaction by the crew that leads to deviations from organizational or flight crew intentions or expectations. Errors in the operational context tend to reduce the margin of safety and increase the probability of accidents or incidents. Errors may be defined in terms of non-compliance with regulations, Standard Operating Procedures (SOPs) and policies, or unexpected deviation from crew, company or ATC expectations. Errors observed may be minor (selecting the wrong altitude into the mode control panel (MCP), but correcting it quickly) or major (forgetting to do an essential checklist). Observers should record all cockpit crew errors that they detect.

2.2.7 Operators set up SOPs and checklists as the standards for the proper and safe way to conduct flights. Instructors observing deviations from SOPs or checklists
would define this as an error, and so does LOSA. If a crew member does not know how to execute a procedure properly or cannot control the aircraft in the expected manner, an instructor would also consider this an error, and so does LOSA. Deviations from expectations of ATC are also classified as crew errors; these would, for example, include altitude deviations or significant deviations around thunderstorms without ATC notification. There are rules in SOPs and/or operator manuals that, for example, specify how much deviation crews may make around thunderstorms before notifying ATC, and observers must be familiar with and apply these company rules when conducting observations. Operators also have policies that are less prescriptive than procedures, where preferred modes of operation are described. Pilots may violate policies without violating SOPs or increasing risk, and under LOSA, this is not defined as an error. However, if the observer feels that violating a policy unnecessarily increases risk to flight safety, it would be defined as an error. There are also many decision points on a normal flight that are not defined by SOPs or procedures. However, any time the crew makes a decision that unnecessarily increases risk to flight safety, it is defined as a crew error.

2.2.8 Crew errors may not have any consequences, but they still need to be recorded by the observer. For example, a violation of the sterile cockpit rule may not have any negative consequence to the flight, but it is a violation of regulations and thus must be entered as an error. In addition, errors may be intentional or unintentional. As implied in the definition, when a crew action is appropriate or prescribed in SOPs, the lack of action may also be defined as an error.

2.2.9 Is poor crew behaviour that is not a violation of regulations or SOPs (and did not result in an increased risk to flight safety) deemed an error? For example,
should observers enter an error if a crew performed the pre-departure briefing in such a way that it was felt to deserve a "minimal proficiency" rating? The answer is "No". If the minimally proficient or poor pre-departure briefing (or any other less than optimum behaviour) was not associated with an error of some kind, then it is not an error in its own right and should not be entered in the observation form.

2.2.10 LOSA is predicated upon the following five categories of crew errors:

Intentional non-compliance error: wilful deviation from regulations and/or operator procedures;

Procedural error: deviation in the execution of regulations and/or operator procedures. The intention is correct but the execution is flawed. This category also includes errors where a crew forgot to do something;

Communication error: miscommunication, misinterpretation, or failure to communicate pertinent information among the flight crew or between the flight crew and an external agent (for example, ATC or ground operations personnel);

Proficiency error: lack of knowledge or psychomotor ("stick and rudder") skills; and

Operational decision error: decision-making error that is not standardized by regulations or operator procedures and that unnecessarily compromises safety. In order to be categorized as an operational decision error, at least one of three conditions must have existed: the crew must have had more conservative options within operational reason and decided not to take them; the decision was not verbalized and, therefore, was not shared among crew members; or the crew must have had time but did not use it effectively to evaluate the decision. If any of these conditions were observed, then it is considered that an operational decision error was made in the LOSA framework. An example would include the crew's decision to fly through known wind shear on an approach instead of going around.
ound.Definitions of crew error response2.2.11LOSA considers three possible responses bycrews to errors:Trap: An active flight crew response in which anerror is detected and managed to an inconsequentialoutcome;Exacerbate: A flight crew response in which an erroris detected but the crew action or inaction allows itto induce an additional error, Undesired AircraftState, incident or accident; andFail to respond: The lack of a flight crew responseto an error because it was either ignored orundetected.Definitions of error outcomes2.2.12The outcome of the error is dependent upon theflight crew response. LOSA considers three possibleoutcomes of errors depending upon crew response:Inconsequential: An outcome that indicates thealleviation of risk that was previously caused by anerror;Undesired Aircraft State: An outcome in which theaircraft is unnecessarily placed in a compromisingsituation that poses an increased risk to safety; andAdditional Error: An outcome that was the result ofor is closely linked to a previous error.Undesired Aircraft States2.2.13An “Undesired Aircraft State” occurs when theflight crew places the aircraft in a situation of unnecessaryrisk. For instance, an altitude deviation is an UndesiredAircraft State that presents unnecessary risk. An UndesiredAircraft State may occur in response to a crew action orinaction (error). It is important to distinguish between errorsand the Undesired Aircraft State that can result. If anUndesired Aircraft State is observed, there should always bea crew error that is responsible for this undesired state. Sucherrors may be miscommunications, lack of proficiency, poordecision making or wilful violation of regulations.2.2.14Undesired Aircraft States can also occur as aresult of equipment malfunction or external party errors, forexample, a malfunctioning altimeter or flight managementsystem (FMS), or an ATC command error. These

are not associated with crew error and would be classified as external events.

Crew response to Undesired Aircraft States

2.2.15 LOSA considers three possible crew responses to Undesired Aircraft States:

• Mitigate: an active flight crew response to an Undesired Aircraft State that results in the alleviation of risk by returning from the Undesired Aircraft State to safe flight;

• Exacerbate: a flight crew response in which an Undesired Aircraft State is detected, but the flight crew action or inaction allows it to induce an additional error, incident or accident; and

• Fail to respond: the lack of an active flight crew response to an Undesired Aircraft State because it was ignored or undetected.

Definitions of outcomes of Undesired Aircraft States

2.2.16 LOSA considers three possible outcomes to Undesired Aircraft States:

• Recovery: an outcome that indicates the alleviation of risk that was previously caused by an Undesired Aircraft State;

• End State/Incident/Accident: any undesired ending that completes the activity sequence with a negative, terminal outcome. These outcomes may be of little consequence, for example, a long landing or a landing too far to the left or right of the centre line, or may result in a reportable incident or in an accident; and

• Additional error: the flight crew action or inaction that results in or is closely linked to another cockpit crew error.

2.3 LOSA OPERATING CHARACTERISTICS

2.3.1 LOSA is a proactive safety data collection programme. The data generated provide a diagnostic snapshot of organizational strengths and weaknesses, as well as an overall assessment of flight crew performance in normal flight operations. Therefore, the intent of LOSA is to aid airlines in developing data-driven solutions to improve overall systemic safety. The classic business principle of measure, implement change and measure again is pertinent here, with LOSA providing the metric of implementation effectiveness. Experience has proven that expert external oversight, especially on a first LOSA, is essential for success.

2.3.2 LOSA is defined by the following ten operating characteristics that act to ensure the integrity of the LOSA methodology and its data. Without these characteristics, it is not a LOSA. These characteristics are:

1. Jump-seat observations during normal flight operations: LOSA observations are limited to regularly scheduled flights. Line checks, initial line indoctrination or other training flights are off-limits due to the extra level of stress put upon the pilots during these types of situations. Having another observer on board only adds to an already high stress level, thus providing an unrealistic picture of performance. In order for the data to be representative of normal operations, LOSA observations must be collected on regular and routine flights.

2. Joint management/pilot sponsorship: In order for LOSA to succeed as a viable safety programme, it is essential that both management and pilots (through their professional association, if it exists) support the project. The joint sponsorship provides a “check and balance” for the project to ensure that change, as necessary, will be made as a result of LOSA data. When considering whether to conduct a LOSA audit, the first question to be asked by airline management is whether the pilots endorse the project. If the answer is “No”, the project should not be initiated until endorsement is obtained. This issue is so critical in alleviating pilot suspicion that the existing LOSA philosophy is to deny airline assistance if a signed agreement is not in place before commencing a LOSA. A LOSA steering committee is formed with representatives from both groups and is responsible for planning, scheduling, observer support and, later, data verification (see Point 8).

3. Voluntary crew participation: Maintaining the integrity of LOSA within

an airline and the industry as a whole is extremely important for long-term success. One way to accomplish this goal is to collect all observations with voluntary crew participation. Before conducting LOSA observations, an observer must first obtain the flight crew’s permission to be observed. The crew has the option to decline, with no questions asked. The observer simply approaches another flight crew on another flight and asks for their permission to be observed. If an airline conducts a LOSA and has an unreasonably high number of refusals by crews to be observed, then it should serve as an indicator to the airline that there are critical “trust” issues to be dealt with first.

4. De-identified, confidential and safety-minded data collection: LOSA observers are asked not to record names, flight numbers, dates or any other information that can identify a crew. This allows for a level of protection against disciplinary actions. The purpose of LOSA is to collect safety data, not to punish pilots. Airlines cannot allow themselves to squander a unique opportunity to gain insight into their operations by having pilots fearful that a LOSA observation could be used against them for disciplinary reasons. If a LOSA observation is ever used for disciplinary reasons, the acceptance of LOSA within the airline will most probably be lost forever. Over 6 000 LOSA observations have been conducted by The University of Texas at Austin Human Factors Research Project and not one has ever been used to discipline a pilot.

5. Targeted observation instrument: The current data collection tool to conduct a LOSA is the LOSA Observation Form. It is not critical that an airline use this form, but whatever data collection instrument is used needs to target issues that affect flight crew performance in normal operations. An example of the LOSA Observation Form is shown in Appendix A. The form is based

upon the UTTEM Model and generates data for a variety of topics, including the following:

• flight and crew demographics such as city pairs, aircraft type, flight time, years of experience within the airline, years of experience within position, and crew familiarity;

• written narratives describing what the crew did well, what they did poorly and how they managed threats or errors for each phase of the flight;

• CRM performance ratings using research-developed behavioural markers;

• a technical worksheet for the descent/approach/land phases that highlights the type of approach flown, the landing runway and whether the crew met airline stabilized approach parameters;

• a threat management worksheet that details each threat and how it was handled;

• an error management worksheet that lists each error observed, how each error was handled and the final outcome; and

• a crew interview conducted during low workload periods of the flight, such as cruise, that asks pilots for their suggestions to improve safety, training and flight operations.

6. Trusted, trained and calibrated observers: Primarily, pilots conduct LOSAs. Observation teams will typically include line pilots, instructor pilots, safety pilots, management pilots, members of Human Factors groups and representatives of the safety committee of the pilots organization. Another part of the team can include external observers who are not affiliated with the airline. If they have no affiliation with the airline, external observers are objective and can serve as an anchor point for the rest of the observers. Trained, expert external observers add tremendous value, especially if they have participated in LOSA projects at other airlines. It is critical to select observers that are respected and trusted within the airline to ensure the line’s acceptance of LOSA. Selecting good observers is the lifeline of LOSA. If you have unmotivated or untrustworthy observers, LOSA will fail. The

size of the observation team depends on the airline’s size, the number of flights to be observed and the length of time needed to conduct the observations. After observers are selected, everyone is trained and calibrated in the LOSA methodology, including the use of the LOSA rating forms and, particularly, the concepts of threat and error management. Training of observers in the concepts and methodology of LOSA will ensure that observations will be conducted in the most standardized manner. After completing training, observers spend a period of time (between one and two months) observing regularly scheduled line flights. The objective is to observe the largest number of crews and segments possible in the time frame, given the flight schedules, logistics and types of operation sampled.

7. Trusted data collection site: In order to maintain confidentiality, airlines must have a trusted data collection site. At the present time, all observations are sent off-site directly to The University of Texas at Austin Human Factors Research Project, which manages the LOSA archives. This ensures that no individual observations will be misplaced or improperly disseminated through the airline.

8. Data verification roundtables: Data-driven programmes like LOSA require quality data management procedures and consistency checks. For LOSA, these checks are done at data verification roundtables. A roundtable consists of three or four department and pilots association representatives who scan the raw data for inaccuracies. For example, an observer might log a procedural error for failure to make an approach callout for which there are actually no written procedures in the airline’s flight operations manual. Therefore, it would be the job of the roundtable to detect and delete this particular “error” from the database. The end product is a database that is validated for consistency and accuracy according to the airline’s standards

and manuals, before any statistical analysis is performed.

9. Data-derived targets for enhancement: The final product of a LOSA is the data-derived LOSA targets for enhancement. As the data are collected and analysed, patterns emerge. Certain errors occur more frequently than others, certain airports or events emerge as more problematic than others, certain SOPs are routinely ignored or modified and certain manoeuvres pose greater difficulty in adherence than others. These patterns are identified for the airline as LOSA targets for enhancement. It is then up to the airline to develop an action plan based on these targets, using experts from within the airline to analyse the targets and implement appropriate change strategies. After two or three years, the airline can conduct another LOSA to see if their implementations to the targets show performance improvements.

10. Feedback of results to the line pilots: After a LOSA is completed, the airline’s management team and pilots association have an obligation to communicate LOSA results to the line pilots. Pilots will want to see not only the results but also management’s plan for improvement. If results are fed back in an appropriate fashion, experience has shown that future LOSA implementations are welcomed by pilots and thus more successful.

2.3.3 Over the years of implementation, the ten operating characteristics listed above have come to define LOSA. Whether an airline uses third-party facilitation or attempts to do a LOSA by itself, it is highly recommended that all ten characteristics be present in the process. Over the past five years, the most valuable lesson learned was that the success of LOSA goes much beyond the data collection forms. It depends upon how the project is executed and perceived by the line pilots. If LOSA does not have the trust of the pilot group, it will probably be a wasted exercise for the airline.

Observer assignment

2.3.4 Members of the observation teams are typically required to observe flights on different aircraft types. This is an important element of the line audit process for several reasons. For one, this has the advantage of allowing both line pilots and instructor pilots of particular fleets to “break out of the box” (their own fleet) and compare operations of fleets other than their own. Eventually, this helps the team as a whole to focus on Human Factors issues and common systemic problems, rather than on specific, within-fleet problems. Furthermore, the results are more robust if observers observe across many fleets instead of observing only one type.

Flight crew participation

2.3.5 Normally the line audit is announced to crew members by means of a letter from the highest level of management within flight operations, with the endorsement of other relevant personnel such as chief pilots and pilots association representatives. This letter specifies the purpose of the audit and the fact that all observations are of a no-jeopardy nature and all data are to be kept strictly confidential. The letter of announcement should precede the line audit by at least two weeks, and line observers are given copies of the letter to show crew members in case questions should arise. Data are kept anonymous, and crews are given assurance that they are not in disciplinary jeopardy. Furthermore, crews should have the option to refuse admission of the observer to perform an observation on their flight.

2.4 HOW TO DETERMINE THE SCOPE OF A LOSA

2.4.1 Only smaller airlines with limited numbers of fleets would find it reasonable to attempt to audit their entire flight operation, that is, all types of operations and all fleets. Most airlines will find it cost effective to conduct a LOSA on only parts of their operation. Evidence from LOSA suggests that flight crew practices vary naturally by fleet. The type of operation,

such as domestic, international, short-haul or long-haul, is also relevant. Usually, auditing any combination of types of operations is a good way to break down an entire operation into useful comparison groups.

2.4.2 Ideally, every flight crew should be audited, but more often than not, this will be impossible or impractical in material terms. At a major airline and in large fleets, around 50 randomly selected flight crews will provide statistically valid data. For smaller fleets, around 30 randomly selected flight crews will provide statistically valid data, although the risk of arriving at conclusions that might not reflect reality increases as the number of flight crews audited drops. If fewer than 25 flight crews are audited, the data collected should be considered as “case studies” rather than as representing the group as a whole.

2.4.3 The number of observers needed depends, as already discussed, on the intended scope of the audit. For example, an airline might want to audit 50 flight crews in each of 2 domestic fleets, for a total of 100 segments. A conservative rule of thumb to scope this audit would be 2 domestic observations per day per observer. The goal is thus expressed in terms of flight crews observed, rather than segments. Should an airline want to audit an international fleet, the first step is to determine how many international observations can be made in a day, and this depends on the length of the segments. For a domestic LOSA, a workable rule of thumb suggests the need for 50 person-days of work for the actual audit phase of the LOSA. Using line pilots for a month of observations, each might be requested to spend 10 days conducting observations, plus 4 days training/travelling. This requires 14 days per observer. Thus, there would be a need for 4 observers for this hypothetical audit, and this should easily meet the audit’s goals. It is important to be conservative in the estimates since sometimes it will be necessary to observe a crew for more than one segment. This counts as one crew, not two.

2.5 ONCE THE DATA IS COLLECTED

The data acquired through the observations must be “verified” and prepared for analysis, and the time involved in this process should not be underestimated. Once the various LOSA forms have been collected, the airline is ready to begin a lengthy process. It typically takes longer to prepare the LOSA data for analysis and subsequent action than it does to collect it. The steps that must be followed in this process include data entry, data quality/consistency checks and final aggregation.

2.6 WRITING THE REPORT

2.6.1 The last stage of LOSA is a written report that presents the overall findings of the project. With a large database like the one generated from a LOSA, it is easy to fall into the trap of trying to present too much information. The author needs to be concise and present only the most significant trends from the data. If the report does not provide a clear diagnosis of the weaknesses within the system for management to act upon, the objective of the LOSA will be unfulfilled.

2.6.2 Writing the report is where “data smarts” enters into the process. Although certain types of comparisons will seem obvious, many analyses will be based upon the “hunches” or “theories” of the writer. The usefulness of the result has to be the guiding principle of this effort. If the writer knows how fleets and operations are managed, comparisons that reflect this structure can be made. If the author knows the kinds of information that might be useful to training, safety or domestic/international flight operations, results can be tailored to these particular aspects of the operation. Feedback from various airline stakeholders is critical during this stage of writing the report. Authors should not hesitate to distribute early drafts to key

people familiar with LOSA to verify the results. This not only helps validate derived trends, but it also gives other airline personnel, besides the author, ownership of the report.

2.6.3 General findings from the survey, interview and observational data should serve as the foundation in organizing the final report. A suggested outline for the report follows:

Introduction — Define LOSA and the reasons why it was conducted.

Executive Summary — Include a text summary of the major LOSA findings (no longer than two pages).

Section Summaries — Present the key findings from each section of the report, including:
I — Demographics
II — Safety Interview Results
III — External Threats and Threat Management Results
IV — Flight Crew Errors and Error Management Results
V — Threat and Error Countermeasure Results

Appendix — Include a listing of every external threat and flight crew error observed, with the proper coding and an observer narrative of how each one was managed or mismanaged.

Tables, charts and explanations of data should be provided within each section of the report.

2.6.4 It is important to remember that the author’s primary job is to present the facts and abstain from outlining recommendations. This keeps the report concise and objective. Recommendations and solutions may be given later in supporting documentation, after everyone has had the chance to digest the findings.

2.7 SUCCESS FACTORS FOR LOSA

The best results are achieved when LOSA is conducted in an open environment of trust. Line pilots must believe that there will be no repercussions at the individual level; otherwise, their behaviour will not reflect daily operational reality and LOSA will be little more than an elaborate line check. Experience at different airlines has shown that several strategies are key to ensuring a successful, data-rich LOSA. These strategies include:

• Using third-party oversight: One way to build trust in the LOSA process is to seek a credible but neutral third party who is removed from the politics and history of the airline. Data can be sent directly to this third party, who is then responsible for the objective analyses and report preparation. The University of Texas at Austin Human Factors Research Project provides, for the time being, such third-party oversight;

• Promoting LOSA: Use group presentations, media clippings, experience from other airlines and intra-airline communications to discuss the purpose and logistics of a LOSA audit with management, pilots and any pilots associations. Experience shows that airlines often underestimate the amount of communication required, so they must be persistent in their efforts;

• Stressing that observations cannot be used for discipline purposes: This is the key issue and must be stated as such in the letter of endorsement;

• Informing the regulatory authority of the proposed activity: It is as much a courtesy as it is a way of communicating the presence of LOSA;

• Choosing a credible observer team: A line crew always has the prerogative to deny cockpit access to an observer; hence the observer team is most effective when composed of credible and well-accepted pilots from a mix of fleets and departments (for example, training and safety). This was achieved at one airline by asking for a list of potential observers from the management and the pilots association; those pilots whose names appeared on both lists were then selected as acceptable to everyone;

• Using a “fly on the wall” approach: The best observers learn to be unobtrusive and non-threatening; they use a pocket notebook while in the cockpit, recording minimal detail to elaborate upon later. At the same time, they know when it is appropriate to speak up if they have a concern, without sounding authoritarian;

• Communicating the results: Do not wait too long before announcing the results to the line or else pilots will believe

nothing is being done. A summary of the audit, excerpts from the report and relevant statistics will all be of interest to the line; and

• Using the data: The LOSA audit generates targets for enhancement, but it is the airline that creates an action plan. One airline did this by creating a committee for each of the central concerns, and they were then responsible for reviewing procedures, checklists, etc., and implementing change, where appropriate.

Chapter 3

LOSA AND THE SAFETY CHANGE PROCESS (SCP)

3.1 INTRODUCTION

3.1.1 When an airline commits to LOSA, it must also commit to acting upon the results of the audit. LOSA is but a data collection tool. LOSA data, when analysed, are used to support changes aimed at improving safety. These may be changes to procedures, policies or operational philosophy. The changes may affect multiple sectors of the organization

mechanisms include crosstalk, spontaneous information transfer, and sharing in general by everyone in the organization. Both mechanisms work toward actively maintaining focus on the changes affecting safety.

3.2.4 Therefore, when in spite of these formal and informal mechanisms an airline experiences an accident or an incident, the immediate question arises: What is happening “out there”? The fact is that system changes and organizational responses to these changes generate active and latent threats to daily line operations. Active and latent threats themselves constantly change in a manner proportional to system changes. Active and latent threats become the breeding grounds of crew errors. Many organizations are not aware of these active and latent threats for a number of reasons, including the following:

• the “big picture” of flight operations is constantly changing because of the constantly changing scene;

• crews may not report threats, fearing punishment;

• crews may not report threats because they do not receive any feedback on their reports;

• crews operate unsupervised most of the time;

• line checks (supervised performance) are poor indicators of normal operations; and

• management may have difficulty screening out valid reported crew concerns from over-reported crew complaints.

3.2.5 Active and latent threats are the precursors to accidents and incidents. Threats cannot be identified through the investigation of accidents and incidents until it is too late. Most threats, however, can be proactively identified through LOSA (and other safety data collection programmes such as flight data analysis) and considered as targets for enhancement. For example, following a LOSA, an airline might identify the following targets for enhancement:

• Stabilized approaches
• Checklists
• Procedural errors
• Automation errors
• ATC communications
• International flight operations guide
• Captain leadership (intentional non-compliance errors)

3.2.6 To sustain safety in a constantly changing environment, data must be collected and analysed on a routine basis to identify the targets for enhancement, and then a formal safety change process (SCP) must occur in order to bring about improvement. The basic steps of the SCP include the following and are also shown in Figure 3-1:

• measurement (with LOSA) to obtain the targets;
• detailed analysis of targeted issues;
• listing of potential changes for improvement;
• risk analysis and prioritization of changes;
• selection and funding of changes;
• implementation of changes;
• time allocation for changes to stabilize; and
• re-measurement.

3.2.7 Airlines need a defined SCP to keep the organization working together to achieve the same safety objectives. A well-defined SCP keeps the organization from getting into “turf” issues by clearly specifying who and what impacts flight operations. An SCP also contributes to improving the safety culture by maximizing the capabilities of current and future safety programmes. Last, but

not least, an SCP provides a principled approach to target limited resources.

3.2.8 In the past, SCPs were based on accident and incident investigations, experience and intuition. Today, SCPs must be based on the “data wave”, the “data warehouse” and the “drill-down” analysis. Measurement is fundamental, because until an organization measures, it can only guess. In the past, SCPs dealt with accidents. Today, SCPs must deal with the precursors of accidents.

3.3 ONE OPERATOR’S EXAMPLE OF AN SCP

3.3.1 This section briefly presents some of the very positive results obtained by one airline that pioneered LOSA in international civil aviation. The examples represent a two-year period, between 1996 and 1998, and include aggregate data collected during 100 flight segments. During this two-year period, 85 per cent of the crews observed made at least one error during one or more segments, and 15 per cent of the crews observed made between two and five errors. Errors were recorded in 74 per cent of the segments observed, with an average of two errors per segment (see Chapter 2 for a description of the error categories in LOSA). These data, asserted as typical of airline operations, substantiated the pervasiveness of human error in aviation operations, while challenging beyond question the illusion of error-free operational human performance.

3.3.2 LOSA observations indicated that 85 per cent of errors committed were inconsequential, which led to two conclusions. First, the aviation system possesses very strong and effective defences, and LOSA data allow a principled and data-driven judgement of which defences work and which do not, and how well defences fulfil their role. Second, it became obvious that pilots intuitively develop ad hoc error management skills, and it is therefore essential to discover what pilots do well so as to promote safety through organizational interventions, such as improved training, procedures or design, based on this “positive” data.

Figure 3-1. Basic steps of the safety change process

3.3.3 When the airline started conducting base-line observations in 1996, the crew error-trapping rate was 15 per cent; that is, flight crews detected and trapped only 15 per cent of the errors they committed. After two years, following implementation of organizational strategies aimed at error management based on LOSA data, the crew error-trapping rate increased to 55 per cent (see Figure 3-2).

3.3.4 Base-line observations in 1996 suggested problems in the area of checklist performance. Following remedial interventions — including review of standard operating procedures, checklist design and training — checklist performance errors decreased from 25 per cent to 15 per cent, which is a 40 per cent reduction in checklist errors (see Figure 3-3).

Figure 3-2. Crew error-trapping rate

Figure 3-3. Checklist errors

3.3.5 Lastly, base-line observations in 1996 suggested that 34.2 per cent of approaches did not meet all requirements of the audit’s stabilized approach criteria, as specified in the operator’s SOPs. Unstabilized approaches (using more stringent criteria than during the 1996 audit) decreased to 13.1 per cent (a 62 per cent reduction) in 1998, following remedial action through organizational interventions. The data accessed through the operator’s flight operations quality assurance (FOQA) programme are consistent with LOSA data and show a similar decline for 1998.

3.3.6 How does such change take place? By adopting a defined SCP. Following data acquisition and analysis, the airline decided to form specific committees, including a checklist committee and an unstabilized approaches committee. Each committee considered the problems identified by the analysis of the LOSA data and then proposed organizational interventions to address them. Such interventions included modification of existing procedures, implementation of new ones, specific training, and redefinition of operational philosophies, among others. For example, checklists were reviewed to ensure relevance of contents, and clear guidelines for their initiation and execution were promulgated. Gates and tolerances for stabilized approaches were defined, as opposed to the “perfect approach” parameters promulgated by the SOPs existing at that time. Proper training and checking guidelines were established, taking into account an error management approach to crew coordination.

3.3.7 The improved error management performance by flight crews, the successful reduction in checklist performance errors and the reduction in unstabilized approaches discussed earlier reflect the success of a properly managed SCP, based upon data collected by observing line operations. They are also examples of how analysis of LOSA data provides an opportunity to enhance safety and operational human performance.

Chapter 4

HOW TO SET UP A LOSA — US AIRWAYS EXPERIENCE

“Honest and critical self-assessment is one of the most powerful tools that management can employ to measure flight safety margins.”

Flight Safety Foundation Icarus Committee, May 1999

4.1 GATHERING INFORMATION

In order to decide if conducting a LOSA would be

is that the safety department often holds the trust of the line pilots regarding confidential information. It is the safety department that typically administers confidential incident reporting systems and the FOQA Programme or digital flight data recorder monitoring programmes.

Flight operations and training departments

4.3.3 The flight operations and training departments must be integrally involved with implementing a LOSA for several reasons. First, they are at the centre of the operation and have first-hand information about what is and is not working well. These departments often know of specific areas on which they would like the LOSA to concentrate. Additionally, these departments can provide valuable input and suggestions for the smooth conduct of the LOSA. They will also be able to help provide the much-needed personnel. Possibly the most important reason for their involvement is that ultimately many of the problem areas and the potential benefits that are identified during a LOSA must be “corrected” or implemented by these departments. As with the example of the airline above, if these departments do not support the LOSA, then there could be possible resistance to the findings from the LOSA. However, if these departments take an active part in the process, implementation of LOSA enhancements becomes much more probable.

Pilots union

4.3.4 The importance of having the pilots union involved with and supporting the LOSA must not be overlooked. If the line pilots believe that their union supports this endeavour, they will more readily accept observation flights. Additionally, if pilots believe this is a process that they can support, they will be more forthcoming and candid with their views and safety concerns. On the other hand, if the pilots view LOSA as a management tool to spy on them, then the results will not be as productive. The pilots union can also help disseminate the results of the LOSA and inform the pilots of any company decisions taken as a result of the LOSA. Hopefully, the union will agree with the enhancements and endorse them.

4.4 THE KEY STEPS OF A LOSA

4.4.1 To help provide focus for the LOSA, the LOSA steering committee should first
look at problems that have been identified in the past by all involved departments. With this information in hand, the committee can then decide what they expect to gain from the LOSA and use that to form goals and an action plan. It must be kept in mind that the goals and action plan may have to be modified depending on the LOSA findings.

Goals

4.4.2 The LOSA steering committee should meet to determine what they would like to achieve from the LOSA. This will vary among airlines, but the following are some goals established by one airline:

- To heighten the safety awareness of the line pilot
- To obtain hard data on how crews manage threats and errors
- To measure and document what is happening "on the line":
  — What works well
  — What does not work well
- To provide feedback to the system so that enhancements can be made
- To inform end users WHY enhancements are being made, especially if the enhancements are a result of end user feedback
- To monitor the results of LOSA enhancements

4.4.3 One airline stated up front that they wanted their line pilots to be the "customer" of the LOSA, meaning that whatever problems were identified, the airline would work to correct them to make the system safer and more efficient for their pilots.

Action plan

4.4.4 Figure 4-1 shows a flow chart of the key steps to LOSA. Steps 1 to 6 are covered below. Notice that the actual LOSA observations are not the end of the project but, in fact, are only a part of an entire process to help improve system safety at an airline. Steps 7 to 9 have already been covered earlier in this manual.

Step 1: Form initial development team

This team may be the same as the LOSA steering committee or just a few core individuals who can bring the committee up to date.

Figure 4-1. The key steps to LOSA:
  STEP 1: Form initial development team
  STEP 2: Gather information
  STEP 3: Identify what to look at
  STEP 4: Determine how many segments to observe
  STEP 5: Schedule audit dates, select observers and schedule training
  STEP 6: Conduct observer training
  AUDIT
  STEP 7: Analyse audit findings
  STEP 8: Provide feedback to system and carry out improvements to system
  STEP 9: Develop enhanced policies, procedures and a safer operational environment for the line pilot
  SAFER OPERATION!
  (Side annotations in the flow chart: develop data collection forms and observer training; refine data collection forms as needed; recalibrate observers)

Step 2: Gather information

In order to conduct a LOSA, the initial development team must understand how LOSAs have been carried out in the past, and they should be aware of the benefits that have been derived. They should, therefore, gather information on the LOSA process.

Step 3: Identify what to look at

To conduct the most efficient LOSA, it is best to focus on specific aspects. One common mistake is to try to tackle too much at one time. When doing this, the effort can be enormous and the data findings can be overwhelming.

A more manageable approach may be to narrowly focus on or target specific things to be observed. Are there certain airports that have more hazards or threats compared to other airports? Do certain aircraft fleets have more instances of tail strikes? Are unstabilized approaches something your operation is struggling with?

The decisions about what to observe should be based on data and not just instincts. For example, if an airline utilizes an FOQA Programme or a confidential incident reporting system, these sources would be excellent places to help pinpoint areas on which efforts should be focused.

It should be remembered that LOSA is not designed to look at the entire operation but really just to provide a representative sampling or "slice" of the operations. One large international carrier decided to focus their first LOSA on their domestic operations but had plans to later conduct a LOSA that focused on their international operations.

Step 4: Determine how many segments to observe

The number of flights that will be observed is a function of the number of people who will act as LOSA observers. Also to be considered is the need to collect enough data to provide a statistically valid sample of the operation. For example, statisticians at The University of Texas at Austin Human Factors Research Project have determined that if an airline wanted to evaluate a specific airport, then that airline should observe at least ten flights into and out of that airport. For a specific operation or fleet, the LOSA should observe at least 50 flights on that operation or fleet.

Step 5: Schedule audit dates, select observers and schedule training dates

Depending on the size of an airline's operations, a LOSA may last anywhere from approximately three to eight weeks. The LOSA observations should not be spread out over an extremely long time period. The objective is to gather the data needed to examine a specific area of operations. If the observations take place over a long time, it is likely that the effort will become diluted.

The quality of the data collected depends entirely on who is collecting the data, so the selection of LOSA observers should be carefully considered. A good LOSA observer is one who is familiar with the airline's procedures and operations. Observers should be able to occupy the cockpit jump-seat and capture data but should not be obtrusive and overbearing.

Step 6: Conduct observer training

LOSA observer training will typically take two days. During this time, LOSA observers should have the opportunity to complete LOSA rating forms using training examples. Also, once the line audit has begun, it is a good idea to periodically provide feedback to LOSA observers to reinforce the things that they do well and to coach them in the areas that require improvement.

4.5 THE KEYS TO AN EFFECTIVE LOSA

4.5.1 If a LOSA is properly conducted, an airline will be able to obtain a multitude of information about the threats and errors that flight crews face in daily operations. In US Airways' experience, there are two key elements that will determine the quality of the data obtained: the airline's views on confidentiality and no-jeopardy, and the observers themselves.

Confidentiality and no-jeopardy

4.5.2 It is human nature for people to behave somewhat differently when they know they are being evaluated, and airlines have a lot of information on how flight crews perform in the simulator and during line checks. The idea of a LOSA is to capture data about flight operations that could not be obtained otherwise.

4.5.3 To facilitate being able to observe the natural behaviour of crews, airlines must promote LOSA as no-jeopardy. The notion is that data from LOSA observations will not be used to discipline a pilot. For example, if a LOSA observer sees a crew unintentionally deviate from their assigned altitude, the observer will not use that information in a manner that could be punitive to that crew.

4.5.4 Some airlines are not as comfortable with the notion of no-jeopardy. At a minimum, in order to do a LOSA, an airline should agree that LOSA flight data are confidential and de-identified. The LOSA forms must not contain information that could be traced to a specific flight or crew.

4.5.5 This is not to say that the overall results from an airline's LOSA programme should not be publicized. In fact, once the entire LOSA programme is completed, the airline is encouraged to share the findings with their pilots. However, under no circumstances should the results from a particular flight be divulged or a crew disciplined for mistakes that occur on a LOSA flight.

The role of the observer

4.5.6 As cited above, the LOSA observer plays a key role in the effectiveness of a LOSA. If
observers are seen as threats to the career of the pilots being observed, then the pilots may act differently than if the observers were perceived as simply being there to collect data to help improve the airline.

4.5.7 Some airlines use the analogy that the LOSA observer should be like a "fly on the wall", meaning that the observer will not interfere with the crew's performance. Observers should create an environment where the crews hardly realize that they are being observed. It is imperative that crews do not feel as if they are being given a check-ride. If an airline uses check airmen and instructors as LOSA observers, those observers must make a conscious effort to step out of their typical roles as evaluators. The LOSA observers must clearly understand that their role is limited to collecting data, not to disciplining or critiquing crews.

4.6 PROMOTING LOSA FOR FLIGHT CREWS

Before an airline begins a LOSA, it is highly recommended that the LOSA be widely publicized. Articles in the company's safety publication can go a long way towards improving line pilot acceptance of a LOSA. There is one way of publicizing a LOSA that must not be overlooked, and that is a letter jointly signed by company management and union officials. See Appendix B for an example.

Copyright 2002 The University of Texas at Austin. All rights reserved.

Appendix A

EXAMPLES OF THE VARIOUS FORMS UTILIZED BY LOSA

LOSA Observation Form — EXAMPLE

Observer Information
  Observer ID (Employee number): 3059
  Observation Number: #1
  Crew Observation Number (e.g., "1 of 2" indicates segment one for a crew that you observed across two segments): 1

Flight Demographics
  City Pairs: PIT - LAX
  A/C Type (e.g., 737-300): B-757
  Pilot flying (Check one): CA
  Time from Pushback to Gate Arrival:
  Late Departure? (Yes or No): Yes
  How late? (Hours:Minutes):

Crew Demographics (CA / FO / Relief 1 / Relief 2) · PIT
  Years experience for all airlines: 35
  Years in position for
this A/C: 7 / 1 month / 12 / 1 month

Crew Familiarity (Check one):
  First LEG the crew has EVER flown together
  First DAY the crew has EVER
  Crew has flown together before: X

Predeparture / Taxi-Out Narrative

Your narrative should provide a context. What did the crew do well? What did the crew do poorly? How did the crew perform when confronted with threats, crew errors, and significant events? Also, be sure to justify your behavioral ratings.

The CA established a great team climate – positive with open communication. However, he seemed to be in a rush and not very detail oriented. The FO, who was relatively new to the A/C, tried to keep up but fell behind at times. The CA did not help the cause by interrupting the FO with casual conversation ("marginal" workload management). All checklists were rushed and poorly executed. The CA was also lax verifying paperwork. This sub-par behavior contributed to an undetected error – the FO failed to set his airspeed bugs for T/O ("poor" monitor/cross-check). The Before Takeoff Checklist should have caught the error, but the crew unintentionally skipped over that item. During the takeoff roll, the FO noticed the error and said, "Missed that one." The Captain's brief was interactive but not very thorough ("marginal" SOP briefing). He failed to note the closure of the final 2000' of their departing runway (28R) due to construction. Taxiways B7 and B8 at the end of the runway were also out. The crew was marked "poor" in contingency management because there were no plans in place on how to deal with this threat in the case of a rejected takeoff. Lucky it was a long runway.

Rating scale: 1 Poor – Observed performance had safety implications; 2 Marginal – Observed performance was barely adequate; 3 Good – Observed performance was effective; 4 Outstanding – Observed performance was truly noteworthy

Planning Behavioral Markers | Rating
  SOP BRIEFING | The required briefing was interactive and operationally thorough — Concise, not rushed, and met SOP requirements — Bottom lines were established | 2
  PLANS STATED | Operational plans and decisions were communicated and acknowledged — Shared understanding about plans: "Everybody on the same page" | 3
  WORKLOAD ASSIGNMENT | Roles and responsibilities were defined for normal and non-normal situations — Workload assignments were communicated and acknowledged | 3
  CONTINGENCY MANAGEMENT | Crew members developed effective strategies to manage threats to safety — Threats and their consequences were anticipated — Used all available resources to manage threats | 1

Execution Behavioral Markers | Rating
  MONITOR / CROSS-CHECK | Crew members actively monitored and cross-checked systems and other crew members — Aircraft position, settings, and crew actions were verified | 1
  WORKLOAD MANAGEMENT | Operational tasks were prioritized and properly managed to handle primary flight duties — Avoided task fixation — Did not allow work overload | 2
  | Crew members remained alert of the environment and position of the aircraft — Crew members maintained situational awareness |
  AUTOMATION MANAGEMENT | Automation was properly managed to balance situational and/or workload requirements — Automation setup was briefed to other members — Effective recovery techniques from automation anomalies |

Review / Modify Behavioral Markers | Rating
  EVALUATION OF PLANS | Existing plans were reviewed and modified when necessary — Crew decisions and actions were openly analyzed to make sure the existing plan was the best plan |
  INQUIRY | Crew members asked questions to investigate and/or clarify current plans of action —
Crew members not afraid to express a lack of knowledge — "Nothing taken for granted" attitude | 3
  ASSERTIVENESS | Crew members stated critical information and/or solutions with appropriate persistence — Crew members spoke up without hesitation |

Takeoff / Climb – Cruise Narrative

Your narrative should provide a context. What did the crew do well? What did the crew do poorly? How did the crew perform when confronted with threats, crew errors, and significant events? Also, be sure to justify your behavioral ratings.

Normal takeoff besides one error. As the crew started to clean up the aircraft, the FO called "flaps up" before the flap retraction speed. The CA trapped the error and did not retract the flaps until the proper speed. After passing 10000' all the way up to the TOC, the CA and FO failed to cross-verify multiple altitude changes. There was no intention on the part of the CA to verify. In addition, since it happened multiple times, the observer coded it as an intentional noncompliance.

Rating scale: 1 Poor – Observed performance had safety implications; 2 Marginal – Observed performance was barely adequate; 3 Good – Observed performance was effective; 4 Outstanding – Observed performance was truly noteworthy

Execution Behavioral Markers | Rating
  MONITOR / CROSS-CHECK | Crew members actively monitored and cross-checked systems and other crew members — Aircraft position, settings, and crew actions were verified | 1
  WORKLOAD MANAGEMENT | Operational tasks were prioritized and properly managed to handle primary flight duties — Avoided task fixation — Did not allow work overload | 3
  | Crew members remained alert of the environment and position of the aircraft — Crew members maintained situational awareness |
  AUTOMATION MANAGEMENT | Automation was properly managed to balance situational and/or workload requirements — Automation setup was briefed to other members — Effective recovery techniques from automation anomalies |

Review / Modify Behavioral Markers | Rating
  EVALUATION OF PLANS | Existing plans were reviewed and modified when necessary — Crew decisions and actions were openly analyzed to make sure the existing plan was the best plan |
  INQUIRY | Crew members asked questions to investigate and/or clarify current plans of action — Crew members not afraid to express a lack of knowledge — "Nothing taken for granted" attitude |
  ASSERTIVENESS | Crew members stated critical information and/or solutions with appropriate persistence — Crew members spoke up without hesitation |

Cruise Narrative

Your narrative should provide a context. What did the crew do well? What did the crew do poorly? How did the crew perform when confronted with threats, crew errors, and significant events? Also, be sure to justify your behavioral ratings.

Routine – no comments

Descent / Approach / Land Technical Worksheet

Descent (Above 10,000 ft.)
  1. Was the approach briefed before the TOD? (Yes / No): Yes
  Did the crew begin the descent before or at the FMS TOD? (Yes / No): Yes
  Did the aircraft get significantly above/below the FMS or standard path? (Yes / No): No — If "Yes", explain in the narrative the cause and whether the crew tried to regain the path.

Approach and Land (Below 10,000 ft.)
  Approach flown? (Check one): Visual — Instrument backup on visual approach? (Check One): Yes / Precision — Type of precision approach: / Nonprecision — Type of nonprecision approach:
  Approach: Hand flown or Automation flown? Hand-flown
  Did the aircraft get significantly above/below a desirable descent path? (Yes / No): Yes — If "Yes", explain in the narrative the cause and whether the crew tried to regain the path.
  7. During flap extension, flaps were "generally" extended: (Check one): Close to or at minimum maneuvering speed / Close to or at the maximum flap extension speed / Above maximum flap extension speed (If this happens, be sure to describe in the narrative)
  Weather (Check One): VMC
  9. Stabilized Approach Parameters (1500 AFE | 1000 AFE | 500 AFE):
     Target airspeed between –5 and +15: Yes | Yes | Yes
     Vertical speed 1000 fpm: Yes | Yes | Yes
     Engines spooled: Yes | Yes | Yes
     Landing configuration (Final flaps / gear down): Yes | Yes | Yes
     On proper flight path (G/S and localizer): Yes | Yes | Yes

Descent / Approach / Land – The Blue Box Narrative

Think "blue box." Describe significant events from the TOD to landing using the picture above to define landmarks. Talk about how the crew performed when confronted with threats and crew errors. Also, be sure to justify your behavioral ratings.

(Blue box landmarks: Briefing – TOD – Transition Altitude – 10000 ft. – Slow and configure – FAF/OM – Stabilized approach bottom lines – Flare/Touchdown)

Briefing to TOD – The CA and FO did a nice job with the approach brief, which was completed by the TOD. Much better than their takeoff brief. They expected runway 25L from the Civet Arrival for a straight-in visual approach. Jepp charts were out, contingencies talked about, and everything was by the book. The FO asked a lot of questions and the CA was patient and helpful. Nicely done!

10000' to slowing and configuring – ATC cleared the crew to 25L, but at 8000', ATC changed us to the Mitts Arrival for runway 24R due to a slow moving A/C on 25L. The CA changed the arrival and approach in the FMC and tuned the radios. As soon as everything was clean, ATC called back and told the crew they could either land on 25L or 24R at their discretion. Since time was a factor, the crew discussed and decided to stick with the approach into 24R. The crew was flexible and the CA did a nice job assigning workload. He directed the FO to fly the plane while he checked everything over one more time. The crew were also better monitors and cross-checkers. However, their execution of checklists was still a little sloppy – late and rushed. The crew did a nice job staying vigilant with heavy traffic in the area – used ATC and TCAS effectively.

Bottom lines to Flare / Touchdown – The approach was stable, but the FO let the airplane slip left, which resulted in landing left of centerline. Since the FO was new to this aircraft (1 month flying time), the observer chalked it up to a lack of stick and rudder proficiency.

Taxi-in – The crew did a great job navigating taxiways and crossing the active 24L runway. Good vigilance and teamwork.

Descent / Approach / Land

Rating scale: 1 Poor – Observed performance had safety implications; 2 Marginal – Observed performance was barely adequate; 3 Good – Observed performance was effective; 4 Outstanding – Observed performance was truly noteworthy

Planning Behavioral Markers | Rating
  SOP BRIEFING | The required briefing was interactive and operationally thorough — Concise, not rushed, and met SOP requirements — Bottom lines were established | 4
  PLANS STATED | Operational plans and decisions were communicated and acknowledged — Shared understanding about plans: "Everybody on the same page" | 4
  WORKLOAD ASSIGNMENT | Roles and responsibilities were defined for normal and non-normal situations — Workload assignments were communicated and acknowledged | 4
  CONTINGENCY MANAGEMENT | Crew members developed effective strategies to manage threats to safety — Threats and their consequences were anticipated — Used all available resources to manage threats | 3

Execution Behavioral Markers | Rating
  MONITOR / CROSS-CHECK | Crew members actively monitored and cross-checked systems and other crew members — Aircraft position, settings, and crew actions were verified | 2
  WORKLOAD MANAGEMENT | Operational tasks were prioritized and properly managed to handle primary flight duties — Avoided task fixation — Did not allow work overload | 3
  | Crew members remained alert of the environment and position of the aircraft — Crew members maintained situational awareness |
  AUTOMATION MANAGEMENT | Automation was properly managed to balance situational and/or workload requirements — Automation setup was briefed to other members — Effective recovery techniques from automation anomalies | 3

Review / Modify Behavioral Markers | Rating
  EVALUATION OF PLANS | Existing plans were reviewed and modified when necessary — Crew decisions and actions were openly analyzed to make sure the existing plan was the best plan | 4
  INQUIRY | Crew members asked questions to investigate and/or clarify current plans of action — Crew members not afraid to express a lack of knowledge — "Nothing taken for granted" attitude | 3
  ASSERTIVENESS | Crew members stated critical information and/or solutions with appropriate persistence — Crew members spoke up without hesitation |

Overall Flight Narrative

This narrative should include your overall impressions of the crew.

Overall, the crew did a marginal job with planning and review/modify plans during predeparture. However, during the descent/approach/land phase, it was excellent. Their execution behaviors were marginal to good for the entire flight. While the takeoff brief was
marginal, the CA made an outstanding approach brief. Open communication was not a problem. Good flow of information when the flight's complexity increased with the late runway change. They really stepped it up. The big knock against this crew involved checklists, cross-verifications, and all monitoring in general. They were a little too complacent during low workload periods (e.g., no altitude verifications during climb). The CA set a poor example in this regard. During predeparture, the CA introduced an unnecessary element of being rushed, which compromised workload management. However, his decisiveness and coordination in the descent/approach/land phase kept his leadership from being marked "marginal."

Rating scale: 1 Poor – Observed performance had safety implications; 2 Marginal – Observed performance was barely adequate; 3 Good – Observed performance was effective; 4 Outstanding – Observed performance was truly noteworthy

Overall Behavioral Markers | Rating
  COMMUNICATION ENVIRONMENT | Environment for open communication was established and maintained — Good cross talk: flow of information was fluid, clear, and direct | 4
  LEADERSHIP | Captain showed leadership and coordinated flight deck activities — In command, decisive, and encouraged crew participation | 3

Did you observe a flight attendant briefing on the first leg of the pairing? (Check one: Yes / No opportunity to observe): Yes
Contribution to Crew Effectiveness | Rating: 2
Overall Crew Effectiveness | Rating:

Threat Management Worksheet

Threats: events or errors that originate outside the influence of the flight crew but require active crew management to maintain safety.

Threat ID | Threat Description (Describe the threat) | Phase of flight (1 Predepart/Taxi, 2 Takeoff/Climb, 3 Cruise, 4 Des/App/Land, 5 Taxi-in) | Effectively managed? (Yes / No) | Threat Management (How did the crew manage or mismanage the threat?)

  T1 | Runway and taxiway construction on their departing runway (final 2000') | 4 | Threat mismanaged |
  T2 | Late ATC runway change — changed runway to 24R from 25L due to a slow moving aircraft on 25L | 50 | Threat managed | CA reprogrammed the FMC, handled the radios, and placed emphasis on the FO to fly the aircraft.
  T3 | ATC called back and told the crew that | 50 | Threat managed | CA asked for the FO's preference. They mutually decided to continue the approach into 24R because it was already in the box.
  T4 | Heavy congestion going into LAX | 3 | Threat managed |

Threat Codes
  Departure / Arrival Threats
    1 Adverse weather / turbulence / IMC
    2 Terrain
    3 Traffic — air or ground congestion, TCAS warnings
    4 Airport — construction, signage, ground conditions
    5 TCAS RA/TA
  Aircraft Threats
    20 Aircraft malfunction
  Operational Threats
    30 Operational time pressure — delays, OTP, late arriving pilot or aircraft
    31 Missed approach
    32 Flight diversion
    33 Unfamiliar airport
    34 Other non-normal operation events — max gross wt. T/O, rejected T/O
  Cabin Threats
  Crew Support Threats
    80 MX event
    81 MX error
    82 Ground handling event
    83 Ground crew error
    84 Dispatch / paperwork event
    85 Dispatch / paperwork error

Error Management Worksheet

Error ID | Error Description (Describe the crew error and associated undesired aircraft states) | Phase of flight (1 Predepart/Taxi, 2 Takeoff/Climb, 3 Cruise, 4 Des/App/Land, 5 Taxi-in) | Error Type (1 Intentional Noncompliance, 2 Procedural, 3 Communication, 4 Proficiency, 5 Decision) | Error Code (use code book) | Who committed the error? | Who detected the error? | Crew Error Response (1 Trap, 2 Exacerbate, 3 Fail to Respond) | Error Outcome (1 Inconsequential, 2 Undesired state, 3 Additional error)

  E1 | FO failed to set his airspeed
bugs. | 1 | 211

Error ID | Associated with a threat? (If Yes, enter Threat ID) | Error Management (How did the crew manage or mismanage the error?) | Undesired Aircraft State | Who detected the state? | Crew Undesired State Response (1 Mitigate, 2 Exacerbate, 3 Fail to Respond) | Undesired Aircraft State Outcome (1 Inconsequential, 2 Additional error)

  E1 | No | Error chain to E2

Who Committed / Detected Codes
  Flight crew: 1 CA; 2 FO; 3 SO / FE; 4 Relief Officer; 5 Jumpseat Rider; 6 All crew members; 7 Nobody
  Other people: 8 ATC; 9 Flight attendant; 10 Dispatch; 11 Ground; 12 MX
  Aircraft: 20 Aircraft systems
  99 Other

Undesired Aircraft State Codes
  Configuration States
    1 Incorrect A/C configuration — flight controls, brakes, thrust reversers, landing gear
    2 Incorrect A/C configuration — systems (fuel, electrical, hydraulics, pneumatics, air-conditioning, pressurization, instrumentation)
    3 Incorrect A/C configuration — automation
    4 Incorrect A/C configuration — engines
  Ground States
    20 Proceeding towards wrong runway
    21 Runway incursion
    22 Proceeding towards wrong taxiway / ramp
    23 Taxiway / ramp incursion
    24 Wrong gate
  Aircraft Handling States — All Phases
    40 Vertical deviation
    41 Lateral deviation
    42 Unnecessary WX penetration
    43 Unauthorized airspace penetration
    44 Speed too high
    45 Speed too low
    46 Abrupt aircraft control (attitude)
    47 Excessive banking
    48 Operation outside A/C limitations
  Approach / Landing States
    80 Deviation above G/S or FMS path
    81 Deviation below G/S or FMS path
    82 Unstable approach
    83 Continued landing — unstable approach
    84 Firm landing
    85 Floated landing
    86 Landing off C/L
    87 Long landing outside TDZ
  99 Other Undesired States

Error Management Worksheet (continued)

  E2 | In running the Before Takeoff Checklist, the FO skipped the takeoff data item. | 1 | 200
  E3 | FO called "flaps up" prior to the flap retraction speed. | 2 | 299

  E2 | No | Errors mismanaged — The bug error should have been caught with the Before Takeoff Checklist, but the FO unintentionally skipped that item. All checklists during this phase were poorly executed. The FO caught the error during the takeoff roll.
  E3 | No | Error managed — CA saw that the aircraft was not at the proper speed and waited to retract the flaps. Good monitoring in this case.

Error Management Worksheet (continued)

  E4 | CA and FO failed to verify multiple altitude changes.
  E5 | FO, who was new to the aircraft, let it slip a little to the left during the final approach. Resulted

in landing left of the centerline. | Phase: 4

Error Management:
E4 | Threat: No
E5 | Threat: No | Error mismanaged. FO tried to correct but still landed left of the centerline. Approach was stable and they made the first high-speed taxiway. The CA did not verbalize the deviation during the approach. | Undesired state: 86 (Landing off C/L)

Intentional Noncompliance Error Codes

Sterile Cockpit Errors
100 Sterile cockpit violation
Callout Errors
104 Omitted takeoff callouts (i.e., V-speeds)
105 Omitted climb or descent callouts
106 Omitted approach callouts
Crew to ATC Errors
109 Altitude deviation without ATC clearance
110 Course or heading deviation without ATC clearance (deviation more than 20 degrees)
111 Use of nonstandard ATC phraseology
112 Omitted position report to ATC
113 Omitted non-radar environment report to ATC
114 Omitted call signs to ATC
Checklist Errors
120 Checklist performed from memory
121 Completed checklist not called "complete"
122 Checklist not performed to completion
123 Use of nonstandard checklist protocol (i.e., use of nonstandard responses)
124 Omitted checklist
125 Self-performed checklist - no challenge or response
126 Omitted abnormal checklist
127 Self-initiated checklist - not called for by PF
128 Self-initiated checklist - not called for by CA
129 Checklist performed late or at wrong time
Cross-Verification Errors
140 Failure to cross-verify MCP / altitude alerter changes
141 Failure to cross-verify FMC/CDU changes before execution
142 Failure to cross-verify altimeter settings
Hard Warning Errors
160 Failure to respond to GPWS warnings
161 Failure to respond to TCAS warnings
162 Failure to respond to overspeed warning
Briefing Errors
170 Omitted takeoff briefing
171 Omitted approach briefing
172 Omitted flight attendant briefing (only for the first flight of a trip or crew change)
173 Omitted engine-out briefing
179 Intentional failure to arm spoilers
Approach Errors
180 Failure to execute a go-around after passing procedural bottom lines of an unstable approach
181 Speed deviation without ATC clearance
183 Intentionally flying below the G/S
184 PF makes own flight control settings
Automation and Instrument Setting Errors
185 PF makes own MCP changes
186 PF makes own FMC changes
187 Failure to set altitude alerter
189 Setting altimeters before the transition altitude
190 Using equipment placarded inoperative
Other Noncompliance Errors
195 Taxi-in or out without a wing walker
196 A/C operation with unresolved MEL item
199 Other noncompliance errors not listed in the code book

Procedural Error Codes

Checklist Errors
200 Missed checklist item
201 Wrong checklist performed
202 Checklist performed late or at the wrong time
203 Forgot to call for checklist
206 Wrong response to a challenge on a checklist (i.e., item not checked that was responded to as "checked")
207 Completed checklist not called "complete"
209 Omitted checklist
233 Omitted abnormal checklist
Primary Instrument or Panel Errors
210 Wrong altimeter settings
211 Wrong bug settings (i.e., airspeed or altimeter)
212 Failure to set altitude alerter
213 Failure to cross-verify altimeter settings
214 Failure to cross-verify altitude alerter
Lever and Switch Errors
215 Failure to extend the flaps on schedule
216 Failure to retract the flaps on schedule
217 Wrong display switch setting
218 Failure to leave thrust reversers extended
219 Failure to lower the landing gear on schedule
220 Failure to bring up the landing gear on schedule
221 Failure to extend the speed brakes on landing
222 Failure to retract the speed brakes
223 Failure to engage thrust reversers on landing
224 Failure to retract thrust reversers after landing
225 Failure to turn on the landing lights
226 Wrong fuel switch setting
227 Failure to turn on TCAS
228 Failure to turn on the fasten seat belt sign
229 Failure to arm spoilers
230 Failure to turn on the A/C packs (no pressurization)
231 Wrong panel setup for an engine start
278 Wrong power settings for T/O
279 Wrong autobrake setting
232 Other incorrect switch or lever settings
Mode Control Panel Errors
234 Failure to cross-verify MCP / altitude alerter changes
235 Wrong MCP altitude setting dialed
236 Wrong MCP vertical speed setting dialed
237 Wrong MCP speed setting dialed
238 Wrong MCP course setting dialed
239 Wrong MCP heading setting dialed
240 Wrong setting on the MCP autopilot or FD switch
241 Wrong MCP mode executed
242 Wrong MCP mode left engaged
243 Manual control while a MCP mode is engaged
244 Failure to execute a MCP mode when needed
245 Wrong MCP navigation select setting (NAV/GPS/ILS/VOR switch)
246 PF makes own MCP changes
247 Wrong MCP setting on the auto-throttle switch
Flight Management Computer / Control Display Unit Errors
249 Failure to cross-verify FMC/CDU changes / position
250 Wrong waypoint / route settings entered into the FMC
251 Failure to execute a FMC mode when needed
252 Wrong mode executed in the FMC
253 Wrong mode left engaged in the FMC
254 Wrong present position entered into the FMC
255 Wrong weights / balance calcs entered into the FMC
256 Wrong speed setting entered into the FMC
257 PF makes own FMC changes
258 Wrong FMC format for input
205 Wrong approach selected in the FMC
204 Other wrong CDU entries / settings
259 Wrong nav radio frequency
Radio Errors
260 Wrong ATIS frequency dialed
261 Wrong ATC frequency dialed
262 Wrong squawk
Documentation Errors
263 Wrong ATIS information recorded
264 Wrong runway information recorded
265 Wrong V-speeds recorded
266 Wrong weights and balance information recorded
267 Wrong fuel information recorded
268 Missed items on the documentation (flight plan, NOTAMS, or dispatch release)
269 Misinterpreted items on the documentation (flight plan, NOTAMS, or dispatch release)
270 Wrong time calculated in the flight plan
271 Wrong clearance recorded
Callout Errors
275 Omitted takeoff callouts (i.e., V-speeds)
276 Omitted climb or descent callouts
277 Omitted approach callouts
Job Sequence Errors
280 Executing the correct job procedures out of sequence
Handling Errors
281 Unintentional lateral deviation
282 Unintentional vertical deviation
286 Unintentional speed deviation
Ground Navigation Errors
283 Attempting or actually turning down the wrong runway
284 Attempting or actually turning down the wrong ramp / taxiway / gate
287 Attempting or actually lining up for the incorrect runway
288 Attempting or actually lining up off C/L
289 Failure to execute a go-around after passing procedural bottom lines of an unstable approach
290 Missed runway
291 Missed taxiway
292 Missed gate
Hard Warning Errors
293 Failure to respond to GPWS warnings
294 Failure to respond to TCAS warnings
Briefing Errors
272 Incomplete flight attendant briefing
273 Incomplete cruise briefing
274 Incomplete approach briefing
295 Omitted takeoff briefing
296 Omitted approach briefing
297 Omitted flight attendant briefing
298 Omitted engine-out briefing
Other Procedural Errors
299 Other procedural errors not listed in the code book
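The code book assigns each error category its own hundreds range: 100s intentional noncompliance, 200s procedural, 300s communication, 400s proficiency, 500s operational decision. That convention makes tabulating observations straightforward during analysis. The following is a minimal sketch only, not part of the ICAO text; the function names and the sample observation set are illustrative:

```python
# Illustrative sketch -- not part of ICAO Doc 9803. It shows how the
# hundreds-range convention of the LOSA code book could drive a simple
# tally of observed errors during data analysis.

ERROR_CATEGORIES = {
    1: "Intentional Noncompliance",  # codes 100-199
    2: "Procedural",                 # codes 200-299
    3: "Communication",              # codes 300-399
    4: "Proficiency",                # codes 400-499
    5: "Operational Decision",       # codes 500-599
}

def category_for(code: int) -> str:
    """Map a code-book number to its error category via its hundreds digit."""
    return ERROR_CATEGORIES.get(code // 100, "Unknown")

def tally(observations):
    """Count observed errors per category; observations is an iterable of codes."""
    counts = {}
    for code in observations:
        cat = category_for(code)
        counts[cat] = counts.get(cat, 0) + 1
    return counts

# Hypothetical observation set, using codes from the lists in this appendix.
sample = [200, 211, 299, 124, 300, 502]
print(tally(sample))
```

Run on the hypothetical sample, the tally reports three procedural errors and one each of intentional noncompliance, communication, and operational decision.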

Communication Error Codes

Crew to ATC Errors
300 Wrong readbacks or callbacks to ATC
301 Missed ATC calls
302 Omitted call signs to ATC
303 Failure to give readbacks or callbacks to ATC
305 Omitted position report to ATC
306 Omitted non-radar environment report to ATC
307 Misinterpretation of ATC instructions
309 Crew omitted ATC call
310 Missed instruction to hold short
Crew to Crew Errors
319 Wrong airport communicated
320 Wrong taxiway communicated
321 Wrong runway communicated
322 Wrong takeoff callouts communicated
323 Wrong climb and descent callouts communicated
324 Wrong approach callouts communicated
325 Wrong gate assignment communicated
335 Crew miscommunication that led to a misinterpretation
336 Wrong engine-out procedures stated
Other Communication Errors
350 Misinterpretation of ATIS
399 Other communication errors not listed in the code book

Proficiency Error Codes

400 Lack of systems knowledge
401 Lack of automation knowledge
402 Lack of stick-and-rudder proficiency
403 Lack of knowledge to properly contact ATC
404 Lack of procedural knowledge
405 Lack of weather knowledge
406 Lack of knowledge of standard ATC phraseology
407 Lack of knowledge to contact company (i.e., gate assignments)
499 Other knowledge or proficiency based errors not listed in the code book

Operational Decision Error Codes

Descent and Approach Errors
500 Failure to execute a go-around before reaching procedural bottom lines
501 Unnecessary low maneuver on approach
502 Approach deviation (lateral or vertical) by choice
503 Decision to start the descent late
520 Operating at the edge of the performance envelope (no buffer for error)
Navigation Errors
510 Navigation through known bad weather that unnecessarily increased risk (i.e., thunderstorms or wind shear)
512 Decision to navigate to the wrong assigned altitude
513 Decision to navigate on the incorrect heading or course
514 Decision to navigate without ground clearance
521 Speed too high for operating environment
ATC Errors
530 Accepting instructions from ATC that unnecessarily increased risk
531 Making a request to ATC that unnecessarily increased risk
532 Failure to verify ATC instructions
533 Altitude deviation without ATC notification
534 Course or heading deviation without ATC clearance
535 Accepting a visual in nonvisual conditions
Crew Interaction Errors
540 Non-essential conversation at inappropriate times
Automation Errors
550 FMC over-reliance - used at inappropriate times
551 FMC under-reliance - not used when needed
552 Heads-down FMC operation
553 Discretionary omission of FMC data (e.g., winds)
Instrument Errors
560 Lack of weather radar use
Checklist Errors
570 Failure to complete a checklist in a timely manner (i.e., after-takeoff checklist)
Paperwork Errors
590 Failure to cross-verify documentation or paperwork
Other Operational Decision Errors
599 Other operational decision errors not listed in the code book

Threat and Error Management Worksheet Codes

Threat Codes
Departure / Arrival Threats:
1 Adverse weather / turbulence / IMC
2 Terrain
3 Traffic - air or ground congestion, TCAS warnings
Operational Threats:
30 Operational time pressure - delays, OTP, late arriving pilot or aircraft
31 Missed approach
32 Flight diversion
Cabin Threats:
40 Cabin event / distraction / interruption
41 Flight attendant error
ATC Threats:
50 ATC command - challenging clearances, late changes
51 ATC error
52 ATC language difficulty
53 ATC non-standard phraseology
54 ATC radio congestion
55 Similar call signs

Who Committed / Detected Codes
Flight crew: 1 CA; 2 FO; 3 SO / FE
Other people: 8 ATC; 9 Flight attendant; 10 Dispatch; 11 Ground; 12 MX
Aircraft: 20 Aircraft systems
99 Other

Undesired Aircraft State Codes
Configuration States:
1 Incorrect A/C configuration - flight controls, brakes, thrust reversers, landing gear
Aircraft Handling States (Approach / Landing States):
80 Deviation above G/S or FMS path
81 Deviation below G/S or FMS path
82 Unstable approach

LOSA Crew Interview

1. Training
a) Is there a difference between how you were trained and how things really go in line operations?
b) If so, why?
2. Standardization
a) How standardized are other crews that you fly with?
b) If there is a lack of standardization, what do you think are the reason(s) for procedural non-compliance?
3. Automation
a) What are the biggest automation "gotchas" for this airplane?
4. Overall safety improvements: concerns and suggestions for improvement
a) Flight Ops
b) Dispatch
c) Airports and ATC
d) SOPs

Appendix B
EXAMPLE OF AN INTRODUCTORY LETTER BY AN AIRLINE TO ITS FLIGHT CREWS

To: All US Airways Pilots
From: Captain Ed Bular, Senior Director, Flight Operations
      Captain Ron Schilling
Subject: Line Operations Safety Audit (LOSA)

In addition to using US Airways pilots as LOSA observers, we will also use three observers from the UT Human Factors Research Program. These gentlemen are very experienced LOSA observers, having worked with the UT program for many years. They are John Bell, Roy Butler and James Klinect, and their credentials can be verified by requesting that they present a copy of their FAA jumpseat authorization.

Please extend your usual professional courtesies to the LOSA observation team, and thank you for your unfailing cooperation.

Sincerely,
Captain Ed Bular, Senior Director, Flight Operations
Captain Ron Schilling, Director, Flight Training and Standards
Captain Pete Eichenlaub, Director, Flight Safety and Quality Assurance
Captain Terry McVenes, Chairman, ALPA Central Air Safety Committee

Appendix C
LIST OF RECOMMENDED READING AND REFERENCE MATERIAL

Amalberti, R. La conduite de systèmes à risques. Paris: Presses Universitaires de France, 1996.
Flin, R. and L. Martin. "Behavioural markers for crew resource management."
Klein, G. A., J. Orasanu, R. Calderwood and C. E. Zsambok. Decision Making in Action: Models and Methods. Norwood, New Jersey: Ablex Publishing Corporation, 1993.
Klinect, J. R., J. A. Wilhelm and R. L. Helmreich. "Event and Error Management: Data from Line Operations Safety Audits." In Proceedings of the Tenth International Symposium on Aviation Psychology. The Ohio State University, 1999, pp. 130-136.
Law, J. R. and J. A. Wilhelm. "Ratings of CRM skill markers in domestic and international operations: A first look." In Proceedings of the Eighth International Symposium on Aviation Psychology. Columbus, Ohio: The Ohio State University, 1995.
Maurino, D. E., J. Reason, A. N. Johnston and R. Lee. Beyond Aviation Human Factors. Hants, England: Avebury Technical, 1995.
Pariès, J. "Evolution of the aviation safety paradigm: Towards systemic causality and proactive actions." In B. Hayward and A. Lowe (Eds.), Proceedings of the 1995 Australian Aviation Psychology Symposium. Hants, England: Avebury Technical, 1996, pp. 39-49.
Reason, J. Managing the Risks of Organizational Accidents. Hants, England: Avebury Technical, 1998.
Taggart, W. R. "The NASA/UT/FAA Line/LOS checklist: Assessing system safety and crew performance." In Proceedings of the Eighth International Symposium on Aviation Psychology. Columbus, Ohio: The Ohio State University, 1995.
Vaughan, D. The Challenger Launch Decision. Chicago: The University of Chicago Press, 1996.
Woods, D. D., L. J. Johannesen, R. I. Cook and N. B. Sarter. Behind Human Error: Cognitive Systems, Computers and Hindsight. Wright-Patterson Air Force Base, Ohio: Crew Systems Ergonomics Informat