Successful Training in Gastrointestinal Endoscopy. Group of authors.
Figure 6.34 An endoscopist practices on an ex vivo bovine colon model.
For terminal ileum intubation skills, there is no substitute for patient‐based experience. Computer and ex vivo animal models do not recreate the ileocecal valve well enough to be effective for either learning or assessing this skill. Instead, practice of valve intubation should be encouraged with every colonoscopy during training. This provides the repetition required to gain the skill and allows programs to accurately assess and monitor ileal intubation rates on an ongoing basis.
Ongoing assessment
Suggestions as to the ideal method for training and assessment have been made for each of the cognitive and motor skills addressed in this chapter (Table 6.2). Each section noted the need for ongoing assessment, which makes it possible to document when a fellow has reached the threshold of competence and, more importantly, helps define what factors equate to competence. Such continuous monitoring is rarely done at most institutions. For more than 15 years, experts in endoscopy education have been calling for continuous measurement of fellows' skills as they progress toward competency [42, 43]. In recent years, greater emphasis has been placed on the documentation of competence by professional organizations such as the ASGE and by training regulatory bodies such as the Residency Review Committee (RRC) of the Accreditation Council for Graduate Medical Education (ACGME). This section will focus on the development and use of one such method of ongoing skills assessment, applied from the beginning of training until graduation.
Many institutions utilize a global evaluation system in which, toward the end of training, supervising staff make subjective recommendations as to the preparedness of the trainee to operate alone. Though commonly used, this type of assessment is profoundly subjective and prone to significant biases. It also does not allow for early identification of learners below the average learning curve, nor does it provide ample time to remediate these fellows as graduation draws close. Instead, professional societies have tried to develop recommendations to standardize training. Unfortunately, these recommendations are based on a few publications of learning curve data that involved very small numbers of subjects, were predominantly retrospective, and focused on very narrow definitions of competency based primarily on cecal intubation rates [42–45]. From these data, it has been suggested that a trainee should perform a minimum of 140 colonoscopies before competency can be assessed [46]. However, these competency guidelines make only one suggestion as to what benchmark defines competence (cecal intubation rate). The ASGE suggests that a "90% technical success" rate at reaching the cecum is needed to be deemed minimally competent [47]. Though better defined, this is still only one parameter of performance and does not take into account all of the other motor or cognitive skills. The RRC guidelines state, "Assessment of procedural competence should not be based solely on a minimum number of procedures performed, but on a formal evaluation process." What is needed is an ongoing, formalized assessment that covers a broad range of both motor and cognitive skills (Table 6.3).
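The two numeric criteria discussed above (a 140-procedure minimum before competency is assessed, and a 90% cecal intubation rate) can be combined into a simple running tally. The sketch below is purely illustrative; the function names and the decision to require both criteria simultaneously are assumptions for demonstration, not a published scoring algorithm.

```python
# Illustrative sketch: tracking a trainee's cumulative cecal intubation
# rate against the suggested 140-procedure minimum and the ASGE 90%
# technical-success benchmark. Names and logic are hypothetical.

def running_success_rate(outcomes):
    """Cumulative success rate after each procedure (1 = cecum reached, 0 = not)."""
    total = 0
    rates = []
    for n, reached in enumerate(outcomes, start=1):
        total += reached
        rates.append(total / n)
    return rates

def meets_benchmark(outcomes, minimum_n=140, threshold=0.90):
    """Flag competency only once both the procedure count and the rate criteria are met."""
    if len(outcomes) < minimum_n:
        return False
    return sum(outcomes) / len(outcomes) >= threshold
```

Plotting the output of `running_success_rate` over a fellowship would give the kind of learning curve against which programs could compare an individual trainee's progress.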
One such evaluation process is the Direct Observation of Procedural Skill (DOPS) form. This form focuses on six broad motor skill parameters and was developed by researchers as a means of assessing these skills following an intensive hands‐on training course [27, 48]. Another standardized skill evaluation form has been developed at the Mayo Clinic (Rochester, MN) in conjunction with the ASGE; it grades a spectrum of both cognitive and motor skills of trainees during colonoscopy. These component skills of colonoscopy were identified by a panel of expert endoscopists and educators and are based on the general training and competency recommendations outlined by professional societies [49]. From this, a blueprint for the evaluation tool was created, and after refinement by the ASGE training committee, the Assessment of Competency in Endoscopy (ACE) tool was developed (Table 6.4). Some version of this survey has been in use for over a decade at Mayo and is completed for every colonoscopy from the first day of a fellow's training until graduation. The supervising staff enter these data directly into the institution's endoscopy database during the procedure, so that the performance data are linked to other procedural data such as cecal intubation and withdrawal times, special therapies applied, polyp detection, and complications. This assessment tool was recently made available as part of the ProVation Endoscopy reporting software package used by a large number of endoscopy centers. Linking the evaluation to the procedural database spares staff from duplicating data already generated automatically as part of the procedure. Admittedly, completing a fellow skills assessment with every procedure is labor‐intensive and daunting, but it can certainly be accomplished.
Alternatively, assessment forms such as the DOPS or ACE could be completed on a periodic basis, giving instructors a "snapshot" in time of how a trainee compares to the expected learning curve.
Table 6.3 Competency metrics for continuous assessment

Cognitive skills
- Appropriate use of initial sedation
- Continuous monitoring and management of patient comfort and depth of sedation
- Identification of landmarks/awareness of scope location
- Accuracy and sophistication of pathology recognition