Evaluation of a Rating Tool for Measuring General Surgeons' Non-technical Performance in the Operating Room
|ClinicalTrials.gov Identifier: NCT01562054|
Recruitment Status : Completed
First Posted : March 23, 2012
Last Update Posted : August 9, 2012
The operating room (OR) is a high-tech and complex domain in which clinicians' clinical knowledge and good technical skills are essential but not sufficient. Retrospective studies have shown that a large proportion of the errors that occur during surgery are caused not by a lack of clinical skill but by insufficient communication and teamwork. Prospective studies have shown that insufficient teamwork in the OR increases the risk of complications and that communication failures within the surgical team cause procedural errors and delays. It is therefore evident that surgeons and surgical teams need skills in communication, teamwork, decision making, and leadership (so-called non-technical skills) to ensure a good outcome for surgical patients.
Internationally, different behavioural marker systems have been developed to support the training and assessment of non-technical skills, such as the Non-Technical Skills for Surgeons (NOTSS) system from Scotland (12). In Denmark, certain aspects of non-technical skills appear in the curricula for surgical trainees, but no formal training or assessment currently takes place.
The investigators developed a behavioural marker system for general surgeons adapted to the Danish cultural and organisational context (NOTSSdk). The system consists of four categories: leadership, situation awareness, decision making, and communication & teamwork. Each category is underpinned by three or four elements, and each element has several behavioural examples illustrating good and poor behaviour.
The system needs evaluation to ensure that it is valid and can be used reliably to observe and rate surgeons' intraoperative performance.
Aim: The aim of this study is to examine the psychometric properties of NOTSSdk in relation to its construct validity, reliability, and usability in a simulated setting.
- What is the inter-rater reliability among raters?
- Are these ratings consistent with expert ratings (= "gold standard ratings")?
- Can a one-day training session improve novice raters' agreement (construct validity)?
METHODS The study is designed to explore the reliability of ratings made by surgeons and to examine whether reliability improves with training. The study setting will be a one-day course in which surgeons are asked to rate the non-technical performance of surgeons in simulated video scenarios. Two distinct sets of ratings will be obtained: the first at the beginning of the day, when the surgeons have had no prior training in or knowledge of rating non-technical skills, and the second at the end of the day, after five hours of training.
Sample A sample of general surgeons from all hospitals in Zealand, Denmark, will be recruited to participate as raters after the heads of the departments have been contacted and informed about the project. The investigators wish to recruit consultant surgeons (n=10) and senior residents in surgery (n=10) who have an interest in education and supervision. Once recruited, background information such as age, gender, position, years of surgical experience, and prior experience with assessing others will be collected.
Training and rating session The surgeons will attend a one-day course consisting of three parts: a first rating session to obtain baseline ratings of 6 video scenarios from the participants, then a teaching and training session, and then a second rating session with 4 new videos and 2 of the baseline videos. During the first rating session the surgeons will have only the NOTSSdk rating form and will not be allowed to discuss with each other. The second rating session will be like the first, but in addition to the rating form the participants will also be allowed to consult the full NOTSSdk behavioural marker system. The teaching will include an introduction to Human Factors and non-technical skills in surgery, and a short description of the development and intended use of NOTSSdk. The subsequent training will allow participants to practise rating surgeons' non-technical skills on 4 video-recorded scenarios and then discuss their ratings.
Ratings of the lead surgeons' performance will be obtained at the category level for each scenario.
Video recordings 14 simulated video recordings will be made, showing general surgeons "acting" in their own roles during surgery. The recordings will be scripted to display a range of routine situations (such as the unnoticed prolongation of a laparoscopic appendectomy) and non-routine situations (such as unexpected bleeding during open surgery). The scenarios will be designed according to a structure aimed at producing specific variation in the surgeons' non-technical performance across the four categories of NOTSSdk.
|Condition or disease|
|Focus of Study: Surgeons' Non-technical Skills|
|Study Type :||Observational|
|Estimated Enrollment :||20 participants|
|Official Title:||Evaluation of a Rating Tool for Measuring General Surgeons' Non-technical Performance in the Operating Room|
|Study Start Date :||February 2012|
|Actual Study Completion Date :||April 2012|
- Rating of non-technical performance [ Time Frame: One day ]
Please refer to this study by its ClinicalTrials.gov identifier (NCT number): NCT01562054
|Danish Institute for Medical Simulation|
|Herlev, Denmark, 2730|