Lung ultrasound training: a systematic review of published literature in clinical lung ultrasound training
Critical Ultrasound Journal volume 10, Article number: 23 (2018)
Clinical lung ultrasound examinations are widely used in the primary assessment and monitoring of patients with dyspnoea or respiratory failure. Despite being increasingly implemented, there is no international consensus on education, assessment of competencies, and certification. Today, training is usually based on the concept of mastery learning, but is often unstructured and limited by the bustle of daily clinical life. The aim of this systematic review is to provide an overview of published learning studies in clinical lung ultrasound, and to collect evidence for future recommendations in lung ultrasound education and certification.
According to PRISMA guidelines, three databases (PubMed, Embase, Cochrane Library) were searched, and two reviewers examined the results for eligibility. Included publications were described and assessed for level of evidence and risk of bias according to guidelines from Oxford Centre for Evidence-Based Medicine and Cochrane Collaboration Tool for Risk of Bias assessment.
Of 7796 studies screened, 16 were included: twelve pre- and post-test studies, three descriptive studies, and one randomized controlled trial. Seven studies included web-based or online modalities, while the remaining studies used didactic or classroom-based lectures. Twelve studies (75%) provided hands-on sessions, and of these, 11 assessed participants’ hands-on skills. None of the studies used validated written or practical assessments. The highest level of evidence score was 2 (n = 1); the remaining studies scored 4 (n = 15). Risk of bias was assessed as high in 11 of 16 studies (68.75%).
All educational methods increased the theoretical and practical knowledge obtained at the ultrasound courses, but the included studies were substantially heterogeneous in setup, learning and assessment methods, and outcome measures. Based on the currently published studies, it was not possible to construct clear guidelines for future education and certification in clinical lung ultrasound, but the use of different hands-on training facilities appears to contribute to different aspects of the learning process. This systematic review demonstrates a lack of learning studies in this field, and research with validated theoretical and practical tests for assessment is needed.
The clinical use of lung ultrasound (LUS) in emergency departments, critical care units as well as in respiratory departments has increased substantially. LUS has an excellent diagnostic accuracy for many of the most common causes of acute respiratory failure (e.g., cardiogenic pulmonary edema, pneumonia, pleural effusion, and pneumothorax) and increases the proportion of patients receiving a correct diagnosis and treatment [1,2,3,4,5,6]. Furthermore, LUS is a rapid, bedside, non-invasive, radiation-free diagnostic tool, which the clinician can use as an integrated part of the initial clinical assessment as well as for monitoring purposes. However, the value of LUS is dependent on competent operators performing the examination.
Several societies, e.g., the European Federation of Societies for Ultrasound in Medicine and Biology, British Thoracic Society and European Association of Cardiovascular Imaging, have clear guidelines and descriptions of logbook, number of performed supervised examinations needed, and basic knowledge curricula, which must be obtained before performing unsupervised lung ultrasound examinations [7,8,9]. However, no clear evidence-based guidelines or recommendations exist on the training needed to obtain adequate skills for performing an LUS examination.
Like other procedures and treatments, LUS education and certification should be based on the best available evidence, with validity evidence gathered in learning or clinical studies. The aims of this systematic review were to provide an overview of the published literature on learning studies in clinical LUS, and to explore and collect evidence for future recommendations in lung ultrasound education and competency assessment.
Materials and methods
The systematic review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A systematic literature search was conducted in PubMed, Embase, and the Cochrane Library in collaboration with a research librarian from the Medical Research Library at Odense University Hospital, Denmark. Terms used: lung OR lungs OR pulmonal OR pulmonary OR thoracic OR thorax OR thoracal OR mediastinal OR mediastinum; ultrasound OR ultrasonic OR ultrasonography OR ultrasonics OR sonography OR sonographic; medical education OR education OR learning OR training OR clinical competences OR curriculum, including MeSH terms. The search was completed on March 7, 2017. The inclusion criterion was: learning or education studies in lung or thoracic ultrasound. No exclusion criteria were applied regarding language, animal studies, etc.
After removal of duplicates, all titles and abstracts were screened by two authors (PP and KRM). All articles that potentially met the broad inclusion criterion, as well as indeterminate articles, were assessed by full-text reading. Abstracts concerning the following were excluded: ultrasound education in organ systems or anatomical structures other than the lungs or thorax, cost–benefit analyses, case reports, author responses, letters to the editor, and comments. Diagnostic accuracy studies were excluded from this review, except for those that also included a learning study or had objectives or outcomes assessing training or development of competencies in LUS. The same two authors then read all eligible articles, and each article was discussed until consensus was reached. In case of disagreement, a third reviewer (CBL) was consulted. A hand search was conducted on the references of the included full articles. Level of evidence was categorized using the Oxford Centre for Evidence-Based Medicine (OCEBM) system for levels of evidence. Bias in each included article was discussed and rated according to the Cochrane Collaboration risk of bias tool.
The initial search yielded 7796 publications. After removal of duplicates, author responses, and conference abstracts, 4656 publications remained. Of these, 4622 were excluded. Most of the excluded studies did not meet the inclusion criterion at all and covered completely different topics, aims, and objectives than education or assessment in LUS or thoracic ultrasound. Because of the wide search strategy, the number of publications not relevant to this systematic review was large. Figure 1 presents the eligibility process and exclusion of articles. Reasons for full-text exclusion were: diagnostic accuracy studies (n = 6), testing the effectiveness and use of different models/phantoms or hands-on facilities for LUS (n = 7), describing implementation, use, and feasibility of LUS (n = 3), a train-the-trainer course (n = 1), and assessment of respiratory therapists’ theoretical and clinical skills in LUS (n = 1). The reference lists of included papers were screened without leading to the inclusion of further studies. Study design, participants, learning strategy, hands-on facilities, and assessment are described below. Additional information is shown in Tables 1 and 2.
In total, 12 pre- and post-test studies used improvement in written test scores to evaluate the educational intervention [13,14,15,16,17,18,19,20,21,22,23,24]. Five of the pre- and post-test studies had a follow-up time ranging from 1 week to 6 months, average 13 ± 4.83 weeks [14, 16, 18, 20, 25], and one recorded the number of scans performed from baseline to follow-up. Three descriptive studies were identified [25,26,27] and one randomized controlled trial. Five of the studies (31%) were courses in general critical care ultrasound or basic skill ultrasound, in which thoracic or lung ultrasound was a specific and independently evaluated topic [17, 19,20,21, 24].
Most study participants were ultrasound novices, particularly in clinical LUS, and ranged from medical students to respiratory therapists, emergency department residents, and anesthesiologists. Three studies also included other healthcare professionals such as prehospital providers, nurses, and veterinarians [18, 22, 24]. Two studies excluded participants with previous ultrasound certification or attendance at a formal critical care ultrasound course within the preceding 12 months [20, 28], and two studies only included participants with no experience [21, 24].
The learning strategies in the included studies were heterogeneous in time spent on lectures, theoretical presentation, and assessment method. The most commonly used educational tool was didactic lectures (n = 12, 75%), with session lengths varying from 30 min to 2.5 h. Abbasi et al. presented a single-topic course (detection of pneumothorax with LUS) with a 30 min didactic lecture; this was the only single-topic course that used a didactic lecture as the educational tool. The remaining studies introduced classroom-based learning covering a more comprehensive introduction to full LUS, primarily with 15–30 min of education on each of the main topics. Some studies had a clear overview and description of the topics included in the didactic lectures, whereas other studies only stated the overall general topics (Table 1).
Four studies described courses lasting from one to three full days with alternating theoretical and hands-on sessions [14, 19, 20, 24]. Four studies incorporated live ultrasound examinations by instructors in the theoretical sessions to combine theoretical and practical understanding [19, 20, 24, 26]; otherwise, images and video clips were frequently used in the lectures.
Web-based learning or online presentations were used in 7 (44%) studies [16, 19, 21, 23, 25, 27, 28]. Four of these had only online presentations or web-based learning modules without didactic lectures or hands-on sessions [16, 25, 27, 28]. Cuca et al. studied a web-based learning program evaluated by nine experts of the international lung ultrasound consensus committee, used the same written tests, topics, and curriculum as the study by Breitkreutz et al., and compared the results of the two studies. Krishnan et al. presented a 5 min online presentation on the use of ultrasound as a diagnostic tool to confirm pneumothorax. Gargani et al. had a 26 min online presentation with a primary focus on B-line presentation and interpretation, and the possibility of real-time demonstrations or meetings with instructors on Skype. Subsequently, participants were to upload seven LUS examinations for evaluation. When the instructors had approved the seven videos, the participants could proceed to the second part of the training, comprising a set of 44 videos focused on counting B-lines. In the randomized trial by Edrich et al., one study group received a web-based educational learning program with no hands-on session, another group had a 45 min classroom-based lecture and 20 min of hands-on training, and the control group had no lectures at all. The participants were evaluated with a pretest, a post-test, and a 4 week retention test.
Hands-on training facilities
Twelve of the sixteen studies included hands-on sessions in the educational program [13,14,15, 17, 19,20,21,22,23,24, 26, 28]. Simulators were used in three studies [19, 20, 26], and healthy live models in eight [14, 15, 19,20,21, 24, 26, 28]. In five studies, emergency department patients or patients with respiratory failure in other departments were examined as part of the training program [15, 17, 23, 26, 27], including three studies in which LUS video clips obtained from hospitalized patients were used in the assessment [13, 18, 25]. Porcine models were used in two studies [14, 22]. Four studies combined the use of different models, patients, and/or simulators [14, 15, 19, 20, 26].
Thirteen studies used written examinations to assess the theoretical knowledge obtained in the educational programs [13,14,15,16,17,18,19,20,21,22,23,24,25]. All used a multiple-choice item format covering true/false, one-best-answer, single-correct-answer, and multiple-response questions, and all included images and/or video clips in the questions. None of the studies described gathering validity evidence for either the pre- and post-tests or the practical skill assessment tools. One study, however, had its multiple-choice questions (MCQs) peer-reviewed by the instructors ahead of the study, but the vast majority of the assessment checklists, written tests, and curricula were described as being based on the international consensus recommendations for point-of-care lung ultrasound by Volpicelli et al.
Eleven studies assessed participants’ practical skills [14, 15, 17, 19,20,21,22,23,24, 26, 28]. The most common method for evaluating practical skills was observer checklists, but these varied greatly. Participants in See et al. scanned 12 zones with an instructor at the bedside, who was allowed to comment or help if needed; videos were stored, and participants then interpreted the clips in front of the instructor. Connolly et al. assessed participants’ practical skills by letting them scan four windows; videos were stored and rated by blinded instructors. Breitkreutz et al. had 16 predefined sonoanatomical structures that participants should present, which were rated on a standardized sheet. Hulett et al. and Dinh et al. used checklists of 46 and 84 items, respectively, evaluating image acquisition and interpretation [17, 20]. Furthermore, Dinh et al. presented four cases with 20 case questions each. Heiberg et al. performed online testing of the students’ practical skills scored as correct/incorrect, with offline evaluation of image quality and interpretation. Greenstein et al. used 20 standardized examination tasks and 20 video-based examinations, whereas Oveland et al. presented scans on porcine models with confirmation of pneumothorax, oral feedback from an instructor, and a further scan session.
The level of evidence of the included studies is presented in Table 2 according to the OCEBM guidelines, and the assessment of risk of bias in Table 3. No study scored the highest level of evidence; one study scored level 2, and the remaining studies scored level 4. Risk of bias was assessed as high in the majority of the studies (Table 3).
The vast majority of the currently published LUS learning studies are one-group pre- and post-test studies with a low level of evidence. This study design can only tell us that trainees learned something from the specific intervention, but does not provide any evidence on how to build a curriculum. The studies are heterogeneous in choice of educational program, teaching methods, participant assessment, and study outcome. In addition to conventional classroom-based didactic lectures, web-based learning was often chosen as an alternative or additional method and was used in 7 of the 16 included studies [16, 19, 21, 23, 25, 27, 28], but only one study measured the effect of the two educational methods and compared the results from the two groups in a randomized controlled trial.
Web-based learning strategies have several proven advantages. Ruiz et al. describe increased accessibility and flexibility as important benefits. Web-based learning standardizes course content and delivery independent of teacher presentation and variation; students are in control of their learning sequence and pace; and it can be designed to include outcome assessment [31, 32]. Furthermore, it is possible to implement different types of multimedia such as graphics, videos, animations, and text to enhance learning. A meta-analysis by Cook et al. showed that medical web-based learning was significantly superior to no intervention, and that participants could achieve results similar to those of traditional learning methods such as classroom-based learning in numerous diagnostic and therapeutic content areas. Edrich et al. correspondingly found the same improvement. Since web-based education has outcomes similar to classroom-based lectures, it would be natural to include other parameters such as maintenance of theoretical and practical skills with follow-up assessments, time efficiency, and user satisfaction surveys. The meta-analysis, like this systematic review, suffers from considerable heterogeneity in study participants, learning methods, and outcome measures.
Web-based learning in general point-of-care ultrasound has been favorably evaluated in several studies [34,35,36]. In Kang et al., outcome measures were not only improvement in test score, but also hours spent on organizing the course and course costs; in both respects, web-based learning was more cost-effective. None of the studies included in this systematic review incorporated a cost–benefit analysis, but one concluded that an ultrasound symposium requires a massive setup and considerable financial resources because of the number of ultrasound machines, phantoms, volunteers, instructors, and rooms needed. When building a theoretical curriculum in medical education, the teacher-to-student ratio can be low without significantly affecting learning. However, training practical skills requires a closer relationship and interaction between instructor and trainee, and the optimal trainee-to-instructor ratio is as close to 1:1 as possible. Oveland et al. also discussed cost–benefit issues and concluded that porcine models as simulators, and animal laboratory training in general, combined with ethical considerations, may be an option but involve time, venue, and cost dilemmas.
The practical skill assessments of course participants in the included studies diverged in the number of checkpoints and topics. Even though the included studies used various checklists to keep the assessment as objective and standardized as possible, only two studies had blinded reviewers scoring the stored images or ultrasound sequences afterwards [19, 28], and no validity evidence was provided for any of the checklists.
LUS imaging and examinations differ from other point-of-care ultrasound examinations, because image interpretation and recognition of pathology are based on sonographic artifacts rather than direct imaging findings such as, e.g., thickening of the gallbladder wall, pericholecystic fluid, and sludge as signs of acute cholecystitis. Therefore, there is a great need for a standardized and validated tool for assessing the understanding of LUS, image acquisition, and image interpretation, and, additionally, the capability to correlate the patterns and interpretations with lung pathology and physiology.
In general, when introducing a new assessment tool, validity evidence should be gathered to ensure reliability and to make meaningful interpretation possible. Today, one of the most widely described and recognized frameworks for validity testing is that of Messick. Five distinct sources of validity evidence in scientific experimental data have been described: content, response process, internal structure, relationship to other variables, and consequences. Some types of assessment demand a stronger emphasis on one or more sources of evidence, depending on the curriculum, consequences, and properties of inferences. All sources should be researched with the highest level of evidence possible, but within this setting, an assessment tool should emphasize content-related evidence with some evidence of response process, internal structure, and consequences.
A recent study has constructed and gathered validity evidence for an instrument to assess LUS competences by obtaining international consensus among experts in multiple specialties. The objective structured assessment of lung ultrasound skills (LUS-OSAUS) could form the foundation of further and more homogeneous studies in the future.
Theoretical assessment was the preferred method for measuring the degree of theoretical knowledge obtained before and after a course, but the single-group pretest–post-test design suffers from minimal internal and external validity. When evaluating medical education through this setup, it would be surprising if an increased post-test score were not found. This setup has been discussed and criticized for decades and is today considered obsolete [30, 40, 41]. A single-topic curriculum such as that presented in Krishnan et al., where participants were shown a 5 min online presentation on the detection of pneumothorax with LUS and assessed theoretically with 20 videos, shows that even a very short theoretical session leads to increased knowledge and pattern recognition. However, it provides no guarantee that the trainees can obtain the ultrasound images themselves, or connect the patterns to relevant differential diagnoses in a clinical setting.
One study reported that its theoretical test was validated, but did not describe how this was done. Another had the questions peer-reviewed by the authors of the study. Written tests in general are proven to be motivating, to facilitate the learning process, and to be cost-effective. Disadvantages of using the same theoretical test as pretest, post-test, and follow-up test are recall bias, or “learning the test” [43, 44]. The majority of the studies tried to eliminate this bias by changing the order of the questions as well as the order of the answers. None of the participants in the included studies were blinded; since the participants knew that they were being evaluated, they may have been more motivated to enhance their performance in the tests.
There were large differences in the use of healthy live models, patients with respiratory failure or lung diseases, phantoms/simulators, and porcine models for hands-on training. The overall conclusion was that all models could contribute to increased hands-on competencies. In summary, the different models contributed to different aspects of the learning process: healthy live models were well suited for getting comfortable with the ultrasound devices, learning the advantages and disadvantages of various transducers, improving image optimization, and learning hand–eye coordination. With porcine models, it was possible to create pneumothoraces or pleural effusions, allowing trainees to train their visual understanding of these diagnoses, but, as discussed, animal laboratory models have several other limitations. Dinh et al. discuss the use of patients in an educational setting and found it difficult to incorporate and standardize live pathology given the logistical challenges of recruiting patients with specific diseases and sonographic patterns. See et al. reported that only a minority of the trainees scanned patients with pneumothorax due to the low prevalence of pneumothoraces. In addition, it is crucial not to delay diagnosis or initial treatment when using admitted patients in a learning study. Two studies used simulators for learning pathological patterns; both found simulators useful and state that, with the use of simulators, students engage in both acquiring images and interpreting the abnormal findings while assimilating muscle memory with cognitive learning.
We acknowledge that this literature review was constrained by the quantity and quality of the available evidence. Three databases, considered relevant for the topic, were searched; a broader search strategy could potentially have revealed more studies eligible for this systematic review, and we did not include unpublished data. However, all reference lists of publications eligible for full-text reading were searched with no additional findings. A minor part of the excluded publications concerned education in lung ultrasound in the context of ultrasound of other organ systems, e.g., abdominal ultrasound or eFAST (extended focused assessment with sonography for trauma). Various alternative or expanded protocols for lung ultrasound or combined ultrasound have been developed and anchored in different specialties, and the evaluation of education in these different protocols was beyond the aim of this study. Therefore, studies were only included if the educational outcome was based on lung ultrasound separately.
The included studies failed to contribute a compelling body of evidence on LUS education, and a meta-analysis could not be conducted because of differences in assessment tools and lack of comparability.
Standardized recommendations for education and certification in LUS cannot be established on the basis of the published studies because of heterogeneity in study design, low level of evidence, and high risk of bias in the included literature. All courses showed progress in both theoretical and practical skills, no matter which educational method was used. If recommendations were to be derived from the studies included in this systematic review and the existing medical education literature, it would be ideal to use a three-step mastery-learning approach. First, trainees should obtain theoretical knowledge through either classroom-based education or web-based lectures, with a curriculum based on expert opinion and a validated post-test with a pass–fail standard to ensure sufficient theoretical knowledge. Second, focused hands-on sessions on simulators, porcine models, or healthy subjects should continue until competency is demonstrated in the training environment using a performance test with solid evidence of validity. Third, trainees should perform supervised scanning of real patients with feedback from a trained instructor, who preferably uses an assessment tool to decide when the trainee is ready for independent practice. Virtual-reality simulators could play an important role in the training of LUS, especially of pathological cases, and could also provide standardized and objective assessments of competence. As far as we know, no studies have developed valid simulator-based tests of competence in LUS, even though simulators are commonly used in other specialties and have demonstrated great potential for reproducible and objective assessment and effects on skill and behavior [45,46,47].
In conclusion, more uniform, competency-based training programs and assessment tools are needed to ensure a higher standard of education and assessment in LUS. Furthermore, simulation training could contribute to hands-on training in a calm environment, making it possible to train on high-risk cases without putting patients at risk.
British Thoracic Society
European Federation of Societies for Ultrasound in Medicine and Biology
Preferred Reporting Items for Systematic Review and Meta-analysis
Oxford Centre for Evidence-Based Medicine
Grimberg A, Shigueoka DC, Atallah AN et al (2010) Diagnostic accuracy of sonography for pleural effusion: systematic review. Sao Paulo Med J 128:90–95
Alrajab S, Youssef AM, Akkus NI et al (2013) Pleural ultrasonography versus chest radiography for the diagnosis of pneumothorax: review of the literature and meta-analysis. Crit Care 17:R208
Laursen CB, Sloth E, Lassen AT et al (2014) Point-of-care ultrasonography in patients admitted with respiratory symptoms: a single-blind, randomised controlled trial. Lancet Respir Med 2:638–646
Pivetta E, Goffi A, Lupia E et al (2015) Lung ultrasound-implemented diagnosis of acute decompensated heart failure in the ED: a SIMEU multicenter study. Chest 148:202–210
Alzahrani SA, Al-Salamah MA, Al-Madani WH et al (2017) Systematic review and meta-analysis for the use of ultrasound versus radiology in diagnosing of pneumonia. Crit Ultrasound J 9:6
Long L, Zhao HT, Zhang ZY et al (2017) Lung ultrasound for the diagnosis of pneumonia in adults: a meta-analysis. Medicine 96:e5713
Education and Practical Standards Committee, European Federation of Societies for Ultrasound in Medicine and Biology (2006) Minimum training recommendations for the practice of medical ultrasound. Ultraschall Med 27:79–105
Havelock T, Teoh R, Laws D et al (2010) Pleural procedures and thoracic ultrasound: British Thoracic Society pleural disease guideline 2010. Thorax 65(Suppl 2):ii61–ii76
Neskovic AN, Hagendorff A, Lancellotti P et al (2013) Emergency echocardiography: the European association of cardiovascular imaging recommendations. Eur Heart J Cardiovasc Imaging 14:1–11
Moher D, Liberati A, Tetzlaff J et al (2010) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Int J Surg 8:336–341
Centre for Evidence-Based Medicine (2009) Oxford Centre for Evidence-Based Medicine 2011—levels of evidence
Higgins JP, Altman DG, Gotzsche PC et al (2011) The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ 343:d5928
Noble VE, Lamhaut L, Capp R et al (2009) Evaluation of a thoracic ultrasound training module for the detection of pneumothorax and pulmonary edema by prehospital physician care providers. BMC Med Educ 9:3
Oveland NP, Sloth E, Andersen G et al (2012) A porcine pneumothorax model for teaching ultrasound diagnostics. Acad Emerg Med 19:586–592
Breitkreutz R, Dutine M, Scheiermann P et al (2013) Thorax, trachea, and lung ultrasonography in emergency and critical care medicine: assessment of an objective structured training concept. Emerg Med Int 2013:312758
Cuca C, Scheiermann P, Hempel D et al (2013) Assessment of a new e-learning system on thorax, trachea, and lung ultrasound. Emerg Med Int 2013:145361
Hulett CS, Pathak V, Katz JN et al (2014) Development and preliminary assessment of a critical care ultrasound course in an adult pulmonary and critical care fellowship program. Ann Am Thorac Soc 11:784–788
Authors' contributions
Conception and design: PIP, KRM, LK, and CBL. Analysis and interpretation: PIP, KRM, OG, LK, BUN, and CBL. Data collection: PIP, KRM, and CBL. Overall responsibility and guarantor: PIP, OG, LK, and CBL. All authors read and approved the final manuscript.
Competing interests
All authors disclose no financial or personal relationship with other people or organizations that could inappropriately influence (bias) their work.
Availability of data and materials
Corresponding author, Pia Iben Pietersen, has full access to all data in the study and takes responsibility for the integrity of the data, and on behalf of the authors, gives permission to Critical Ultrasound Journal to publish all data and material used in this study.
Consent and copyright
This manuscript, including any part of its essential substance or figures, has not been published and is not under consideration for publication elsewhere. In addition, no simultaneous submissions of similar manuscripts have been made. All authors have given permission for Critical Ultrasound Journal to use any published material from the manuscript should the journal choose to publish it.
Ethics approval and consent to participate
An application was sent to The Regional Committees on Health Research Ethics for Southern Denmark, and no permission was needed (S-20172000-44).
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Pietersen, P.I., Madsen, K.R., Graumann, O. et al. Lung ultrasound training: a systematic review of published literature in clinical lung ultrasound training. Crit Ultrasound J 10, 23 (2018). https://doi.org/10.1186/s13089-018-0103-6