One of the major problems in medical education is that knowledge and skills acquired through university teaching do not transfer well to clinical practice [1, p. 149]. In spite of learning a great deal of content knowledge about the functioning of the human body, medical students struggle to transfer that knowledge to one of the core disciplinary practices—differential diagnosis. Differential diagnosis requires “the distinguishing of a particular disease or condition from others that present similar clinical features” [2]. The lack of transfer may stem from current ways of instruction, which are largely based on what we call the “Knowledge as Starting Point” paradigm. In this paradigm, the focus is on first teaching massive amounts of basic knowledge without adequate attention to situating that knowledge in disciplinary practice. Consequently, in classical medical school settings, there is limited access to real-world practical training and a significant lack of disciplinary practice.
One solution to the problem above, supported by theories of situated cognition and learning, is to situate the learning of medical students in the core disciplinary practice of differential diagnosis (see Figure 1). That is, instead of first learning about medicine and then trying to transfer it to differential diagnosis, we aim to make differential diagnosis the very anchor for learning. For example, instead of just studying static knowledge such as the anatomy of the knee and potential knee injuries, learning starts with an actual problem: knee pain. Starting from this situation, knowledge about the knee can be developed, and diagnostic procedure skills can be picked up and practiced. Thus conceived, instead of “Knowledge as Starting Point”, we advance the “Situations as Starting Points” [3, pp. 26–34] approach in medical education.
As defined by the new Swiss learning objective catalogue for medical students, called PROFILES [3], “Situations as Starting Points” are situations medical students have to be able to deal with on the first day of their residency. While we acknowledge that there are several possibilities for implementing “Situations as Starting Points”, we aim to explore the use of a computer-based virtual environment (CVE).
For example, doctor-patient scenarios can be implemented in a CVE, providing students with the opportunity to train differential diagnosis skills and to anchor learning to practice in a safe environment.
Research has yet to show why and when CVEs are effective, both in general and in (medical) education. In particular, research on virtual simulation-based learning environments intended to train differential diagnosis skills does not exist. With this project we aim to close this gap in learning sciences and education research and to offer recommendations on when and how to best implement a CVE for differential diagnosis in a medical curriculum.
Figure 1. Initial situation, reasoning, solution approaches, and implementation plans of the present project.
Situated learning is a type of learning activity in which learning is “situated” in a particular context. Key features of situated learning are: authentic contexts, authentic activities, observing expert performances, multiple perspectives, coaching and scaffolding, integrated assessment, collaboration, and reflection and articulation. Generally speaking, from a situative lens, all learning is situated. Hence the term “situated learning” needs to be specified more precisely. For this project, situated learning is defined as “learning which is situated in disciplinary practice”. An example relevant to the present study is the internship experience of working in a consultation room of a medical practice or hospital.
Immersed in this situation, students can be cognitively and physically active in the actual work environment. Situated learning allows students to carry out tasks and solve problems in an environment that reflects the nature of such tasks. It has been shown that “being situated” is a critical element in fostering learning. Information acquired in a meaningful context and related to prior knowledge and experiences can develop into better, larger, and more richly linked conceptual understanding. Additionally, learning that is situated (in disciplinary practice) may generalize better to a wider range of situations compared to traditional classroom teaching.
There are different ways to implement situated learning. Currently, in medical education, situated learning and simulated practical training are based on real persons (actors) portraying patients with a given health problem. Training these actors costs time, money, and effort. Moreover, the number of scenarios is limited. Within the present study we focus on the implementation of situated learning in a CVE.
This combination opens up the opportunity to create virtual doctor-patient scenarios, which save money and effort in the long term. Furthermore, students could train differential diagnosis skills in a safe and always available medical learning environment.
Preparation for Future Learning
A further benefit of situated learning, as conceived above, is that “Situations as Starting Points” can then be designed as a preparation for future learning. Medical students cannot possibly experience all situations they will face in their professional life during their studies. But it is possible to educate future doctors to be capable of adaptation and change, with minds that can encompass new ideas and developments [15, p. 4]. Hence, students must learn to adaptively transfer recently acquired knowledge and skills to subsequent learning situations (e.g., the clinical context).
Preparation for future learning is understood to be the capacity to learn new information, to use resources effectively and innovatively, and to invent new strategies for learning and problem solving in practice [16, p. 115]. For example, as clinicians work, particularly in situations of novelty and complexity, they often find that straightforward applications of their knowledge are insufficient to address patient needs. Instead, they are required to use their knowledge flexibly to develop an effective solution within the patient, social, and system contexts in which they find themselves. Those who are able to do so work adaptively to provide optimal care for their patients, while gaining from the experience as part of their own continuous learning [16, p. 116].
There is evidence that clinical students may not be learning effectively from all facets of their practice, potentially because their training has not fully prepared them to do so [16, p. 115]. Instead of teaching adaptive expertise, contemporary curricula are heavily focused on developing routine expertise. To address this gap, we aim to explore how best we can integrate a CVE into the curricula to aid the development of expertise in differential diagnosis.
Computer-based Virtual Environments (CVEs)
Virtual environments provide a way for people to visualize, manipulate, and interact with computer-generated environments and exceptionally complex data, engaging multiple human senses. For our study, virtual environments will be created for and presented on screens.
A CVE promises the unique ability to generate almost any relevant scenario and to align those scenarios with “Situations as Starting Points” in a safe environment. CVE tools and platforms provide an immersive experience to users and allow learning to be individualized. Training in a CVE is scalable and not limited to a certain number of students. Even though scalability comes at a cost (e.g., the number of headsets), the use of a CVE is potentially beneficial for institutions and lecturers from an economic and efficiency perspective. Virtual training simulations have been shown to have significantly beneficial outcomes on learning in many medical fields, for example the acquisition of medical expertise, the development of technical skills, and improved knowledge representation, contextualization, and transfer. However, there is a lack of research regarding how, why, and when CVEs are effective, both in general and in (medical) education. In particular, no research on virtual simulation-based learning environments intended to train differential diagnosis skills has been conducted.
The three major goals of the project are to:
- Develop a medical CVE platform tool to train differential diagnosis skills based on situated learning, using design-based research;
- Examine the effectiveness of the medical CVE using controlled and quasi-experimental studies;
- Conduct studies in ecologically valid settings to examine the influence of CVE practice before or during the clinical internship in the hospital on clinical differential diagnosis performance.
These three goals will be organized and implemented via three work packages, described next.
Implementation and Methods
Work Package 1: Building a CVE Platform for Differential Diagnosis – Design-based Research
The first step of the project comprises the technical development and the design-based research needed to build a CVE platform that meets the required properties of situated learning. We will develop a suitable CVE platform for differential diagnosis training in cooperation with the Game Technology Center of ETH Zürich. The first scenario we develop will be based on a patient with knee pain (“Situation as Starting Point”: “swollen or painful joints, morning stiffness, reduction of joint motility” [3, p. 30]). Within a simulation, the students will be able to work through a complex clinical patient scenario before giving the most likely diagnosis to the patient. The classical trajectory of differential diagnosis management includes (i) a patient interview, (ii) a physical examination, (iii) laboratory test ordering, (iv) imaging, and (v) the (working) diagnosis.
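The five-phase trajectory above could, for example, be represented in the CVE as a simple ordered state machine that also logs every student action for later assessment. The following is a minimal Python sketch; all names are illustrative rather than part of the actual platform:

```python
from enum import Enum, auto

class Phase(Enum):
    """Phases of the classical differential diagnosis trajectory."""
    INTERVIEW = auto()       # (i) patient interview
    PHYSICAL_EXAM = auto()   # (ii) physical examination
    LAB_TESTS = auto()       # (iii) laboratory test ordering
    IMAGING = auto()         # (iv) imaging
    DIAGNOSIS = auto()       # (v) (working) diagnosis

PHASE_ORDER = list(Phase)    # Enum iteration preserves definition order

class ScenarioSession:
    """Tracks one student's progress through a single CVE patient scenario."""

    def __init__(self, starting_situation):
        self.starting_situation = starting_situation  # e.g. "knee pain"
        self.phase_index = 0
        self.actions = []  # (phase, action) log, used later for assessment

    @property
    def phase(self):
        return PHASE_ORDER[self.phase_index]

    def act(self, description):
        """Record a student action (question asked, maneuver, ordered test)."""
        self.actions.append((self.phase, description))

    def advance(self):
        """Move on to the next phase of the trajectory."""
        if self.phase_index < len(PHASE_ORDER) - 1:
            self.phase_index += 1

session = ScenarioSession("swollen or painful knee")
session.act("Ask: when did the pain start?")
session.advance()  # interview finished; continue with the physical exam
```

Keeping the action log separate from the phase logic would later allow the same records to feed both formative feedback and the expert-benchmark comparison.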
The design of the CVE will incorporate scaffolds derived from situated learning theories. Furthermore, forms of formative feedback will be embedded to scaffold students’ thinking and actions and to improve learning. The focus will be on the fine-tuning of these scaffolds, their adaptation to differential diagnosis scenarios in the CVE, and their implementation. Performance progress will be trackable, and the learners’ clinical reasoning and decision-making will be fostered. To create genuine aspects of interviewing, a database of authentic doctor and patient questions will be developed and integrated into the scenario.
To assess students’ performance, we developed the following criteria catalogue based on clinical demands and scientific evidence (further and more detailed criteria will be defined): questions asked, physical exam maneuvers, time, diagnostic testing, costs, harm to the patient, confidence in the diagnosis, and empathy. Performance benchmarks will be set by collecting data from medical experts while they go through the CVE scenario (without providing feedback). This will serve as a point of comparison for the further work packages. The same procedure will be performed for further scenarios based on other “Situations as Starting Points”.
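Once expert benchmark runs exist, each student run can be scored against them. A hedged sketch of such a comparison, using an illustrative subset of the criteria above (all names and numbers are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class PerformanceRecord:
    """Per-run measurements for a subset of the assessment criteria."""
    questions_asked: int
    exam_maneuvers: int
    time_minutes: float
    test_costs_chf: float
    correct_diagnosis: bool

def score_against_benchmark(student, expert):
    """Relate a student run to the expert benchmark of the same scenario.

    Ratios above 1 mean the student needed more of that resource than
    the expert did; the diagnosis itself is scored correct/incorrect.
    """
    return {
        "question_ratio": student.questions_asked / expert.questions_asked,
        "time_ratio": student.time_minutes / expert.time_minutes,
        "cost_ratio": student.test_costs_chf / expert.test_costs_chf,
        "diagnosis_correct": student.correct_diagnosis,
    }

expert = PerformanceRecord(12, 5, 15.0, 400.0, True)   # benchmark run
student = PerformanceRecord(18, 5, 30.0, 800.0, True)  # one student run
report = score_against_benchmark(student, expert)
```

Ratios are only one possible aggregation; criteria such as empathy or harm to the patient would need separate rating scales.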
Work Package 2: Experimental Studies – CVE effects on Preparation for Future Learning from instruction
Having developed the CVE, we will next examine its effectiveness experimentally. As represented in Figure 2, two subject groups will go through different sequences of learning activities. The experimental condition will place CVE training prior to direct instruction (CVE-I). The comparison condition will follow the typical sequence in which CVEs are used as a learning tool after direct instruction (I-CVE). A pre-test related to differential diagnosis will be conducted. The test will cover the differential diagnosis process and conclusions about a patient’s diagnosis, and it will assess the learning development of students regarding facts and theoretical knowledge about conducting a differential diagnosis. Therefore, the test will be repeated after session 1 and at the end of the intervention. All subjects will be provided with an instruction tutorial about the CVE platform. For each group, the CVE activity will take an amount of time / a number of rounds playing through a scenario that has yet to be determined.
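The random assignment of participants to the two sequence conditions could be implemented along the following lines (a minimal sketch; the group labels match the conditions above, while the sample size is hypothetical):

```python
import random

def assign_conditions(participant_ids, seed=42):
    """Randomly split participants into the two sequence conditions:
    CVE-I (CVE training before instruction) and I-CVE (instruction first)."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"CVE-I": ids[:half], "I-CVE": ids[half:]}

groups = assign_conditions(range(1, 41))  # 40 hypothetical participants
```

A fixed seed documents the randomization for later replication; in practice, stratified assignment (e.g., by prior knowledge from the pre-test) may be preferable.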
Regarding instruction, there are two conceivable ways this can take place. First, electronically, where students observe a medical expert examination while synchronously receiving explanations. Second, lecture-based, where study subjects attend lectures on differential diagnosis at ETH Zürich. Two post-tests will be performed. Besides the test mentioned above, a simulation-based test within the CVE will be completed. The CVE post-test will assess the students’ disciplinary practice performance based on selected criteria.
Figure 2. Study protocol to investigate effects of CVE on PFL in instruction. Two different sequences of learning activities will be compared to evaluate the potential of CVE in preparation for future learning. Tests 1, 2, and 3 are identical and will be performed outside the CVE, i.e., based on multiple-choice questions.
Work Package 3: Ecological field study – Comparison of differential diagnosis skill development in CVE and clinical practice internship
We will evaluate the differential diagnosis skill development during the clinical internship and compare it with the development of skills resulting from CVE training only. Additionally, the CVE effects on preparation for future learning in clinical practice will be investigated.
It will be a challenge to define comparison conditions between CVE and clinical training/assessment, mainly because there is no standardized evaluation tool for assessing medical interns in clinical practice. Two groups will be formed. Subjects will be medical master students starting their year of clinical internships. The topics focused on will be orthopaedics and internal medicine.
In a first step, we will directly compare clinical internship training with CVE training (test 2 compared to test 1). This will reveal potential differences in differential diagnosis skill development. One group will go through the clinical internship in orthopaedics in the way that is standard in medical education. The second group will first go through CVE training focused on orthopaedics. During the internship, the students will be consistently assessed by their medical supervisor; we have yet to elaborate how this will be done. For the CVE group, the number of CVE scenarios to be completed has yet to be determined and depends on the number of cases students in the intern group encounter in the clinic. That is, if an intern encounters 50 cases in the hospital, a student going through CVE training will complete 50 scenarios as well (some of which can be identical or similar). This aims to create valid comparison conditions.
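The matched case counts described above could be operationalized by pairing each clinical case topic an intern encountered with one CVE scenario on the same topic. A minimal sketch, where scenario identifiers and topics are illustrative:

```python
import random

def assign_cve_scenarios(intern_case_topics, scenario_pool, seed=0):
    """For each case topic an intern encountered in the clinic, pick one
    CVE scenario on the same topic, so both groups complete the same
    number of cases on comparable content."""
    rng = random.Random(seed)  # fixed seed keeps the training plan reproducible
    return [rng.choice(scenario_pool[topic]) for topic in intern_case_topics]

# Hypothetical scenario pool and observed intern case topics:
pool = {"knee pain": ["knee-01", "knee-02"], "back pain": ["back-01"]}
cases = ["knee pain", "back pain", "knee pain"]
plan = assign_cve_scenarios(cases, pool)  # one CVE scenario per clinical case
```

This presumes the scenario pool covers every topic the interns encounter; in practice a fallback (e.g., the closest related topic) would be needed for uncovered cases.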
In a second step, we will evaluate general and specific transfer from CVE training into clinical practice. To this end, the CVE group will be tracked along their further clinical trajectory during the internships in orthopaedics and internal medicine. As the CVE scenarios cover orthopaedics, comparing these test results with performance in clinical internal medicine will evaluate general transfer, whereas the comparison with clinical orthopaedics will evaluate specific transfer. The evaluation will be based on a comparison of the manual assessments of the interns made by their medical supervisors. Results of CVE tests 2, 3, and 4 will be taken into account as well.
Figure 3. Study protocol to investigate transfer and differences in developing differential diagnosis skills between clinical practice and CVE. Comparing the assessments of interns made by their medical supervisors along the further internship trajectory will evaluate effects of CVE on general and specific transfer into clinical practice.
Within this project we will build a computer-based virtual training tool for medical students. We will show whether medical CVE simulations are an adequate approach to improving students’ differential diagnosis skills. With findings about the working mechanisms of CVEs, we will fill a gap in learning sciences and education research and offer recommendations on when and how to best implement CVEs for differential diagnosis in a medical curriculum. These findings could also be applied to many other disciplines beyond the medical field.
We will be the first to establish such a CVE training tool in German, thereby meeting several regulatory requirements of medical education in Switzerland. By providing such a tool, we aim to support the implementation of the new PROFILES learning objective catalogue into the curricula of Swiss universities.
[1] S. Peters et al., “Enhancing the connection between the classroom and the clinical workplace: A systematic review,” Perspectives on Medical Education, 2017.
[2] “Differential diagnosis,” Merriam-Webster Dictionary, 2019. [Online]. Available: https://www.merriam-webster.com/dictionary/differential+diagnosis?show=0&t=1419934834. [Accessed: 11-Sep-2019].
[3] P. Michaud and P. Jucker-Kupper, Principal Relevant Objectives and Framework for Integrated Learning and Education in Switzerland (PROFILES). Bern: Joint Commission of the Swiss Medical Schools (SMIFK/CIMS), 2017.
[4] S. Nicola, I. Virag, and L. Stoicu-Tivadar, “VR medical gamification for training and education,” in Studies in Health Technology and Informatics, 2017.
[5] C. Moro, Z. Štromberga, A. Raikos, and A. Stirling, “The effectiveness of virtual and augmented reality in health sciences and medical anatomy,” Anat. Sci. Educ., 2017.
[6] N. E. Seymour, “VR to OR: A review of the evidence that virtual reality simulation improves operating room performance,” World Journal of Surgery, vol. 32, no. 2, pp. 182–188, 2008.
[7] A. Al-Khalifah, R. McCrindle, P. Sharkey, and V. Alex, “Using virtual reality for medical diagnosis, training and education,” Int. J. Disabil. Hum. Dev., vol. 5, no. 2, pp. 187–194, 2006.
[8] T. Raupach, C. Münscher, T. Pukrop, S. Anders, and S. Harendza, “Significant increase in factual knowledge with web-assisted problem-based learning as part of an undergraduate cardio-respiratory curriculum,” Adv. Health Sci. Educ., vol. 15, no. 3, pp. 349–356, 2010.
[9] G. Makransky et al., “Simulation based virtual learning environment in medical genetics counseling: An example of bridging the gap between theory and practice in medical education,” BMC Med. Educ., vol. 16, no. 1, 2016.
[10] W. C. McGaghie, S. B. Issenberg, E. R. Petrusa, and R. J. Scalese, “A critical review of simulation-based medical education research: 2003–2009,” Med. Educ., vol. 44, no. 1, pp. 50–63, 2010.
[11] J. S. Brown, A. Collins, and P. Duguid, “Situated cognition and the culture of learning,” Educ. Res., vol. 18, no. 1, pp. 32–42, 1989.
[12] J. Lave and E. Wenger, Situated Learning: Legitimate Peripheral Participation (Learning in Doing: Social, Cognitive and Computational Perspectives). Cambridge: Cambridge University Press, 1991.
[13] J. L. Kolodner, “An introduction to case-based reasoning,” Artif. Intell. Rev., 1992.
[14] T. D. Parsons et al., “Objective structured clinical interview training using a virtual human patient,” Stud. Health Technol. Inform., 2008.
[15] General Medical Council, “Tomorrow’s Doctors: Recommendations on Undergraduate Medical Education,” London, 1993.
[16] M. Mylopoulos, R. Brydges, N. N. Woods, J. Manzone, and D. L. Schwartz, “Preparation for future learning: A missing competency in health professions education?,” Med. Educ., 2016.
[17] M. Mylopoulos and G. Regehr, “Cognitive metaphors of expertise and knowledge: Prospects and limitations for medical education,” Med. Educ., 2007.
[18] M. Mylopoulos and G. Regehr, “How student models of expertise and innovation impact the development of adaptive expertise in medicine,” Med. Educ., 2009.
[19] M. Mylopoulos, G. Regehr, and S. Ginsburg, “Exploring residents’ perceptions of expertise and expert development,” Acad. Med., 2011.
[20] W. C. McGaghie, S. B. Issenberg, E. R. Cohen, J. H. Barsuk, and D. B. Wayne, “Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence,” Acad. Med., vol. 86, no. 6, pp. 706–711, 2011.
[21] J. M. Weller, “Simulation in undergraduate medical education: Bridging the gap between theory and practice,” Med. Educ., vol. 38, no. 1, pp. 32–38, 2004.
[22] P. R. Lichtstein, “The Medical Interview,” in Clinical Methods: The History, Physical, and Laboratory Examinations, 3rd ed., H. K. Walker, W. D. Hall, and J. W. Hurst, Eds. Boston, MA: Butterworths, 1990, pp. 29–36.
[23] J. P. Kassirer, “Imperatives, expediency, and the new diagnosis,” Diagnosis, 2014.
[24] A. Verghese, E. Brady, C. C. Kapur, and R. I. Horwitz, “The bedside evaluation: ritual and reason,” Ann. Intern. Med., vol. 155, no. 8, pp. 550–553, 2011.
[25] K. Henriksen and J. Brady, “The pursuit of better diagnostic performance: A human factors perspective,” BMJ Quality and Safety, 2013.
[26] D. Berger, “A brief history of medical diagnosis and the birth of the clinical laboratory,” Med. Lab. Obs., 1999.
[27] I. L. Vegting, M. Van Beneden, M. H. H. Kramer, A. Thijs, P. J. Kostense, and P. W. B. Nanayakkara, “How to save costs by reducing unnecessary testing: Lean thinking in clinical practice,” Eur. J. Intern. Med., 2012.
[28] S. G. Pauker and J. P. Kassirer, “Therapeutic decision making: A cost-benefit analysis,” N. Engl. J. Med., 1975.
[29] S. G. Pauker and J. P. Kassirer, “The threshold approach to clinical decision making,” N. Engl. J. Med., 1980.
[30] J. P. Kassirer, “Our stubborn quest for diagnostic certainty: A cause of excessive testing,” N. Engl. J. Med., 1989.
[31] R. M. Epstein, B. S. Alper, and T. E. Quill, “Communicating evidence for participatory decision making,” Journal of the American Medical Association, 2004.