Although the assessment of medical competency has changed considerably over the last 50 years (Schuwirth & van der Vleuten, 2020), the foundations for fair assessment – with an eye for authenticity and professional development – in the area of medical interviewing were laid in the seventies and eighties (Kraan & Crijnen, 1987).

The medical interview is considered the ‘holy grail’ in the assessment of medical competence, because:

  • The medical interview is a competence in its own right
  • Competencies related to the medical interview, such as medical problem solving, knowledge, attitudes and skills, are involved
  • Important outcomes for both patient (facilitation, insight and compliance) and physician (diagnosis and the initiation of therapy) are achieved through the medical interview.

While Elstein et al. (1978) informed educators about the mechanics of medical problem solving, Kraan and Crijnen (1987) informed them about the dynamics of the medical interview.

It is unfortunate that Schuwirth & van der Vleuten (2020) ignore the medical interview as a core constituent of medical competence in their history of assessment in medical education, and take no notice of the early contributions that educators such as Crijnen and Kraan made in the eighties to the medical interview as measurement, as judgment, and as a system for reflection, feedback and assessment (Kraan & Crijnen, 1987).

The medical interview is a core medical competence and MAAS MI a robust measure of interviewing skills over time

MAAS MI has remained a robust measure of interviewing skills over the years: despite societal changes, the dynamics between patient and physician have not changed across time. Illustrative of this robustness, a review of the literature 40 years after the initial construction led us to add only one new item, Shares valid resources on the internet regarding diagnosis and treatment plan.

In the next paragraphs, we will contrast the development of the assessment of medical competence over the last 5 decades with the accomplishments on the medical interview, most of which were achieved in the eighties.

MAAS MI as Measurement

While assessment as measurement may well induce a very restrained approach to scale construction in the area of medical competence in general, we broadened our scope from the beginning and arrived at two goals for our model of medical interviewing skills:

  • The skills should form an ideal model of the initial interview
  • The model could be used in education for instruction, self-evaluation and assessment.

During the construction of MAAS Medical Interview, we relied heavily on the notion of a nomological network in which the items and scales – together constituting a comprehensive medical interview at a high level of proficiency – were considered to serve the processes of relationship building and information exchange necessary to achieve the purposes of the interview. See also: Validity.

While formulating items and criteria for scoring, we tried to formulate each item as behaviorally as possible to facilitate practice, reflection and feedback by inexperienced learners, while including a degree of judgment to foster meaning and face validity in the MAAS measures of medical interviewing skills. See also: Scale Construction.

We were the first to understand that reliability is essentially a sampling problem: we disentangled the variance components attributable to observers, the medical case, and the patient as a person under normal and constrained conditions, and estimated the number of observations necessary for a fair measure of interviewing skill. We concluded that with 16–20 observations, collected in a portfolio, teachers and supervisors obtain a robust measure of a learner’s achievement. See also: Achieving Fair Measures of Interviewing Skills in General Practice and Mental Health.
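The logic behind the 16–20 observation rule can be illustrated with a minimal sketch in Python. The single-observation reliability used below is an assumed, illustrative value, not the actual MAAS MI variance-component estimates; the Spearman–Brown projection simply shows how the reliability of a portfolio mean grows with the number of sampled interviews.

```python
# Minimal sketch of the sampling view of reliability.
# The single-observation reliability is an assumed, illustrative value,
# not the MAAS MI variance-component estimate.

def reliability_of_mean(single_obs_reliability: float, n_observations: int) -> float:
    """Spearman-Brown projection: reliability of the mean of n comparable observations."""
    r = single_obs_reliability
    return n_observations * r / (1 + (n_observations - 1) * r)

def observations_needed(single_obs_reliability: float, target: float = 0.80) -> int:
    """Smallest number of observations whose mean reaches the target reliability."""
    n = 1
    while reliability_of_mean(single_obs_reliability, n) < target:
        n += 1
    return n

if __name__ == "__main__":
    assumed_r1 = 0.20  # illustrative reliability of a single observed interview
    for n in (1, 4, 8, 16, 20):
        print(f"{n:2d} observations -> reliability {reliability_of_mean(assumed_r1, n):.2f}")
    print("Observations needed to reach 0.80:", observations_needed(assumed_r1))
```

With the assumed value of 0.20 for a single interview, the projection reaches 0.80 at around 16 observations, which mirrors the order of magnitude of the portfolio recommendation above.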

We also examined the scalability of the MAAS MI scales and found that each scale measures a single dimension of interviewing skill, and that each item marks a well-defined level of ability and proficiency on that dimension, because MAAS MI fulfills the strict criteria of Rasch’s one-parameter probabilistic scale model.

This has two consequences:

  • First, the measurement characteristics of the MAAS MI scales are very robust
  • Second, the behaviors contributing to the ability to interview are performed by physicians in a strictly hierarchical order and are well organized by difficulty and ability, which has consequences for instruction.

See also: Scalability of MAAS MI.
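For readers unfamiliar with the Rasch model, the sketch below (Python, with made-up ability and difficulty values, not the actual MAAS MI estimates) shows the one-parameter logistic form the scales were tested against: the probability of showing a skill depends only on the difference between the physician’s ability and the item’s difficulty, which is what produces the strictly hierarchical ordering of behaviors noted above.

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """One-parameter (Rasch) model: P(skill shown) = exp(ability - difficulty) / (1 + exp(ability - difficulty))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Illustrative, made-up item difficulties ordered from easy to hard;
# the real MAAS MI estimates are reported under Scalability of MAAS MI.
item_difficulties = {
    "asks an open question": -1.5,
    "summarizes the patient's story": 0.0,
    "checks the patient's understanding": 1.5,
}

for ability in (-1.0, 0.0, 2.0):
    probs = {item: round(rasch_probability(ability, b), 2) for item, b in item_difficulties.items()}
    print(f"ability {ability:+.1f}: {probs}")
```

Whatever the ability level, the easier item is always the more probable one, so a physician who masters a harder item is very likely to master the easier items as well.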

Taken together, medical interviewing skills as measurement developed differently from other medical competencies over the last 5 decades, because:

  • Separate items of interviewing skill contributed in an intrinsically meaningful way to the separate constructs and to the nomological net of the interview as a whole
  • Medical interviewing skills acknowledged the necessary contributions of related competences
  • Human judgment was allowed and even valued to ensure the validity of measures
  • Reliability was approached – for the first time – from a sampling perspective, allowing the formation of Triangulation Tracks.

It is unfortunate that Schuwirth & van der Vleuten (2020) do not take the medical interview – as the holy grail of assessment of medical competence – into account in their history of assessment in medical education.

MAAS MI as Judgement

In the nineties, the notion of assessment as measurement changed into assessment as judgment, whereby assessment should foster independence and critical thinking, and learners were expected to be involved in the assessment on the condition that they receive meaningful feedback (Schuwirth & van der Vleuten, 2000).

Independence, critical thinking and meaningful feedback should be fostered while mastering complex competences, such as medical interviewing skills

Many of the recommendations formulated in the nineties for the assessment of medical competence in general, however, were already applied in the assessment of medical interviewing skills in the eighties (Kraan & Crijnen, 1987).

In MAAS MI medical interviewing skills:

  • Candidates were assessed in (simulated) consultation hours in real life settings
  • Assessment was based on direct observation of behavior
  • Candidates were instructed about one comprehensive model of the medical interview at the end level of competency throughout medical school
  • Examiners were instructed about the clinical content of the interview, informed about what to look for, how to interpret it, and where to draw the line; manuals with instructions were available during the interview and later for feedback
  • Measures based on well-defined behavioral descriptions of interviewing skills were combined with global measures to foster self-assessment, reflection, feedback, self-regulation and adaptation.

In their history of assessment of medical competence, Schuwirth & van der Vleuten (2020) noticed an improvement in the understanding and accuracy of examiners’ judgments since 2010. They attributed this improvement to staff development and to a shared understanding by staff of the competencies under study.

Although this conclusion may apply to developments in the assessment of medical competences in general, this inference definitely does not apply to the assessment of medical interviewing skills.

In medical interviewing, the formulation of MAAS MI in the eighties (Crijnen & Kraan, 1985), together with its different tools and accompanying research, drove the improvement in examiners’ proficiency in the medical interview and led to a shared understanding of what constitutes a good interview in recent decades.

MAAS MI enhances the understanding of a good interview and improves the proficiency of medical examiners

MAAS MI formulated at end level of performance

Schuwirth & van der Vleuten (2020) describe, as an example of this development, the positive impact of the formulation of Entrustable Professional Activities (EPAs) on the judgments by experts and the subsequent improvement of the psychometric criteria of the assessment, and mention, for instance, the formulation of EPAs for Workplace-based Assessment in General Practice.

In medical interviewing, by contrast, and as a recent example illustrates, MAAS MI inspired the formulation of EPAs on Shared Decision Making (SDM). Thirty-two experts on SDM and medical education in general practice in the Netherlands went through three Delphi rounds and reached agreement on 4 EPAs with 18 behavioral indicators for postgraduate medical education (Baghus et al., 2021). A closer look at the behavioral indicators brings to light that the items can be traced back – in identical order and identical wording, without any autonomous or authentic contribution – to the MAAS MI items Exploring Reasons for Encounter and Presenting Solutions (Crijnen & Kraan, 1985).

MAAS MI inspires the validation of Shared Decision Making

A further look reveals that MAAS MI (Crijnen & Kraan, 1985) is to be preferred over the EPA formulations of SDM (Baghus et al., 2021), because in MAAS MI:

  • The context for SDM is taken into account, whereas in the EPAs the context is lacking
  • The relation with the patient is included, whereas the EPAs operate in isolation
  • MAAS SDM takes into account the nature and impact of the underlying medical condition and provides the necessary information, whereas the medical condition is ignored in the EPAs for SDM
  • The trade-off between History-taking and Presenting Solutions is taken into account, whereas the EPAs focus only on presenting and sharing information while ignoring cognitive overload in the patient.

We conclude – unlike Schuwirth & van der Vleuten – that a fair understanding of the competence at end level, combined with a comprehensive set of behaviorally formulated interviewing skills constituting the competence under study, enhances the quality of the learning and assessment experience.

Educators should try to understand medical competence at end level rather than engage in distracting activities, such as the formulation of EPAs and the narratives of judgment

Meaningful Feedback

As the development of assessment as judgment was intended to foster a learner’s development by providing, among other things, meaningful feedback, educators in medical competence developed a set of Workplace-based Assessments (WBA); again, however, remarkable differences are observed when comparing these developments with those on the medical interview.

Schlegel et al. (2012) acknowledged the importance of feedback by (simulated) patients as a tool to provide specific information about a student’s performance relative to a standard. The authors redeveloped the Quality of Simulated Patient Feedback Form because they were not satisfied with the ‘only existing instrument,’ the Maastricht Assessment of Simulated Patients (2004).

However, a comparison of these instruments with the Patient Satisfaction with Communication Checklist, the first available instrument for systematic and standardized feedback by (simulated) patients about the communication by students and physicians (Crijnen & Kraan, 1985), reveals at least three remarkable differences:

  • First, the items in the QSPFF can be characterized as formal and procedural, and devoid of real-life meaning and significance; items are, for example, SP first gave positive feedback and SP gave feedback from patient’s perspective. PSCC items, by contrast, go straight to the core of the interview for the patient and provide meaningful feedback, with items such as The doctor gave me the opportunity to tell my own story. For feedback to be successful, it should target issues at the core of the medical interview that are directly relevant for any encounter between doctor (or nurse) and patient.

Patient Satisfaction with Communication targets core issues directly relevant for any doctor-patient encounter

  • Second, QSPFF has only been used with simulated patients, whereas PSCC was developed and standardized on both real and simulated patient encounters, allowing for the most relevant feedback by both real and simulated patients, and allowing for feedback from day one until the end of medical school and beyond. Because standardization is established on both real and simulated patients, students and educators obtain feedback on how students would perform in real life.
  • Third, as QSPFF focuses only on the formal procedures of feedback, students and residents are deprived of instruction in, and an understanding of, what is of real importance to the patient in a medical consultation.

Students should be instructed about what a patient values in the consultation

Taken together, second-generation Workplace-based Assessments consider the person who provides the judgement (patient, expert, educator) as a barrel that is filled during the interview and emptied during debriefing, while leaving it entirely up to the judge what the barrel is filled with; we see a similar phenomenon in feedback by simulated patients (Schlegel et al., 2012), in raters’ performance (Govaerts et al., 2013), in expert feedback in clerkships (Bok et al., 2016), and in training in doctor-patient communication (Giroldi et al., 2017), where judges develop personal schemas once they are involved in a consultation.

In first-generation Workplace-based Assessments, by contrast, such as MAAS-Global (Crijnen & Kraan, 1984) and Patient Satisfaction with Communication (Crijnen & Kraan, 1987), common ground and generalizability of judgement across physicians and patients are sought: in MAAS-Global, judgement is supported by behavioral anchor points based on a fair understanding of the medical interview, and in Patient Satisfaction with Communication, judgement and feedback are based on those dimensions of communication valued by all patients.

Judgement of a student’s medical interview should be based on a fair understanding of the processes of the interview and the outcomes for patient and physician

Private judgements by experts

A third recent development in the assessment of complex medical competences concerns the nature and formulation of private judgment by experts. Valentine & Schuwirth (2019) examined the narratives of experienced physicians while formulating feedback to their registrars, and found that clear, information-rich terminology regarding the demonstration of expertise, personal credibility, professional credibility, use of a predefined structure, and relevance would provide for meaningful feedback that assists the learner.

The clear, information-rich terminology of MAAS MI is essential in articulating a judgment of performance by experts 

Although these findings may well contribute to helpful feedback in poorly defined and under-researched areas of medical competence, we argue that, regarding medical interviewing skills, a fair understanding of the interview contributes most to fair, well-informed and well-articulated feedback.

Table 1. Articulating Judgment of Performance in MAAS Medical Interview

The quality of feedback and judgment by teachers is directly influenced by the level of their experience and expertise in the competence under study (Bok et al., 2016). The reality is that in medical school, feedback on medical interviewing skills is often provided by psychologists and other educators with no experience in interviewing patients. In residency training, supervisors may well be highly qualified in their field of expertise and have a wealth of experience in interviewing their patients, but they are not experts in the medical interview.

Recommendation

We recommend, therefore, that a fair understanding of the medical interview, acquired through teaching and instruction, be combined with the clear, information-rich terminology of MAAS MI when articulating judgements of performance on medical interviewing skills.

MAAS MI as a System for Reflection, Feedback & Professional Development

Ultimately, complex competencies, such as the medical interview, are best learned by combining plenty of practice with self-reflection and feedback, and with an understanding of the skill at end level. Triangulation, the process of bringing together three sources of information, provides for an assessment that is robust, yet nuanced and refined enough to steer the learner and educator towards the next stage of achievement. In the next paragraphs, we develop a comprehensive system of Triangulation Tracks to foster reflection, feedback and assessment on medical interviewing skills in medical school and residency training.

Triangulation, combining 3 sources of information, balances robustness with nuance and refinement in feedback and assessment

Integrated and holistic assessment

Although, over the years, for most assessments of clinical competence the competency under study was subdivided into smaller and smaller – and thereby meaningless – units, requiring a reset to provide for an integrated and holistic assessment, the same development did not occur for medical interviewing skills (Schuwirth & van der Vleuten, 2020).

MAAS Medical Interviewing skills – 68 well-described and well-defined interviewing skills organized in 6 scales, together constituting a comprehensive medical interview at the end level of medical competence – remained stable across medical school and residency training, and across the last 4 decades (Kraan & Crijnen, 1987-2023).

The approach of basing decisions about medical competence on meaningful triangulation of different sources of information on different occasions, adopted by educators over the last 10 years to keep assessment meaningful and holistic, has been available for medical interviewing skills since 1987.

5 Triangulation Tracks in MAAS MI provide for meaningful self-assessment, reflection, feedback, formative & summative evaluation

Self-assessment, reflection, feedback by peers and supervisors, and formative as well as summative assessment are all facilitated by the integrated set of MAAS MI measures of the medical interview and related competences, including patient satisfaction.

Five Triangulation Tracks on the Medical Interview

MAAS MI recognizes 5 Triangulation Tracks:

  1. MAAS MI Self to reflect on interviewing skills and to set personal goals for study and improvement
  2. MAAS MI General and MAAS MI Global to assess an interview objectively, to reflect on its quality, and to provide meaningful feedback in a respectful way
  3. PSCC Facilitation, Insight and Intention to Comply to include core values of the patient in the assessment and feedback about the medical interview
  4. PSCC Disrupted Communication and Directivity to identify critical incidents in the communication
  5. MAAS MI with related competences to integrate interviewing skills with competences such as diagnostics, treatment selection, and further management

The five Triangulation Tracks are described in more detail in Table 2.

Table 2. MAAS MI As System For Reflection, Feedback and Assessment Through Triangulation of Sources of Information Regarding the Medical Interview--Triangulation Track 1: Reflection & Self-assessment
Table 2. Cont'd--Triangulation Track 2: Reflection, Feedback and Formative & Summative Evaluation
Table 2. Cont'd--Triangulation Track 3: Patient Satisfaction with Communication
Table 2. Cont'd--Triangulation Track 4: Patient Dissatisfaction & Disrupted Communication
Table 2. Cont'd--Triangulation Track 5: Medical Interview and Related Competencies

Triangulation Tracks in Practice

Triangulation Tracks can be organized alongside:

  • Early years in medical school
  • Clerkships
  • Residency training
  • Post-graduate education.

Tools necessary for triangulation can be found under TOOLS and TOOLS Cont’d, with descriptions and explications under Explanation and In Context, while we are working on integrated registration forms.
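As a hint of what such an integrated registration form might look like, the minimal sketch below (Python, with hypothetical field names that do not reproduce the actual MAAS MI or PSCC item sets) organizes one recorded interview around the five Triangulation Tracks; the TOOLS forms themselves remain the authoritative source.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical structure for an integrated registration form; field names are
# illustrative only and do not reproduce the actual MAAS MI / PSCC items.

@dataclass
class InterviewRecord:
    resident: str
    date: str
    setting: str                                            # e.g. "general practice", "psychiatry"
    maas_mi_self: dict = field(default_factory=dict)        # Track 1: self-assessment and personal goals
    maas_mi_general: dict = field(default_factory=dict)     # Track 2: observer item scores (peer/supervisor)
    maas_mi_global: Optional[float] = None                  # Track 2: global rating
    pscc_satisfaction: dict = field(default_factory=dict)   # Track 3: facilitation, insight, intention to comply
    pscc_disruption: dict = field(default_factory=dict)     # Track 4: disrupted communication, directivity
    related_competences: dict = field(default_factory=dict) # Track 5: diagnostics, treatment selection, management
    reflection: str = ""

record = InterviewRecord(
    resident="A. Jansen",
    date="2024-03-12",
    setting="general practice",
    maas_mi_global=6.5,
    reflection="Goal: give the patient more room to tell their own story.",
)
print(record.resident, record.maas_mi_global)
```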

MAAS Medical Interview is available at any time and any place on your mobile and other devices.

MAAS MI in Residency Training in General Practice and Psychiatry

In line with the recommendations regarding adaptive expertise (van der Schaaf, 2020), we suggest developing a comprehensive yet consistent set of occasions for expertise development which includes opportunities for reflection and self-assessment, feedback, and assessment, while taking into account the developmental stage of the resident.

While forming a professional identity (Barnhoorn, 2023):

  • In the early years of residency training, residents are occupied with achieving competence and fluency in their medical interviewing skills
  • Whereas in their final years, they are becoming a general practitioner or medical specialist by engaging in more demanding and varied situations.

Therefore, we recommend organizing a system of occasions for expertise development at 4 levels of implementation:

1. Ongoing Assessment of Professional Development

MAAS MI was explicitly developed for education and research. MAAS MI Portfolio therefore includes around 16 videotaped medical interviews with systematic data collection by the patient, the physician themselves, peer residents and the supervisor over a 3-4-year residency trajectory.

Data collection in the portfolio is organized along the 5 Triangulation Tracks described above.

2. Phase A: Achieving Competence and Fluency in the Medical Interview in Early Residency

Medical Interview: Co-conducting with senior-resident

In this setting, young residents conduct around 3 interviews jointly with a senior resident in the early stages of residency. Patients are informed and their consent is requested; our experience is that patients appreciate this format of interviewing, because the process is very transparent and the interview is experienced as being of high quality. After a couple of minutes of interviewing, the senior resident intervenes in the process by asking questions; the junior resident then has the opportunity to formulate any issues, to discuss subsequent strategies, and to put them into practice. Although demanding, the procedure is experienced as valuable with long-lasting positive effects.

Medical Interview: Obtaining different sources of feedback

In this setting, residents videotape around 8 interviews over a period of 1.5-2 years and obtain feedback from the patient, from junior residents in their peer group, and from senior faculty.

  • Residents review their interviews with MAAS MI, formulate issues and areas for improvement, and report their findings in the portfolio; as such they are stimulated to reflect. Residents are also stimulated to report the medical findings thoughtfully in the files, because this report is an important form of deliberate practice.
  • Patient information is included in the review.
  • Peers are requested, in pairs, to provide systematic and well-formulated feedback through MAAS MI.
  • Senior faculty observe the interview by means of MAAS MI, examine the different results, and discuss the findings and feedback with the resident.

Medical Interview: Providing feedback

Residents are paired and the peers provide feedback with MAAS MI about the aforementioned interviews. Peers discuss the issues with their colleagues and report their findings in the portfolio; as such they are stimulated to reflect, to formulate carefully and respectfully, and to write with care. Moreover, by observing their peers’ interviews, they expand their own repertoire of skills.

3. Phase B: Forming a Professional Identity

Medical Interview: Co-conducting with junior-resident

This setting mirrors the co-conducted interviews described in Phase A, but the resident now takes the senior role: they conduct around 3 interviews jointly with a junior resident, patients are informed and their consent is requested, and after a couple of minutes of interviewing the senior resident intervenes in the process by asking questions, giving the junior resident the opportunity to formulate any issues, to discuss subsequent strategies, and to put them into practice. Although demanding, the procedure is experienced as valuable with long-lasting positive effects.

The challenge for the senior-resident is to decide when and how to intervene, and to be explicit in the formulation of their responses.

This procedure can also be conducted with students from related professions, such as psychologists, specialized nurses, etc.

Medical Interview: Obtaining different sources of feedback

In this setting, which is similar to the setting described above, residents record and reflect on their interviews, but these interviews now deal with more demanding patients or circumstances, such as talkative or anxious patients, demanding patients, and conflicts around treatment.

Again, patients, peers and senior-professionals provide feedback and discuss challenges in the interview.

4. Overarching Processes

As residents are only temporarily engaged in the patient journey and the professional development of a department, a number of relevant processes regarding the medical interview and patient management may well lie beyond their opportunity to become acquainted with, simply because these processes take longer or are harder to understand.

However, these processes may well be relevant for their professional development, because they provide highly relevant opportunities to analyse practical situations, to discern patterns and mechanisms, and to understand theoretical principles – all important for the development of adaptive expertise.

Therefore, we recommend that senior professionals/instructors organize a longer session (>2 hrs) for all residents twice a year.

Topics may be:

  • A patient’s journey including the early stages and later development with positive and negative health outcomes
  • A patient’s journey including experiences of loss and death with pain, grief, impaired functioning
  • A patient’s journey with unexpected challenges, choices, or outcomes
  • A department’s journey in response to challenges regarding safety
  • A department’s journey regarding assessment and improvement of quality.

We are aware of one example in psychiatry: two young children who witnessed the murder of their mother and were followed up for a period of 10 years. The complexities of the recovery process are shared with residents through 12 short videos showing the psychiatrist speaking with (and treating) the children.

Needless to say, these sessions are highly valued by medical students and residents!

References

Baghus, A., Giroldi, E., Muris, J., Stiggelbout, A., Van De Pol, M., Timmerman, A., & Van Der Weijden, T. (2021). Identifying Entrustable Professional Activities for Shared Decision Making in Postgraduate Medical Education: A National Delphi Study. Academic Medicine, 126–133. https://doi.org/10.1097/ACM.0000000000003618

Barnhoorn, P. (2023). On becoming a GP – professional identity formation in GP residents. Leiden University Medical Center, Leiden, the Netherlands.

Bok, H. G. J., Jaarsma, D. A. D. C., Spruijt, A., Van Beukelen, P., Van Der Vleuten, C. P. M., & Teunissen, P. W. (2016). Feedback-giving behaviour in performance evaluations during clinical clerkships. Medical Teacher, 38(1), 88–95. https://doi.org/10.3109/0142159X.2015.1017448

Crijnen, A. A. M., & Kraan, H. F. (1985). MAAS MI General Practice (Maastricht History-taking & Advice Checklist).

Elstein, A. S., Shulman, L., & Sprafka, S. (1978). Medical Problem Solving. Harvard University Press, Cambridge, MA.

Giroldi, E., Veldhuijzen, W., Geelen, K., Muris, J., Bareman, F., Bueving, H., Van Der Weijden, T., & Van Der Vleuten, C. (2017). Developing skilled doctor-patient communication in the workplace: a qualitative study of the experiences of trainees and clinical supervisors. Advances in Health Sciences Education, 22, 1263–1278. https://doi.org/10.1007/s10459-017-9765-2

Govaerts, M. J. B., Van de Wiel, M. W. J., Schuwirth, L. W. T., Van der Vleuten, C. P. M., & Muijtjens, A. M. M. (2013). Workplace-based assessment: Raters’ performance theories and constructs. Advances in Health Sciences Education, 18(3), 375–396. https://doi.org/10.1007/s10459-012-9376-x

Kraan, H. F., & Crijnen, A. A. M. (1987). The Maastricht History-taking and Advice Checklist: studies of instrumental utility. Lundbeck, Amsterdam.

Schlegel, C., Woermann, U., Rethans, J.-J., & Van Der Vleuten, C. (2012). Validity evidence and reliability of a simulated patient feedback instrument. BMC Medical Education, 12, 6. https://doi.org/10.1186/1472-6920-12-6

Schuwirth, L. W. T., & van der Vleuten, C. P. M. (2020). A history of assessment in medical education. Advances in Health Sciences Education, 25(5), 1045–1056. https://doi.org/10.1007/S10459-020-10003-0

Valentine, N., & Schuwirth, L. (2019). Identifying the narrative used by educators in articulating judgement of performance. Perspectives on Medical Education, 8(2), 83–89. https://doi.org/10.1007/S40037-019-0500-Y

van der Schaaf, M. (2020). CaReducation – Care, Research & Education, Utrecht, the Netherlands.