Multiple prompt strategies across contexts: feedback in classroom, lab and professional practice

Research team member David Boud provides an overview of the feedback in this subject.

Summary

Feedback practices for a subject often function as individual islands amid the wider course context. This case study demonstrates an innovative approach to managing feedback and assessment strategies by planning a large section of a course as a single subject, rather than four individual subjects.

In this Optometry subject, learners are prepared for professional practice through classroom activities, laboratory work, and clinical placements. The feedback design of the subject integrates a mix of feedback modes and practices, each suited to particular learning activities and outcomes. The subject is also committed to an ongoing process of reshaping feedback and assessment practices in response to learner comments and evaluations, and these improvements are clearly communicated to each commencing cohort of learners.

Key features of this case study include:

  • Scope for planning a coherent programme of feedback across a single large subject, rather than four individual subjects;
  • Use of different feedback processes for different tasks, as suited to the context in which learners are operating, including peer discussions;
  • Feedback information returned to learners in realistically rapid turnaround times; and
  • Systematic and ongoing improvements to feedback practices based on learner comments.


Keywords

cross contexts; multiple feedback strategies; professional practice

The case

This subject prepares learners for clinical practice as optometrists. It combines classroom activities, laboratory work and placement in practice. Learners prepare for and undertake real professional work within a single course subject, under the same teaching team. Credit points allocated to this subject are four times the number for typical subjects within the Faculty.

Feedback practices in this subject have evolved over many years, and were initially based around what the educators-in-charge considered “the most convenient form [of feedback]” for educators. However, the subject’s feedback design has been progressively reshaped in recent years, with a focus on making feedback more effective for learners. Changes to the subject are made in response to comments gathered from learners as part of the University’s regular subject evaluation process, and these comments are used to systematically plan improvements. Recent learner feedback has focused on possible improvements to feedback design, and the subject is currently undergoing a cycle of change, with a different feedback process the focus of each iteration. All improvements to the subject based on learner comments are reported in detail to the subsequent learner cohort in the subject guide.

Teaching and feedback processes for this subject take place across several diverse environments: classroom, laboratory, and various clinical settings. By locating this range of activities within a single subject, the educator-in-charge is able to oversee the whole learner experience; teaching can be conducted by the same teaching team; and feedback processes for each element are designed to complement one another and form a holistic feedback design across the subject.

Feedback processes for this subject are tailored to the different learning activities, and feedback information is provided to learners within rapid turnaround times. Learners receive immediate responses to multiple-choice questions, detailed group comments on observed clinical simulations at the next meeting, and one-to-one discussions of clinical performance on the same day.

The classroom component of the subject uses a well-known approach called Team-Based Learning. In team-based learning, feedback information is immediate and collective. Learners study specified resources prior to their class. In class, learners take a short multiple-choice test individually and then, in a small team, complete the same test together. As the educator-in-charge explains, the teams discuss the questions to “arrive at a consensus on the answer, and they scratch [a] card to see if they’re right or not. They’re immediately finding out whether, individually, they got these questions right or wrong”. If a team finds that it does not understand a question, or cannot arrive at the answer shown on the feedback card, its members are encouraged to research that topic or learning outcome as a group, and so better understand the topic with which they are having difficulty.

The laboratory component focuses on clinical skills development. Learners undertake practical activities at a series of stations, where they are given clinical scenarios. Using the provided equipment, learners perform clinical techniques that they would use as part of an optometry consultation. Potential scenarios include considering different diagnoses and their management options, or responding to questions from a simulated patient (played by an actor). Feedback information is provided to learners in groups, based on detailed observations of their performance, and educators also provide informal feedback comments. As a co-ordinating educator for this component explains, “just inherent to the contact that I have with the [learners] I do give one-on-one feedback to them each time I’m seeing them or teaching them, essentially just on how they’re going and what they should be improving on”. These laboratory simulations lead into a clinical placement, during which learners treat patients at the Australian College of Optometry. Learners’ clinical placements are completed under the supervision of an experienced optometrist, who provides the learner with immediate verbal feedback after they have seen a patient, followed by written performance information.

Feedback loop enacted within weekly team-based learning seminars.


Why it worked

The design

In this case, feedback was considered to be successful because of the following key elements:

  • Feedback design offers multiple strategies for multiple contexts: the design is closely related to the learning outcomes and the opportunities made available by the various learning activities. It is recognised that quite different activities and forms of feedback information are needed for different purposes. Multiple strategies are deployed to reflect multiple outcomes, multiple occasions of practice and multiple providers of feedback information. While it is not always possible for learners to take the different activities in an ideal sequence, they are integrated as part of the overall subject design.
  • Educators aim to provide feedback within realistically rapid turnaround times: rapid turnaround is a major emphasis for this subject. The aspiration is to get answers or useful feedback information to learners either on the spot or by the next class. Comments on written assessments are returned within a week, and verbal feedback is given on the same day or the following day.
  • Learner evaluations of the subject are heeded and implemented: subject evaluation responses from learners are used to refine the feedback processes for each cycle of the subject, and learners are informed about the subsequent changes made.

Research team member David Boud explains what worked in this subject and why.

Enablers

Some of the enabling factors for this feedback design included:

  • Subject is four times the size of an ordinary course subject: learners gain four times the credit points upon completion of this subject, compared to other subjects in the Faculty. The subject has a correspondingly higher allocation of teaching time, which enables a range of learning activities to be offered under the umbrella of a single subject. This allows a degree of integration and consistency to be designed across different feedback processes, which adds coherence to the learner experience.
  • Authentic practice is integrated into the subject: the professional nature of the subject is inherently motivational for learners. It creates a good anticipatory effect: learners look forward to clinical practice and can see how the other learning activities are designed to prepare them for clinical placements.
  • Different educators lead different components of the subject: although having each element of the subject led and staffed by different educators adds complexity to planning, this arrangement allows educators to develop feedback practices suitable to activities for their part of the subject. It also allows educators to refine their feedback processes over time.

Challenges

Some of the challenges for this feedback design included:

  • Managing a large and diverse teaching team: this subject’s teaching team is large and educators have a range of different responsibilities and roles across the subject. In addition, the educator-in-charge has relatively little influence over the clinical placement supervisors, who are employed in an external organisation. While the teaching team is well-briefed, it can be challenging to ensure all educators understand the subject’s aims and their role within the subject, and to monitor feedback provision. This is an issue common to most disciplines with professional placement activities.
  • Balancing multiple, complex components: this subject is large and multifaceted, with a range of teaching activities and learning settings. This sheer complexity means that coordinating and balancing all of the elements is a challenging and ongoing process.
  • Training non-university staff: while the training of educators is important to ensure consistent feedback provision, it can be difficult to implement when the educator-in-charge has no direct claim on the time of placement supervisors employed outside the university.

What the literature says

The literature emphasises the importance of timeliness of feedback; however, this case demonstrates a different – and perhaps more important – aspect of timeliness: the time before the next occasion on which learners will use the feedback information they receive (see Boud & Molloy, 2013 for further reading). There are plenty of opportunities for the feedback loop to be completed, as learners are utilising classroom knowledge in the laboratory and the clinic. Learners also utilise the clinical skills developed in a simulated environment in closely connected practice.

There is no one approach that may be used for all feedback moments, as learners need different kinds of feedback information at different times and for different purposes. Feedback on practical and technical skills needs to be immediate and further practice undertaken if learning is to be consolidated (see Shute, 2008). Feedback information about interactions with clients also needs to occur close to the interaction, so that there is good recollection of what has occurred by both the learner and the person providing feedback information (Junod Perron et al., 2013).

Finally, there is considerable research on feedback in clinical settings that addresses: the educational needs of learner practitioners (Cote & Bordage, 2012); their responsiveness to feedback (Eva et al., 2012); linking formative feedback to summative assessment events (see Harrison, Konings, Molyneux, Schuwirth, Wass, & van der Vleuten, 2013); the direct observation of clinical skills (see Kogan, Conforti, Bernabeo, Durning, Hauer, & Holmboe, 2012); and the importance of reflection in feedback processes (see Pelgrim, Kramer, Mokkink, & van der Vleuten, 2013).

Moving forwards

Advice for educators

The participants in this case offered several suggestions for educators wishing to trial the feedback design:

  • Emphasise real-world use: design subjects with the end point of what learners will be required to do in professional practice in mind. This gives a clear focus for design, and contributes to learner motivation when they are fully aware that their learning will be utilised in the settings they will enter on graduation.
  • Engage with learner feedback: it is important to listen to learners and respond accordingly – both to act on learner evaluations and to show evidence of how this has changed the subject. Learners will react poorly if they believe past concerns have not been addressed, as this reads to them as a sign of disrespect.
  • Supervise feedback provision: consider if educators are providing feedback information in the ways it has been designed to work and support staff accordingly, through resources, briefings and training as required.
  • Provide feedback information close to performance of each task: feedback information should be incorporated as a normal component of each learning event, rather than being seen as an add-on to be provided separately to learners outside these occasions.
  • Maintain reliability of feedback: tell learners what you’re going to do and then do it consistently, on the timeline you say you’re going to follow. The reliability of the turnaround is as important as the efficiency.

Advice for institutions

This case offers several useful insights for leaders within institutions wishing to support similar feedback designs:

  • Co-ordination and training of educators is vital: for the subject (and indeed the course) to be a coherent experience for learners, educators need awareness of the overall subject design and the various roles of the teaching team. Within this, each educator needs to understand their own role and how it fits into the overarching subject design. The more complex the subject design, the more crucial it is that the teaching team understands (and enacts) their unique role.
  • Avoid tokenistic approaches to learner feedback: while reporting back to learners is becoming a routine item in subject outlines, information about changes made in response to prior learner input needs to be convincing, not merely recorded to comply with a template. Learners are sensitive to tokenistic reports of change, and will feel more confident if they are persuaded that the course has been designed to suit their needs and that learners have influenced its current design.
  • Use learner evaluation data as a normal part of quality improvement: detailed information from learner evaluation surveys should be used to inform meaningful modifications to feedback designs for each cohort.

Resources

For further information, resources and research about team-based learning, visit the Team-Based Learning Collective website: http://www.teambasedlearning.org/definition/

References

  • Boud, D., & Molloy, E. (Eds.). (2013). Feedback in Higher and Professional Education. London: Routledge.
  • Cote, L., & Bordage, G. (2012). Content and conceptual frameworks of preceptor feedback related to residents’ educational needs. Academic Medicine, 87(9), 1274-1281.
  • Eva, K. W., Armson, H., Holmboe, E., Lockyer, J., Loney, E., Mann, K., & Sargeant, J. (2012). Factors influencing responsiveness to feedback: on the interplay between fear, confidence, and reasoning processes. Advances in Health Sciences Education: Theory and Practice, 17(1), 15-26.
  • Harrison, C. J., Konings, K. D., Molyneux, A., Schuwirth, L. W., Wass, V., & van der Vleuten, C. P. (2013). Web-based feedback after summative assessment: how do students engage? Medical Education, 47(7), 734-744.
  • Junod Perron, N., Nendaz, M., Louis-Simonet, M., Sommer, J., Gut, A., Baroffio, A., Dolmans, D., & van der Vleuten, C. P. (2013). Effectiveness of a training program in supervisors’ ability to provide feedback on residents’ communication skills. Advances in Health Sciences Education, 18(5), 901-915.
  • Kogan, J. R., Conforti, L. N., Bernabeo, E. C., Durning, S. J., Hauer, K. E., & Holmboe, E. S. (2012). Faculty staff perceptions of feedback to residents after direct observation of clinical skills. Medical Education, 46(2), 201-215.
  • Pelgrim, E. A., Kramer, A. W., Mokkink, H. G., & van der Vleuten, C. P. (2013). Reflection as a component of formative assessment appears to be instrumental in promoting the use of feedback; an observational study. Medical Teacher, 35(9), 772-778.
  • Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153-189.