Perspectives on Program Evaluation: Interview with Drs. Megan Boysen-Osborn, Dara Kass, and Andrew King

As part of the ALiEM Faculty Incubator Professional Development Program, Drs. Megan Boysen-Osborn (Program Director at University of California-Irvine), Dara Kass (Editor-in-Chief FeminEM), and Andrew King (Assistant Program Director at The Ohio State University Wexner Medical Center) participated in a Google Hangout where they provided perspectives and expert advice on program evaluation. Their perspectives and wisdom are summarized below.

Google Hangout Video Conversation (34:26 min)

Time Stamps for Google Hangout Discussion

  • 00:38 – What is program evaluation? Why do we care about program evaluation?
  • 01:41 – As educators, are we using the terms assessment and evaluation correctly?
  • 02:51 – For Dr. Dara Kass – What are the goals for FeminEM, and how do you determine whether or not these goals are met?
  • 07:09 – Who are some of the important stakeholders in program evaluation at the graduate medical education level?
  • 11:09 – Describe a goal that you had in the development of FeminEM. Was there an unexpected outcome that you received from an evaluation?
  • 15:16 – How will you be able to determine that you accomplished defined goals at the upcoming FeminEM FIX conference?
  • 18:17 – Can you think of a grand educational plan that was implemented in the residency program where program evaluation results illustrated that the goals and objectives were not met?
  • 20:35 – Importance of not immediately reacting to dissatisfaction and balancing with quantitative data
  • 21:43 – Feedback must be taken in context. Always let the evaluators know that you are listening to their feedback regardless of its implementation.
  • 22:10 – How do you most effectively let evaluators know that their recommendations were considered but not implemented?
  • 23:05 – People just want to be heard and acknowledged. Feedback doesn’t need to be followed.
  • 23:33 – Make sure evaluators understand the questions and the purpose of the program evaluation.
  • 24:17 – Discussion of The ACGME Program Evaluation Survey
  • 27:33 – High stakes program evaluation
  • 28:30 – Final message and important points from Andrew on program evaluation
  • 30:24 – Final message and important points from Dara on program evaluation


Abbreviated Podcast Version

Program evaluation and its importance

Program evaluation, as the name suggests, evaluates programs. Its importance lies in the fact that developing an educational program is not a final destination but rather an ongoing process. The program, once created and implemented, needs constant improvement to remain effective for learners and educators. This is accomplished via program evaluation.

Appropriate use of the terms “assessment” and “evaluation”

The terms assessment and evaluation are often used interchangeably; however, they are fundamentally different. Program evaluation appraises a program, a curriculum, a class, or some other educational product. In contrast, assessment appraises the performance of learners or educators. Assessment can be formative or summative.

  • Formative assessment helps to guide learning, provide reassurance, promote reflection, and shape values.
  • Summative assessment helps to judge competence, fitness to practice, and qualification for advancement to a higher level of responsibility.1

Case study: Evaluating FeminEM

The overarching goals of FeminEM involve improving gender equity in emergency medicine (EM) by creating a space specifically for the support of women physicians in EM. The program intends to fill this void by using blog content for broad informational pieces, a podcast for deeper, more personal conversation around a single focused question, and a conference to bring people together to exchange ideas.

Listen to Dr. Dara Kass, founder of FeminEM, discuss her approach to program evaluation.

Important stakeholders in program evaluation at the graduate medical education level

  • Patients: Measuring patient outcomes is a difficult aspect of evaluating an educational program because we rarely see how education translates directly into patient outcomes.
  • Learners: Medical students, residents, and perhaps junior faculty
  • Educators: Both education and clinical faculty members who are involved in delivering education to learners.
  • Departmental and institutional leadership: The departmental and hospital leadership who invest in education and learning.

The list of stakeholders depends on the type of educational project. For example, with quality improvement projects, hospital leadership would likely want to be closely involved, as this type of education can directly impact patient care.

Determining if program goals have been achieved

Repeat evaluators are important

The perspective gained over time from participation in an educational experience allows repeat evaluators to gauge the success of an intervention because they have before and after views.

Avoid amorphous, open-ended questions

Make sure evaluators understand the reasoning and purpose of the evaluation. Ask concrete questions about the components you wish to evaluate and include follow-up questions for negative responses. Utilize the power of your evaluators to inform positive change by asking them to suggest ways to improve on goals they feel were not fully met.

Analyze your data objectively

Based on the information obtained, you must be ready and willing to analyze and adjust the program. It is important to ask hard questions and equally important to be willing to act on the answers. For a conference that people pay to attend, an indirect marker of success can be the number of repeat attendees, because they are voting with their wallets. Another marker might be the number of people who attended the previous year and now submit proposals to speak at the conference.

Utilize Kirkpatrick’s model for evaluation

Kirkpatrick's model describes four levels of evaluation: reaction, learning, behavior, and results. When evaluating a program, ask questions that reach a higher tier than mere participant satisfaction (reaction); we should be targeting the higher levels of change in the workplace (behavior) and effects on society (results).2

Refining the program based on evaluation

Collect various feedback

It is important to gather and incorporate both subjective feedback from learners and educators and objective data to refine the educational program. Subjective feedback can measure level 1 (reaction) of Kirkpatrick's framework, but examination scores and patient outcomes are necessary to ensure that the higher levels are also reached.

Avoid overreaction to negativity

There are times when objective data, such as increases in medical knowledge, may differ from learner and educator satisfaction. If a particular intervention achieves knowledge acquisition (Kirkpatrick level 2) or improved patient outcomes (Kirkpatrick level 4), that may at times outweigh individual or even group happiness. As an educator, one may need to discuss this balance or try to find ways to achieve satisfaction alongside the higher levels. Focusing on satisfaction as the only marker of success can be to the detriment of an educational endeavor.

Acknowledge that you have heard the feedback

Patrick Lencioni, an expert on leadership and team dynamics, explains that people are able to support an idea if they have had a chance to speak their mind and feel they have been heard.3 If a leader decides not to alter the program based on feedback received, it is important to acknowledge that stakeholders were heard and to explain why changes were not made.

Inform evaluators of the purpose of the evaluation

Discuss the goals of the evaluation with stakeholders and evaluators beforehand to help ensure that feedback is directed towards the correct purpose. It is imperative to ensure that evaluators understand the questions and what they are asking, in addition to what has been done to address each question since the previous evaluation.

Have high stakes and low stakes evaluation opportunities

People need the opportunity to be heard. If you only provide this in the form of a high-stakes program evaluation, such as the ACGME survey, that survey will become the place to air every minor grievance and complaint. By providing ample opportunities to evaluate the program, evaluators can focus on important and accurate evaluation during high-stakes evaluations and use other outlets to share every opinion they have.

Final pearls about program evaluation

  1. Developing an educational product is just the first step in the process. It can be easy to neglect the program evaluation phase as we are all busy; however, obtaining feedback and quantitative data to improve the educational experience for learners and educators is a vital part of the process.
  2. Make sure the goals and purpose of an evaluation are clear to the evaluators before releasing the evaluation in order to ensure clear, accurate, directive feedback.
  3. Learn to detach from the emotional investment in the educational product you have created. Emotional attachment can hinder the implementation of changes to improve the program. Greater success comes if you can remain objective and accept potential failure as part of the process.
References

1. Epstein R. Assessment in medical education. N Engl J Med. 2007;356(4):387-396. [PubMed]
2. Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs. Berrett-Koehler Publishers; 2006.
3. Lencioni PM. The Five Dysfunctions of a Team. John Wiley & Sons; 2002.

Author information

Andrew King, MD FACEP

Assistant Professor

Assistant Program Director

The Ohio State University Wexner Medical Center
