Program Evaluation: What is it, and what are key considerations?

Aaron Brown, MD, FACEP

Recently the ALiEM Faculty Incubator had a dynamic discussion about program evaluation. This Google Hangout featured Dr. George Mejicano, Dr. Chad Kessler, and Dr. Megan Osborn, facilitated by Dr. Lalena Yarris. Listen to the podcast version of their conversation as they take a deep dive into program evaluation and learner assessment. We also provide a text-based synopsis of the discussion below.

What is Program Evaluation?

Program evaluation is a means to identify issues and evaluate the impact of changes within an educational program. It allows for systematic improvement, making it a key skill for educators seeking to improve learner outcomes. A successful educational program evaluation involves many considerations.

Evaluation and Assessment

Evaluation and assessment are not one and the same, though the terms are often mistakenly used interchangeably. In medical education, evaluation refers to a program or curriculum, while assessment refers to learners and learner outcomes. Learner assessment is a separate topic. When performing a program evaluation, make sure the focus is on evaluating the program or curriculum rather than assessing the learners.

When approaching program evaluation, what do you need to know?

When conducting a program evaluation, you need to know who the stakeholders are and what they want. In other words, who is the evaluation for? Is the goal accreditation via the ACGME or ABEM? Are you evaluating on behalf of learners such as residents or medical students? You should also ask what matters to these stakeholders: meeting accreditation standards, improving learner outcomes, or improving learner satisfaction, to name a few. These are very different outcomes that require different evaluative methods.

You must know what tools you have available and what you wish to do with the information you gather. This will allow you to choose a feasible method that provides the information you need for your evaluation goals. Keep in mind that the most appropriate method may not capture the highest-level outcome. For instance, an interview or survey may often provide the most useful information but not the highest level of evidence.

Getting beyond Kirkpatrick’s first level [1]

Kirkpatrick describes four levels of educational outcomes: reaction (including satisfaction), learning, behavior, and results. Getting past the first level to higher-level outcomes in your evaluation can be difficult. Some suggestions for moving past the first level include:

  • Develop tools that you can use multiple times, or use tools that have already been validated. This often yields higher-level and more consistent data. In addition, surrogates (others who may be equally qualified to administer the specific tool: medical students, residents, fellows, other attendings, etc.) can administer these tools, saving you and other educators time without diminishing the quality of the data.
  • Use existing databases so that you are not the only one collecting data (e.g., databases on usage of blood products or antibiotics). This allows you to compare your local outcome data to the larger database, enabling greater assessment of patient outcomes (higher Kirkpatrick levels) and, in part, an evaluation of your curriculum; see the sketch after this list.
  • There are other perspectives on outcomes besides Kirkpatrick’s. These include examining the return on investment of an outcome rather than just the outcome itself, which may be very valuable if you are looking at fiscal interventions or situations where resources are limited. [2]
  • Remember that patient-oriented outcomes are not always the same as your educational goals, and therefore may not be the most appropriate outcomes for your program. [3]
  • Do not dismiss Kirkpatrick’s first level of evaluation. Learner satisfaction is still very important! It may matter greatly for your program and curriculum; however, expanding satisfaction data into scholarship can be hard to publish.
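
As a concrete illustration of the database-comparison tip above, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the CSV file name, its column, and the benchmark rate are hypothetical placeholders, not a real registry or API.

    # Minimal sketch: compare a local post-curriculum outcome rate to a
    # benchmark rate from a larger external database.
    # "local_abx_orders.csv" and the 0.62 benchmark are hypothetical.
    import csv

    BENCHMARK_CONCORDANCE = 0.62  # assumed rate from the larger database

    def local_concordance_rate(path: str) -> float:
        """Fraction of local antibiotic orders that were guideline-concordant."""
        total = concordant = 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                total += 1
                if row["guideline_concordant"] == "yes":
                    concordant += 1
        return concordant / total if total else 0.0

    if __name__ == "__main__":
        local = local_concordance_rate("local_abx_orders.csv")
        diff = local - BENCHMARK_CONCORDANCE
        print(f"Local rate: {local:.2%}  Benchmark: {BENCHMARK_CONCORDANCE:.2%}")
        print(f"Difference: {diff:+.2%}")

Comparing the local rate against the external benchmark gives a patient-level (higher Kirkpatrick) signal that can inform, though not by itself prove, the value of the curriculum.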

Tips for examining long-term outcomes

You must stay in touch with your learners. Educators struggle with long-term outcomes, yet long-term data often reflect higher-level outcomes and true programmatic success. To obtain these data, educators and their institutions must stay connected with their learners over time. If you can stay connected, you can start looking at long-term variables like board certification rates, maintenance of certification (MOC), and adverse actions from medical boards. The holy grail of program evaluation data would be complete career-length portfolios for previous learners, but this remains elusive.
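
To make the long-term tracking idea concrete, here is a minimal sketch of what a per-graduate outcomes record might look like. The fields and example values are illustrative assumptions, not a standard schema.

    # Minimal sketch of a per-graduate record for tracking long-term,
    # higher-level outcomes. Field names and example values are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class GraduateOutcomes:
        name: str
        graduation_year: int
        board_certified: bool = False          # e.g., initial ABEM certification
        moc_current: bool = False              # maintenance of certification status
        medical_board_actions: list[str] = field(default_factory=list)

    def board_certification_rate(cohort: list[GraduateOutcomes]) -> float:
        """Program-level outcome: fraction of graduates who are board certified."""
        return sum(g.board_certified for g in cohort) / len(cohort) if cohort else 0.0

    cohort = [
        GraduateOutcomes("A. Jones", 2015, board_certified=True, moc_current=True),
        GraduateOutcomes("B. Smith", 2016, board_certified=True),
        GraduateOutcomes("C. Lee", 2016),
    ]
    print(f"Board certification rate: {board_certification_rate(cohort):.0%}")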

What big questions are looming on the horizon?

Attribution: Those involved in program evaluation want to know that an outcome can be attributed to a specific educational intervention. To do this, outcome data would need to be traced to individual learners and their behaviors, not just to the institution. Think of it as attempting to establish a causal relationship.

Data sharing processes: Collecting data about what and how learners learned creates a need to protect those data, including clarifying responsibilities, liabilities, and privacy concerns. Much of the information collected about learners is sensitive, and learners have concerns about sharing it with future employers, evaluators, or accrediting organizations. Rules for sharing this information need to be established as we collect it for program evaluation purposes; in the United States, student educational records are already protected by FERPA.

Off-cycle learners: With competency-based rather than time-based advancement, learners will likely no longer all begin on July 1 and end on June 30. Program evaluation must address how to handle learners entering and exiting at different times.

Other considerations include program evaluation of CME programs, as well as quality and safety measures within educational programs.

JETem

The Journal of Education & Teaching in Emergency Medicine (JETem) is a new journal and repository where educators can access curricula implemented or designed elsewhere, as well as small-group exercises, team-based learning activities, and more. Publication of a research project is not the goal of JETem; however, when submitting work, it is important to state what program evaluation was performed and what the outcomes were.

How to get started in program evaluation

Start with the problem or question you want to solve, then ask what data you will need to answer it. This can help you determine when to retire a program or modify it for future needs. In addition, when you create an educational program or make changes, simultaneously plan how you will evaluate it in the future. As discussed previously:

  1. Identify the stakeholders
  2. Find out what data they are interested in
  3. Determine from whom you will collect the data

Remember, stakeholders often have their own language. Make sure you are speaking their language to improve recognition of your outcomes and success (e.g., the ACGME, the LCME, residents, medical students, a dean, or a chair).
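
To tie the three steps above together, here is a minimal sketch of what a written evaluation plan might capture. All names and values are hypothetical examples, not a prescribed format.

    # Minimal sketch of an evaluation plan capturing the three steps above:
    # stakeholders, the data they care about, and the data sources.
    from dataclasses import dataclass

    @dataclass
    class EvaluationPlan:
        question: str            # the problem the evaluation should answer
        stakeholders: list[str]  # step 1: who the evaluation is for
        outcomes: list[str]      # step 2: data the stakeholders care about
        data_sources: list[str]  # step 3: from whom/where data will be collected

    plan = EvaluationPlan(
        question="Did the new airway curriculum improve resident performance?",
        stakeholders=["program director", "ACGME", "residents"],
        outcomes=["milestone ratings", "first-pass intubation success"],
        data_sources=["faculty evaluators", "procedure logs"],
    )
    print(plan)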

Final Thoughts

Michael Quinn Patton is widely regarded as the grandfather of program evaluation, and his work is a helpful resource on the topic.

Useful Resources for Additional Reading

  • Goldie J. AMEE Education Guide no. 29: evaluating educational programmes. Med Teach. 2006;28(3):210-24. [PubMed]
  • Frye AW, Hemmer PA. Program evaluation models and related theories: AMEE guide no. 67. Med Teach. 2012;34(5):e288-99. [PubMed]
  • Shershneva MB, Larrison C, Robertson S, Speight M. Evaluation of a collaborative program on smoking cessation: translating outcomes framework into practice. J Contin Educ Health Prof. 2011;31 Suppl 1:S28-36. [PubMed]
References

1. Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs. Berrett-Koehler Publishers; 2006.
2. Cook D. Twelve tips for evaluating educational programs. Med Teach. 2010;32(4):296-301. [PubMed]
3. Moore D, Green J, Gallis H. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1-15. [PubMed]

Author information

Aaron Brown, MD, FACEP

Assistant Professor
Curriculum Director, Emergency Medicine Residency
University of Pittsburgh Medical Center
