Why You Should Evaluate Your Long-Standing K-12 Museum Programs

From my years as a museum educator, I know that program evaluation is one of those things we recognize as important but often find hard to prioritize. Program managers juggle a million responsibilities just to get a program off the ground, and it can feel overwhelming to also dedicate time and resources to assessing a program’s impact once it’s running. This is especially true for K-12 educators, who often rely on long-standing programs (like field trips or school partnerships) as their bread and butter. When a K-12 program has been running smoothly with little intervention for years, it’s hard to justify evaluating its efficacy.

But all that hard work is exactly why it’s worth periodically evaluating long-standing K-12 programs. You want to make sure that their outcomes match the labor, energy, and resources you put into them.

This post highlights some of the benefits of evaluating long-standing K-12 museum programs, with examples from our recent evaluation of the Tang Academy for American Democracy (TAAD) at the New-York Historical Society (N-YHS). At the time of our evaluation, the TAAD program had been running for three years. We worked closely with program managers (who designed the program) to develop an evaluation plan that would measure student learning. Along the way, we also discovered unexpected ways program managers could support the part-time educators who teach the program. We think these lessons are useful for anyone working closely with a K-12 program, because they remind us that evaluation is a way not only to discern a program’s impact, but also to uncover surprising ways to refresh and improve long-standing programs.

Evaluation can reveal what other disciplines have to offer long-standing programs

K-12 program evaluation methods, like observation, enable educators to look at a program with fresh eyes. This is especially useful when those eyes bring varying backgrounds and expertise. For example, while TAAD is primarily a history and civics program, we were able to apply an arts education lens and offer suggestions for tweaking students’ final project to enhance its creative potential. With small adjustments, we were able not only to better align the project with the program’s goals, but also to identify unintended outcomes around creativity and artmaking.

Evaluation allows you to turn your logistical challenges into opportunities 

Hands down, one of the most challenging things about K-12 programs is managing logistics. The best laid plans, most detailed lesson plans, and all the pre-visit communications in the world can’t prepare you for a bus that’s an hour late or an educator who calls out sick at the last minute. Evaluation can’t solve all your logistical problems, but it might reveal that some of them have simpler solutions than you realized.

For example, our work at N-YHS revealed that a major challenge for TAAD educators was adjusting their lesson plans to accommodate late arrivals. Because it is a multi-day program, it was important for educators to cover the requisite material on the first day so that they could build on that content on the second day. However, with so much material to cover, it was hard for educators to feel confident about what they absolutely needed to keep and what they could cut if schools arrived late. You could chalk this up to being one of those inevitable challenges that come with running K-12 programs, but we saw an opportunity for program managers to develop an alternative lesson plan for their educators that indicates what to focus on if they unexpectedly find themselves with limited time. Hopefully, this will keep students on track and leave educators feeling less stressed.

Evaluation generates new insights through a variety of methods

K-12 program evaluation is definitely not one-size-fits-all, and each data collection method has its opportunities as well as its limitations. You can learn a lot about a program using a single method or a combination of several, and each will tell you something different. If you’ve only ever surveyed your teachers, try a student survey. If you’ve only ever done surveys, try a handful of interviews. Or a teacher focus group.

Depending on what a client wants to know, we often recommend a combination of methods to paint a fuller picture of a program’s impacts. TAAD operates for four consecutive days with the same class, which gave us the chance to use multiple methods over a period of many weeks. To measure what students were learning, we created a pre-post quiz that students completed on the first and last day of the program. But we knew a quiz alone wouldn’t allow us to fully understand other contextual factors that could influence student learning (like class size or whether it’s an English Language Learner class), so we also observed several classes. Together, these methods told us not just how much and what students were learning, but also which types of program activities supported TAAD’s intended outcomes the most. For example, we observed that students were most engaged during activities that enabled them to connect program content to their own lived experiences, such as debates designed to model the democratic process. And trust me, you don’t know how passionate students can get until you ask them to debate whether to get rid of homework or tests!

In the end, evaluation did more than just identify some areas for program improvement. N-YHS staff agreed that the evaluative process overall helped them see TAAD in an entirely new light, enabling them to eliminate some misaligned program goals and add new ones that were in line with their future vision for the program. The project was a great reminder of how K-12 program evaluation can go beyond a “pluses and minuses” framework to reveal opportunities to refresh long-standing programs in unexpected ways.

Hannah Heller

Hannah brings over 10 years of experience in inclusive qualitative research and museum education to her position as Researcher at Kera Collective.

Hannah loves drawing from her background as a museum educator. Her dissertation research on Whiteness and how it impacts gallery teaching practices has lent her a sensitivity to ideas around power and control in researcher/participant relationships. This continues in her work at Kera Collective in how she strives to meet our partners where they are and ensure a collaborative approach at every step. 

Hannah is Co-Editor-in-Chief of Viewfinder, a digital journal focused on the intersection of social justice and art museum education. She has published her research in several journals and has presented alongside her dissertation participants at various art education conferences. 

When she’s not working, you can find Hannah throwing at her ceramics wheel (but never for keeps–glazing is way too stressful!), cooking new things, and exploring her new city, Philadelphia.

Hannah’s favorite museum is the American Folk Art Museum. In addition to having lots of great teaching memories there, she loves how every exhibition showcases a new approach to understanding folk and self-taught art—and in turn, what it means to be an artist.

You can reach Hannah at hannah@keracollective.com.
