While the perception and practice of assessment have certainly come a long way, it’s still not uncommon to hear about challenges with buy-in, closing the loop, and building a culture of continuous improvement. As with any type of work, there may indeed be some cases of genuine opposition to doing assessment, or at least to doing it “right.” More often than obstinance, however, the real culprit is gaps in understanding the purpose, expectations, and benefits of the assessment process.
What if there were a way to close the gaps around the value of assessment?
Weave sat down with Dr. Beth Hinga, Assistant to the Senior Vice Chancellor for Academic Affairs and Director of Assessment, and Dr. Scott Unruh, faculty member and Assistant Dean in the College of Education, both of the University of Nebraska at Kearney, to learn more about how they are redefining assessment and culture one program at a time.
About University of Nebraska at Kearney
University of Nebraska at Kearney (UNK) was founded in 1905 as the Nebraska State Normal School at Kearney and became part of the University of Nebraska system in 1991. Located in Kearney, UNK primarily serves the western half of the state, with a student population that includes many first-generation college students. Total enrollment is 6,200 students, including 1,500 graduate students, and UNK offers 170 majors.
How it Started – the Dreaded Data Dump
As is the case in so many of these stories, this one begins with a 2008 accreditation visit that didn’t go so well. Beth shares that program and general education assessment both needed better documentation; a new office was created to tackle it from a high level, and departments also mobilized to better meet the requirements.
However, it wasn’t really just about documentation. Beth and Scott saw a habit of colleagues simply turning in data without any connection to student learning, and without any inter-departmental conversations about the findings.
Scott describes it as “just throwing data to the data gods – report the data, and then nobody ever sees it again.”
As Beth worked with programs across campus, she discovered that when she supported faculty in closely examining their data, they had several ah-ha moments. In two programs, for example, faculty were disappointed with senior-year writing and presentation projects, and the data confirmed that student performance was weak. Beth asked where the instruction for those projects took place and what it entailed, and as discussion continued, faculty discovered these skills were not being taught or evaluated, or not early and often enough. After adjusting the curriculum, student performance (and the data) improved. Most importantly, the faculty were interested and engaged in this part of the process.
Was it possible to replicate this genuine conversation and thoughtful program adjustment?
Scott, a faculty member in the College of Education, found himself in a new position assisting the graduate program with assessment, which eventually led to assisting the Teacher Education Department as well. As he worked with the Dean, the two noticed several challenges colleagues were facing:
- Expectations: Faculty felt confined to proprietary instruments that ensured adherence to the guidelines of CAEP (the program’s accreditor), rather than creating instruments that provided valuable information about their program and students.
- Status Quo: This wariness also meant faculty did not feel comfortable changing program outcomes and objectives in any assessment plan, resulting in outdated outcomes that few faculty had any real interest in.
- Disconnection: These constraints made the process “a patchwork, spiderweb approach rather than linear and connected” – much more focused on compliance than learning.
- Lack of Purpose: A misunderstanding of how it all fit together did not inspire collaboration or conversation; when asked how the data helped them examine their program, faculty responded that it was a regional accreditation item assigned to another faculty member.
Scott shares, “When asked, there was an absolute disconnect on what their measures were, what they were for, and even what the data looked like. Several told me they never looked at the data – they just turned it in to the right person.”
These observations and challenges are not surprising or unique – they plague programs everywhere. Scott and Beth devised a plan to see if they could remove these challenges and help faculty find value in assessment.
Closing the Assessment Gaps – Meet Them Where They Are
First, Scott and his Dean did what many may not advise – they started over. They shifted the focus from department-level assessment to the program level, which meant more common objectives and an expectation that each program would create a plan and assess against it.
Next, Scott and Beth considered two things: how to replicate the genuine conversations from the programs Beth had worked with, and how to alleviate the challenges the Education faculty were facing. They agreed the work needed to happen in person to foster conversation and questions. And, as with the objectives, it made sense to start over at the very beginning of the assessment process – in short, to meet the faculty where they were, with their assumptions and understandings, and work from there.
Workshops: Conversation, Questions, Change
Beth and Scott designed a series of in-person workshops, filled mostly with time for programs to work together. Each program had a table, a screen, and, of course, food. The workshops were scaffolded across four-hour sessions:
- Curriculum Mapping: Scott created a current curriculum map in Excel for each program to start the discussion. “The maps forced faculty to come together to look at their objectives. They saw their classes in context, and everybody that engaged in it said it was helpful.” Faculty started discussing which courses were needed, asking about outcomes that weren’t addressed in any course, and noting courses that didn’t address any outcomes (a minimal sketch of this gap check follows this list). The visual provided by the map was an easy way to begin an engaging discussion about something everyone really cared about – what they teach students.
- Assessment “Grids”: Also an Excel spreadsheet, the grid has areas for program objectives, assessment instruments, and the data each instrument yields. The next workshop was devoted to completing the grids, which again resulted in productive conversation about how to measure meaningfully. “There had been no discussion happening on a larger scale about ‘oh, well, this is what the results are telling us? Maybe this is a change we should make.’ When filling out the grid, they asked, ‘What ties back to our objectives?’ and saw: nothing, really. Things were coded, but what did it mean? I asked, what is important to them? What is the data you need to have to measure that? This entire exercise was meaningful because it is at THEIR program level – it doesn’t have to do with anybody else.”
- Strategically Using Technology to Tie it Together: As we all know, the number of software systems a faculty member uses is astounding, and Beth and Scott wanted to make sure any technology they introduced was supportive, not a distraction. For this group, Excel and Google Drive were the most comfortable tools for everyone, at least to start. As the process has matured, Beth has created an area in Weave’s assessment management system where each program can enter its assessment grid and data for regional accreditation.
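The gap check at the heart of the curriculum-mapping workshop is, at its core, a simple grid operation, and for the technically inclined it can be sketched in a few lines of code. The snippet below is not part of UNK’s actual process (they used Excel) – it is a minimal illustration, with invented course and outcome names, of how a courses-by-outcomes grid surfaces outcomes no course addresses and courses tied to no outcome:

```python
# Minimal sketch of a curriculum map as a courses-by-outcomes grid.
# All course and outcome names are hypothetical, for illustration only.

# Each course maps to the program outcomes it addresses, using the common
# I/R/M curriculum-mapping scale (Introduced, Reinforced, Mastered).
curriculum_map = {
    "EDUC 101": {"Outcome 1": "I", "Outcome 2": "I"},
    "EDUC 210": {"Outcome 1": "R"},
    "EDUC 350": {},  # addresses no stated outcome
    "EDUC 480": {"Outcome 1": "M", "Outcome 3": "M"},
}
program_outcomes = ["Outcome 1", "Outcome 2", "Outcome 3", "Outcome 4"]

# Outcomes not addressed in any course -- the gaps faculty spotted visually.
covered = {o for mappings in curriculum_map.values() for o in mappings}
orphan_outcomes = [o for o in program_outcomes if o not in covered]

# Courses that don't address any outcome -- candidates for discussion.
orphan_courses = [course for course, m in curriculum_map.items() if not m]

print("Outcomes not addressed in any course:", orphan_outcomes)
print("Courses not tied to any outcome:", orphan_courses)
```

Whether in a spreadsheet or a script, the point is the same: making the grid explicit turns “what do we teach, and where?” into a question a program can answer together.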
The real key was listening to faculty to understand their perceptions and assumptions, and then using a productive venue that allowed for conversation, thinking, and adjustments. While this sounds resource-intensive, working with just one college as a pilot allowed Beth and Scott to learn from the process as well.
Debunked Assumptions
- Assumption: “Someone” is taking care of all this for us.
  Reality: We all care about our students and curriculum – assessment belongs to all of us.
- Assumption: The data we collect and turn in doesn’t have more than one use or purpose.
  Reality: Findings and results can be used for multiple accreditors, and most importantly for program improvement.
- Assumption: We are teaching our stated outcomes.
  Reality: Courses are not divorced from outcomes. Regularly discussing our curriculum map ensures we are addressing learning outcomes often enough and at the right time.
- Assumption: We are using appropriate measures to assess what we care about.
  Reality: Some instruments are not telling us what we need in order to assess learning – they may not be a good fit, and we can choose others.
- Assumption: We can’t change any of the assessment process.
  Reality: Assessment’s purpose is improvement – all parts of the process should be examined and updated to ensure value and impact.
- Assumption: This has no meaning to our daily work.
  Reality: When done genuinely, assessment helps us understand our contributions in the larger context and influences changes to curriculum and student learning.
How it’s Going – Contagious Assessment Culture
Beth and Scott are excited about how well this process has gone – and how it has continued to grow – in the College of Education. The workshops have redefined assessment for the faculty and supported programs in dedicating time to this important work.
Beth says, “The biggest thing I’m excited about with this whole project is that people are talking to each other again. They went from ‘I’m teaching this class, and it’s important for my students’ to ‘What do our students start out with? How do they progress throughout their education? And where do we need them to end up?’ And faculty considering the part that every course plays in that, I think that is extremely valuable.”
Scott reflects on how the workshops made faculty open to changes, such as considering new measures, something they had previously been apprehensive about. “There was really this fear of ‘we can’t change that assessment tool because it’s got to meet the sufficiency criteria, and that’s why we have this proprietary one.’ And now, we’ve literally created an entirely new rubric and assessment measure for the lesson plan that follows students through their program.”
A deeper understanding of how data can be used meaningfully promotes more engagement and less redundancy. Using some of the workshop time to reiterate how findings can apply to program accreditation, regional accreditation, program review, budget requests, and program improvement certainly adds value. Scott was delighted to hear attendees realize:
“‘Oh, this is all the same data.’ Yes, accreditation data is the same thing that’s being reported in multiple places. It’s our quality assurance, it’s our assessment data, and it’s more, and it’s also really all the same thing. The documentation is just where we’re housing the outcomes, measures, and all the rest.”
And finally, a focus on program changes. Scott reflects that a good reviewer will ask, “What program changes have you made as a result of your assessment?” There’s no way to answer that well if the previous pieces are not in place. This has been Beth’s focus from the start:
“One thing I really believe in is having faculty members take a really active role in assessment. I want them to look at their programs, determine what is really important, what they want students to get out of their programs, and set up their own assessment plan, truly based on that. And then I want the department members to sit around the table and talk to each other about what they’re finding, and what improvements they can make that will improve the students’ experience and learning.”
The real proof that it’s working: the word has gotten out, and Beth has been approached by faculty from other departments about mapping, measures, and how they can engage.
Tips for Your (Peaceful) Assessment Revolution
- Collaborate – with the Dean, faculty, and the IE office. Find a willing champion, and start there.
- Spell out the assumptions you see, and then make a plan for collaboratively addressing them.
- Let the faculty lead: Scott shares, “One of the things I really try to do is let the faculty build.” Provide a venue and time for faculty to concentrate together and to find value rather than just report.
- Measure what you value, and value what you measure: Encourage and support faculty in thinking outside the box about ways to find out what they need to know about students. Scott: “If it’s information that’s important to them, they’re going to engage it. If it’s just they feel like they have to put it somewhere and it’s data for data’s sake, they’re not going to engage in it. So make them a part of it – that’s where I think bringing it down to the program level is really important.”
- Make change feel safe: Encourage discussion, new or even aspirational thinking, brainstorming, and long term goals. Bringing the focus back to student learning promotes valuable discussion and a shared goal that can merit change.
- Find the right technology:
  - Emphasize improvement, not data: Make sure there are fields to document changes and improvements. Beth says it best: “In my opinion, just moving data around really defeats the purpose of assessment, which is to try and find solutions at the department level. Part of our process at UNK is a question in Weave about what changes have been made based on observations and data. And that really gets people thinking – almost every department has something that they’ve changed. And to me that’s the real value of all of this – improvement of student learning and improvement of the student experience.”
  - Consider curriculum design functionality: Curriculum is at the heart of what faculty do every day and what they are passionate about. A platform that streamlines that thinking visually is an ideal place to start and revisit.
  - Don’t focus on the technology: It can be a distraction or an excuse, when really it should be a tool. Beth has said to faculty, “Let’s forget about Weave for just a second, and talk about what it is you need to know about your program. What do you value? And then we’ll figure out how to put it in Weave.”
While it doesn’t happen overnight, it is possible to shift assessment culture and create an environment of improvement. Starting small and meeting your audience where they are promotes collaboration and progress.
Additional Resources:
- Strategies for Buy-In from Iowa Central Community College
- Implementing and Sustaining Assessment with Mount Saint Mary’s University (MSMU)
- Okay, I know what needs improvement, now what?
- Why Assessment and Faculty Development Need Each Other: Using Evidence to Improve Student Learning
- Using Assessment Reports to Spark Change and Opportunity
- Using Assessment Tools Effectively: An Overview of Three Tools