Assessment in Higher Education
Erik Gilbert’s recent article in The Chronicle, “Does Assessment Make Colleges Better? Who Knows?” and Joan Hawthorne’s response have made for very interesting conversation of late. Gilbert asserts that after all the time and resources spent on assessment, there is little or no proof that student learning has improved, which is why students and parents show no interest in it. For the same reasons, he calls on accreditors to reconsider the emphasis they place on assessment. Hawthorne argues that assessment has brought about several shifts, primarily that responsibility for learning now rests with faculty rather than students, and that students must do more than simply say they know something – they must demonstrate it. She also notes that this has led to more intentional curriculum and course design. So – does assessment make colleges better? It is a question that has been rattling around in many minds, and the responses to both pieces make clear that we are not very close to an answer.
The relationship between accreditation and assessment
So where to start? First, the relationship between accreditation and assessment must be articulated more clearly, so that assessment can be accepted as good pedagogical practice rather than a checklist item. Gilbert (along with many faculty members) sees assessment primarily as an accreditation requirement. While it is true that assessment is part of accreditation, any faculty development office or Education faculty member will tell you that at its root assessment is the foundation for making decisions about learning, and there are few faculty members who do not care about what their students learn. Moreover, the accreditors are not a foe; they are made up of other higher education professionals and faculty members. These peers are the liaison between institutions and the Department of Education, which is largely not made up of higher education professionals. If the accreditors go away, institutions will be dealing directly with the federal government. Many argue this would be a giant step toward higher education looking more like K-12, with standardized tests, benchmarks, and the like. While the current assessment system in higher education is not ideal, we have far more control over it now than we would if the DOE became directly involved.
Second, what is it about assessment that feels so broken? What are the most critical pain points? There seem to be struggles at both the top and the bottom: the cost of higher education is higher than ever and a great deal of federal aid is not being repaid, so the DOE needs to see what is so valuable about a college degree. On the other side, faculty are being asked for more data and more reports, and are clearly being made to feel that the work they already do is not valued. Worst of all, neither top nor bottom seems satisfied that any improvements have been made or demonstrated. Students, in many ways, are stuck in the middle. This is strange, considering Hawthorne’s observation of the shift toward being more student-focused.
Re-envisioning assessment
Perhaps it is also time for a shift in assessment, a legitimate reimagining of how to carry it out so that it is valuable. Assessment does not seem to have changed much in the last twenty years, except that we now realize the key piece at the end is missing – what do we do with all this information? How can we completely re-envision what assessment should look like while still leaving enough autonomy to accommodate differences among institutions, programs, and courses?
One place to start would be to examine more critically what we have done and are doing. Gilbert’s assertion that there is no body of research on how assessment improves colleges appears to be accurate. One only has to look at the sessions presented at any of the many assessment conferences to see that many have not moved much past simply gathering data; very few practitioners can show that data was analyzed and then used to make decisions about student learning. Gilbert calls for formal research, but when Banta and Blaich went looking in 2011, they could not even find enough data to write an article. Closing the loop is critical to assessment, and yet after all this time there is little hard evidence that it is happening. Why? Perhaps by attempting to answer that question we can reshape the process to make it more genuine and valuable to all stakeholders.
So how do we close this loop? Ideally we could bring the assessment train to a halt and examine closely what needs to go, what should stay, and what should be added before starting back down the track. However, we do not have the luxury of stopping. Instead, we must find ways to keep moving while making sensible adjustments. How do we present assessment so it is not perceived as a stick? How do we ensure the data we gather is useful? How do we demonstrate that we used the data to make improvements? One commenter suggested treating assessment as research: What questions do we have? How can we answer them? What else needs to be considered? This “research” could apply to a course or a program, but the questions get much bigger than that. How do we work more closely with K-12 so students arrive prepared for college? How do we handle the student loan crisis? How do we ensure students are ready for life after college? Who else can answer these questions? If we can get assessment right, we can do much more than answer whether assessment makes colleges better – we can make students better before, during, and after their experience with us.