Colleges Are Measuring Learning, But Not Because of Students. Is That a Problem?

Blog Post
Jan. 28, 2014

Last week, the National Institute for Learning Outcomes Assessment (NILOA) released its latest survey (PDF) of learning assessment activities in higher education. The group has been conducting these surveys of provosts and chief academic officers for several years, and it consistently provides the best national picture of learning assessment activity across all types of colleges.

There's a lot that's encouraging in the findings. The percentage of colleges engaged in learning assessment activity increased from 2009 to 2013: 84 percent of schools now have common learning goals for students, and colleges use an average of five different assessment tools, up from three in 2009. Just as in the earlier report, national student surveys are the most popular form of assessment, with over 80 percent of responding institutions saying they make use of them. But there's also been substantial growth in the use of more formal-seeming approaches, such as rubrics (nearly 70 percent of respondents) and portfolios (40 percent).

But the actual assessment mechanism is just one part of a successful system for measuring learning outcomes. The way results are used and communicated also matters a great deal. After all, a fantastic tool with lots of information that sits in the digital equivalent of a broom closet isn't doing anyone much good. And here the news is more of a mixed bag--one that reflects well on institutional action but raises significant questions about the role student learning plays in consumer decision-making.

To see why, look at the chart below from the report. As it shows, accreditation agencies are by far the biggest drivers of learning assessment activity. This seems to corroborate an argument made by those who would rather tweak the existing accreditation system than replace it: these entities do cause colleges to undertake a lot of work around quality and learning that they otherwise might not. The importance of accreditors certainly does not resolve concerns about their proper oversight role or their need to be more transparent, but it does add depth to considerations of what they do or don't do well.

[Chart from the NILOA report: extent to which various factors drive assessment activity]

But who doesn't seem to matter for assessment activities? Students and families. As the highlighted portion of the chart shows, providing information to prospective students and families is among the least common uses of assessment results, just ahead of the catch-all "other." The only substantive use ranked lower is alumni communication. In other words, the people who already did or might pay for the education are the least important audiences for assessment activities.

It's easy to look at the chart above and blame the institutions (and yes, I know how the sentence before this one read). And they certainly have more work to do--the NILOA report notes that under one-third of schools put assessment information on their websites.

But we have a major consumer problem too. Namely, consumers do not seem to have much interest in information on learning and don't seek it out in the limited instances in which such data are available. For example, NILOA's March 2012 review of the College Portrait--an online voluntary accountability system with information on a few hundred colleges--found that just 1 percent of all traffic on the site went to student learning outcomes pages. And almost 20 percent of the schools listed there didn't log a single visitor to their learning outcomes page. That's among people who actually managed to find their way to the site and click around--an achievement that probably puts them in the top 1 percent of all higher ed consumers. And if they aren't looking at this information, what does it say about those conducting less sophisticated searches?

Again, this isn't solely a consumer problem. The College Portrait is not exactly the easiest website to use, and some schools don't even populate the learning information section. But we cannot rely solely on a college-focused solution to this problem and still expect the data to be meaningfully used. Nor can we ignore the tension between what students care about and want to know and what we'd like them to make use of--a tension the Chronicle of Higher Education does a nice job outlining in this article discussing what happens when you tell students about costs versus graduation rates. We need some way to get consumers to eat the informational equivalent of broccoli without just telling them they can't have dessert.

Do we need consumers?

It is fair to ask whether it really matters whether students and families demand learning outcomes information. After all, other useful outcome measures, like graduation rates and cohort default rates, came about through acts of Congress, not popular demand. Certainly data can be made available without consumer interest. And the NILOA reports demonstrate that more data can be incorporated into institutional improvement efforts that presumably do benefit consumers in the form of a better education, not to mention potentially some state-led efforts.

But we have a hands-off higher education selection system. Choice is largely left up to students, in whatever way they want to make decisions. The result is uneven: some students conduct rigorous searches using available data, but many others pick based on geography, the quality of marketing materials, the weather, or sports teams. Barring some unlikely shakeup toward more paternalistic college selection, the consumer element in using data will remain important.

Practically, then, what should we do? It's clear that the importance of learning needs to be better translated into the language of students. This doesn't just mean presenting things in clearer language, though that's obviously a start. It also means connecting learning back to things students do care about and making the case for why they should care about it. Some of this will have to be about cost. Consumers do worry about expenses, maybe even more than anything else. That means learning information needs to be better connected to value--to show how spending money at a given school produces tangible learning that is worth the expense.

Such contextualized learning information also needs some post-school context. Do places where students learn more produce better earnings? A better shot at grad school? Job stability? Drawing a line from initial tuition spending all the way through to what happens afterward, with learning as the common element in the middle, could be quite valuable, because learning would then be tied to two things students do seem to care about.

Realistically, that's a long-term goal, and we aren't ready for it yet. While policymakers know they want to care about learning and quality, we don't even know what questions we should be asking of schools. There's no consensus on what common indicators should be made available (and yes, it's not a one-size-fits-all approach). Nor is there agreement on what level these data should be presented at--college-wide, program, or even course-specific.

Instead, what's needed is a two-pronged plan. First, there's a big disconnect right now between the learning assessment work colleges do and the learning assessment information they present. According to NILOA, 84 percent of responding colleges have stated student learning outcomes, about the same percentage use national student surveys, and two-thirds use some kind of classroom-based assessment. But only 30 percent of responding schools actually put any of that information on their websites. That needs to change. We can't educate consumers to make better use of data if colleges don't provide what they have. It's time to start sharing.

Second, consumers need to learn to ask two simple questions: What will I learn? And how will I know that I've learned it? This doesn't mean demanding uniform information or anything like that--just making learning one of the key things they ask about. Finding out what answers they get and how much they value them can then provide a foundation for figuring out how to create metrics that work better for them.

It's great that colleges are engaging in more learning assessment, whether for their own reasons or due to pressure from groups like accreditors. Hopefully that will translate into higher-quality educational offerings. But simply saying "if you improve it, they will come" is not enough. Consumers should play a more active role in demanding these kinds of efforts and in rewarding the colleges willing to do such hard work.