Data Alone, Without the Human Element, Can Be a Recipe for Disaster

SAT Adversity Score Highlights Importance of Human Element in Using Data and Analytics
Blog Post
June 5, 2019

Last month The College Board announced that it would be expanding its Environmental Context Dashboard, popularly known as the SAT “adversity score”, to more higher education institutions. There has been no shortage of debate on the issues surrounding the SAT’s adversity score: the lack of transparency in the algorithm used to calculate it, the potential for bias and stereotypes, whether the score (and the test itself) is needed at all, and concerns that it may become yet another opportunity for privileged families to game the system. The SAT’s adversity score also highlights a major issue with using data to make choices about students’ education: data alone, without critical thinking and, more importantly, the human element, could do more harm than good.

Increasingly, institutions and programs are using data to be more effective in their work. Some institutions have implemented learning analytics into their programs to better tailor learning experiences to student needs in real time. Institutions such as Georgia State University and University of Maryland University College have used massive amounts of data about their students to create predictive algorithms that can alert counselors when students are at risk of dropping out. Georgia State has been lauded for closing equity gaps in graduation rates while increasing its Pell-eligible population. Other institutions use data and analytics to predict which students are most likely to enroll, saving the institution thousands of dollars in recruitment costs.

But not all schools use data equally, and data and algorithms are imperfect. Quantifying student characteristics provides an incomplete and potentially inaccurate picture of the person the data is trying to describe. Data alone, without the human element, can be a recipe for disaster.

While some have applauded the adversity score as a helpful and efficient mechanism for providing more context for a student’s SAT score, this tool has only been tested with a little more than 50 institutions. Implementing it nationwide may sound fine in theory, but as with many tools, issues arise in the implementation. Not only do we not know exactly how data is weighted in the score, but a number can’t really capture a student’s full story.

Prior to my work in higher education research and policy, I worked as an adviser for low-income, first-generation college students in California. These students’ lives were complex, and while I appreciate that a tool like the SAT’s adversity score may help overworked admissions counselors better understand their applicants, my own experience shows how the implementation of this tool (and data in general) can be harmful to students if it is used without the human element, critical thinking, and a holistic application package.

Take Tony (name has been changed), for example. When I was advising Tony, he lived in one of the wealthiest neighborhoods in the city with guardians who were college educated. But Tony was only in this situation because as a pre-teen he had chosen to leave his biological family, due to neglect and family substance abuse issues, and live with a close friend’s family. Under these living circumstances, Tony’s adversity score would have been low (indicating less adversity experienced, although we don’t know how the score’s factors are weighted) because of the privileged characteristics of his school and neighborhood. How would the SAT’s adversity score fit into the review of his application at regional public four-year institutions that base admission on GPA and SAT score? How could an admission officer know the challenging circumstances that put Tony in a wealthier and more privileged context?

Or take Josie (name has been changed), who lived on the edge of the same neighborhood as Tony, just a couple of blocks from the area’s strong business district. Josie’s adversity score could have also been low, indicating low adversity, but Josie’s parents did not have legal status in the country. They worked long hours in low-wage jobs to afford a tiny apartment in the neighborhood that allowed their children to attend one of the better public schools in the district. Josie also applied to several regional public four-year universities that base admission on GPA and SAT score and offer no opportunity for a personal statement. What would Josie’s admission results have been if admission officers looked at her SAT and adversity scores without other context?

These two real-life examples highlight how the SAT’s adversity score and other data tools are just that: tools. Without the critical human element throughout the creation, testing, and implementation of these tools, or the use of other mechanisms to understand a student’s story, equity gaps can be exacerbated. New America has created a framework for using predictive analytics in higher education ethically, and many of the recommendations outlined in the framework, such as working to ensure the proper use of data and designing algorithms and models that avoid bias, can be applied to data usage and tools like the SAT’s adversity score.

As higher education moves toward using more data analytics tools, it should certainly take note of the lesson coming out of the SAT adversity score controversy: data is a tool, but without the human element to deeply analyze quantified social characteristics, it has the potential to do more harm than good.
