AI Agents Are Coming to Campus. Who Will They Work For?

Blog Post
[Image: Students walk around a beautiful campus. David Schultz via Unsplash]
Oct. 8, 2025

Educators have noticed that the homework their students turn in has surged in quality. Students are using more sophisticated vocabulary, cleaning up their syntax, and making arguments built on information from outside their textbooks and classroom lectures.

They’re also failing their exams.

In the last year, the use of generative AI chatbots has metastasized across classrooms around the world. But that was just the beginning of AI tools becoming more publicly accessible: AI agents hit the market this year, boasting features more expansive than those of their predecessors. Agentic AI refers to an artificial intelligence ‘agent,’ a type of machine learning model that can proactively make decisions for you, as distinct from traditional chatbots, which can only react to your questions.

Tired of sifting through a full email inbox every morning? No problem. Not only can the agent do that, but it can also create a daily to-do list based on the emails and schedule focus time. Still don’t have time to respond to the sixth email asking to ‘circle back’ on a redundant project? The agent can draft a response that will hopefully stop the seventh. The more you use the software, the more the agent will get to know you—how you think, how you act—until it can mimic you without prompting.
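The loop described above can be sketched in a few lines of Python. Everything here is invented for illustration (the `Email` type, the keyword rules, the canned reply); a real agent would delegate these judgments to a language model rather than matching phrases, but the shape of the workflow, triage the inbox, build a to-do list, draft a reply unprompted, is the same.

```python
# Hypothetical sketch of an agentic email loop: triage mail, build a
# to-do list, and proactively draft replies. Rules and data are invented
# stand-ins for the model's judgment.

from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def triage(inbox):
    """Split the inbox into action items and mail that can be archived."""
    todo, archive = [], []
    for mail in inbox:
        # Crude stand-in for the agent deciding what needs action.
        if any(w in mail.body.lower() for w in ("circle back", "deadline", "review")):
            todo.append(f"Reply to {mail.sender}: {mail.subject}")
        else:
            archive.append(mail)
    return todo, archive

def draft_reply(mail):
    """Draft a response in the user's (here, canned) voice."""
    return (f"Hi {mail.sender},\n"
            f"Thanks for the nudge on '{mail.subject}'. "
            "I'll have an update to you by end of week.")

inbox = [
    Email("Sam", "Q3 report", "Can we circle back on the Q3 report?"),
    Email("Newsletter", "Weekly digest", "Here is this week's news."),
]

todo, archive = triage(inbox)
print(todo)                   # the morning to-do list
print(draft_reply(inbox[0]))  # the drafted response
```

The point of the sketch is the division of labor: the agent acts first and the user only reviews, which is exactly what makes these systems both convenient and worth supervising.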

AI agents will soon be included in ChatGPT Edu, a package geared specifically toward college students, professors, and administrators. Higher education is an environment where nearly 80 percent of the population uses AI regularly, compared with about 40 percent of the general population. Agents may not be sitting in the front row, but they are definitely headed back to school.

AI agents will soon make decisions in both the classroom and campus offices. The question is whether we will direct them toward equitable outcomes or let them quietly direct us. AI is a tool, and it should be used to serve human possibilities—not the other way around.

The Legitimate Benefits of Agentic AI in Higher Education

At Georgia State University, students use a chatbot to help them keep track of assignments, course content, and different kinds of academic support. A study assessing the chatbot's impact found that students who received its messages were 16 percent more likely to earn a B or higher than classmates who did not receive the reminders.

The most interesting finding was that students who struggled the most overall (those who displayed lower academic performance in high school, for example) derived the most benefit from chatbot intervention: First-generation students receiving the messages earned final grades about 11 points higher than their peers.

AI holds real promise to boost success for the students who’ve long faced the steepest barriers on campus—a breakthrough that matters more than ever as political winds threaten to roll back hard-won gains in diversity, equity, and inclusion.

Another place where agents might be able to support equitable outcomes is in administrative settings, like the financial aid office. Financial aid has traditionally been the largest barrier for first-generation and underprivileged minority students. Roadblocks like navigating confusing language, applying for the relevant loans and scholarships, and understanding how the system works can be overwhelming.

Access to clear, comprehensive financial aid information can ease this burden, but at most colleges, it’s nearly impossible to even get an appointment with an advisor because the offices are so overwhelmed and understaffed. And if you do manage to see someone, it could be after the deadline has passed for applying to whatever loan or grant that would have actually been helpful.

Agentic AI would mean that the more rote and mundane questions, about student access codes, FAFSA forms, or grant eligibility, are easily addressed. Human staff can then be directed toward more complex cases and personal support. Crucially, this means that agents can be implemented not as a way to reduce support but to amplify it, offering much-needed relief to overwhelmed staff and students alike.

The Risks of Agentic AI in Education

Universities have the opportunity to utilize these technologies in ways that improve learning outcomes and increase educational parity, but AI agents can also reproduce systemic inequities. If agentic AI is not used thoughtfully, there is a risk that it will be used to ‘manage the masses’ while human-to-human interaction becomes even more of a luxury that only more privileged students have access to.

The populations of students who will need the most help navigating the financial aid system, for example, are typically already systemically disadvantaged. This means that if agents are used for those tasks more than others, the overwhelming majority of students interfacing with the models will come from lower socioeconomic backgrounds. It is likely they will have already received less personalized support from teachers, SAT prep tutors, guidance counselors, parents, and other sources relative to their wealthier peers.

For students who have limited guidance and support in a complex system, the role of financial advisors in university administrative offices can be incredibly personal and vulnerable. What happens when human empathy is not a factor in such a sensitive equation?

Human resources are necessary, and not just in the event of inevitable roadblocks or confusion. It is critical to use the agentic systems as a support to the existing human staff, rather than using human staff to support a predominantly AI-based system. How many times have you been on the phone with an automated bot on the other end asking questions, while you just keep insisting that you want to speak to a human? Even if an AI agent might have the ability to solve a problem, many people would prefer the real-time problem-solving and experience of a person. It’s a matter of trust, certainly, but it may also be a matter of feeling heard and understood.

How to Best Implement Agentic AI on Campus

As a recent Stanford study reminds us: The real question isn’t just what AI can do, but which tasks students, faculty, and administrators want to keep control over and which they’re willing to delegate. The study addressed a central question about the future of work amid the rise of agentic AI systems: What do workers want? The research showed that people want to retain control over creative or relational work, such as designing programs or developing interpersonal vendor relationships. By contrast, they were happy to hand off occupational tasks like file maintenance and appointment scheduling, freeing their time for higher-value contributions.

Asking what tasks people want AI agents to help automate or augment, as opposed to just focusing on the capabilities of the technology, puts human agency front and center. In higher education, this means we must intentionally define where AI agents fit—not only as automated helpers, but also as partners that enhance human decision-making without replacing the vital human connections students rely on. Freeing up time so that college administrators can prioritize student-facing interactions is about more than efficiency; as Georgia State’s implementation of their AI chatbot has shown, it may lead to an actual equity gain.

Some see AI agents as the silver bullet to the relentless grind of busywork—a way to cut costs, boost efficiency, and relieve the burdensome tasks of thankless administration. But allowing AI systems to permeate our classrooms might be fostering a growing dependence on these systems for our learning and thinking. Increasingly, studies are warning against the long-term impact of generative AI use on skill acquisition and development. These findings hold true not just in educational settings but in personal and professional ones as well. To protect learning and preserve human agency in students’ education, we must remember that AI tools are just that: tools. And we need to make sure they stay that way.

With the introduction of these technologies in higher education, we must continuously evaluate how we want AI to exist in relation to our students, rather than solely making decisions based on its capabilities. When there is already such a lack of parity in who has access to higher education, the introduction of new technologies must be evaluated against the risk of inadvertently making existing inequalities more stark. We must consider not only the promises of efficiency, but also the reality of how these systems will be used for and by students.

The real opportunity presented by agentic AI here is not to cut back on student support, but to expand it—ensuring that human-centered services are properly funded and strengthened. Efficiency alone won’t solve higher ed’s challenges; equity will. AI must be our tool, not the final arbiter of student futures.

Related Topics
Artificial Intelligence