When a Student Brings ChatGPT To Their Advising Appointment
- Feb 24
- 3 min read

Since academic advising emerged as a distinct practice in higher education, it has operated on a simple assumption: one advisor, one student, one conversation.
That assumption no longer holds in the era of AI.
Today’s students rarely arrive at advising appointments alone, even if they physically sit across from you by themselves. They arrive having already consulted another voice. Increasingly, that voice is a trusted personal assistant: artificial intelligence.
ChatGPT has reviewed their draft email to you, or written it outright. It has suggested a major. It has outlined a semester plan. It has compared transfer options. It may even have generated pros and cons for dropping a course.
In other words, you are no longer advising an individual in isolation. You are advising a two-"person" team: the student and their AI assistant. It is time to approach advising through the lens of a human–AI team, not just a human.
Students now curate guidance from AI tools before ever walking into your office. They test scenarios. They ask for summaries. They request five-step plans. They explore “what if” questions privately before they explore them with you.
The advising approach must shift accordingly. Instead of assuming you are the primary or sole source of information, assume your value lies in interpreting and contextualizing within a human–AI partnership.
This shift matters for three reasons.
1. Authority Has Changed
Advisors were once gatekeepers of institutional knowledge. Degree audits were confusing. Policy language was inaccessible. Registration systems were opaque. Now, AI tools can instantly generate course sequences and policy explanations. Students may enter with a proposed plan already drafted by ChatGPT.
If we dismiss AI or try to reassert control, we miss an opportunity to deepen the conversation.
Instead, we can ask:
“What did ChatGPT suggest?”
“What feels aligned or misaligned about its plan?”
This approach positions you not as a competitor to AI, but as a critical thinking partner. You are not there to out-search the student. You are there to contextualize.
2. Decision-Making Is Now Collaborative (Even When It’s Invisible)
Students increasingly use AI as a sounding board. They refine questions through it. They ask for confidence before approaching you. They may even arrive with language crafted by AI to present their case.
When a student says, “I’m thinking about switching majors,” that statement may reflect an internal human–AI dialogue that has already unfolded.
Rather than asking only, “What do you want to do?” advisors may need to ask:
“What options have you already explored with AI?”
“What questions have you been asking AI?”
“What did AI help you clarify? Where are you still uncertain?”
These questions acknowledge the advising team already in place. Ignoring the AI influence does not eliminate it. It simply leaves it unexamined.
3. Your Role Shifts from Information Gatekeeper to Decision Architect
In a human–AI advising reality, your value increasingly lies in structuring the decision process. AI can generate options, outline pathways, and simulate scenarios in seconds.
But advisors help students:
- Clarify criteria for decision-making
- Weigh long-term tradeoffs
- Identify institutional nuances AI may miss
- Align choices with personal values and academic readiness
You are not replacing AI. You are helping the student think more effectively alongside it.
This also means advisors must become comfortable discussing AI openly.
If a student says, “ChatGPT told me I can graduate in three semesters,” your role is not to invalidate the tool but to verify, refine, and educate. Dismissing or discouraging AI is like telling them to stop talking to a close friend or trusted family member. Instead, you might say:
“I'm glad to see you're using tools and working hard on this. Let’s look at how it calculated that. Here’s where institutional nuance comes in.”
AI often lacks context about course rotation patterns, prerequisite sequencing bottlenecks, enrollment caps, or informal departmental norms. Your institutional literacy becomes more valuable, not less.
Advising a Human–AI Team Requires Intentional Framing
Some advisors worry that AI will weaken the advising relationship. But transparency strengthens it.
When you invite students to bring their AI-generated plans into the conversation, you model intellectual humility and adaptability. You show that you are confident enough in your expertise to engage with new tools rather than avoid them.
The post-ChatGPT student is no less prepared. They are differently prepared. They arrive with more information but not always more clarity.
The advisors who thrive in this era will be those who surface that dynamic, engage it thoughtfully, and teach students not just what to choose but how to evaluate what AI suggests.
In a world saturated with instant answers, helping students develop discernment may be the most valuable advising skill of all.