When I read the Poets & Quants article “We Expected More: Stanford GSB Students Call for Higher Teaching Standards,” I felt a familiar mix of agreement and urgency. Students at one of the world’s most prestigious graduate schools were speaking plainly about their disappointment. Their courses, especially the required core ones, often felt disconnected from the world they were preparing to enter. They described outdated material, uninspired delivery, and assignments that had not been updated to account for the reality of artificial intelligence reshaping work and learning. The critique is not limited to Stanford or to business school. It is relevant to higher education as a whole and to K–12 education as well. In both contexts, teaching standards, curriculum design, and the relationship between theory and practice must meet the demands of the world as it exists now rather than as it did ten or twenty years ago.
The Impact of AI on What and How We Teach
The last two years have brought a profound change. Artificial intelligence tools like ChatGPT are no longer futuristic. They are here, embedded in how people work, learn, and make decisions. In higher education, this means that course design must acknowledge that a student can now generate a spreadsheet model, draft a memo, run a basic analysis, and much more in seconds with AI assistance. In K–12, it means that even younger students are growing up in an environment where AI-driven tools are part of their daily lives. The temptation is to conclude that if AI can do these things, there is no need to teach them. This is a mistake. Foundational skills such as writing, spreadsheet modeling, and structured analysis are still essential. Without understanding how these systems work based on knowledge that already exists, students cannot design them properly, evaluate AI outputs for accuracy, or adapt them with creativity to unique contexts. The shift is not from doing everything manually to doing nothing, but from manual execution to using AI as a tool for conceptual mastery and oversight.
Where the Students’ Critique Hits Home
In my own career, I have seen how theory can transform practice when it is used well. For example, at Stanford, I took courses from Walter W. "Woody" Powell, who challenged me to see theory not as an academic luxury but as a way to think more clearly about practice. Theory was not separate from the work. It was the lens that could reveal underlying patterns, expose flawed assumptions, and guide more sustainable solutions. Those lessons continue to shape my teaching today. Later, in my critical policy analysis courses, I introduce students to frameworks like Critical Race Theory, sociocultural theory, and community engagement theory. Then I challenge them to apply these frameworks to real-world problems. In my Harvard Educational Review work, for example, we used CRT to evaluate social studies standards, showing whose voices were missing and how narratives about race and power were being shaped. This was theory as a tool, not as a decorative concept.
The Stanford GSB students’ frustration is not with theory itself. It is with how theory is being delivered. They see a gap between the abstract content taught in required courses and the applied, relevant content in electives taught by practitioners. In their view, core courses too often remain frozen in time. They fail to integrate new tools and realities into the curriculum. Assignments sometimes measure students’ ability to reproduce processes that AI can now complete instantly. This is not a call to abandon those skills. It is a call to update how they are taught. Students should still learn how to design a spreadsheet model from scratch, but they should also learn how to use an AI-generated model, adapt its logic to complex situations, and embed ethical considerations into its use. The same is true for writing. AI can produce paragraphs quickly, but students still need to understand structure, argument, and tone to guide AI toward useful results and to know when its output is misleading or inappropriate. One of my classroom approaches is to allow students to use AI in their writing, but I set a limit: no more than 20 percent of the work can be AI-generated. This encourages students to treat AI primarily as a revision tool or to revise AI drafts extensively, which ensures they still learn how to do the writing themselves while using AI as a supportive resource.
Accountability and Creativity in Teaching
One of the sharpest points in the Poets & Quants article is about accountability. Students feel that faculty can deliver poor teaching without consequence. Evaluations are collected but do not seem to result in meaningful changes to course design or delivery. In K–12, we see versions of this as well. Teachers are evaluated, but the quality of that evaluation and the follow-up action varies greatly by school and district. If we expect students to meet high standards, institutions must hold themselves to equally high standards for teaching quality. That means setting clear expectations for relevance, clarity, and engagement in every course. It means ensuring that instructors at all levels have the support and preparation to integrate new tools, respond to student needs, and connect theory to practice.
The world has changed in the last two years, and we must change with it. We must acknowledge that AI has altered the landscape of work and learning. Humans still need to learn to write, analyze data, and design systems, but they must also learn how to direct, evaluate, and improve AI-assisted outputs. This requires a shift in both curriculum and pedagogy. The emphasis should move from rote execution to conceptual understanding and critical oversight. Humans who understand how a process works can design better versions of it, identify errors, and adapt it to new situations. Those who only know how to prompt AI without understanding the underlying structure will be limited in their capacity to lead, innovate, or solve complex problems.
Principles for Higher Standards in Teaching
The issues raised by the GSB students point to changes that should be embraced across education. First, we need clear AI teaching standards that can apply to every course, from ninth-grade English to graduate-level finance. These standards should ensure learning, relevance, clarity, and active student engagement.
Second, we should integrate contemporary tools like AI thoughtfully, not as shortcuts but as opportunities to require deeper judgment and creativity.
Third, we should blend theory and practice deliberately, pairing academic experts with practitioners to co-teach and model the interplay between analysis and application.
Fourth, we need feedback loops that actually close. Student evaluations should lead to visible changes in course design and delivery, and educators should be transparent about what those changes are.
Finally, despite the temptation of allowing AI to manage work, we must preserve foundational skills like writing, analysis, and research, adapting them so students learn to envision, test, and improve systems rather than merely execute tasks.
Why This Matters for the Future
Whether we are talking about an MBA program, a doctoral seminar, a high school government class, or a middle school math course, the underlying challenge is the same. The world students are entering is not the one their parents or teachers grew up in. Tools, expectations, and possibilities are shifting rapidly. Education cannot be static if it hopes to prepare students for leadership, problem-solving, and civic participation in this environment.
We cannot teach as if AI does not exist, but we also cannot allow AI to replace the human capacity for critical thinking, creativity, and ethical judgment. Theory remains essential, not as an academic exercise, but as a guide to understanding systems, identifying leverage points, and designing meaningful change. Practice remains essential, not as a routine, but as the testing ground where theory proves its worth and adapts to reality.
I do not want this blog to give the impression that I see AI only as an opportunity for teaching. There is much to worry about, and I have written extensively about these risks in earlier entries in the AI Code Red series (see below). From the enormous water and electricity demands of training large language models, to the real danger of reduced human learning when students outsource too much to machines, to looming disruptions in the labor market and large-scale plagiarism in the training of these models, the stakes are high. Add to that the choices of powerful figures like Elon Musk, who has been accused in the media of altering AI systems to be deceptive, to carry bias, or to withhold information, and it becomes clear that AI is not simply a neutral tool. It is part of a contested, political, and ecological landscape that deserves urgent attention.
The fact that AI is so fraught only strengthens the case for redesigning how we teach. If students are to navigate a future where these risks are real, they must learn how to use AI responsibly and critically. They need classrooms where they can explore both its promise and its dangers, where they are challenged to ask ethical questions, and where they are taught not to mistake convenience for truth. That is why I emphasize mindful and careful use of AI, not blind adoption. The classroom must be a space for discernment, not just technological experimentation.
The bridge between the two is built by teaching that is relevant, rigorous, and responsive. That is what gets me from theory to practice every day, whether I am in a university seminar, a policy meeting, or a community forum. And that is why I believe the students’ call for innovation and creative teaching standards should be heard far beyond one business school. It is a reminder and a mandate for educators at every level to prepare students for the world they actually live in, not the one we imagine from the comfort of outdated assumptions.
Julian Vasquez Heilig is a nationally recognized policy scholar, public intellectual, and civil rights advocate. A trusted voice in public policy, he has testified for state legislatures, the U.S. Congress, the United Nations, and the U.S. Commission on Civil Rights, while also advising presidential and gubernatorial campaigns. His work has been cited by major outlets including The New York Times, The Washington Post, and the Los Angeles Times, and he has appeared on networks from MSNBC and PBS to NPR and Democracy Now!. A recipient of more than 30 honors, including the 2025 NAACP Keeper of the Flame Award, Vasquez Heilig brings both scholarly rigor and grassroots commitment to the fight for equity and justice.
Read more posts in the AI Code Red Series by Julian Vasquez Heilig
AI Code Red: “Don’t Teach Kids to Code”
AI Code Red: Why Libraries Will Matter More Than Ever
AI Code Red: The Future Depends on What We Refuse to …
AI Code Red: Supercharging Racism, Rewriting History …