Public Draft: v.2026.03.29
Note: I write this post on behalf of myself, and in it express my own views, which are not intended to speak on behalf of or reflect the views of any organization, including my current employer.
In the last year, many universities, including my own, have created a new position, sometimes referred to as “AI Czar.” The idea of this position is to have an individual lead the university’s response to AI. In November 2025, I was appointed AI Czar at a flagship public university.
In February 2026, academic AI leaders at R1 universities began collaborating as a cohort, starting with a Zoom meeting. One emergent theme was that, because the position is new, it remains ill-defined. All of us felt charged not only with leading our universities’ responses to AI, but also with defining what exactly the position is and does. More recently, the Chronicle of Higher Education published a provocative article suggesting that the AI Czar’s fall could be as rapid as its rise. Even more recently, Hollis Robbins picked up on that theme and offered an account of what the AI Czar does.
Having served in the role, and, full disclosure, having also chosen to leave it, I want to offer the community what I have learned, in hopes of helping anyone in higher education wrestling with the diverse and complex challenges that AI poses. To ensure the welfare of society and the integrity of higher education, it is imperative that we formulate an effective response.
What Is the Problem We Are Trying to Solve?
I begin with two questions: what is the problem that we are trying to solve, and why have previous solutions not been successful?
Among the problems we are trying to solve, I have typically heard a combination of the following offered as “the problem”: (a) employers are demanding that graduates be AI literate; (b) while hundreds, even thousands, of university employees are working on AI in research, teaching, and learning, the work is not coordinated—the university needs an AI strategy; (c) someone needs to put a stop to all this student cheating; (d) we need to prepare an AI workforce because “AI is the future.”
While each has an element of truth, I think they are all incomplete. With regard to (a), “AI literacy” is either impossible to define or, if it is defined, it is at such a basic level that it only begins to capture the problem. Additionally, focusing solely on student AI literacy overlooks the equally significant need for AI literacy among faculty and staff. Regarding (b), while strategy and coordination are important, I believe they do not resolve what I consider to be the fundamental issue. The idea, (c), that AI amounts to student cheating is at odds with preparing students for a workforce whose knowledge practices are increasingly AI-enmeshed. And (d) the slogan “AI is the future,” aside from sounding like industry hype, is unhelpfully vague, and more importantly also belated, because AI is very much the present.
The fundamental issue, in my view, is this.
Part 1. AI’s enmeshment in knowledge-work, not only in the anticipatable future but right now, is changing what it means to have expertise; it is changing expertise’s positioning in the specific configurations of work and regimes of automation that we call “professions”; it is changing how those new configurations create social and economic value, and how and by whom they will be organized and managed.
This isn’t my personal prediction; this is what I hear with increasing urgency from virtually every consultant or business or government leader that I’ve talked to in the past two years, because they are seeing all this first-hand. They are not predicting future change; they are describing present change.
Part 2. Higher education had better be relevant to this change.
I don’t merely mean preparing the workforce—though that is a huge part of it. I also mean figuring out how AI ought to be enmeshed in disciplined knowledge-work, for example, in its methods or research outputs; how AI interacts with the ethics of a knowledge practice; how new experts are trained and vetted; how an existing workforce can be upskilled in an era where professions are likely to die and to be born at unprecedented speeds.
What should be clear is that I am no longer merely talking about “AI,” whatever one takes that to mean: I am talking about expertise, about knowledge-work, methods, theories, bodies of knowledge, the ethics of inquiry—I am talking about the concerns that academic disciplines exist to develop, govern, and train people into.
Why Is This a Hard Problem to Solve?
Having spoken to peers, read articles and books, and engaged directly with hundreds of faculty, students, administrators, and staff, I now synthesize several reasons why this problem is hard to solve. Without understanding these reasons, it is harder to position a university for success.
Faculty Beleaguerment
Most importantly, I frequently hear from university faculty across various fields and institutions that they feel overwhelmed when dealing with AI. They have good reasons to feel that way, and university leaders, including AI Czars, need to take them seriously. Here are several of those reasons.
First, AI actually is a disruptive technology. We all heard the hype about how MOOCs and VR were going to transform higher education, but they didn’t. I have heard many faculty claim that AI is just the latest shiny object, and higher education shouldn’t fall for the hype again.
Rather than arguing against this view myself, let me share anecdotes that I have heard or read about from faculty at various universities.
- A social sciences professor asked what to do when she strongly suspected a student had used AI to fabricate data, but she could not prove it.
- A faculty member co-chairing a peer-reviewed computing conference said that the number of submissions had spiked anomalously, presumably because of AI, threatening to break peer review and dilute the quality of published work, while requiring new policies and practices surrounding desk rejection.
- A humanities professor fretted that students were circumventing the point of their assignments—which is to struggle with a difficult text and eventually come up with a hard-won point of view and personal take on it—to instantly generate simulacra of such outcomes.
- One natural sciences professor shared that student engagement in her office hours has declined precipitously; when she asked students why they weren’t asking her questions, they answered that they were asking ChatGPT instead. The professor then asked if they at least felt confident with the material, and they said that they did not.
- A librarian described how the library is expending resources fighting off bots attempting to train themselves on public online collections. After a generation of striving to digitize collections and make them public, the librarians were forced to consider making the collections private just to be able to host them at all.
- A social sciences professor who required AI use as part of an assignment had a student request a different assignment, because the student refused to use AI for environmental reasons.
- A life sciences professor learned that students were uploading her slides to an LLM to generate simulated exams to practice with. While supportive of students creating/taking practice exams, the professor wondered about her slides being used to train commercial LLMs without her consent.
- An instructor of a graduate statistics course worried that some students were paying for premium access to GPUs, while other students used the less capable and slower ones that the university provided. The instructor said the students who paid were turning in better work and seemed to be learning more—fretting that the university was effectively offering a pay-for-play system for grades.
- A humanities student complained that an instructor had submitted her work to an AI detector, which falsely indicated a high probability of AI use, leading to an accusation of misconduct, an appeal, and above all a breakdown in trust between instructor and student.
I have taken the space to include these examples—and I could have tripled that space—to show that not only did MOOCs and VR never create challenges like these, but also that these problems are nearly universal—this is not just happening to one department or discipline. And faculty are telling us that they lack the knowledge, resources, and guidance to deal with them.
Shiny object or not, AI is massively disruptive, and the burden is falling on faculty to deal with the disruption.
Yet in asking faculty members to take up the burden of addressing the implications of AI in research and teaching, and this is my second point, we are asking the majority of the faculty to play to a weakness, rather than their strength. No one would reasonably expect professors of history, sociology, nursing, accounting, music, or journalism to have academic AI expertise, yet when student evaluations, administrators, or politicians complain that faculty “are behind,” need to “get up to speed,” or are “out of touch,” faculty believe they are being held accountable for lacking expertise that no one should expect them to have in the first place.
Meanwhile, and third, once faculty are “up to speed” on AI, whatever that means, what they are supposed to do about all this AI disruption is neither well defined nor resourced. For example, many of us might agree that no student should graduate from a reputable institution ignorant of AI. But how such competencies are specified and measured, what curricular material is needed to help students achieve them, who is responsible for developing, teaching, and maintaining that curriculum, and what currently in the curriculum gets bumped to make room for it is not obvious. The amount of work to get all this done is daunting, yet resources to do this work are too often scarce. Universities with limited budgets can’t grant unlimited course releases, and few universities are willing to wait years for a volunteer subcommittee’s recommendations.
In short, most of the incentives that faculty experience to engage AI are negative: too much stick and not enough carrot. Below I’ll talk about some ways to pull back the stick and offer more carrots.
Misalignment of Silos
Beyond faculty beleaguerment, another reason that responding to AI-instigated change in higher education is difficult is that the structures by which we organize our work do not align well with the changes brought on by AI. Typically, universities organize themselves into research, teaching, and operations. To be sure, all three interact, but they nonetheless have separate leadership, org charts, budget lines, and so on.
One such cross-cutting item is AI infrastructure—a term whose meaning seems to expand by the day—which is not only about technology, but also training, licenses, onboarding, compliance, governance, and support. Everyone needs some version of it, it is expensive, compliance mistakes are costly, and new policy development and training will be required.
More subtly, but importantly, addressing AI infrastructure requires faculty, administrators, and IT services experts to work more closely together than before. For example, faculty researchers may have to get more in the weeds on technical requirements and compliance just to start their research, while CIOs and their staff face a sharp uptick in demand on their time and budget, often without new resources to match. These factors create interdependencies that reduce the autonomy of all parties involved.
Additionally, if you accept my argument that what we’re calling “AI” is not merely a category of technologies, but rather shorthand for transformations in expertise and knowledge-work brought on by emerging human-technology assemblages, then the separation of research and teaching into silos becomes problematic. A faculty member in English engages writing as a process of creation and self-expression, as a means of encountering alterity, in certain cases as the medium of the highest cognitive and aesthetic achievements of which humans are capable: what is AI doing to/for that? This question transcends any distinction between research and teaching. Analogous versions of the same question apply to disciplines as wide-ranging as art history and nutrition. Indeed, administrators who engage with faculty members on the implications of AI in their work discover that faculty move fluidly back and forth between research and teaching. What is at stake is their relationship with a specific form of knowledge-work.
Reimagining disciplined knowledge-work itself, not separately plotting out AI’s implications for research protocols and assessment strategies, is the greater opportunity for higher education to show leadership in this moment. If a university automatically uses the research-teaching-service trio to start thinking about AI transformation, it puts itself at a disadvantage from the outset.
Disciplines and/or/vs. Professions
“What is your major, and what do you want to do with it?” This common question in higher education assumes a close relationship between academic disciplines—reflected by college majors and minors—and professions, represented by graduates’ career paths.
But they are not the same. Disciplines are the gold standard of knowledge creation, where discovery, theory, method development, and novel contributions to accepted bodies of knowledge primarily happen. Professions are where the results of disciplinary work are applied to solve real-world problems. In certain professional schools—where students get MBAs or JDs, for example—disciplines and professions might be closely coupled. In others, say, Classics, they might be far apart: I recently read that a sizable subset of Classics graduates end up in financial services. That might have surprised me, except I was once a Medievalist with a Ph.D. in Comparative Literature who later became Vice Provost for AI and Chief AI Officer.
Anyway, faculty members constitute and represent disciplines, and many of them are relatively disconnected from the professions their students enter. This is exactly as it should be. While I expect a Classics professor to know their Latin, I obviously do not expect them to have expertise in financial services!
But here is the rub: transformative changes in expertise and in knowledge-work are unfolding in the professions, and insights about those changes do not always make their way back to the faculty adequately; more on this below.
Top-Down, Bottom-Up, and Side-to-Side: Shared Governance
This problem is also challenging because universities use a shared governance model. Not unlike a state government with distinct executive and legislative branches, universities have presidents/chancellors who are charged with executing the mission of the university, which they do through a top-down chain of command that includes much of the staff. Universities also have faculty senates and other legislative faculty bodies that are charged with driving academic decisions.
Many issues, and AI is one of them, fall across this boundary. University presidents are under extraordinary external pressure to get out in front of all of this. Boards of advisors, trustees, industry leaders, and government officials are demanding a response from higher education, and those demands often come through the president. But it is the faculty, as I’ve noted, that is charged with doing the intellectual and organizational labor to effect actual academic change. Top-down directives are often poorly received by the faculty, and yet faculty members are clamoring for guidance and support. If that guidance is top-down (i.e., if I, as AI Czar, try to dictate decisions from above), such actions will encounter political resistance. And they should: I have expertise in a tiny fraction of the disciplines represented at the university, so my chances of getting it right for the whole university are minuscule.
Further, shared governance is a system optimized for quality, not speed. The faculty is not charged with capitalizing on the latest industry trends; it is charged with advancing and maintaining the gold standard of knowledge. For that reason, the faculty does not view its role as primarily preparing students for their first jobs (again, think of the Classics graduate entering financial services), but rather as advancing knowledge while also preparing whole citizens for a democracy, which includes preparation for their first job, but also preparation for their tenth, preparation to contribute to their communities, to become leaders, and so on. Many of the competencies that constitute a whole citizen, for example, critical thinking, transcend fads and can be stable for decades or centuries.
Yet AI’s enmeshment in knowledge-work of all kinds is unfolding at dizzying speeds. A shared governance system that was designed to maintain the gold standard of knowledge production was not designed to capitalize on wild sociotechnical transformations. But there are ways to address this as well, so read on!
—
In short, many universities have found themselves in a situation of unclear thinking—about what the problem is, what a solution might be, how to get there, how disciplines relate to professions. The organizational structures and processes that work reasonably well most of the time are not providing the collective action and guidance to advance a clear agenda at a pace appropriate to the changes happening in classrooms and research communities. An AI discursive framing that is predominantly technological, seemingly aligned with corporatist and military interests, and which positions faculty as though they are behind and need to catch up, disempowers and/or alienates many faculty members. And with that backdrop, many of them nonetheless wind up trying to solve the entirety of these challenges all alone in their research and teaching. For many, all of this comes to feel intractable.
But it is not intractable, because an AI Czar can make a real difference.
How an AI Czar Can Move the Needle
I have argued that the essence of the AI challenge in higher education is that AI’s enmeshment in knowledge work is transforming expertise, work, and value creation throughout society, and that this societal transformation, in turn, implicates the university. Specifically, it counts on universities to lead the way in determining whether, how, when, and where AI becomes enmeshed in any given discipline’s primary objects of inquiry, its research methods, its results and research products, its ethics, its pedagogy, and its standards of rigor. If this description is accurate, it’s clear this isn’t just an issue with technology training, faculty attitudes, or a small curriculum gap.
It is, rather, a university-wide culture change problem. It involves everyone who contributes to the mission of higher education: faculty, students, staff, administrators, trustees, advisory boards, leaders of academic associations (MLA, IEEE, APA, etc.), industry leaders, donors, and legislators. All are needed to change the culture of the university, and all need to do so in a coordinated way.
The Job of the AI Czar in a Nutshell
The job of the AI Czar is to coordinate the culture change of the university. As I’ll unpack in more detail, that includes the following actions:
- Reframing the AI challenge in a way that motivates, rather than alienates, the faculty.
- Listening to all stakeholders with curiosity and humility, learning how to meet them where they are, ensuring that they are heard and respected.
- Translating what groups of stakeholders are experiencing, achieving, and needing to other groups of stakeholders in a way that the former can meaningfully inform the latter.
- Helping to define the strategic directions that the university wants to pursue. Every university has different core strengths and areas of excellence, so there is no one-size-fits-all approach here.
- Developing processes and mechanisms that distribute the work without overwhelming any group, while also avoiding top-down dictates.
- Evangelizing, coordinating, and managing the distributed work of culture change.
For its part, university leadership needs to position the AI Czar for success, which includes granting the authority to lead while also providing resources to support the actual work.
The Heart of It All: Supporting the Faculty
This situation is putting considerable pressure on faculty members. An AI Czar can take several actions to alleviate the pressure and support them as they navigate this change.
Tactic: Reframe AI to Motivate, Rather than Alienate, the Faculty
I have heard, in many institutions and coming from many disciplines, faculty members express the view that they are being asked to drop everything and deal with AI because a few trillionaires, whose personal financial empires depend on the success of AI, have persuaded the whole world that “AI is the future.” Accordingly, faculty response may be tempered by their reluctance to participate in an agenda they oppose, one which moreover requires them to learn additional technologies beyond their current responsibilities.
While I disagree that higher education is merely succumbing to the agendas of multinationals, I respect the underlying concern and recognize it as the starting point for my work. Here’s how I do it.
- Reframe the university’s response away from AI as a set of technologies that somehow define “the future,” and instead position the concrete enmeshments of AI in disciplinary forms of knowledge-work as the proper focus of faculty members’ research and teaching.
This reframing is effective for two reasons. One, many of the problems we are collectively facing are not technical in nature. A therapy bot replacing a trained social worker raises a ton of issues, and only a minority of them are computational. The impact of LLMs on writing is likewise not a computational problem, but rather one of what we believe writing is and does. Sharing millions of private electronic health records with researchers presents governance and compliance challenges that outweigh computational concerns.
These three examples all point to the same insight: the issues raised by the availability of AI are not questions that any amount of AI training will ever answer. They are questions that the disciplines themselves need to address.
And that leads to why this reframing is effective for the second reason: it recenters the problem on faculty strengths, rather than weaknesses. No amount of “catching up on AI” will help the field of social work determine the ways in which chatbots can safely contribute to therapy. In the age of AI, then, we need professors of social work to be experts of social work, not experts of AI. Of course, being an expert of social work requires ongoing learning about how AI is enmeshed in social work, but it does not mean that a professor of social work somehow also needs to become an amateur computer scientist on the side!
- Foreground the mission of higher education, which is to be in service of the public good, not to be in service of multinationals and their agendas. What AI means—the story of AI—is mostly unwritten. Universities must not abdicate their special role to help shape that.
In my experience, many faculty members are turned off by AI because for them “AI” is hype for a corporate agenda. But while it is certainly possible that multinationals will dominate AI agenda-setting, it doesn’t have to be that way. If faculty members opt out of AI because they object to Silicon Valley hype, they also abdicate their role in replacing hype with something more substantive and beneficial.
Mission-driven AI differs from that developed by multinational companies. Consider the University of Michigan’s mission statement, which includes “creating, communicating, preserving and applying knowledge, art, and academic values, and […] developing leaders and citizens who will challenge the present and enrich the future.” The multinationals’ AI agenda isn’t doing that sort of work, so if we in higher education don’t, no one will.
Tactic: Right-Size AI-Related Expectations of the Faculty
I start with a scenario. A faculty instructor (correctly) suspects that almost all of his students are using AI, but at different levels of competence; he also worries that students lack responsible AI and ethical use competencies. He believes that he himself ought to know more about AI—but what, exactly, he doesn’t know. Pedagogies, assignment types, and assessments that his disciplinary peers have used for generations are now broken, so he needs to find ways to prevent students from using AI to circumvent learning. He needs to figure out how to police against cheating. He also knows that he is expected to prepare students for an AI workforce, but he doesn’t know what that means, either.
I’m trying to show that the scope of the problem this faculty member is trying to solve is huge, ill-defined, and (in most cases) outside of his academic expertise. From assessing student AI knowledge to covering basic AI literacy to redesigning disciplinary pedagogy to detecting misconduct to upskilling himself to teaching profession-specific AI skills, this faculty member is doing it all—and none of that actually delivers the primary content of the course—physics, say. This situation is unsustainable and unnecessary.
The tactic to address this scenario is a bit more complex than the last, but let’s start with two interrelated goals.
- Limit the scope of what each individual faculty member is expected to contribute to this change.
- For the majority of faculty members, this scope should be closely aligned with their specific area of academic expertise.
To limit the scope, it helps to decompose and to define different aspects of the problem. In the scenario above, the faculty member felt responsible to teach basic AI literacy and also profession-specific AI competencies, which are two very different AI-related pedagogical goals.
To help break this down, I propose a three-layer perspective on AI learning outcomes. This perspective is based on a distinction that a colleague reminded me of from a different context: learning to read versus reading to learn. The former is about ABCs, sounding out letters and words, learning to access meaning from the written word. The latter is about using the ability to read to acquire new knowledge, e.g., of science or history. The three-layer perspective, then, is as follows:
- AI literacy is analogous to learning to read. It includes AI terminology and a general idea of how it works, some experience learning to use consumer-oriented tools (e.g., writing prompts), and an idea of responsible and ethical use of AI (understanding issues of bias and fairness, transparency, privacy, accountability, etc.).
- AI competency is analogous to reading to learn. I further subdivide this into a more general category of competency and a more specific one.
- General AI competency goes beyond exploring consumer tools and understanding at a high level some ideas and issues; it includes issues like the following: data literacy (understanding data inputs and outputs, preparing data); interrogating data sets, deriving insights, and telling a story about them; matching AI tools and methods to tasks and workflows; developing tools (e.g., agents) to automate workflows, etc. These are general because they are not profession-specific, which is the third layer.
- Profession-specific AI competency, for obvious reasons, is not possible to spell out here. But if one imagines what AI competencies might be appropriate for drug discovery versus legal practice, K-12 education, mathematics, or political philosophy, it becomes clear that the determination and prioritization of these competencies, and their manifestation in curricula, can only be accomplished by the faculty experts who constitute a unit or program.
The benefit of breaking AI literacy and competency out in this way is that it becomes easier to specify learning objectives, what the curricular content should entail, and who should be responsible for it.
So, for example, a university could take the approach that AI literacy would be covered in the first year within and as a part of existing Gen Ed requirements. If it did, then all faculty downstream of that would know that students had at least been taught AI literacy, and so they would not be primarily responsible for teaching it.
Likewise, if general AI competency were defined and covered by a given unit or program, such as a school of information, informatics, or data science, then the scope of what instructors of upper-level undergraduate and graduate courses would need to worry about would shrink further—to the discipline- or profession-specific AI competencies that in most cases would be coupled with their area of academic expertise.
This process will be further simplified if the faculty is able to arrive at a universally shared definition of what “AI literacy” is (and is not)—something analogous to the way that most universities have a “writing intensive” designation, whose requirements are explicitly defined and universally shared. It would also be easier to indicate that AI literacy is a prerequisite for a given course, because the term would have meaning.
Taking that a step further, if required courses in a given program could be designated as “AI ready” in a way analogous to the “writing intensive” designation, it becomes trivial for department chairs and program directors to look over the list of required courses and assess how AI is integrated into the curriculum as a whole, and proceed accordingly.
Tactic: Pursue De Facto and De Jure Change in Parallel
So far, I’ve mostly talked about defining how the needle is to be moved, rather than the mechanisms by which it will actually move.
To begin, faculty members generally do want the situation to improve—they want coordinated direction, guidelines, frameworks, curricular resources, and so on. While they do not want top-down solutions imposed on them, they also don’t want to start from scratch. Additionally, everyone wants the result to be legitimate—something that depends on faculty participation and buy-in—but everyone also wants help to come fast, which is not easy for a deliberative, interdisciplinary, multi-tasking faculty body to achieve.
One solution looks like this.
- Given a specific problem, say, formulating an AI literacy framework, the AI Czar can research the literature, what other universities have done, etc.; then synthesize the best ideas from the research into a sketch-like proposal. Ideally, this would be vetted with experts from faculty development, education scholars, AI advisory board members, etc. before the next step.
- Offer this proposal to an appropriate faculty subcommittee, and have them develop the sketch into a more compelling proposal.
- At this point, advance that proposal in the following parallel tracks:
- The de facto track: Put it in the hands of deans and chairs, and, having verified their buy-in, ask them to push it into classes and programs. This ensures forward movement. Any learnings that arise from it can also inform the other track.
- The de jure track: Put it in the hands of the faculty body, and follow their formal procedures to get some version of it eventually adopted formally.
The de facto track socializes, prototypes, and gets ideas moving into courses and curricula, building up institutional knowledge and competence quickly. The de jure track ensures that whatever the eventual product is, it has legitimacy.
Leading the Slow Work of Culture Change
The set of tactics I have offered should help break the problem down into more manageable pieces and then provide pathways to make significant advances to the curriculum and overall AI strategy. But that is just a part of the broader process of culture change. Here are some additional tactics AI Czars can use to effect meaningful change.
Tactic: Circulate the Internal and External Expertise
An AI Czar is also positioned to improve the circulation of insights that universities already have at their disposal.
For example, I noted in the subsection on disciplines vs. professions that faculty were more aligned with disciplines than professions. Yet if the faculty lack sufficient mechanisms to stay on top of developments in professions adjacent to their disciplines, their ability to adapt to those changes is compromised—I see that happening with AI.
Universities in many cases already have access to that knowledge. One source is program alumni. Even a relatively small number of alumni can share real-world cases that are new and that might influence how a faculty member designs a research study or chooses a reading for students. Yet alumni often engage with chairs and directors, professional staff, and students, but not always with faculty. Advisory boards—which typically interact with deans but often not the faculty—are a source not just of information concerning developments in the professions, but also of strategic insight about them. Also, many units have mid-career fundraising officers, who get to know program alumni, what they do, their values, and their hopes for the program; they use this knowledge to solicit donations, but their knowledge of alumni activity often doesn’t loop back to program faculty. All of these are under-utilized pathways that an AI Czar’s team can use to provide faculty members with insights coming from the professions, without requiring them to go out and find those insights themselves.
Other groups also have special insights that, unless amplified by an AI Czar, might fail to circulate. Centers dedicated to faculty development spend their days creating and deploying faculty training while listening to faculty requests and feedback; they are neck-deep in offering AI-related modules and have their fingers on the faculty’s pulse. They are an invaluable source of insights, anecdotes, and creative tactics. Professors of education are another group whose academic area of study has extraordinary relevance to the strategic initiatives of senior administrators, including the AI Czar, but it is all too easy for their insights to be disseminated throughout their research communities while being ignored locally.
The AI Czar and their team can do much to circulate this information themselves, but they can also distribute some of the workload back out. For example, they can organize internal events, including workshops and even conferences, that provide opportunities for diverse stakeholders to engage each other, share best practices, identify challenges and goals, and so forth. Another approach is to conduct and disseminate internal surveys of AI use and attitudes of students, faculty, and staff.
Tactic: Encourage Metacognitive Dialogue
How can literature professors today challenge students to encounter difficult texts, relying only on their own intellectual resources? How can cheating be detected—or even defined? How can instructors encourage students to engage with them, rather than chatbots, when they have questions? These questions suggest to me that instructors and students no longer tacitly share understandings of classic pedagogies.
For example, I was trained in literary studies. We would meet in seminar-style classes concerning a sizable selection of reading. We were expected to have done that reading and were also expected to bring to class insights and textual evidence to support them. It was OK to have read some literary criticism of the text, but relying on Cliff’s Notes was not acceptable. If during a seminar discussion, we disagreed with a classmate, certain ways to express that disagreement were acceptable, others not. Everyone understood the norms and expectations, and everyone understood their pedagogical purpose.
Today, this shared understanding of classroom pedagogy seems to be breaking down.
The rationales, norms, learning objectives, and standards of excellence that constitute our pedagogies may need a more explicit articulation in our times than many of us are used to. But this is not entirely novel: I recently spoke to a math professor who did not allow the use of calculators for certain assignments, and he had planned out a conversation to be had each semester with the students so that they understood why. He was engaging students in a metacognitive discussion about their learning, to help them understand and accept the constraints imposed on them.
Something analogous probably needs to happen today, where instructors of classes, or researchers leading labs, need to engage in a metacognitive dialogue about their goals, methods, constraints, and the reasons for them. The place of AI would be a part of that conversation. Encouraging metacognitive dialogue applies not only to faculty-student conversations, but faculty-faculty and faculty-staff (e.g., advisers) conversations. In addition to encouraging such conversations about AI, we might also provide simple frameworks to support them. Doing so might help normalize them, so that they come across as practical and comprehensible, rather than confusing or even capricious.
Tactic: Embrace AI Curmudgeons, But Demand Rigor
Anti-AI faculty are not a problem to be solved. Intellectual disagreement is inevitable and healthy. Further, anti-AI faculty members tend to surface a range of issues that deserve attention. Indeed, entire disciplines might show more AI skepticism than others, with their skepticism deriving from theoretical vocabularies and methodologies that are designed to surface unintended consequences or undesirable implications of given technological advancements. Some of those issues might then be ameliorated by others in the community; others may help society to reevaluate or reimagine AI use accordingly. All contribute to the university’s mission.
Less helpful are skeptics whose views are grounded not in academic thought, but rather in the mundane opinions they hold as everyday citizens. Everyone has the right to their own opinions; however, when these views lead to disengagement, higher education neglects its responsibility to shape AI’s narratives and benefit society.
Let me make this even more practical. Faculty members at R1 universities are already expected to stay on top of developments in their academic fields. I suggest that, as part of that regular commitment, it is reasonable to expect faculty members to maintain an awareness of the AI discourse within their field, whether it has to do with research (e.g., AI and discovery), teaching (e.g., reimagining assignments in an AI era), or service (e.g., managing the explosion of AI submissions to conferences). Without having to agree with the prevailing AI discourse in their field, faculty members should minimally have their own view on it. Ideally, this view is represented in their research and teaching, as appropriate. Chairs, deans, and leaders can set this expectation, with the AI Czar’s role being to highlight it.
Tactics for Students, Staff, and Development
This article so far has focused primarily on faculty, and that is because I believe that in spring 2026, faculty most urgently need engagement and intervention. But I do not at all mean to play down the significance of students and staff—both of whom are part of this culture change. Future versions of this article may feature more content about the AI Czar’s role with regard to them. For the time being, here are a couple of teasers:
Student insights: Students feel a sense of urgency concerning AI, and many are excited about it and want to help the university and their peers.
Tactic: Create peer teaching and learning opportunities that leverage student expertise to widen the resources available to all students.
Staff insights: Staff are more motivated than faculty members to learn and use AI for professional development reasons; staff can also be directed to use AI by their supervisors; AI vendors are willing to provide training inexpensively and even for free, and that training aligns better with staff workflows than faculty ones.
Tactic: Build a critical mass of AI competence throughout the university by focusing on staff training, access to AI technologies, and updated expectations.
One area of staff deserves additional attention: fundraisers.
Development insights: Industry leaders and donors are extremely eager for universities to turn the corner on AI, particularly concerning workforce preparation. Many have financial means and/or access to specialized technologies or services and are willing to give if doing so will advance the university’s response to AI.
Tactic: Leverage this moment of high alignment between donor goals and university initiatives to fundraise on AI initiatives: technological infrastructure, cluster hires, academic programs, research centers, or even capital infrastructure.
Conclusion
I believe that AI—understood not in a merely technical sense, but rather as a societal phenomenon that is changing the nature of expertise and knowledge-work—is more than mere hype. Certain AI companies may well be overvalued, and certain AI technologies might come to disappoint. But the enmeshment of AI in the objects, methods, outcomes, and ethics of inquiry is not going away.
I believe that universities need to engage these changes seriously. That means taking ownership of AI agendas and orienting them to the public good, lest less benevolent agendas continue to dominate society’s imaginations and discourses. It also means ensuring that we truly are preparing students—our way, according to our values—for a future where AI is enmeshed in knowledge work. If we fail to act, industry will step in with its own training and certifications, damaging higher education’s value proposition at a time when the public is already questioning it.
Having said all of that, I hope I have also made clear that laying this at the feet of the faculty, without understanding the challenges they are facing, giving them practical guidance, or meaningfully resourcing them, is a failed approach. I hope to have made a strong case that this failure is not the faculty’s; it is the whole institution’s.
The AI Czar is an administrative role that, properly empowered and resourced, can set a university-wide practical agenda, redirect counterproductive tactics and discourses into alternatives that are grounded in university stakeholders’ strengths and values, translate insights across those stakeholders, and, above all, effect meaningful change.
Please Respond!
This article is a draft. I’d love to hear from my readers what resonated, what I missed, what I got wrong, what needs clarification, expansion, or contraction. Please share this, comment below, and/or email me!