Many Faculty Members Disagree with Administration about AI: Both Are Right

Universities are navigating an important conflict concerning the role of AI in the academic mission.

On one side, around 90% of US professionals agree that universities should be training students on AI, and about the same share of students agree. AI is eliminating entry-level positions, which is driving unusually high unemployment among recent graduates, so preparing students to enter an AI-ready workforce is urgent. Donors, parents, industry leaders, and state politicians from both major parties agree, and all are pressuring higher education to respond by incorporating AI education into the curriculum. University leaders are clearly hearing them.

On the other side, many faculty members are wary of AI. More than half of respondents to a recent national survey said that AI will negatively affect students over the next five years, while only 20% felt it would benefit them. According to the same survey, 90% of faculty think that AI will diminish students’ critical thinking skills, and 95% think it will lead to student overreliance on AI. A different study appears to validate those fears, finding that students who used AI had worse academic outcomes and lower brain activity than students who did not. It is perhaps not surprising that 82% of faculty report AI resistance in their departments.

Both positions have merit, but as long as this conflict remains unresolved, students pay a steep cost, because this is the message they receive: while I’m in college, I’d better learn AI if I want to have any kind of career, but if I use AI in my classes, I’m a cheater and might even find myself in a career-threatening academic misconduct hearing.

We need to find ways to move past this conflict. I believe the most promising strategy is to reframe the AI-relevant expectations placed on faculty in a way that aligns with both their academic expertise and their values.

Why Many University Faculty Members Resist AI

What follows is a synthesis of the issues raised in two years of interactions with faculty at several universities, the insights I have gained from those conversations, and the strategies I have developed in response. Let me dispel in advance any notion that faculty resistance is rooted in laziness or a lack of concern for student wellbeing or the good of society. I see no way forward if campus leaders fail to appreciate the legitimacy of faculty concerns, though appreciating them does not entail agreeing with how they presently manifest. I present those concerns in two groups: AI ambivalence and AI skepticism.

AI Ambivalence

By “AI ambivalence,” I refer to faculty members who see AI as outside the purview of their work.

One common expression of AI ambivalence is this: I am not a technologist, and AI is moving extremely fast. I can’t keep up with it, and there seems little point in trying to learn it in the first place. To understand where this is coming from, one has to recognize that faculty members derive their professional legitimacy from the high standards their fields impose on the exercise of expertise. Every one of them ran a gauntlet–classes; Ph.D. qualifying and comprehensive exams; dissertation proposal defense; dissertation defense–to earn their Ph.D. No faculty member wants to, or should be expected to, research or teach outside their qualifications. So given that professors of accounting, history, public policy, or biology should not be expected to be experts in clustering algorithms, deep learning, data governance, generative adversarial networks, word vectors, and the like, the impulse some of them feel to defer AI research and teaching to more computational colleagues is a manifestation of their upholding the standards of rigor that define academic work.

Another objection: AI is not only unhelpful to, but downright debilitating to, certain kinds of knowledge work. For generations, undergraduate literature seminars have gathered students to engage with a selection of difficult texts curated around a theme, era, or author. These texts demand that readers bring all of their reading strategies, experiences, and imagination to bear, and the texts gradually yield their meaning. Students then express their thinking in essays whose arguments are shared with and critiqued by their peers. This process has been used for decades to train students to think critically, to be patient and persistent with difficult material, to engage conflicting perspectives, to discern subtle distinctions, and to craft arguments–all capabilities in high demand in industry. But now AI threatens to trivialize both the challenge of the textual encounter and the work of crafting an argument. It is as if an athlete brought a forklift into the gym and reported the weight the machine lifted as their own. On this view, AI does not support the purpose of the intellectual work; it defeats it.

Before responding to these two concerns, I want to acknowledge that both are rooted in academically serious commitments and deserve a serious response from administrators. Here is how I have learned to respond to them.

Both of these arguments share the view that AI is not, on balance, beneficial to the work of the discipline. The first objection treats AI as external to the expertise the practice requires; the second claims that AI is destructive of the practice itself.

Yet neither view accounts for the ways that AI is already changing disciplinary practice, and–this is key–doing so in ways that only experts in that practice are qualified to perceive and to shape.

Let me take literary studies as an example. One crucial topic of literary studies is textuality, which concerns the relations among a text’s form and content, its means of creation and dissemination, and its societal meanings and consequences. From scrolls to biblical and literary canons, and from the printing press to the hypertext novel, textuality is part of the theoretical vocabulary used to analyze reading and writing. But is AI not changing textuality? Is an AI-generated synthesis of 250 essays on Hamlet itself a “text”? Is a human-AI co-written assemblage of lines and stanzas delicately expressing the nuances of a human experience a “poem”? What is the nature of “authorship” for such texts? These are literary questions, not computer science questions.

To restate more generally: individual academics represent their disciplines. They stay current with them, they uphold their disciplines’ standards and train others in them, and they advance those standards as members of their scholarly communities. The questions each academic should answer are: How is AI enmeshed with the objects of inquiry, the methods, the tools, and the discursive outputs of my own discipline? How are peers in my field engaging with AI? How should I respond to them? This set of questions narrows the scope of the problem: no longer is the individual academic responsible for “keeping up with AI”; rather, she should engage her own discipline as an expert in it, in a world where AI is enmeshed with that discipline and where that enmeshment can still be shaped for the better.

In other words, AI introduces disciplinary issues that only the disciplines themselves can address. Faculty members in a law school, for example, should be contributing to the conversation about the use of, ethics of, and future agenda for AI in jurisprudence. (Other disciplines, of course, can also contribute to an AI-in-jurisprudence agenda from their own perspectives.) A corollary is that faculty might need to revisit their pedagogies–including their intended learning outcomes and assessments–to reflect the reality that their students will become practitioners in a legal world with AI.

AI Skepticism

AI skeptics make arguments that take a more confrontational stance toward AI. Here are a few common examples.

AI is not just a technology; it is also an agenda–an agenda tied to corporatist interests (or colonial or military interests, etc.) that I do not want to validate or support. Many disciplines share a skepticism toward technology, because the advancement and distribution of new technologies are often caught up in societal phenomena that deserve scrutiny. In an earlier generation, for example, skeptics perceived and named the “digital divide,” the recognition that access to digital technologies and their benefits is not equally distributed, and agitated for society to address it. Their efforts have had lasting benefits: even today, policy initiatives such as providing internet infrastructure to rural Americans reflect their framing and goals. These skeptical disciplines, in other words, focus our attention on the undesirable and unadvertised threats that emerging technologies pose to our own values. In the long term, discipline-appropriate skeptical reasoning contributes to better technological implementations, whether by improving algorithms once problems are identified, by formulating policies that limit their damage, or simply by informing the public.

Another objection is: Pro-technology thought leaders told us that [VR, blockchain, MOOCs, etc.] was going to disrupt education, and they urged universities to prioritize major investments in infrastructure, training, research agendas, and curricula–all for what turned out to be a fad. Here they go again, this time with AI. What lies beneath this is another reasonable impulse: universities are guardians of the gold standard of knowledge production. The pursuit of knowledge at the highest level of confidence–e.g., experimental studies in medicine–is slow for good reasons. Further, producing knowledge at this standard is expensive, involving faculty hires, buildings, technological infrastructure, professional staff support, and strategic investments in research, programs, and curriculum development. If universities chase fads, their use of public resources will become diffuse and incoherent. These skeptics help ensure that we move wisely.

Another is: We are in a climate crisis, AI’s environmental impact is serious, and these costs outweigh any benefits. I agree that a phenomenon as environmentally impactful as AI deserves a cost-benefit analysis. Yet the interests of AI companies vying for market positioning and investment may not align with the long-term public interest: in the rush to encourage adoption, they cannot afford to disincentivize use. Relying only on the tools they provide, it is difficult for most people to ascertain the environmental impact of their own–or anyone’s–use. Once again, these skeptics are providing a service to society, because they foreground matters of concern that other kinds of organizations, such as for-profit multinationals, are structurally unlikely or unable to foreground themselves.

As I did in the AI ambivalence subsection, I’d like to generalize across these expressions of skepticism. AI skepticism in higher education is best seen as slowing the narrow, short-term adoption of AI in the university in exchange for strengthening the university’s long-term capacity to accelerate AI’s benevolent potential in society. Skepticism is often grounded not only in scholars’ value systems as citizens, but also in the theories and methods of their fields, which have been developed across generations and disciplines specifically to support the university’s mission to serve the public good. We should welcome AI skepticism!

That said, I have found one tactic helpful for moving conversations with AI skeptics forward: focusing the conversation on the fact that AI is not a hypothetical concept from a Philip K. Dick novel, nor a technology that might be on the horizon; it is here, now. (Chances are that an AI recommender system played a part in your reading this.) No amount of criticizing AI, however justified, will put it back in the bag, and returning to blue books will not help you or your students make the rollout of AI more beneficial to the public.

I also point out that the most likely result of faculty members abstaining from engaging with AI as disciplinary experts is that they simply abdicate their ability to contribute to benevolent AI knowledge agendas. While skepticism directed at industry hype is reasonable (and needed), we must reckon with the fact that AI is already here and affecting all of us–including within our disciplines and their adjacent professions. Likewise, while it is easy to juxtapose AI slop and AI-caused environmental harm to argue that AI fails a cost-benefit analysis, it is harder to make the same argument if one replaces AI slop with AI that detects breast cancer earlier than humans can.

In short, one does not have to like AI to help shape a more benevolent societal agenda for it.

How To Move Forward

The nation’s more than 2,000 accredited colleges and universities have the collective capacity to envision what a benevolent AI agenda looks like, to pursue it in earnest, and to train a generation of citizens informed by its values. But this work will need to happen discipline by discipline. Each faculty member has, I believe, a duty to help their discipline serve the public in the world we live in–and that is a world where AI is changing how work gets done, how knowledge gets made, and how social and economic value are created.

Returning to the dilemma of our students, here is what one such agenda might look like.

In a recent conversation, a colleague reminded me of the distinction between learning to read and reading to learn. Learning to read is about learning the letters, how to sound them out, how to access the meaning of written words and passages, and so on. Reading to learn comes after that: it refers to using the ability to read to access meaning and information, e.g., in a history or science book.

If we think about AI competencies by analogy, we get the following progression.

  1. Learning AI: We learn how to become “users” of AI: the basics of what it is, how it works, and how to use some commercial AI products. We learn about some of the risks AI poses to ourselves, our organizations, and our society: bias, privacy, hallucinations, etc.
  2. Learning with AI: Our competent use of AI supports us in our work or knowledge production. A student might create an AI learning assistant or use AI to generate practice exams (a minimal sketch of the latter follows this list). A scientist might interrogate a massive data set to derive insights and craft a story, with visualizations, from them. A finance officer might reconcile accounting records using AI.
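
To make the second level concrete, here is a minimal sketch of the practice-exam example, assuming the OpenAI Python SDK and an API key in the environment; the file name, model name, and prompt are illustrative placeholders, not a prescription.

```python
# Illustrative sketch only: a student turns their own lecture notes into
# practice exam questions. Assumes the OpenAI Python SDK is installed and
# that OPENAI_API_KEY is set; file name, model, and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("lecture_notes.txt", encoding="utf-8") as f:
    notes = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do
    messages=[
        {"role": "system",
         "content": "You write practice exam questions with an answer key."},
        {"role": "user",
         "content": "Write five practice exam questions, with answers, "
                    "based on these notes:\n\n" + notes},
    ],
)

print(response.choices[0].message.content)
```

The point is not the particular tool but the posture: the student directs AI toward their own learning goals rather than outsourcing the learning itself.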

There are still higher levels. For example, a bioengineer using AI invents a new material, improving a type of prosthesis; a computer scientist devises a new method that trains LLMs using fewer resources. I don’t think society expects undergraduate students to work at that level.

But I do think our society expects us to teach students AI and to teach them to use AI in their knowledge work.

To teach AI, I believe universities need to come to an internal agreement about what “AI literacy” means. Just as a blueprint helps diverse stakeholders such as builders, plumbers, realtors, home buyers, city inspectors, and interior designers all do their jobs and communicate with one another, so an AI literacy framework would enable faculty from all disciplines to decide whether and how they contribute to AI literacy, and to communicate their intended contributions to other units. It would support program heads in ascertaining whether their programs have sufficient AI coverage and an appropriate distribution of it. It would help administrators like me show parents, policymakers, donors, and partners that every student is getting what they need. And it would help ensure that students receive consistent messaging on crucial topics, such as AI ethics. It can be done.

Getting to the second level, learning with AI, will require a more decentralized effort, because many aspects of AI do not generalize across fields. For example, AI in health care is subject to HIPAA and commonly uses data derived from electronic health records; AI in finance must comply not with HIPAA but with finance’s own regulatory frameworks, and it works not with EHRs but with data such as credit reports. English, communications, and rhetoric units typically teach writing for the whole university: what does effective writing practice look like in the era of generative AI and LLMs? Writing faculty are best positioned to answer such questions. The same holds for AI’s present and future enmeshment in the natural sciences, journalism, foreign languages, computer science–the list is as long as our universities’ rosters of programs and departments.

Summary

To defuse the conflict introduced at the beginning of this article, administrators and faculty members alike would do well to recognize that the job of individual faculty members is not to “learn AI.” It is, rather, to grow their expertise starting from and grounded in their own disciplines, as they have committed to doing since graduate school–but to do so in a way that recognizes AI’s enmeshment in those disciplines, both as it is in the present and as it could or should be in the future.

If a critical mass of faculty members act on a principled take on AI’s enmeshment in their disciplines, be it some variation of AI enthusiasm, AI ambivalence, or AI skepticism, what is today a paralyzing conflict could tomorrow become a more impactful advancement of the public good in a world with AI. In this way, the university will both respond to contemporary societal needs, such as preparing a future workforce for a world with AI, and respect and fully leverage the faculty’s deep-seated and enabling commitments to academic rigor and ethics.
