Generative AI, Knowledge Practices, and Higher Education

In the last year or two, as a researcher, academic administrator, and teacher, I have been confronted almost daily with issues connected to generative AI (e.g., ChatGPT).

As it connects to language and to our hopes and fears for the future, this topic has reactivated intellectual interests of mine that date back to my graduate school days. Those interests led to my first academic book, on the roles of language in speculative thinking, a theme that has run throughout my research on creativity, critical and speculative design, intimacy, humanistic HCI, feminist HCI, posthuman design, and research through design.

Lately, my research has focused on the limits of generative AI, trying to understand and account for why it can so easily produce some sentences but seemingly not others. The explanation tells us something not only about AI as a technology but also about ourselves. At a minimum, it tells us that we can write sentences that it cannot. And that matters for higher education.

AI Can’t Replace Human Creativity

In Robot-Proof: Higher Education in the Age of Artificial Intelligence, Northeastern University President Joseph Aoun asks how higher education can prepare high-skilled professionals for a world where AI is transforming (if not outright threatening to destroy) professions.

His fundamental answer is that AI cannot replace human creativity, and therefore a role of higher education is to cultivate creative professionals:

college should shape students into professionals but also creators. Creation will be at the base of economic activity and also much of what human beings do in the future. […] Great undertakings like curing disease, healing the environment, and ending poverty will demand all the human talent that the world can muster. Machines will help explore the universe, but human beings will face the consequences of discovery. Human beings will still read books penned by human authors and be moved by songs and artworks born of the human imagination.

Joseph Aoun, Robot-Proof, xvi

To develop individuals capable of such creativity, Aoun argues that students should have literacy in each of the following: information technology, data science, and the human. And these literacies, in turn, should leverage human capacities such as critical thinking, systems thinking, entrepreneurship, and cultural agility.

At a general level of description, I more or less agree with Aoun. But the book is relatively short, and it leaves many of the details of what this might mean for its readers to work out.

And to this reader, part of the answer to what AI can’t replace about the human is tied to that distinction I raised earlier: the sentences AI can produce versus the sentences it cannot. I have spent the last year trying to understand which are which (a sort of grammatical Turing Test).

Sentences That Only Humans Can Write (And Why)

I’ve noticed that when texts (or language) have explicit rhetorical features, the model can often reproduce those features competently. For example, prompt ChatGPT with “Once upon a time,” and it will typically produce simple children’s narratives filled with such things as princesses, knights, and castles. Likewise, start a conversational dialogue about an everyday topic, and it can continue to produce more dialogue about it.

Large language model AI systems work by learning to predict which words follow which words, and so for texts with highly regular rhetorical patterns, AI systems can often generate them effectively, and they will almost certainly continue to improve in that area. Importantly, though, they don’t form robust semantic representations of the words internally; in short, they don’t understand what is being said. They don’t participate in any communicative act.
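The underlying idea of predicting which words follow which words can be illustrated with a toy bigram model. This is a drastic simplification I am introducing for illustration only: the tiny corpus and the `generate` function below are invented, and real LLMs use neural networks trained on vast corpora, but the statistical principle is analogous.

```python
from collections import defaultdict, Counter

# Toy corpus of highly regular "Once upon a time" text (invented
# for illustration; real models train on billions of words).
corpus = (
    "once upon a time there was a princess . "
    "once upon a time there was a knight . "
    "the princess lived in a castle . "
    "the knight rode to the castle ."
).split()

# Count, for each word, how often each possible successor follows it.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(start, length=8):
    """Greedily emit the most frequent successor at each step."""
    out = [start]
    for _ in range(length):
        if out[-1] not in successors:
            break
        out.append(successors[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(generate("once"))
```

Notice that the model happily regurgitates the formulaic opening, because that pattern dominates the statistics; nothing in it represents what a princess or a castle is, which is the sense in which such systems reproduce rhetorical regularity without understanding.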

Now, when I look at serious fiction, I can find sentences unlike any I have ever seen AI produce, despite my having read thousands of AI-generated sentences.

Here is an example of just such a sentence:

In later years, holding forth to an interviewer or to an audience of aging fans at a comic book convention, Sam Clay liked to declare, apropos of his and Joe Kavalier’s greatest creation, that back when he was a boy, sealed and hog-tied inside the airtight vessel known as Brooklyn, New York, he had been haunted by dreams of Harry Houdini.

Michael Chabon, The Amazing Adventures of Kavalier & Clay

At 62 words, with seven commas delineating numerous hierarchically and temporally arranged phrases and clauses, the sentence features more grammatical complexity than anything I’ve seen AI produce.

But it’s not just its grammatical complexity: it’s also a really good sentence. That is, all that complexity is there for good reasons. It is the opening sentence of a novel, and it introduces character, place, tone, and plot with an almost poetic efficiency. It also includes images, such as being “hog-tied inside the airtight vessel known as Brooklyn,” that are visually powerful, witty, and illuminating of the protagonist’s lived experience. Additionally, its composition teases the reader with clever delays before the reward of its provocative finish: “haunted by dreams of Harry Houdini,” where the final word might just be the most important in the sentence.

I believe that Chabon is able to write such a sentence because he is an expert in literary practice. Research on literary writing and criticism as a practice (here are two excellent resources) emphasizes the contexts (authors, audiences, publishers, critics, institutions) in which literature or fiction becomes possible and can flourish. It stresses the ways that the practice is socially regulated so that it is capable of producing high-achieving results.

As one example of high achievement, literature is replete with what Harold Bloom calls “literary memory,” which refers to the ability of later authors to rework earlier authors in insightful and surprising new ways, creating complexly allusive, layered meanings that their readers are expected to pick up on, to contemplate, and to engage with, using a special mode of appreciative-speculative mental interplay characteristic of aesthetic attention. Chabon’s sentence, for example, deftly mingles comic books, familiar representations of post-war New York City, the coming-of-age story, and the mythology of Harry Houdini.

Chabon’s first sentence thus accomplishes introducing, illustrating, teasing, naming, characterizing, image-making, alluding, etc., with both economy and emotional power. And not only does he understand how to use language to create each of these effects; he also knows that his readers can use language to experience them.

In other words, he writes this sentence not because he has internalized millions of explicit features of writing, but because he is in and of the world where literary fiction is a thing. Within that world, he is a certain type of expert within an advanced, socially governed practice, and, obviously, he is also a sharp observer of people. He is skillfully participating in an intentional act of social communication.

Humans and Knowledge Practices

Contemporary professions such as data science, UX design, organizational informatics, cybersecurity, and so on are also socially governed knowledge practices. Obviously, the literary specifics that I have just laid out don’t directly apply to a security analyst or IT policy expert, but each profession has its equivalents. Rather than the creative contribution being a finely crafted literary sentence, it might instead be an insightful assessment of a technical threat, the development of a model that usefully predicts emergent phenomena, or the design of a new product that no one knew they desired until they saw it and had to have it.

My point is that humans are capable of participating in and advancing the practices by which inventions, products, innovations, services, and so on are made, in part because humans understand humans, can be trained into practices, and can understand and produce texts and other artifacts connected with those practices. They participate in and intentionally extend and improve them. Insofar as they do, humans can achieve outcomes that AI, which cannot understand or participate in those practices, cannot produce.

None of this is meant to denigrate the possibilities of AI! Generative AI is already used extensively and I expect its exponential growth in the coming years, as it becomes embedded in everyday products, including word processors, image editors, and all forms of enterprise software. We are only beginning to discover what AI can do when leveraged creatively.

The question, then, is how to join up the complementary capabilities of humans and generative AI in future professions, and more to the point: how can higher education prepare future professionals to do that work?

AI-Related Professional Competencies

Just to get something on the table, here are some human competencies that I have synthesized from conversations with researchers and industry leaders in IT:

  • Designing new professional roles out of human-AI collaborations (akin to “cobots” in manufacturing), including breaking down and distributing tasks leveraging intelligent systems
  • Narrating, justifying, and documenting professional processes and outcomes to diverse stakeholders, while appropriately accounting for AI’s contributions to them
  • Prompt engineering in professionally relevant areas
  • Assessing truthfulness and novelty in AI-generated content
  • Performing risk assessments (e.g., for liability or regulatory compliance) of using AI in professional decision making, content creation, etc.
  • Deciding whether and how to use generative AI in a given task (e.g., composing a routine email vs. making a mission-critical business decision)
  • Formatively and summatively evaluating generative AI’s impacts on processes and products
  • Critically assessing AI’s broader societal impacts, including on social justice and the environment

Again, this list is merely illustrative, not exhaustive. But what I’d point out is that each of these is analogous to my grammatical Turing Test: sentences that only humans can write. That is, there are business cases that only humans can present, risk assessments that only humans can conduct, prompts that only humans can engineer, desires that only humans can fulfill, critical judgments that only humans can make.

As with Chabon’s sentence, other domains’ professional achievements are not reproduced by machine-learning millions of explicit textual or visual formations; they are, rather, the outcome of humans fully engaged in, and intentionally contributing to, an assemblage of one or more knowledge disciplines underpinning a professionalized practice, leveraging all that they are as humans. I expect generative AI increasingly to become integrated into these disciplinary knowledge practices, but in a way that augments, rather than supersedes, human creative achievement.

Much (probably too much) of the public conversation about generative AI and higher education has reacted to how AI can be used to cheat, or to how it disrupts this or that kind of typical assignment. There is a place for that conversation.

Even so, educators must not allow themselves to be sealed and hog-tied in the airtight vessel known as “AI is bad”; we should instead be enchanted by dreams of AI helping us to advance knowledge disciplines.

3 Comments

  1. Jean

    AI is good at production and will get better.
    What it lacks is judgement for editing and filtering. It will produce some garbage, some beauty, some garbage, because it is synthetic and cannot distinguish.
    It is not just non-synthetic creativity, but creative judgement, that humans bring.

  2. Kevin Makice

    I’ve been inundated with AI this week while attending the #PRIDE Summit by Lesbians Who Tech, as it is very much THE topic of concern in the tech and creative communities, it seems. The gestalt of all of those sessions echoes your sentiments about needing to understand the human role in collaboration with AI.

    What I have become focused on is cultivating a self-awareness of the nature of my interactions with AI—ChatGPT, specifically. In the same way Twitter’s 140-character constraint improved the quality of what I could fit into that length, interacting with ChatGPT is forcing me to think about clarity of my requests and explanations. It is also a great source of frustration, as the longer a thread goes the more likely ChatGPT is to start forgetting the responses it just made. Some of my moments of greatest frustration have come when trying to correct ChatGPT powered by an angry determination that it should get this thing right.

    I suspect that, with ample training data for Chabon, those sentences would grow more complex. But it would still just be a reflection of that data. The big limitation, as I’m reminded every time I ask ChatGPT about contemporary tech, is that it only knows information up to September 2021. Once it is able to incorporate new data, or adjust its own model based on feedback, we’ll be entering new territory again. Also a reminder that clicking that thumbs indicator on responses is important, in the interim.

    My favorite AI thingy is something that was created as a cautionary tale but is also quite thought provoking on occasion and definitely has the feel of sitting quietly at an important table:

    https://infiniteconversation.com/

  3. Tusharika Mishra

    This is such an intriguing topic. The idea of the importance of higher education in shaping creative professionals and cultivating human creativity aligns with the argument that AI cannot replace certain aspects of human thought and expression.
    Great read.

