‘We need education more than ever’: Matt Dinan on the implications of AI for education
- Brianna Lyttle

With some users increasingly leaning on AI for help and others frantically running in the opposite direction, the question remains: what does the future of higher education hold for St. Thomas University?
This is the question that Matt Dinan, great books professor at STU, attempted to answer on Jan. 14. As part of the STU Public Lecture Series, Dinan presented his research on artificial intelligence and the future of liberal arts education.
Dinan structured his lecture around three different ways that AI, specifically large language models (LLMs), intersects with education, arguing that the answer is not to increase AI use, but to prioritize smaller liberal arts institutions.
He began by explaining that the more users interact with the technology, the more it is trained to associate tokens of data with one another to optimize the user experience.
“From a human point of view, fundamentally, it’s not thinking. It's guessing. It is a very informed probabilistic hypothesis of what will be a sentence or a series of sentences that will be coherent and meaningful to human users without human interface … they are not concerned with the truth,” he said.
The three intersections of AI with education that Dinan wished to explore were its conceptualization as a means of transforming society as a whole, as a means of destroying higher education, and as a means of revolutionizing it.
Amid talk of artificial general intelligence (AGI), OpenAI CEO Sam Altman and others have predicted that AI will be able to train itself without human assistance. Dinan argued that even if Altman's prediction comes true and LLMs become autonomous, it will not change the necessity of being educated.
“In a world where true emancipation from scarcity is coming, or where technical disciplines are eclipsed by AI, we need more education, more than ever, in order to know how to think, to consider what we will do with the good human life that this technology will make available to us,” he said.
The second intersection concerns students cheating on assignments with AI and professors doing the same to mark them, as well as research correlating AI use with cognitive offloading and decline.
Dinan argued that smaller classes make it easier for professors to engage with students and help them learn by developing relationships and imparting knowledge rather than information.
“AI does not herald the end of higher education. It heralds the end of bad higher education,” he stated.
The third intersection of AI and education that Dinan tackled involved those who use AI as a tutor because of its 24/7 availability and accessibility, and who argue that it should become a permanent back-pocket tool, like the calculator.
Dinan argued that, because LLMs are designed to oblige the user’s demands and submit to control as much as possible, they won’t challenge a student and encourage them to grow the way a human tutor would. Even study modes on AI chatbots, he explained, are still controlled by algorithmic predictions of the user’s needs.
“A calculator allows me to do calculations instantly in ways that I would not be able to do so without, but I'm still able to communicate these calculations with others in writing and speech,” he said.
He added that AI can only be used intelligently by people who have already been educated on its dangers and advantages.
“These tools should be introduced to fully formed students who had a world-class education through the study of meaningful texts with an expert and caring instructor in a small class.”
However, he argued, the current use of artificial intelligence supplants that principle, pulling students away from quality education.
“We need people who know how to think for themselves, people who are liberated … We need students who understand that we are not simply imparting information to them, nor are we simply training them in schools.”
STU alum Brie Sparks found the lecture informative. They said that AI use was never allowed in their classes at STU, but they appreciated the nuanced conversation about the subject.
“I think it's really refreshing to have a conversation about the things that inherently AI will not be able to take over because they are very human,” they said.
“As Dinan highlighted, students are going to use it. So instead of the first time they use it, that being an immediate penalty, having a conversation or helping them understand why it's bad, I think, is really helpful.”
