
Liberally-educated students need to be more than consumers of AI


The initial wave of controversy over large language models in education is dying down. We haven’t reached consensus about what to do yet. (Retreat to in-class exams? Make tougher writing assignments? Just forbid the use of AI?) But it’s clear to everyone now that the models will require some response.

In a month or so there will be a backlash to this debate, as professors discover that today’s models are still not capable of writing a coherent, footnoted, twelve-page research paper on their own. We may tell ourselves that the threat to education was overhyped, and congratulate ourselves on having addressed it.

That will be a mistake. We haven’t even begun to discuss the challenge AI poses for education.

For professors, yes, the initial disruption will be easy to address: we can find strategies that allow us to continue evaluating student work while teaching the courses we’re accustomed to teaching. Problem solved. But the challenge that really matters here is a challenge for students, who will graduate into a world where white-collar work is being redefined. Some things will get easier: we may all have assistants to help us handle email. But by the same token, students will be asked to tackle bigger challenges.

Our vision of those challenges is confined right now by a discourse that treats models as paper-writing machines. But that’s hardly the limit of their capacity. For instance, models can read. So a lawyer in 2033 may be asked to “use a model to do a quick scan of new case law in these thirty jurisdictions and report back tomorrow on the implications for our project.” But then, come to think of it, a report is bulky and static. So you know what, “don’t write a report. What I really need is a model that’s prepared to provide an overview and then answer questions on this topic as they emerge in meetings over the next week.”

A decade from now, in short, we will probably be using AI not just to gather material and analyze it, but to communicate interactively with customers and colleagues. All the forms of critical thinking we currently teach will still have value in that world. It will still be necessary to ask questions about social context, about hidden assumptions, and about the uncertainty surrounding any estimate. But our students won’t be prepared to address those questions unless they also know enough about machine learning to reason about a model’s assumptions and uncertainty. At higher levels of responsibility, this will require more than being a clever prompter and savvy consumer. White-collar professionals are likely to be fine-tuning their own models; they will need to choose a base model, assess training strategies, and decide whether their models are over-confident or over-cautious.
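To make “over-confident or over-cautious” concrete: the simplest calibration check compares a model’s average stated confidence with how often it is actually right. The numbers below are invented purely for illustration; nothing here comes from a real model.

```python
import numpy as np

# Hypothetical predicted confidences for ten answers, and whether
# each answer turned out to be correct (1) or wrong (0).
confidence = np.array([0.95, 0.9, 0.9, 0.85, 0.8, 0.8, 0.7, 0.6, 0.55, 0.5])
correct    = np.array([1,    1,   0,   1,    0,   1,   0,   1,   0,    0])

mean_confidence = confidence.mean()   # what the model claims, on average
accuracy = correct.mean()             # how often it is actually right

print(f"mean confidence {mean_confidence:.2f} vs accuracy {accuracy:.2f}")
# When mean confidence exceeds accuracy, the model is over-confident;
# when it falls short, the model is over-cautious.
```

Real calibration audits bin predictions by confidence level rather than taking one global average, but even this crude comparison is the kind of reasoning a professional fine-tuning a model would need to perform.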

Midjourney: “a college student looking at helpful utility robots in a department store window, HD photography, 80 mm lens –ar 17:9”

The core problem here is that we can’t fully outsource thinking itself. I can trust Toyota to build me a good car, although I don’t know how fuel injection works. Maybe I read Consumer Reports and buy what they recommend? But if I’m buying a thinking process, I really need to understand what I see when I look under the hood. Otherwise “critical thinking” loses all meaning.

Academic conversation so far has not seemed to recognize this challenge. We have focused on preserving existing assignments, when we should be talking about the new courses and new assignments students will need in order to think critically in the 2030s.

Because AI has been framed as a collection of opaque gadgets, I know this advice will frustrate many readers. “How can we be expected to prepare students for the 2030s? It’s impossible to know the technical details of the tools that will be available. Besides, no one understands how models work. They’re black boxes. The best preparation we can provide is a general attitude of caveat emptor.”

This is an understandable response, because things moved too quickly in the past four years. We were still struggling to absorb the basic principles of statistical machine learning when statistical ML was displaced by a new generation of tools that seemed even more mysterious. Journalists more or less gave up on explaining things.

But there are principles that undergird machine learning. Statistical learning is really, at bottom, a theory of learning: it tries to describe mathematically what it means to generalize about examples. The concept of a “bias-variance tradeoff,” for instance, allows us to reason more precisely about the intuitive insight that there is some optimal level of abstraction for our models of the world.

Illustration borrowed from https://upscfever.com/upsc-fever/en/data/deeplearning2/2.html.

Deep learning admittedly makes things more complex than the illustration above implies. (In fact, understanding the nature of the generalization performed by LLMs is still an exciting challenge — see the first few minutes of this recent talk by Ilya Sutskever for an example of reflection on the topic.) But if students are going to be fine-tuning their own models, they will definitely need at least a foundation in concepts like “variance” and “overfitting.” A basic course on statistical learning should be part of the core curriculum.
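The foundation I have in mind can fit in a few lines of code. This sketch is my own illustration, not part of any existing curriculum: fit polynomials of increasing degree to noisy data, and watch training error fall while test error tells a different story. That divergence is what “overfitting” and “variance” name.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth underlying function.
truth = lambda x: np.sin(2 * np.pi * x)
x_train = np.linspace(0, 1, 20)
y_train = truth(x_train) + rng.normal(0, 0.3, x_train.shape)
x_test = np.linspace(0, 1, 200)
y_test = truth(x_test) + rng.normal(0, 0.3, x_test.shape)

def fit_errors(degree):
    """Fit a polynomial of the given degree to the training data;
    return (training RMSE, test RMSE)."""
    poly = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    rmse = lambda x, y: float(np.sqrt(np.mean((poly(x) - y) ** 2)))
    return rmse(x_train, y_train), rmse(x_test, y_test)

for degree in (1, 3, 15):
    train_err, test_err = fit_errors(degree)
    print(f"degree {degree:2d}: train RMSE {train_err:.2f}, test RMSE {test_err:.2f}")
```

A degree-1 line is too rigid (high bias); a degree-15 polynomial memorizes the noise in twenty training points (high variance), so its training error shrinks even as its test error stops improving. A student who has run an experiment like this once has a grip on the tradeoff that no amount of caveat emptor can substitute for.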

I might go a little further, and suggest that professors in every department are going to want to reflect on the principles and applications of machine learning, so we can give students the background they need to keep thinking critically about our domain of expertise in a world where some (not all) aspects of reading and analysis may be automated.

Is this a lot of work? Yes. Are we already overburdened, and should we have more support? Yes and yes. Professors have already spent much of their lives mastering one field of expertise; asking them to pick up the basics of another field on the fly while doing their original jobs is a lot. So adapting to AI will happen slowly and it will be imperfect and we should all cut each other slack.

But to look on the bright side: none of this is boring. It’s not just a technical hassle to fend off. There are fundamental intellectual challenges here, and if we make it through these rapids in one piece, we’re likely to see some new things.

By tedunderwood

Ted Underwood is Professor of Information Sciences and English at the University of Illinois, Urbana-Champaign. On Twitter he is @Ted_Underwood.
