Guest opinion: How to make the Utah workforce AI ready
Around the country, colleges and universities have been making game plans for how to respond to the artificial intelligence boom. In Utah, these strategies were outlined by the Utah Board of Higher Education in the “Resolution on Strategic Direction for an AI-Driven Future,” released late last year. There is much to applaud in the resolution — in particular, its commitments to “pro-human AI skills” and to the continued need for “ethical reasoning” and “human judgment.” While these are noble commitments, the path forward is challenging because ethical reasoning, judgment, and AI have a fraught and complicated relationship.
To navigate that path, the Utah System of Higher Education (USHE) would do well to consider the insights of earlier AI pioneers, for they faced many of the quandaries we're facing. In particular, it's helpful to remember the insights of Joseph Weizenbaum, the Massachusetts Institute of Technology computer scientist who invented the ELIZA chatbot in the mid-1960s. Weizenbaum programmed his chatbot to emulate a type of therapy that turns client statements into questions. You can still chat with a version of ELIZA today. When the chatbot was initially released, it fueled hopes that more refined versions of the technology might eventually duplicate the expertise of therapists and judges. Reflecting these hopes, a colleague — and sometime nemesis — of Weizenbaum named Marvin Minsky liked to call humans "meat machines" as a way of signaling that the brain, and all forms of human judgment, might eventually be duplicated in computing machinery.
A half-century after these hopes were voiced, some might say they have been realized. A growing number of Americans use chatbots as personal therapists and as AI companions. Judges also use a type of AI called “risk assessment algorithms” to guide bail decisions. And in our neighboring state, these aspirations to outsource judgment have, in one instance, been taken to an extreme: Victor Miller, a mayoral candidate in Cheyenne, Wyoming, promised to govern only as a figurehead by handing over all of his political decision-making to AI.
Weizenbaum would not have been happy about these developments. As he argued in his book, "Computer Power and Human Reason: From Judgment to Calculation," judgment wasn't strictly a technical problem that could be solved with better computation or with the application of more sophisticated algorithms. And even if judgment could be reduced to a technical problem, Weizenbaum didn't think it should be. Judgment was a quintessentially human activity, one that everyone who finds meaning in life engages in. In keeping with the philosopher Hannah Arendt, from whom Weizenbaum occasionally took inspiration, Weizenbaum believed that the capacity and willingness to judge between right and wrong was central to what it meant to be human. It wasn't something that individuals should relegate to other humans, much less to AI. Instead, humans should learn to judge for themselves.
These concerns about the relationship between AI and judgment shaped how Weizenbaum thought college students should be educated. In his view, it wasn't enough for colleges to teach students how to pursue ostensibly objective, value-free science. Nor would colleges fulfill their mission if they only taught students to become technicians or computer programmers who could figure out the most automated and efficient means to reach goals that other people had set. Instead, educators needed to see students "as human beings in search of themselves" who needed space to define their own values and express their own sense of a meaningful life. According to Weizenbaum, this meant that if science and engineering belonged in the curricula, so did the humanities, since the humanities were focused on the search for values and meaning. It wasn't that the humanities were superior to science or engineering. Rather, it was that there "was no single way of seeing," and if science (or computer science's penchant for equating brains with computers) became the only "legitimate perspective," it would lead to an "impoverished view of the world."
As USHE develops an AI task force, it's worth keeping these insights from an early AI pioneer in mind. Weizenbaum warned that if college only teaches technical skills and doesn't include talk and debate about values, students might become "mere followers of other people's orders … and no better than the machines that might someday replace them in that function." This latter prospect is, of course, an outcome that all of us in higher education would like to avoid. We want our graduates to be humans rather than machines. To achieve its "pro-human AI" goals, USHE needs to ensure that questions about values and ethical and political judgment play a role in the education of an AI-ready workforce. Such questions are not incidental but central to the development of a pro-human AI strategy.
Luke Fernandez is an associate professor in the School of Computing at Weber State University. His research includes the social history and ethics of artificial intelligence, and he co-authored the book “Bored, Lonely, Angry, Stupid: Changing Feelings about Technology.” This commentary is provided through a partnership with Weber State. The views expressed by the author do not necessarily represent the institutional values or positions of the university.