On the bullshit of teaching students to use AI "ethically"
What kind of sick institution believes it is possible for its AI use to "align" with its "values"? Does such an institution have any "values" worth standing for?
As a research academic who studies the intersections of design, technology, and society, I’ve been invited to, and participated in, a lot of meetings about “AI” lately. I put that in scare quotes here, like I often do, to indicate both that “Artificial Intelligence” itself was always a marketing term (and thus a bunch of bullshit)1 and that a whole bunch of domains of computing research now seem to fall under this umbrella term. But these days, unfortunately, we’re talking mainly about generative AI, and primarily about LLMs and genAI that are served up to our institution by third parties such as Microsoft or OpenAI or Google or whoever.
In these meetings I have been bludgeoned—I mean absolutely bludgeoned—by appeals to “teaching students how to use AI ethically” or “adopting ethical AI use guidelines” or some variation of these terms that seems to suggest an amorphous yet warm fuzzy feeling. These phrases are always deployed with an emphatic but gentle tone, suggesting that any visceral, direct, and rather unkind response (in this case, mine) to the outright absurdity of such propositions would be uncouth. If I turned those meetings into a drinking game where I took a shot of whisky every time someone said “ethical use of AI” or “using AI ethically” or some variation of this general proposition (that there is a way to do so), I’d be dead by the end of every single one of those meetings.
I have yet to find, within my own institution,2 anyone talking about “ethics” and “AI” who has actually defined the term “ethical” in the context of the use of genAI. Sure, people keep talking about teaching their students to “ethically use” AI, and they give numerous examples of students using it to do certain things and not others, in effect offering use cases. But a use case is not a definition of what “ethical” really means in the context of the use and adoption of genAI tools.
I was just in a meeting yesterday in which a rather self-congratulatory group of faculty presented on their supposedly innovative approaches to teaching students to use AI “ethically,” on the development of classroom AI policies, and on the development of our institution’s guidelines on genAI use. One faculty member, for example, co-developed their course’s genAI policy with their students, and, together, they created a set of “acceptable use guidelines.” The faculty member offered some examples of which uses were and were not deemed acceptable. Mostly, these lists centered on a differentiation between “helping” a student do their work and doing the work “for” a student.
Notably, this faculty member didn’t say whether the students were asked to explain why they deemed certain uses of genAI acceptable, nor did they suggest that there was any classroom discussion about whether any use of genAI at all is acceptable. This faculty member said that, in class, they “modeled” the ethical use of genAI by using a particular genAI platform and then “employing transparent validation strategies” to make sure the output was accurate. To me, this just sounds like doing double the work: a decent researcher could simply do the research themselves without then having to check the output of a stochastic-parrot research assistant. When I asked this faculty member if “ethical use” of genAI merely amounted to being transparent about its use and the “validation strategies” employed (whatever the fuck that means), he repeated his claim with the caveat that there are lots of different dimensions along which we might judge “ethical.” Oh, really? Are there?
Hilariously, this faculty member suggested that genAI offered some utility to students searching for case precedent in a particular domain of law in which he is an expert, despite the evidence that genAI systems consistently fabricate case precedent. He made it seem as if experienced and competent attorneys had somehow been inefficient without genAI’s assistance. I feel like the history of the legal profession is a pretty legit counter to this flimsy thesis. But I digress.
Anyway, this faculty member was then congratulated by subsequent presenters for “enacting their values” in their course policy. I’m sorry, but what in the actual fuck? Whose values are being “enacted” here? Do the students not value the lives of the massively exploited data laborers in the global south who labeled the training data for the LLMs those very students are now using? Do the students not value the lives of the workers being exploited by the mining companies and nation-states that are extracting vast amounts of minerals from the earth, destroying precious environmental resources and releasing harmful toxins into nearby communities, effectively killing and maiming people in the process? Do the students not value those very environmental resources or those communities?3 Do they not value the nearly unfathomable amounts of fresh water and electricity that data centers consume, which could otherwise be used by people who actually need them? The list, as you well know, dear reader, goes on and on and on.4 And some of this shit is happening right down the fucking road—Saline, Ypsilanti. I mean, what the hell kind of “values” are we talking about when the debate over acceptable use comes down to whether a student is using it for “cheating”? I don’t give a fuck about cheating when people are dying and climate catastrophe is being accelerated by a bunch of technocratic authoritarian oligarch-intellectuals.
Meanwhile, this faculty member is a full professor. Highly respected in his field. And yet, nowhere to be found in this presentation was any engagement with the literature on the history of ethics and computing, or with the massive body of scholarship on genAI as a material, social, and political-economic subject of intellectual and scholarly inquiry. Nor did he apparently think to discuss any of that with his students when introducing the idea that they’d develop an acceptable AI use policy together.
And what about the ethics of implying that the tools used by students under their new “acceptable use” policy are somehow “intelligent”? Did anyone talk to the students about how unethical it is to think about “intelligence” the way Sam Altman does? There is a real, systemic harm in construing “intelligence” along the lines of the form of intelligence prized by our Silicon Valley oligarchs (slash sociopathic entrepreneur-innovators, most of whom should be incarcerated for the harms exacted on marginalized populations who never had a fucking say in the development of any of the “innovations” under the thumb of which they suffer).
The whole history of the kind of “intelligence” privileged by Silicon Valley oligarchs is not only completely one-dimensional but also deeply rooted in eugenics.5 How is it in any way ethical to reinforce that idea and then map it onto the many, diverse, and wonderful intelligences (plural!) of our students?
So there I sat. Flabbergasted. But not surprised, I guess. I don’t know. The more investigative journalism and scholarship gets published—about the harms exacted upon the international working class and the environments on which we rely, both by genAI systems and by the technocratic-authoritarian capitalist owners of their means of production—and the clearer it becomes that this situation is the classic neoliberal “privatization of gains and socialization of harms” on steroids, the more shocking these kinds of conversations are to me. At least amongst faculty members. I guess it might be different if we were in a room with the President of our University. But between colleagues, I mean, don’t we all know this genAI shit is, to be quite blunt, fucking us and it’s fucking our students and it’s fucking the world? I mean, is that not patently obvious?6
++
I think I’d be less angry if people didn’t hold onto the pretense of teaching students how to use AI “ethically,” and instead just said, “I’m committed to teaching students to use genAI, and here are the reasons for this commitment,” and then enumerated reasons that don’t appeal to “ethics.”
But maybe there’s more at play here that has to do with the psychic violence of the optimized academy.
Is suggesting that there’s somehow an “ethical” way to use, or to teach students to use, AI maybe just faculty members trying to soothe themselves, to rationalize the cynicism required to overcome the cognitive dissonance that is the inevitable result of having to survive under an authoritarian technocracy in which being seen as “anti-innovation” is a stain on one’s reputation from which it is impossible to recover? Perhaps the pervasiveness of the appeals to “ethical” uses of AI is part of the university’s infrastructural affect, a result of the psychic damage inflicted by the long history of technological innovation being perceived as a fix for problems—inequality, climate change, and so on—that are fundamentally non-technical in nature.
At this point, this feels like as plausible an explanation as any.
++
And this doesn’t mean that we or our students shouldn’t engage with computing. Nor does it mean we shouldn’t consider the selective application of small-scale LLMs or other, more useful approaches from within the suite of AI or AI-related fields, such as machine learning. What I’m saying is that we shouldn’t lie to each other, our administrators, or our students when we tell them how to—and when to—use particular technologies. Furthermore, it means we should be explicit about the nature of the development of the technologies we are using, regardless of the utility they seem to present to us or our students in a given moment (the infamous “use case”).
The nature of such technological development is fundamentally antidemocratic, and this essential quality of all technological innovation under capitalism could not be more obvious than in our current moment—a moment in which billionaires (trillionaires?) offer up the international working class, and the environments on which we rely, for ritual sacrifice. These oligarch-intellectuals convert, as Evgeny Morozov writes, “intellectual positions into market arbitrage while wielding (and often owning) digital megaphones to reshape the very reality their investments bet against.” Morozov goes on:
Today, it’s increasingly clear that it’s the tech oligarchs—not their algorithmically-steered platforms—who present the greater danger. Their arsenal combines three deadly implements: plutocratic gravity (fortunes so vast they distort reality’s basic physics), oracular authority (their technological visions treated as inevitable prophecy), and platform sovereignty (ownership of the digital intersections where society’s conversation unfolds).
…
Our oligarch-intellectuals begin as interpreters par excellence. They position themselves as technological mediums, passive channels for inevitable futures. Their special gift? Reading the tea leaves of technological determinism with perfect clarity … Armed with their prophetic visions, they demand specific sacrifices—from the public, the government, and their employees.
Those sacrifices include the lives of humans and nonhumans in environments across the globe.
There is nothing even remotely democratic about the kind of technological innovation whose inevitability we are assured of by the very people who have the means to make that inevitability a reality, and who materially benefit from it more than any of us ever will. And yet we pat ourselves on the back for using their tools “ethically”? Are you shitting me?
++
It is, additionally, astonishing to me that, although I work at what was once regarded as one of the best agriculture schools in the country, no one in these meetings brings up the fact that if you don’t have water, you can’t have agriculture. And these days, data centers are guzzling fresh water at world-historical rates and at scales that would have been completely unfathomable even a few years ago.
MSU was, quite literally, the first agricultural college in the United States. And while a deep dive into the history of MSU isn’t within the purview of what I’m writing here, it is impossible not to acknowledge the dissonance between the (admittedly, colonial) connection to the land our students once had (including the mandated three hours per day of manual labor in 1857) and my colleagues’ apparent resignation to the inevitability of natural resources becoming a sacrificial conduit for the ongoing incursion of genAI into every aspect of everyday life. The problem, of course, is that the supposedly inevitable technological advances—facilitated by the extractivist nature of planetary-scale computation and the data centers that form its core infrastructure—aren’t gonna do shit for us if we’re all dead because we had to stop growing food and drinking water so the data centers could have the water and the land.
How can MSU “encourage all members of the university community to engage with generative AI tools responsibly, ethically, and creatively”7 when its administration and faculty should know damn well what the implications of that engagement are, and how fundamentally anathema those implications are to the original purpose of the institution?
++
Look, Marx didn’t talk about ethics and morality8 because it should be patently fucking obvious that human and non-human suffering is not a bug in the capitalist system—it is one of its core features. It incentivizes competition. It helps create the dynamism of capital itself. The kind of mental fucking gymnastics that faculty and administrators have to do in order to suggest that there is some kind of “ethical” way to use, and to teach the use of, genAI is, to me, more complicated than reading all three volumes of Capital.
If we were to suggest that it could ever even be possible to adopt some kind of “ethical” use of AI at MSU, such use would have to be predicated on a communally autonomous democratic approach to technological innovation within the university community. Even then, I think, following Marx, the deployment of the term “ethical” or “ethics” becomes a distraction from the real work of creating a convivial society, full of convivial tools, the design and development criteria for which are democratically determined by everyone who is impacted by their development and use.9
It is nearly impossible to imagine what such a situation would actually look like: what kinds of technologies we would design; what our everyday lives would look like, together, as a community; what kinds of interventions into the environment we would deem acceptable; what kinds of costs we would be willing to incur; what kinds of solidarity might emerge; and how our entire conceptions of education, progress, innovation, and a good life might change.
But I’d rather spend my energy cultivating this imagination, and engaging in the activism to bring it to fruition, than doing the mental gymnastics required to soothe my psyche as I lie to myself and my students.
1. The term “artificial intelligence,” as many folks are now aware thanks to the amazing Karen Hao, was invented for the explicit purpose of marketing a type of research that was originally, and more appropriately, titled “automata studies.” (Hao, Empire of AI)
2. When it comes to studying how institutions talk about “ethics” and “AI,” I have found it useful to study my own institution, in part because it is also where I feel I can have some influence, even if that influence is small. But that also means my observations here aren’t generalizable beyond my institution in any meaningful way.
3. Some interesting scholarship engages with the long, capitalist-colonial histories of this, including Murdock (2025) and Regilme (2024).
4. I’m so sick of adding links to the vast amounts of empirical evidence that support these claims, so I probably won’t.
5. Lots has been written about the history of computational interpretations of intelligence as part of a broader eugenic project, but succinct summaries can be found in Bender and Hanna (2025) and Crawford (2021).
6. The title of a recent article in The Nation presents this dissonance in sharp relief: “AI is going to kill everyone you know. The surprise is how.”
7. https://ai.msu.edu/guidelines
8. Indeed, insofar as Marx understood the dangers of capitalism and its deleterious effects on human flourishing, his belief in communism, and in the mode by which it might be achieved, suggests that he viewed communism as fundamentally “ethical” or “moral” in some capacity, even though he did not articulate it as such. The Stanford Encyclopedia of Philosophy itself says: “The only reason for denying that communism forms a good society for Marx would be a theoretical antipathy to the word ‘good’. And here the main point is that, in Marx’s view, communism would not be brought about by high-minded benefactors of humanity. Quite possibly his determination to retain this point of difference between himself and other socialists led him to disparage the importance of morality to a degree that goes beyond the call of theoretical necessity.” (Wolff and Leopold 2025)
9. Please read Ivan Illich’s Tools for Conviviality. It’s short and worth your time. And don’t ask ChatGPT to summarize it for you.

