If you buy into the rationalist paradigm of the world, it’s easy to claim that technology is inherently neutral. A hammer is just a hammer, a gun is just a gun. Never mind the intentionality and consciousness required to conceive of such inventions — the materialist frame can only view objects as resources. It lacks the holistic frame to see that tools are built to reinforce culture, and that culture reinforces the tools. There is nothing neutral, then, about any technology that is created. It is “minded” in the sense that it was created for a certain aim within a certain cultural paradigm.
It’s a common refrain that AI[1] is just a neutral tool like any other, and that, anyway, the benefits of such a tool will so far outweigh the negatives that it’s just “Luddism and romanticism and naive silliness” to dare consider the implications of these tools and how they are likely to be used. It’s simply true, as Neil Postman explains in Technopoly, that all technology offers boons and curses. Our entire civilization is built upon technological advancements that have propelled us from a form of medieval feudalism into globalized technocracy, and that have undoubtedly incurred indelible costs. The benefits (medicine, scientific knowledge, the ability to order a private taxi for our burritos) are obvious, and we get the honor of being alive in the era where the consequences are also obvious, or at least should be. It is supremely profitable for tech companies and other techno-optimists to gloss over the negative externalities of their technologies, even as we’re inundated with evidence that this is common practice in basically every other industry known to man.
The problem is that, as Thamus explained to Theuth in Socrates’ retelling, “the discoverer of an art is not the best judge of the good or harm which will accrue to those who practice it.”[2] And yet we unquestioningly adopt any new technology offered to us as if we didn’t know this to be true. As Postman explains, we are surrounded by “prophets who see only what new technologies can do and are incapable of imagining what they will undo.”[3]
What is remarkable to me about these LLMs is the pace of their adoption. A little over two years ago, Jake and I co-wrote a piece about generative AI art. It was written at a pivotal moment, when AI ceased to be a joke producing only cursed images and we could finally see the direction this technology was heading. Today, you can’t look for hairstyle inspiration on Pinterest without seeing more AI models than human models. I get an ad for AI in some form every time I watch a YouTube video. The implicit message is always the same: if you use paper and pen, you are a clunky idiot who will be left behind. AI is the future. Why are you so against progress?
This has bugged me for a long time, and there have been plenty of good analyses of this process, of how dismal it has made being a creative, and of how unpleasant it has made the internet in general. But it wasn’t until last night, going out to see a movie for my birthday, that I got really angry.
During the trailers there was an advertisement for Google Gemini that I have been unable to find online, unfortunately, but it demonstrated perfectly why I have such disdain for AI — it encourages a process known as cognitive offloading, which is essentially using tools to minimize the work your brain has to do in a cognitively oriented task.
This ad absolutely glorified how Gemini can essentially take the place of the human mind in even the most mundane of tasks — deciding whether your outfit is cool, deciding whether the recipe you’re currently cooking tastes good, and doing the most basic problem-solving, critical thinking, and cognitive tasks.[4]
What is baffling to me is the idea that we would want to outsource all of our thinking to a machine. It’s as if Google wants me to nod along in agreement and say, “Fuck personal style, fuck independent thought, fuck problem-solving! All of that sounds terribly provincial, inefficient, and boring!” The thing is, cognitive offloading is actually the main purpose of these AI tools. The idea is to have all decisions and thoughts streamlined and made more efficient. Rather than consciously deciding what you want to eat for dinner, you can just have AI meal-plan for you. Rather than reading and thoughtfully responding to an email, just use predictive text, or have ChatGPT write it. Rather than mindfully considering how to spend your limited time on planet Earth, just have Motion plan your day for you.
My point is, the makers of commercial-grade AI, the AI marketed to regular-schmegulars like you and me, define the most inefficient parts of our existence as the parts that actually give life meaning, and they want to sell us on the benefits of outsourcing those parts of life to a machine.[5] The key word here is sell: millions of people are paying OpenAI over $20 a month for the privilege of not needing to use their minds. The company has increased its revenue by almost 300% in the last year alone. This is a business that creates technology — technology that will continue to impact culture in completely unpredictable ways. And it is one of many.
If you’re not convinced by my less-than-rationalist framing of this issue, consider the science: people who use generative AI have worse critical thinking skills than those who do not. Research shows that the use of AI atrophies cognitive abilities via the mechanism of cognitive offloading, stating that “frequent AI users exhibited diminished ability to critically evaluate information and engage in reflective problem-solving.” This, unsurprisingly, affects children and young people most of all.
“As individuals increasingly offload cognitive tasks to AI tools, their ability to critically evaluate information, discern biases, and engage in reflective reasoning diminishes. This relationship underscores the dual-edged nature of AI technology: while it enhances efficiency and convenience, it inadvertently fosters dependence, which can compromise critical thinking skills over time.”[6]
This is, also, assuming that the information you’re getting from ChatGPT or Grok or whatever is actually correct.
A somewhat recent survey found that a dizzying 86% of students use AI tools to complete their assignments: 69% use it to search for information, 42% to check grammar, 33% to summarize documents, and, worst of all, 28% to paraphrase a document and 24% to create a first draft. In the United States, literacy is already declining. This does not bode well, yet on we march.
I do think one potential silver lining to the unquestioning adoption of AI is that certain institutions, namely education, could choose to adapt in order to ensure that young people actually learn faculties such as critical thinking. From my perspective, the essay is now obsolete in the educational context. Educators will have to revive older teaching methods such as oral assessments, Socratic dialectics, and other forms of talk-based pedagogy and testing to ensure that students aren’t being entirely assisted by AI. They will have to become keenly attuned in their questioning, probing the edges of their students’ knowledge to confirm true comprehension. I think, overall, this would be a good thing. So many of us skated through the education system without learning to think critically, to read deeply, or to gain so much else that education can offer, even without AI in the picture.[7] I can only imagine how much harder it will be for the kids of today, growing up in this technological milieu. Despite their growing up on the internet and being raised in a largely text-based world, reading comprehension has been declining rapidly, and AI will continue that trend.
Not to be grim, but I don’t know if I actually believe this proposed transformation in education will occur. For most techno-optimists and AI enthusiasts, we are gleefully headed into an AI-driven world, and to resist the tide is to fall behind. It’s just another arms race, after all. There is no question of whether this is a good path, largely because the utopian benefits (which have yet to be proven, by the way) such as curing cancer, solving climate change, and even curing loneliness are so tantalizing. This is, like most utopian technology, purely promissory. We cede our minds on the promise of the boons these technologies will bring, and we shield our eyes from the curses as long as we possibly can, as we did with social media, as we did with fossil fuels, as we did with industrial agriculture, as we did with the Internet itself — and on and on and on again.
This pattern of behavior seems to be deeply rooted in the human collective, and I don’t know why. It’s beyond optimism: it’s actively blinding ourselves so we don’t see what we don’t want to see. It’s technological “progress” for the sake of technological progress. It’s an outcropping of convenience and consumer culture. It’s another process of enclosure. It’s the same shit, different era.
The demand for these products — for our thoughts and opinions to be mediated by machines — is merely more evidence of how unmoored we are in modernity. We’re losing connection with ourselves and what makes us unique, what makes us special and important to the unfolding of the world. This, like many of our modern curses, seems like the obvious endpoint of worldviews that began long ago — worldviews that desacralized the living world and placed efficiency over meaning.
But for the sake of our minds, the last human place that is left to be colonized, I think it’s imperative that we resist the pull toward such conveniences and efficiencies. When a corporation is advertising the benefits of not using our brains and using their products instead, it’s only further evidence that we’re in a deep phase of breakdown.
Written by Maren Morgan
[1] For this essay, my focus is on LLMs (large language models). AI is operating all the time in hidden places, such as algorithms and certain software systems, and it’s nearly impossible to avoid it altogether if you use computer-based technology. But in the case of ChatGPT, Grok, Gemini, etc., we do have a choice about whether we rely on these technologies. We literally don’t have to use them.
[2] Postman, N. (1992). Technopoly: The Surrender of Culture to Technology.
[3] Ibid., p. 5.
[4] I’m very annoyed that I can’t find the ad online. The closest I can offer is this older ad, but you’ll just have to believe me when I tell you how much more extreme the advertising of cognitive offloading is in the ad I saw at the movies last night. It’s not an exaggeration to say it was advertising the benefits of never having to use your brain for anything.
[5] Communicating with loved ones; having unique preferences, desires, and tastes; creating art and mastering skills — these things are so antiquated and unnecessary, right?
[6] Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.
[7] I’m not going to get into all of the problematic aspects of compulsory schooling in this essay.