A $20-a-month product is suddenly free during finals, premium models are getting pushed across campuses, and AI dependence is being framed as student support. The weird part is not the discount. It’s where the subsidy lands: right at the moment when people are stressed, rushed, and most willing to hand over the first draft of their own thinking.
Here’s the whole setup in one paragraph. OpenAI offered U.S. college students two free months of ChatGPT Plus through the end of May, while Anthropic, xAI, Google, and Perplexity have all run student promotions; AP reported Duke launched an OpenAI pilot that normalized campus-wide access while studying learning effects; AP also found 12% of employed U.S. adults already use AI daily at work; and, crucially, Education Week reported on March 23, 2026, that nearly 7 in 10 middle and high school students worry AI is hurting their critical thinking, citing RAND. Put those together and the picture gets sharper: adoption is rising even as users can feel the cognitive trade-off happening.
The argument, in its simplest form: AI companies are trying to own the starting step of knowledge work. Once a chatbot becomes the default place you begin, quality matters less than habit, and habit is much harder to dislodge.
AI Dependence Is the Business Model, Not a Bug
The Atlantic had the cleanest phrase for this: dependence is the business model. That sounds dramatic until you look at what these companies are actually subsidizing.
Free access during finals is not just customer acquisition. It is habit formation targeted at the most fragile part of the workflow: the blank page, the messy outline, the moment before you know what you think. Cheap rides made waiting feel irrational, and cheap delivery made cooking feel like friction. AI firms are pushing the same subsidy logic one layer deeper, into writing, research, and judgment.
The market consequence is sharper than “people use the product more.” Lock-in shifts from stored data to learned initiation behavior. If users start every task in the chatbot, the company no longer needs to win on best-in-class output every time. It just needs to remain good enough to preserve the habit.
That lowers the quality threshold for retention in a way normal software rarely does. A document editor still has to open your files and not corrupt them. A search engine still has to return useful results. But a chatbot that becomes your first move gets credit before it earns it, because the alternative now feels slower, lonelier, and cognitively expensive.
| Stage | What the company does | What the user feels | What the company gets |
|---|---|---|---|
| Subsidy | Free or discounted premium access | “Why not try it?” | Rapid trial volume |
| Stress-time use | Promotions during finals, deadlines, hard tasks | Relief and time saved | High-intensity use at vulnerable moments |
| Habit | Repeated use for outlines, summaries, drafts | “This is just how I work now” | Default workflow placement |
| Switching cost | AI becomes the first step | Starting alone feels harder | Retention without needing category-leading quality |
Wait, isn’t making things easier just what good tools do? Sure. But ordinary tools usually remove effort from the back half of a task. A dependency layer moves upstream. It starts helping decide what to ask, how to frame the problem, and what counts as a reasonable answer.
That is why AI dependence is more valuable than occasional usage. The strongest version is not “my files live here.” It’s “I don’t like how it feels to begin without it.”
Why AI Dependence Starts on Campus

Students are not just a big audience. They are the best place to train a future default.
A 35-year-old lawyer or product manager usually has an existing workflow, annoying as it may be. Students are still building theirs. So when a chatbot offers instant structure, instant summaries, instant confidence, the tool does not slot into a mature process. It becomes the process.
The March 23, 2026, RAND finding is the sharpest recent evidence here. Nearly 7 in 10 middle and high school students told researchers they worry AI will hurt their critical thinking. That is the weird part. Student AI use is rising even while students themselves increasingly believe the trade may be bad for their minds.
That tension matters more than generic adoption numbers. It means the product is surviving contact with user suspicion. People can feel the erosion and still keep using it.
A quick scenario makes the mechanism obvious.
At 9:00 a.m., a student has a paper due at noon. The old workflow starts with rereading notes, sketching an outline, deciding what the argument is, then writing a clumsy first paragraph. The AI-first workflow starts with: “Write me a strong outline on X, give me three claims, and suggest a thesis.” The skipped step is not typing. It’s forming the initial point of view.
The campus push is also broader than one Duke example. The Atlantic reported a whole spread of student targeting: OpenAI’s free ChatGPT Plus offer, Anthropic’s $1-per-month Claude promotion for students, and additional deals from xAI and Perplexity aimed at the same audience. That matters because it turns student AI use from a single-company campaign into a category-wide land grab.
And once multiple vendors pile in, the mechanism gets very predictable:
- subsidy
- peer norm
- policy ambiguity
- default use
The middle steps are doing a lot of work. When your classmates use AI for notes, study guides, interview prep, and first-draft structures, non-use starts to feel eccentric. Official rules get muddy fast: allowed for brainstorming, discouraged for drafting, banned for final submission, used constantly anyway.
| Step | What happens on campus | Why it sticks |
|---|---|---|
| Subsidy | Free trials, $1/month promotions, campus pilots | Lowers the cost of trying premium AI |
| Peer norm | Friends share prompts, study guides, draft workflows | Use becomes socially normal |
| Policy ambiguity | Professors and schools allow some uses, discourage others | Students fill the gap with “probably okay” |
| Default use | AI becomes the first stop for planning and drafting | The habit survives beyond the class |
Duke is still useful because it shows institutional normalization, not just student demand. AP reported in October 2025 that Duke launched an OpenAI pilot for students, faculty, and staff and said a report on its findings was expected by the end of the fall semester. As of publication, no public outcome report from Duke’s pilot was available. That absence is not proof of harm, but it does tell you something about sequencing: access was rolled out first, while public evidence on learning effects lagged behind.
And campuses amplify behavior in a way offices don’t. A school can normalize use through licenses, pilots, library guidance, course policies, and simple social contagion all at once.
There’s also a plain business reason these subsidies are attractive: lifetime value. Habits formed in school travel into work. A company that teaches someone to offload outlining, summarizing, and first-pass interpretation at 19 may not just win a student user. It may deliver a pre-trained enterprise customer five years later.
If you want the downstream version of that pipeline, our piece on AI-native graduates follows what happens when polished output arrives before durable judgment.
The Real Cost of Cognitive Offloading
“Cognitive offloading” sounds abstract until you notice which part of the task disappears.
Autocomplete finishing a sentence is one thing. AI that gets consulted before outlining, before interpreting, before deciding is another. That is the difference between execution help and judgment help.
The Atlantic’s reporting on heavy users makes this concrete. One man reportedly used AI for up to eight hours a day, sometimes with six sessions open at once. He asked Claude for marriage and parenting advice, checked produce by sending photos, and decided whether to leave the house because the chatbot warned about a nearby tree. His own description was blunt: “It’s like a real addiction.”
That sounds extreme. But the mechanism is ordinary. The model reduces the discomfort of uncertainty, and that makes repeated use feel emotionally rational even when it weakens your own first-pass judgment.
Wait, how is that different from a normal productivity shortcut? Good question. A shortcut helps after you’ve framed the task. A dependency layer helps instead of framing the task.
So here’s a practical test you can apply to any task.
Green: AI after your thinking
- Draft your thesis first, then ask AI to challenge it.
- Annotate the reading first, then ask for a summary you can compare against your notes.
- Decide what you’re trying to do first, then use AI for wording, formatting, or alternatives.
Yellow: AI alongside your thinking
- You have a rough view, but you ask AI for possible structures before writing.
- You use it to surface counterarguments while still making the final call yourself.
- The tool is shaping the work, but not fully originating it.
Red: AI before your thinking
- You ask for the thesis before you have one.
- You paste in an article or email and ask what it means before deciding yourself.
- You use the model to tell you what matters, not to help once you’ve already chosen.
That red zone is where AI dependence becomes a cognition problem rather than a productivity trick.
There’s a real tension here. AI can help people explore ideas, spot angles, and escape local minima. But that upside depends on the human still being able to generate a view of their own and reject a bad frame when the model hands them one. Once that muscle weakens, convenience stops being neutral.
A simple rule works better than grand philosophy: use AI after you draft a thesis, not before; after you annotate the reading, not before. That single boundary preserves the part of the task most worth keeping.
What AI Companies Gain When Users Stop Thinking First
They gain something better than ordinary retention. They gain placement at the top of the decision stack.
AP’s Gallup-based reporting is useful here because it shows the slope of normalization: 12% of employed adults use AI daily, about one-quarter use it frequently, and nearly half use it at least a few times a year. Once usage gets that frequent, the product stops being “a chatbot for certain tasks.” It becomes infrastructure for beginning.
That changes what “winning” means. The company with the best model is not automatically the one with the strongest position. The stronger position belongs to the company that gets provisioned into the tools people already open, the schools they already attend, and the workflows they already repeat.
| User type | Subsidy or trigger | Habit that forms | What gets locked in |
|---|---|---|---|
| Student | Free premium access during finals, campus pilots | Outline-first, summary-first studying and writing | Early cognitive habits that carry into jobs |
| Knowledge worker | AI bundled into email, docs, search, meetings | Interpret-first and draft-first outsourcing | Daily workflow dependence and reduced tolerance for blank-page work |
| Institution | Campus or enterprise-wide licensing | Norms, policy drift, default provisioning | Distribution power and organization-wide switching costs |
That last column is the prize.
If your assistant is where someone begins every task, you can shape which sources are surfaced, which options are recommended, what tone is treated as professional, and what counts as “good enough.” The product is not just generating text. It is mediating attention before the user has formed a position.
That’s why the comparison to ChatGPT vs Chegg matters. Free answers did not just hurt an education company. They reset the expectation that help should be ambient, instant, and nearly free. Once that expectation lands, distribution and default placement matter more.
It also explains the emotional language around chatbot addiction. Heavy users do not just talk about speed. They talk about relief, reassurance, confidence, compulsion. A product that regulates feeling as well as task completion gets a much deeper kind of stickiness.
Here’s the prediction I’d actually put money on: by Q1 2027, Microsoft will default-enable Copilot in at least one major Microsoft 365 education or enterprise plan and publicly disclose a double-digit increase in seat activation or retention versus comparable opt-in cohorts. That is the clean test. Not “AI was popular.” Not “users liked the feature.” A default-on bundle plus a measurable retention or activation gap.
If that happens, we should stop pretending the market is mainly about better answers. It will be about who gets installed at the moment before thinking starts.
Key Takeaways
- AI dependence is valuable because it captures the starting step of knowledge work, not just the execution step.
- Student AI use matters so much because campus habits can become future enterprise habits.
- The strongest fresh signal is the tension itself: adoption keeps rising even as students increasingly worry AI may damage critical thinking.
- Cognitive offloading turns risky when AI frames the problem before the user does.
- The likely winners are not just the labs with the best models, but the companies that bundle AI deeply enough to become the default place people begin.
Further Reading
- Millennials Got Cheap Ubers. Gen Z Gets Free SuperGrok. On free student AI promotions, campus targeting, and the subsidy logic behind them.
- The People Outsourcing Their Thinking to AI. Reported examples of compulsive chatbot use and everyday cognitive offloading.
- Students Are Worried That AI Will Hurt Their Critical Thinking Skills. Education Week on RAND findings showing students increasingly fear AI’s effect on thinking.
- How Americans are using AI at work, according to a new Gallup poll. AP’s adoption data on daily and frequent workplace AI use.
- Duke launches OpenAI pilot for students and staff. AP on a university normalizing AI access while public evidence on learning outcomes remained pending.
If lock-in happens at the starting layer of cognition, then schools and employers should stop judging AI only by whether the output looks good. The real question is whether it helps people think through a problem, or quietly replaces the part where they were supposed to frame it.
