In January at Davos, Anthropic CEO Dario Amodei said out loud what many software engineers suspected: “We might be six to twelve months away from when the model is doing most, maybe all, of what [software engineers] do end to end.” That single, recorded sentence now sits underneath a much louder meme, a viral Reddit post claiming, without evidence, that “Anthropic internally expects AGI within 6-12 months”, and the two are being casually merged into one terrifying Anthropic AGI timeline.
TL;DR
- There is no verified evidence that Anthropic, as a company, internally expects AGI in 6-12 months; the Reddit “leak” is uncorroborated rumor.
- What is verified is Amodei’s Davos claim that models could do most or all software engineering work end‑to‑end in 6-12 months, and that is economically huge on its own.
- Treat this moment less like “AGI tomorrow” and more like “coding automation shock”: plan for workflows, jobs, and businesses where code is mostly written by machines, but full AGI is still uncertain.
- Over the next 3-6 months, watch hiring, product launches, and usage patterns around code generation, not social media rumors, to see if the Davos prediction is becoming real.
No verified internal leak: why the Reddit claim fails
Start with the narrow question: did Anthropic leak an internal consensus that AGI arrives in 6-12 months?
The only concrete “source” is a Reddit thread linking to a tweet: an individual says they “heard from 2 people” that Anthropic “internally expects to have AGI in 6-12 months” and encourages followers to “plan your business and personal finances appropriately.” No document, no memo, no named employees, and no corroboration from the outlets that usually handle leaks (The Information, Bloomberg, the NYT, the FT, etc.). Just an anonymous chain of “I heard that they heard.”
By contrast, the statements we can verify live on the World Economic Forum’s own site. In the “Day After AGI” session, Amodei explicitly talks about software engineering, not a company‑wide declaration of AGI. Digital Watch Observatory’s summary and mainstream write‑ups at Windows Central and Benzinga all quote the same thing: a 6-12 month horizon for models doing most software engineering tasks, plus a separate, longer‑standing 2026-2027 timeline for “country‑of‑geniuses‑in‑a‑datacenter”-level capabilities.
Those are public CEO predictions, not leaked internal beliefs. They matter. But they are not the same as “Anthropic has decided AGI is 6-12 months away.”
What the Reddit post really demonstrates is something else: once timelines become a social currency, a single unverifiable sentence can be amplified until it feels as solid as a WEF transcript.
What Dario Amodei actually said at Davos (and why it matters)
In Davos, Amodei made two distinct claims.
First, on software engineering. He described Anthropic engineers who “don’t write any code anymore” but instead ask Claude to generate it and then edit and integrate the result. He then projected this forward:
“I think, I don’t know, we might be six to twelve months away from when the model is doing most, maybe all, of what SWEs do end to end.”
Second, on broader capability. As summarized by Digital Watch and others, Amodei reiterated an aggressive timeline he’s mentioned before: by around 2026-2027, models with “Nobel‑laureate‑level” skill in at least some domains may be feasible, assuming continued progress in compute and algorithms. Demis Hassabis, sharing the stage, offered a more cautious estimate of “around 50%” probability by the end of the decade.
Two important points follow.
- This is not off‑the‑cuff hype. Amodei has been publicly on record with short AGI timelines for years. The Davos remarks fit that pattern rather than exceed it.
- The six‑to‑twelve‑month claim is about a narrow but economically central capability: software development. That is a very different statement from “we will have AGI,” especially if your definition of AGI includes robust world‑modelling, reliability, and competence across physical and social tasks.
Our own earlier coverage of his comments (Amodei AI Jobs Prediction) stressed the same divide: a CEO can plausibly be early on where automation bites first even if their broader AGI timeline is wrong by years.
This isn’t “AGI tomorrow”: it’s rapid coding automation, and that still matters

If you strip away the Reddit rumor and look only at the verified quotes, the near‑term story is not “Anthropic guarantees AGI by Christmas.” It is something more specific and arguably more actionable:
Over the next year, high‑end models may be capable enough that writing production code resembles editing machine‑generated drafts, not hand‑crafting every function.
Anthropic is already experimenting with this internally. Trade reporting and Anthropic’s own blog posts describe engineers using Claude (and variants sometimes dubbed “Claude Code”) to scaffold features, refactor legacy modules, and even help design training pipelines, a theme we covered in AI Builds AI: How Anthropic’s Claude Codes Its Future. The emerging loop is clear: models write code; that code builds better models.
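The “model drafts, human reviews” loop described above is easy to picture in code. Below is a minimal sketch using the structure of Anthropic’s public Messages API; the model name, prompt wording, and helper function are illustrative assumptions, not a description of Anthropic’s internal tooling:

```python
# Sketch of the "model drafts, human edits" workflow. Only the request
# assembly is runnable here; the model name and prompt are illustrative
# assumptions, not Anthropic's internal setup.

def build_code_request(task_description: str,
                       model: str = "claude-3-5-sonnet-latest") -> dict:
    """Assemble a Messages API payload asking the model to draft code.

    In the workflow Amodei describes, the engineer's job shifts to
    reviewing and integrating the reply, not writing the body by hand.
    """
    return {
        "model": model,
        "max_tokens": 2048,
        "messages": [
            {
                "role": "user",
                "content": (
                    "Write a Python function for the following task, "
                    "with type hints and a docstring:\n" + task_description
                ),
            }
        ],
    }


if __name__ == "__main__":
    payload = build_code_request(
        "Parse an ISO 8601 date string into a datetime."
    )
    # In a real workflow the payload would be sent via the official SDK:
    #   import anthropic
    #   reply = anthropic.Anthropic().messages.create(**payload)
    # and the generated code reviewed, edited, and integrated by a human.
    print(payload["model"])
```

The actual API call needs a key and network access, so it stays in comments; the point is only how thin the “ask, then edit” loop is compared with writing every function by hand.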
The Mythos rumors, that Anthropic’s next generation might outperform scaling‑law expectations, fit this direction of travel, even if the details are still murky. Our analysis of that work in Anthropic Mythos breakthrough focused on technique, not timelines: better sample efficiency, smarter training, more robust reasoning.
None of this requires a clean, agreed‑upon definition of AGI. It only requires this more prosaic condition:
- For a growing fraction of software work, the cheapest, fastest competent producer is a model.
At that point, organisations do not need “AGI” to be disrupted. They only need:
- Engineers who become reviewers, integrators, and system designers rather than line‑by‑line coders.
- Managers who are comfortable shipping features mostly written by a stochastic neural network.
- Legal and safety teams who accept that much of the codebase is not “understood” in the traditional human sense.
The danger of the AGI‑in‑12‑months meme is that it pushes people into binary thinking: either civilisation‑changing superintelligence is imminent, or nothing important is happening. The Davos quote points to a third option: significant, uneven automation of a keystone profession, arriving well before anyone can agree whether it counts as “true AGI.”
Signals to watch next on the Anthropic AGI timeline
If you want to know whether Amodei’s six‑to‑twelve‑month prediction is becoming reality, and whether the Reddit rumor is even directionally plausible, social media will be the least informative place to look.
More telling signals over the next 3-6 months:
- Product capability jumps in coding tools. Watch Anthropic, OpenAI, Google, and others for coding‑focused releases that move from autocomplete to full‑stack changes: multi‑file edits, reliable refactors, autonomous test generation and repair. Marketing copy is unreliable; usage data, case studies, and third‑party benchmarks are not.
- Enterprise workflow changes. Look for stories of teams formally mandating AI‑first development, where “open your code assistant” becomes as standard as “run the linter.” When CIOs start budgeting on the assumption that one engineer can do the work of three with model assistance, the economic transition is underway.
- Hiring and pay signals. If Amodei is even approximately correct, junior developer roles and routine maintenance work will change first. Expect more job descriptions that emphasise architecture, domain knowledge, and AI tool fluency over raw coding speed.
- Internal tooling at labs. Trade reporting from places like The Information has already described Anthropic and its peers using their own models for evaluation and tooling. An acceleration in those reports, especially around automated experiment design or agentic debugging, would indicate a tighter self‑improvement loop, making faster capability gains more plausible.
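The “third‑party benchmarks” signal above is worth unpacking: most coding benchmarks reduce to a pass/fail harness that executes candidate code against held‑out tests and reports the pass rate. A toy sketch of that idea, with hard‑coded stand‑ins for model‑generated code (all names and examples here are invented for illustration):

```python
# Toy pass/fail harness in the spirit of coding benchmarks: execute each
# candidate solution against held-out assertions and report the fraction
# that pass. Candidates are hard-coded stand-ins for model output.

def run_candidate(src: str, test_src: str) -> bool:
    """Exec a candidate solution, then its tests; True if nothing fails."""
    namespace: dict = {}
    try:
        exec(src, namespace)
        exec(test_src, namespace)
        return True
    except Exception:
        return False


def pass_rate(candidates: list[str], test_src: str) -> float:
    """Fraction of candidates that pass the held-out tests."""
    passed = sum(run_candidate(c, test_src) for c in candidates)
    return passed / len(candidates)


TESTS = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
CANDIDATES = [
    "def add(a, b):\n    return a + b",  # correct draft
    "def add(a, b):\n    return a - b",  # buggy draft
]

if __name__ == "__main__":
    print(pass_rate(CANDIDATES, TESTS))  # 0.5
```

Real benchmarks add sandboxing, multi‑file patches, and timeouts, but the core question is the same one to watch over the next few months: what fraction of machine‑drafted changes survive the tests?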
If, on the other hand, the next six months bring only modest, incremental improvements in coding tools, with most companies still treating them as glorified IDE plugins, that is information too. It would argue for a longer, more gradual curve: still disruptive, but not 6-12 months to “models do most of what software engineers do.”
In either case, the key is to separate observed behaviour from viral belief.
Key Takeaways
- There is currently no verified internal leak that Anthropic expects AGI in 6-12 months; the viral Reddit claim is uncorroborated social media.
- The Anthropic AGI timeline that is on the record comes from Dario Amodei’s public remarks: models doing most software engineering in 6-12 months, and highly capable systems around 2026-2027.
- Even without “AGI tomorrow,” rapid coding automation could reshape software work: more editing, integration, and system design; less hand‑written boilerplate and routine implementation.
- The most informative signals in the next 3-6 months will be tool capabilities, enterprise workflows, and hiring patterns, not more aggressive rumors.
- Planning on this basis means preparing for AI‑heavy engineering organisations, not for an imminent, fully general machine mind appearing on a fixed date.
Further Reading
- The Day After AGI, WEF Radio Davos (Dario Amodei & Demis Hassabis), the primary Davos session where Amodei makes the 6-12 month software engineering claim.
- Session summary: ‘The Day After AGI’, a structured write‑up contrasting Amodei’s and Hassabis’s timelines and caveats.
- Engineers Don’t Write Code Anymore, Anthropic CEO Says, tech‑outlet coverage of Amodei’s remarks and their implications for software jobs.
- Anthropic CEO Predicts AI Models Will Replace Software Engineers In 6-12 Months, a business‑focused summary, including the “self‑improvement loop” framing.
- Reddit: ‘Sources: Anthropic Internally Expects AGI Within 6-12 Months’, the unverified rumor thread that conflates internal expectation with public prediction.
In the end, the practical question is not whether Anthropic has circled a date on its internal AGI calendar. It is whether software in 2027 is mostly written by humans or mostly edited by them, and that answer will arrive, quietly, feature by feature, long before anyone agrees what to call it.
