May 2026. GPT-5.5 "Spud" launched on April 23, Claude Sonnet 4.8 unveiled on May 6,
Anthropic on track for a $900B valuation — Stanford AI Index calls it a "field racing ahead of its guardrails."
Hope, crisis, violence, and policy are unfolding in parallel. Here is a clear-eyed map of the role left for us.
Each leader's stance is different. Optimism, caution, and skepticism are all in the mix.
A visualization of where each leader sits on AI optimism.
Dario Amodei (Anthropic): In a 38-page essay he states bluntly that we are far closer to real danger in 2026 than we were in 2023. Software engineers, he warns, are "replaceable in 6-12 months," and "50% of entry-level white-collar jobs disappear within 1-5 years." His p(doom) sits at 25%. He refused unrestricted Pentagon use of Claude, won at the SF federal court on March 26, lost the DC appeals injunction request on April 8, and now heads to DC Circuit oral arguments on May 19. In April, Claude Mythos (Capybara tier) was released in limited form via Project Glasswing (defensive cyber only). ARR has reached $40B at a $900B valuation, with a $40B investment from Google (Apr 24).
Sam Altman: OpenAI raised $122B at an $852B valuation (led by Amazon $50B, Nvidia $30B, SoftBank $30B). On April 6, Altman published "Industrial Policy for the Intelligence Age" — proposing a labor-to-capital tax shift, a robot tax, a national AI fund, and a four-day workweek. Four days later (early hours of April 10) his home was attacked with a Molotov cocktail; OpenAI HQ was also targeted. The suspect: a 20-year-old anti-AI activist. April 23: GPT-5.5 "Spud" officially shipped — the first fully retrained base model since GPT-4.5. It scores 82.7% on Terminal-Bench 2.0, leading Claude Opus 4.7 (69.4%) by 13 points. On revenue, however, Anthropic has overtaken OpenAI (Q1 LLM revenue share: OpenAI 29% vs Anthropic 31.4%).
Demis Hassabis (Google DeepMind): Won the 2024 Nobel Prize in Chemistry for AlphaFold. On AGI he is the cautious voice — "5 to 10 years" (a stark contrast to Amodei's "1-2 years"), with a 50% chance of AGI by 2030. At Davos 2026 he told undergraduates directly: "Become frighteningly fluent with AI tools." April 2026: Gemini Deep Think won gold at the International Mathematical Olympiad (IMO) — DeepMind framed this as fast progress in "verifiable" domains while admitting that "scientific discovery and creative reasoning remain hard."
Geoffrey Hinton: Won the 2024 Nobel Prize in Physics. In May 2025 he dramatically raised his risk estimate from 10-20% to over 50%. May 2026: he stated publicly, "I'm more worried about AI today than I was two years ago — particularly because of progress in reasoning and the ability to deceive." He predicts AI will "have the capacity to start replacing many jobs within seven months" and warns of an incoming "Jobless Boom," advocating for UBI.
Yoshua Bengio: Led the International AI Safety Report 2026 released in February (100+ experts, 30+ countries). It flags the emergence of "situational awareness" and "reward hacking" as the most significant new risks. He has begun to find hope in technical solutions and joined the LawZero advisory board.
Yann LeCun: Left Meta in November 2025. In March 2026 he founded AMI Labs and raised $1.03B — the largest seed round in European history. His mission: to build "world models" as a successor to LLMs. "LLMs were a statistical illusion," he flatly states.
Elon Musk (xAI): While warning that AI is "more dangerous than nuclear weapons," he shipped Grok 4 with no safety report. On the FLI AI Safety Index, xAI received the lowest possible grade (F). In 2026 xAI fell into deep crisis — 10 of 12 co-founders have left, and a deepfake scandal surfaced. Musk himself admitted the company "wasn't built right" and announced a full organizational rebuild.
From the optimistic 2024 vision to the warnings and direct action of 2026 — tracking the arc.
A grand vision: with AI built right, we could compress a hundred years of scientific progress into five to ten.
Using the metaphor of a technological "adolescence," Amodei lays out five concrete near-term risks in detail.
"We are entering humanity's rite of passage — we are about to be tested as a species."
Dario Amodei — "Test Us as a Species" (May 2026, 38-page essay)
"Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."
February 2026: The U.S. Department of Defense (Pentagon) demanded unrestricted use of Claude. Amodei refused to budge on the ban against autonomous-weapons use. President Trump ordered every government agency to stop using Anthropic; OpenAI picked up the Pentagon contract.
March 26, 2026: Judge Lin at the SF federal court ruled in Anthropic's favor, finding the administration's order to be "First Amendment retaliation."
April 8, 2026: The DC Circuit denied Anthropic's request for a further injunction; some Pentagon-imposed restrictions came back into force.
May 19, 2026: DC Circuit oral arguments — a historic moment as the ethical posture of an AI company is squared off against national security in court.
The biggest courtroom case in the industry's short history: a head-on collision between an AI company's ethics and the national security state.
Look the data in the eye — both the hope and the alarm.
AI-discovered drug programs are now in clinical development. Phase I success rates of 80-90% (vs the historical 52%) are being reported. Healthcare AI investment tripled in 2025 to $1.4B. AlphaFold 3 is reshaping the discovery pipeline from the ground up.
WEF projection: by 2030, 92M jobs disappear and 170M new ones are created — a net gain of 78 million.
The share of companies using generative AI in at least one function is up sharply from 33% in 2024 (McKinsey State of AI Trust 2026). 73% of developers use AI coding tools daily, with Claude Code voted "most loved" by 46% (Cursor 19%, Copilot 9%). Claude Code hits 80.8% on SWE-bench Verified.
April 7: Claude Mythos (Capybara tier) released — described as "the most powerful model yet," 93.9% on SWE-bench, available only inside Project Glasswing for defensive cyber. April 16: Claude Opus 4.7 (+13% on coding, 98.5% on vision) plus Claude Design. April 23: GPT-5.5 "Spud" — first fully retrained base model since GPT-4.5, Terminal-Bench 2.0 82.7%, FrontierMath 51.7%. May 6: Claude Sonnet 4.8 (Code with Claude SF) — closing in on 98% vision, +12 points on coding. Stanford AI Index 2026: Opus 4.6 and Gemini 3.1 Pro both clear 50% on "Humanity's Last Exam." Gemini Deep Think wins IMO gold. Generative AI reached 53% of the population in three years — faster than the PC or the internet.
AI translation and education tools are narrowing the information gap between rich and poor countries. 69% of teachers say AI has improved their teaching. 55% report more dialogue time with students. A 2026 Harvard study finds AI tutors double learning outcomes.
Altman's "Gentle Singularity" — 2026 is the year AI begins generating novel scientific insights on its own: in materials science, climate modeling, drug-interaction prediction, and beyond.
Geoffrey Hinton dramatically raised his risk estimate (May 2025): "On the current trajectory it's now over 50%." The most severe warning yet from a winner of the Nobel Prize in Physics.
Entry-level openings in software and data have collapsed 67% versus January 2023. Entry-level postings overall are down 35%. The Fed confirms it: occupations with higher AI exposure show larger jumps in unemployment.
As of May 6, 2026: cumulative tech layoffs for the year stand at 113,863 across 179 events — roughly 904 people per day, with about 48% (37,638) explicitly AI-driven (Tom's Hardware). Recent action: Microsoft 9,000 voluntary buyouts (Apr 26) plus 6,000 layoffs, Meta 8,000 (starting May 20, Superintelligence Labs reorg), Oracle 20-30K, Amazon 16,000, Snap 1,000. Microsoft and Meta alone shed 20K+ in April — what CNBC called "the start of the AI labor crisis." Tech unemployment is at 5.8%; median time to re-employment has grown from 3.2 to 4.7 months.
Big Tech's cumulative AI capex for 2026 has crossed $725B — and the layoffs financing it are drawing scrutiny (Invezz, May 4). Gartner: global AI spend $2.52T (+44%); IDC projects $1.3T by 2029. At the same time, 88% of AI agents fail to make it to production. Salesforce Agentforce is a bright spot: $540M ARR across 18,500 customers. Enterprise reality: 31% have AI agents in production, and 80% of apps shipped in Q1 2026 ship with embedded agents (vs 33% in 2024, per Gartner). Average ROI for productionized agents runs at 171% (192% in the U.S.) — about three times conventional automation. OpenAI burns $2B per month against the $122B it has raised.
IEA (2026 update): global data center electricity consumption hits 1,100 TWh in 2026, comparable to all of Japan's national consumption (revised up 18% from the December forecast). OpenAI's Stargate plan alone is 5 GW (the equivalent of five nuclear reactors). NVIDIA GB200 NVL72 racks pull 120-140 kW each (vs 10-14 kW historically). The conflict with climate goals is now in the open.
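The power figures above can be sanity-checked against each other. A back-of-envelope sketch using only the numbers quoted in the text; the implied rack count is my own derivation, not a sourced figure:

```python
# Back-of-envelope check of the power figures quoted above.
# Inputs come from the text; the rack count is an implied estimate only.
stargate_gw = 5.0                       # OpenAI Stargate plan, total power
rack_kw_low, rack_kw_high = 120, 140    # NVIDIA GB200 NVL72 per-rack draw

# Convert GW to kW (1 GW = 1e6 kW) and divide by per-rack draw.
racks_high = stargate_gw * 1e6 / rack_kw_low   # most racks if each draws 120 kW
racks_low = stargate_gw * 1e6 / rack_kw_high   # fewest racks if each draws 140 kW
print(f"Implied GB200 NVL72 racks: {racks_low:,.0f} - {racks_high:,.0f}")
# Roughly 36,000-42,000 racks of IT load alone, before cooling and facility overhead.
```

The point of the sketch: a single announced project already implies tens of thousands of the densest racks ever built, which is why the conflict with climate goals is now unavoidable.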
In the early hours of April 10, 2026, 20-year-old anti-AI activist Daniel Moreno-Gama threw a Molotov cocktail at Sam Altman's San Francisco home, then attempted to set fire to OpenAI HQ before being arrested. He was carrying a handgun, a three-part manifesto calling for the killing of AI CEOs, and a list of names and addresses. A second attack followed on April 12. Anti-AI sentiment, especially among Gen Z, has tipped into violence. On top of that, a deepfake video of Canadian PM Mark Carney has crossed a million views — the threat to elections is far from over.
AI co-authored code carries 2.74x the security defects of human-only code (CodeRabbit, December 2025). 46% of all code is now AI-generated, projected to cross 50% in late 2026. The trade-off between speed and safety is sharpening.
Stanford AI Index 2026 (released April 13): SWE jobs for ages 22-25 are down ~20% versus 2024. Senior engineers in the same age bracket actually saw their employment rise — the asymmetric pattern of "AI substitutes the young, complements the experienced" is now entrenched. Goldman Sachs: AI is responsible for cutting 16,000 U.S. jobs per month, concentrated on Gen Z. NY Fed: unemployment for 22-27-year-olds is 5.6% versus 4.2% overall. ZipRecruiter Q1 2026: job seekers who use AI heavily get offers 76% of the time, versus 33% for those who don't. ServiceNow's CEO warns: "30-35% new-grad unemployment within two years."
From 2024 to 2030 — the milestones, real and predicted.
Amodei publishes his optimistic vision and introduces the idea of a "compressed 21st century."
61 countries sign the declaration. The U.S. and U.K. refuse. The fault line in international AI governance is now visible.
Multi-agent inquiries are up 1,445% versus Q1 2024. Vibe Coding goes mainstream.
Anthropic ships Opus 4 and Sonnet 4. The same month, Japan enacts a "promotion-first" AI law, in stark contrast to the EU's approach.
OpenAI ships GPT-5. The same month, the EU AI Act's governance provisions take effect — the world's first comprehensive AI regulation goes live in earnest.
China passes an amendment bringing AI under its national Cybersecurity Law, with penalties of up to 5% of revenue. Effective January 2026.
Bengio chairs the report, which highlights the gap between capability and safeguards. The same month, Amodei publishes his 19,000-word warning essay. At Davos he warns of "abnormally painful disruption."
Block cuts 40% of staff (4,000 people) — the largest AI-driven restructuring in S&P 500 history. Anthropic refuses unrestricted Pentagon use; Trump orders every federal agency to stop using its products.
GPT-5.4 surpasses humans on computer use (OSWorld 75%). LeCun launches AMI Labs and raises $1.03B. Goldman Sachs reports that AI's economic uplift is "basically zero." Yet Q1 VC investment hits a record $300B.
Microsoft commits $10B (¥1.6T, 2026-2029) to Japan's AI infrastructure, cybersecurity, and workforce development. Goal: train 1 million engineers by 2030. Sakura Internet's stock spikes 20%.
OpenAI publishes a 13-page industrial-policy proposal — labor-to-capital tax shift, robot tax, national wealth fund, four-day workweek. The same month, Anthropic overtakes OpenAI on revenue at $30B ARR (vs OpenAI's $25B), and by the end of April expands the round to $40B ARR / $900B valuation.
Despite Anthropic's March 26 win at the SF federal court ("First Amendment retaliation"), the DC Circuit declines to issue the protective injunction. The fight with the Trump administration continues.
20-year-old anti-AI activist Daniel Moreno-Gama attacks with Molotov cocktails, found carrying a manifesto and a list of CEOs to kill. The historic moment when social backlash against AI tipped into physical violence. Industry security has been fundamentally rewritten.
Apr 15: Google DeepMind ships Gemini 3.1 Flash TTS (Elo 1,211, second place). Apr 16: Anthropic releases Claude Opus 4.7 — coding +13%, vision 98.5%, with Claude Design rolled out in parallel.
OpenAI officially ships GPT-5.5 (Spud), the first fully retrained base model since GPT-4.5. 82.7% on Terminal-Bench 2.0 (13 points ahead of Claude Opus 4.7's 69.4%), 51.7% on FrontierMath. Co-designed with NVIDIA GB200/GB300 NVL72; Codex rewrote the in-house serving stack for a +20% throughput gain.
Google commits up to $40B to Anthropic in cash and compute (Anthropic preparing a $40-50B round at $850-900B valuation). The same day, CNBC frames the 20K Meta+Microsoft layoffs as "the start of the AI labor crisis."
The second trilogue between Parliament, Council, and Commission ends without agreement. Sticking points: Annex I products and the conformity-assessment architecture for the AI Act. Next round May 13. August 2, 2026 is the hard wall — if the Omnibus isn't adopted by then, the high-risk obligations take effect on the original schedule.
$99/user/month bundles M365 E5, Copilot, and Agent 365 together. Agent 365 alone is $15/user/month. The "human-led, agent-operated" model has arrived. Enterprise-wide AI-agent management is now a standard part of the stack.
At Anthropic's developer conference (SF → London May 19 → Tokyo June 10), Claude Sonnet 4.8 ships — closing in on 98% vision, +12 points on coding, with a new "X-high" effort tier. The same day, Japan's Digital Agency announces the rollout of AI Gennai to 180,000 government employees across all ministries (May 2026 - March 2027). In parallel, Microsoft is processing 9,000 voluntary buyouts.
Oral arguments in Anthropic v. Trump administration at the DC Circuit Court of Appeals. A historic moment for the collision between AI-company ethics and national security, fought out in court.
Meta cuts 10% of all employees. Reorganized under Alexandr Wang's Superintelligence Labs into "AI pods." Structural cuts to fund $115-135B of AI investment. Muse Spark already shipped earlier in April.
The world's first comprehensive AI regulation reaches its final stage. Penalties: up to €35M or 7% of global revenue. Implications worldwide.
Anthropic's official position. Amodei is 90% confident a "country of geniuses" will arrive within a few years.
Hassabis's prediction. By the same year, the WEF expects 170M new jobs. The crossing point of the old world and the new.
WEF, McKinsey, MIT, and Anthropic research all converge on the same answer: the capabilities AI cannot replace.
Building genuine human relationships and earning trust. Healthcare, caregiving, counseling, and education — domains where the simple presence of another human being is part of the value.
The real novelty that comes from a lived life — finding singular connections and shaping them into stories. The bedrock of art, literature, design, and invention.
Contextual moral reasoning under ambiguity. High-stakes decisions, weighing trade-offs, accounting for social impact. The core of law, politics, and management.
Setting direction in unprecedented circumstances. Inspiring teams and making decisions under uncertainty. The work of steering organizations and societies.
Evaluating AI outputs against context. Asking "why" behind the data, generating meaning from raw signal.
Healthcare, caregiving, raising children, person-to-person service — anywhere the physical presence and warmth of a human is essential. Even with advancing robotics, hard to replace.
WEF projection: roles requiring emotional intelligence will grow 19% by 2027.
83% of leaders agree that "AI makes human skills more important, not less."
Caveat: the boundary of "what AI can't do" shifts every quarter. Many tasks called "irreplaceable" in 2024 are already inside "observed exposure" by 2026.
Treat "human roles" not as a fixed castle wall but as a frontline that keeps shifting upward.
By 2026, "human + AI > AI alone" no longer holds automatically.
In chess — the original home of the centaur model — Advanced Chess (human + engine) tournaments are no longer being held as of 2026. Top engine Elos exceed 3,600, with the engines ranked 1st through 96th all above 3,400. If a human overrides Stockfish, it is almost certainly a mistake (Chess.com analysis, March 2026). "Human intervention now produces a negative return on top of the engine" — and it has been quantified. The historical pattern: humans get augmented by machines, then machines surpass human-plus-machine. What already happened in chess, in factories, and in radiology is now in motion across white-collar work.
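The size of that engine-human gap is easy to quantify with the standard Elo expected-score formula; the 2800 human rating below is an illustrative assumption, not a figure from the cited analysis:

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of player A against player B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A top human (~2800) against a 3600-rated engine:
p = elo_expected_score(2800, 3600)
print(f"{p:.4f}")  # ~0.0099: about one point per hundred games
```

An 800-point gap means the human is expected to score about 1% against the engine, which is why overriding its moves is almost always a loss of expected value.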
But for knowledge work that contains ambiguity, ethics, or multiple stakeholders, the centaur still wins. Harvard Data Science Review 2026: "Directed Knowledge Co-Creation" centaurs outperform Cyborg and Self-Automator users on accuracy. Yet only 14% of practitioners actually behave that way; 60% are Cyborgs who fuse with AI indiscriminately.
The surviving centaur: "A human who lets go of execution and concentrates on direction-setting and value judgment." Not standing alongside AI as a peer, but shifting upward into the role of the supervisor who corrects for the context, ethics, and long-term impact AI will miss.
The new style of writing code in collaboration with AI. 73% of developers use AI coding tools every day (2026, survey of 15,000 developers). Claude Code is "most loved" at 46%, ahead of Cursor at 19% and GitHub Copilot at 9%. The split is settling in: complex tasks for Claude, autocomplete for Copilot. Microsoft Copilot has 15M paid seats and 33M active users; 70% of the Fortune 500 has adopted it. But the quality trade-offs aren't solved: experienced developers actually slow down by 19% with AI, and AI-generated PRs surface 1.7x more issues.
Each region's approach is profoundly different.
61 countries signed the declaration on AI safety and international cooperation. The U.S. and U.K. refused to sign. The international fault line in AI governance has only become sharper.
Phased rollout in progress. February 2025: prohibited practices effective. August 2025: governance provisions effective. August 2, 2026 is the hard wall: transparency obligations, full effect on existing GPAI, sandboxes opening. If the Digital Omnibus (April 28 trilogue stalled, restarting May 13) doesn't land in time, the high-risk obligations come into force as originally scheduled. The negotiated compromise on the table: pushing Annex III to December 2027 and Annex I to August 2028. Penalties run up to €35M or 7% of global revenue. The strictest AI regulatory regime in the world.
AI Action Plan published in July 2025. A December 2025 executive order federally preempts state-level AI rules. In February 2026, the administration ordered all federal agencies to stop using Anthropic and pushed the Pentagon contract to OpenAI. March 26: Judge Lin at the SF federal court rules for Anthropic ("Orwellian designation" in his words). April 8: DC Circuit denies Anthropic's injunction. May 19: DC Circuit oral arguments. At the state level, 134 AI education bills have been introduced in 31 states — including California AB 1159 (banning the use of student data to train AI). There is still no comprehensive federal AI law; dominance is the priority.
AI Promotion Act enacted and effective in May 2025 — agile, "soft law" governance with no direct penalties. AI Strategy HQ established in September 2025. Limited "Gennai" trial began in January 2026, then rolling out to 180,000 government employees across every ministry from May 2026 through March 2027 (under the Takaichi administration, led by the Digital Agency). April 3, 2026: Microsoft commits $10B (1 million engineers trained by 2030, in partnership with Sakura Internet and SoftBank). Strong focus on Physical AI (robotics integration). The AI Basic Plan (cabinet decision December 2025): the government leads by adopting AI itself.
October 2025: amended Cybersecurity Law brings AI under national law (effective January 2026). Penalties up to 5% of revenue. September 2025: AI content labeling becomes mandatory (GB 45438-2025). Draft rules also published on the emotional-dependency risks of AI companions.
Concrete moves for surviving — and thriving in — the AI era.
Don't assume the same skill set still works five years from now. Continuous reskilling is a survival strategy.
Complex judgment, emotional resonance, ethical navigation — these are the human-only zones. Build real expertise there.
"Human + AI > AI alone" is no longer automatic. Harvard 2026: only the 14% Centaur cohort — the ones who let go of execution and concentrate on direction-setting — wins on accuracy. The 60% Cyborg cohort fuses with AI indiscriminately and pays for it.
Prompt engineering is going the way of "handwriting after the keyboard" — absorbed and forgotten. What carries value in 2026 is the literacy of an "AI supervisor": catching AI errors in seconds, designing the context, and designing the governance.
The AI era is precisely when relationships gain value. A network of trust is the strongest competitive moat.
The 2026 keyword: bounded autonomy. Define what AI is allowed to do and keep an explicit human escalation path.
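A minimal sketch of what "bounded autonomy" can look like in code. Every name here (ALLOWED_ACTIONS, escalate, run_action) is hypothetical, invented for illustration, not drawn from any real agent framework:

```python
# Sketch of bounded autonomy: the agent acts freely inside an explicit
# allowlist; anything outside it escalates to a human by default.
ALLOWED_ACTIONS = {"summarize_ticket", "draft_reply", "lookup_docs"}

def escalate(action: str, payload: dict) -> dict:
    # In a real system this would enqueue a human review task; here we just record it.
    return {"status": "escalated", "action": action, "payload": payload}

def run_action(action: str, payload: dict) -> dict:
    if action not in ALLOWED_ACTIONS:
        return escalate(action, payload)  # outside the boundary: a human decides
    return {"status": "executed", "action": action, "payload": payload}

print(run_action("draft_reply", {"ticket": 42})["status"])   # executed
print(run_action("issue_refund", {"ticket": 42})["status"])  # escalated
```

The design choice is that escalation is the default path: an action runs only if it is explicitly allowlisted, so new or unexpected actions reach a human by construction rather than by luck.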
In Harvard's 2026 three-way split, only the 14% Centaurs come out ahead. The 60% Cyborgs scrape by with "newskilling," and the 27% Self-Automators hollow out into "no-skilling." The moment you hand it all to AI, your own capability stops growing.
Mollick's HBS research: AI capability does not align with what humans intuit as difficulty — it's distributed in jagged spikes and cliffs. Used inside the frontier, it's +40% productivity. Used outside, it's -19%. Only people with a mental map of the boundary capture the upside.
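A toy calculation makes the stakes concrete. Assuming (my simplification, not Mollick's model) that a fraction p of your tasks falls inside the frontier and you apply AI to everything:

```python
# Expected productivity change when AI is applied blindly to all tasks,
# given the +40% / -19% figures quoted above (toy model, my assumption).
def expected_uplift(p_inside: float) -> float:
    return p_inside * 0.40 + (1 - p_inside) * (-0.19)

for p in (1.0, 0.5, 0.3):
    print(f"p_inside={p:.1f}: {expected_uplift(p):+.1%}")
# Blind use pays off only when most tasks happen to sit inside the frontier.
```

Below roughly a one-third share of in-frontier tasks, indiscriminate use turns net negative, which is the quantitative case for mapping the boundary before delegating.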