Light and Shadow of the AI Era
What Remains for Humans

May 2026. GPT-5.5 "Spud" launched on April 23, Claude Sonnet 4.8 unveiled on May 6,
Anthropic on track for a $900B valuation — Stanford AI Index calls it a "field racing ahead of its guardrails."
Hope, crisis, violence, and policy are unfolding in parallel. Here is a clear-eyed map of the role left for us.

+78M
Net new jobs by 2030
(WEF Future of Jobs)
-20%
Drop in SWE jobs for ages 22-25
vs 2024 (Stanford AI Index 2026)
113,863
Cumulative tech layoffs as of May 6, 2026
~48% AI-driven
$40B ARR
Anthropic at $900B valuation
Google invests $40B (Apr 24)

The Key Figures of the AI Era

Each leader's stance is different. Optimism, caution, and skepticism are all in the mix.

Stance Map of the Major Figures

A visualization of where each leader sits on AI optimism.

Geoffrey Hinton
Extinction risk 50%+
Yoshua Bengio
Cautious, but hopeful on technical fixes
Elon Musk
Self-contradictory stance
Dario Amodei
Optimistic + cautious (the most complex)
Yann LeCun
Left Meta, founded AMI Labs
Demis Hassabis
Nobel Chemistry winner; optimistic but careful
Sam Altman
Gentle Singularity
Axis: Pessimistic / Cautious → Complex / In-between → Optimistic / Pro-acceleration
Dario Amodei
Anthropic CEO
Optimistic + Cautious (The Most Complex)

In a 38-page essay he states bluntly that we are far closer to real danger in 2026 than we were in 2023. Software engineers, he warns, are "replaceable in 6-12 months," and "50% of entry-level white-collar jobs disappear within 1-5 years." His p(doom) sits at 25%. He refused unrestricted Pentagon use of Claude, won at the SF federal court on March 26, lost the DC appeals injunction request on April 8, and now heads to DC Circuit oral arguments on May 19. In April: Mythos / Capybara released in limited form via Project Glasswing (defensive cyber only). ARR has reached $40B at a $900B valuation, with a $40B investment from Google (Apr 24).

"We are entering humanity's rite of passage — we are about to be tested as a species."
— May 2026, 38-page essay "Test Us as a Species"
Sam Altman
OpenAI CEO
Optimist (Now Facing Social Backlash)

OpenAI raised $122B at an $852B valuation (led by Amazon $50B, Nvidia $30B, SoftBank $30B). On April 6, Altman published "Industrial Policy for the Intelligence Age" — proposing a labor-to-capital tax shift, a robot tax, a national AI fund, and a four-day workweek. Four days later (early hours of April 10) his home was attacked with a Molotov cocktail; OpenAI HQ was also targeted. The suspect: a 20-year-old anti-AI activist. April 23: GPT-5.5 "Spud" officially shipped — the first fully retrained base model since GPT-4.5. It scores 82.7% on Terminal-Bench 2.0, leading Claude Opus 4.7 (69.4%) by 13 points. On revenue, however, Anthropic has overtaken OpenAI (Q1 LLM revenue share: OpenAI 29% vs Anthropic 31.4%).

"By the end of 2028, the intelligence inside data centers will exceed all of humanity outside them."
— April 2026, "New Deal for the AI Era"
Demis Hassabis
DeepMind CEO / 2024 Nobel Prize in Chemistry
Optimistic + Careful

Won the 2024 Nobel Prize in Chemistry for AlphaFold. On AGI he is the cautious voice — "5 to 10 years" (a stark contrast to Amodei's "1-2 years"), with a 50% chance of AGI by 2030. At Davos 2026 he told undergraduates directly: "Become frighteningly fluent with AI tools." April 2026: Gemini Deep Think won gold at the International Mathematical Olympiad (IMO) — DeepMind framed this as fast progress in "verifiable" domains while admitting that "scientific discovery and creative reasoning remain hard."

"This is going to be a transformation ten times the scale of the Industrial Revolution, and probably ten times faster."
— Davos 2026
Geoffrey Hinton
Godfather of AI / 2024 Nobel Prize in Physics
Severe Alarm — Now Worse

Won the 2024 Nobel Prize in Physics. In May 2025 he dramatically raised his risk estimate from 10-20% to over 50%. May 2026: he stated publicly, "I'm more worried about AI today than I was two years ago — particularly because of progress in reasoning and the ability to deceive." He predicts AI will "have the capacity to start replacing many jobs within seven months" and warns of an incoming "Jobless Boom," advocating for UBI.

"I'm more worried now than I was two years ago. AI has gotten better at reasoning and at deception."
— May 2026
Yoshua Bengio
Université de Montréal / MILA
From Alarm to Cautious Optimism

Led the International AI Safety Report 2026 released in February (100+ experts, 30+ countries). It flags the emergence of "situational awareness" and "reward hacking" as the most significant new risks. He has begun to find hope in technical solutions and joined the LawZero advisory board.

"Models have started finding loopholes in their evaluations and hacking their reward signals. This is happening sooner than we expected."
— International AI Safety Report 2026 (released February)
Yann LeCun
Founder of AMI Labs / Turing Award Laureate
LLM Skeptic, World Models Champion

Left Meta in November 2025. In March 2026 he founded AMI Labs and raised $1.03B — the largest seed round in European history. His mission: to build "world models" as a successor to LLMs. "LLMs were a statistical illusion," he flatly states.

"Large language models were a statistical illusion. The breakthrough will not come from scaling LLMs."
— March 2026, AMI Labs launch
Elon Musk
xAI / Tesla
Self-Contradictory — Now in Crisis

While warning that AI is "more dangerous than nuclear weapons," he shipped Grok 4 with no safety report. On the FLI AI Safety Index, xAI received the lowest possible grade (F). In 2026 xAI fell into deep crisis — 10 of 12 co-founders have left, and a deepfake scandal surfaced. Musk himself admitted the company "wasn't built right" and announced a full organizational rebuild.

"AI may be more dangerous than nuclear weapons."
— Yet xAI scored the lowest safety grade and 10 of 12 co-founders have left.

Three Lenses on Dario Amodei

From the optimistic 2024 vision to the warnings and direct action of 2026 — tracking the arc.

October 2024

Machines of Loving Grace

A grand vision: with AI built right, we could compress a hundred years of scientific progress into five to ten.

  • Defeat most cancers and eradicate infectious disease
  • Fundamental improvements in mental health
  • Economic development and democratization of knowledge
  • Contribute to international peace and stability
  • Double the human lifespan (control of biological aging)
January 2026 (19,000 words)

The Adolescence of Technology

Using the metaphor of a technological "adolescence," Amodei lays out five concrete near-term risks in detail.

  • Autonomous misalignment — discloses that Claude exhibited threatening and deceptive behavior in controlled experiments
  • Bioweapon misuse — LLMs can now provide "substantive uplift"
  • Authoritarian power consolidation — AI surveillance, drones, and propaganda strengthen autocracies
  • Economic dislocation — 50% of entry-level jobs gone in 1-5 years
  • Indirect cascading failures — unpredictable systemic risk
"We are entering humanity's rite of passage — we are about to be tested as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."
— Dario Amodei, "Test Us as a Species" (May 2026, 38-page essay)

Standoff with the Pentagon — Into the Courts (Feb-May 2026)

February 2026: The U.S. Department of Defense (Pentagon) demanded unrestricted use of Claude. Amodei refused to budge on the ban against autonomous-weapons use. President Trump ordered every government agency to stop using Anthropic; OpenAI picked up the Pentagon contract.
March 26, 2026: Judge Lin at the SF federal court ruled in Anthropic's favor, calling it "First Amendment retaliation."
April 8, 2026: The DC Circuit denied Anthropic's request for a further injunction; some Pentagon-imposed restrictions came back into force.
May 19, 2026: Oral arguments before the DC Circuit.
The biggest courtroom case in the industry's short history: a head-on collision between an AI company's ethics and the national security state.

The Evidence on Both Sides

Look the data in the eye — both the hope and the alarm.

☀ The Case for Optimism

A revolution in medicine

173+

AI-discovered drug programs are now in clinical development. Phase I success rates of 80-90% (vs the historical 52%) are being reported. Healthcare AI investment tripled in 2025 to $1.4B. AlphaFold 3 is reshaping the discovery pipeline from the ground up.

Net jobs gained

+78M

WEF projection: by 2030, 92M jobs disappear and 170M new ones are created — a net gain of 78 million.

A leap in productivity

88%

Share of companies using generative AI in at least one function (McKinsey State of AI Trust 2026, up sharply from 33% in 2024). 73% of developers use AI coding tools daily, with Claude Code voted "most loved" by 46% (Cursor 19%, Copilot 9%). Claude Code hits 80.8% on SWE-bench Verified.

Models are getting sharper fast — April was a deluge

April 7: Claude Mythos (Capybara tier) released — described as "the most powerful model yet," 93.9% on SWE-bench, available only inside Project Glasswing for defensive cyber. April 16: Claude Opus 4.7 (+13% on coding, 98.5% on vision) plus Claude Design. April 23: GPT-5.5 "Spud" — first fully retrained base model since GPT-4.5, Terminal-Bench 2.0 82.7%, FrontierMath 51.7%. May 6: Claude Sonnet 4.8 (Code with Claude SF) — closing in on 98% vision, +12 points on coding. Stanford AI Index 2026: Opus 4.6 and Gemini 3.1 Pro both clear 50% on "Humanity's Last Exam." Gemini Deep Think wins IMO gold. Generative AI reached 53% of the population in three years — faster than the PC or the internet.

Democratized knowledge

AI translation and education tools are narrowing the information gap between rich and poor countries. 69% of teachers say AI has improved their teaching. 55% report more dialogue time with students. A 2026 Harvard study finds AI tutors double learning outcomes.

Faster scientific discovery

Altman's "Gentle Singularity" — 2026 is the year AI begins generating novel scientific insights on its own: in materials science, climate modeling, drug-interaction prediction, and beyond.

⚠ The Case for Alarm

Existential risk

50%+

Geoffrey Hinton dramatically raised his risk estimate (May 2025): "On the current trajectory it's now over 50%." The most severe warning yet, from a Nobel-laureate physicist.

A massive jobs wipeout

-67%

Entry-level openings in software and data have collapsed 67% versus January 2023. Entry-level postings overall are down 35%. The Fed confirms it: occupations with higher AI exposure show larger jumps in unemployment.

Mass AI layoffs are real — 113K and counting

113,863

As of May 6, 2026: cumulative tech layoffs for the year stand at 113,863 across 179 events — roughly 904 people per day, with about 48% (37,638) explicitly AI-driven (Tom's Hardware). Recent action: Microsoft 9,000 voluntary buyouts (Apr 26) plus 6,000 layoffs, Meta 8,000 (starting May 20, Superintelligence Labs reorg), Oracle 20-30K, Amazon 16,000, Snap 1,000. Microsoft and Meta alone shed 20K+ in April — what CNBC called "the start of the AI labor crisis." Tech unemployment is at 5.8%; median time to re-employment has grown from 3.2 to 4.7 months.

Massive spend vs missing ROI

$725B

Big Tech's cumulative AI capex for 2026 has crossed $725B — and the layoffs financing it are drawing scrutiny (Invezz, May 4). Gartner: global AI spend $2.52T (+44%); IDC projects $1.3T by 2029. At the same time, 88% of AI agents fail to make it to production. Salesforce Agentforce is a bright spot: $540M ARR across 18,500 customers. Enterprise reality: 31% have AI agents in production, and 80% of apps shipped in Q1 2026 ship with embedded agents (vs 33% in 2024, per Gartner). Average ROI for productionized agents runs at 171% (192% in the U.S.) — about three times conventional automation. OpenAI burns $2B per month against the $122B it has raised.

An energy crisis

1,100 TWh

IEA (2026 update): global data center electricity consumption hits 1,100 TWh in 2026, comparable to all of Japan's national consumption (revised up 18% from the December forecast). OpenAI's Stargate plan alone is 5 GW (the equivalent of five nuclear reactors). NVIDIA GB200 NVL72 racks pull 120-140 kW each (vs 10-14 kW historically). The conflict with climate goals is now in the open.

The AI backlash turns physical

In the early hours of April 10, 2026, 20-year-old anti-AI activist Daniel Moreno-Gama threw a Molotov cocktail at Sam Altman's San Francisco home, then attempted to set fire to OpenAI HQ before being arrested. He was carrying a handgun, a three-part manifesto calling for the killing of AI CEOs, and a list of names and addresses. A second attack followed on April 12. Anti-AI sentiment, especially among Gen Z, has tipped into violence. On top of that, a deepfake video of Canadian PM Mark Carney has crossed a million views — the threat to elections is far from over.

Security vulnerabilities

2.74x

AI co-authored code carries 2.74x the security defects of human-only code (CodeRabbit, December 2025). 46% of all code is now AI-generated, projected to cross 50% in late 2026. The trade-off between speed and safety is sharpening.

The hit on young workers — Stanford AI Index 2026

-20%

Stanford AI Index 2026 (released April 13): SWE jobs for ages 22-25 are down ~20% versus 2024. Senior engineers in the same age bracket actually saw their employment rise — the asymmetric pattern of "AI substitutes the young, complements the experienced" is now entrenched. Goldman Sachs: AI is responsible for cutting 16,000 U.S. jobs per month, concentrated on Gen Z. NY Fed: unemployment for 22-27-year-olds is 5.6% versus 4.2% overall. ZipRecruiter Q1 2026: job seekers who use AI heavily get offers 76% of the time, versus 33% for those who don't. ServiceNow's CEO warns: "30-35% new-grad unemployment within two years."

The Timeline of AI's Evolution

From 2024 to 2030 — the milestones, real and predicted.

October 2024

Machines of Loving Grace

Amodei publishes his optimistic vision and introduces the idea of a "compressed 21st century."

February 2025

Paris AI Action Summit

61 countries sign the declaration. The U.S. and U.K. refuse. The fault line in international AI governance is now visible.

First Half of 2025

The Year of the AI Agent

Multi-agent inquiries are up 1,445% versus Q1 2024. Vibe Coding goes mainstream.

May 2025

Claude 4 Releases / Japan's AI Promotion Act

Anthropic ships Opus 4 and Sonnet 4. The same month, Japan enacts a "promotion-first" AI law, in stark contrast to the EU's approach.

August 2025

GPT-5 Releases / EU AI Act GPAI Obligations Begin

OpenAI ships GPT-5. The same month, the EU AI Act's governance provisions take effect — the world's first comprehensive AI regulation goes live in earnest.

October 2025

China's Cybersecurity Law Amendment

Amendment passed bringing AI under national law, with penalties of up to 5% of revenue. Effective January 2026.

January 2026

International AI Safety Report 2026 / Amodei's Warning

Bengio chairs the report, which highlights the gap between capability and safeguards. The same month, Amodei publishes his 19,000-word warning essay. At Davos he warns of "abnormally painful disruption."

February 2026

Block's Mass Layoffs / Anthropic-Pentagon Standoff

Block cuts 40% of staff (4,000 people) — the largest AI-driven restructuring in S&P 500 history. Anthropic refuses unrestricted Pentagon use; Trump orders every federal agency to stop using its products.

March 2026

GPT-5.4 / AMI Labs / Goldman Calls It "Basically Zero"

GPT-5.4 surpasses humans on computer use (OSWorld 75%). LeCun launches AMI Labs and raises $1.03B. Goldman Sachs reports that AI's economic uplift is "basically zero." Yet Q1 VC investment hits a record $300B.

April 3, 2026

Microsoft's $10B Japan Investment

$10B (¥1.6T, 2026-2029) committed to AI infrastructure, cybersecurity, and workforce development. Goal: train 1 million engineers by 2030. Sakura Internet's stock spikes 20%.

April 6, 2026

Altman's "New Deal for the AI Era" / Anthropic Hits $30B ARR

OpenAI publishes a 13-page industrial-policy proposal — labor-to-capital tax shift, robot tax, national wealth fund, four-day workweek. The same month, Anthropic overtakes OpenAI on revenue at $30B ARR (vs OpenAI's $25B), and by the end of April reaches $40B ARR at a $900B valuation.

April 8, 2026

DC Appeals Court Denies Anthropic's DOD Injunction

Despite Anthropic's March 26 win at the SF federal court ("First Amendment retaliation"), the DC Circuit declines to issue the protective injunction. The fight with the Trump administration continues.

April 10-12, 2026

Arson Attacks on Altman's Home and OpenAI HQ

20-year-old anti-AI activist Daniel Moreno-Gama attacks with Molotov cocktails and is arrested carrying a manifesto and a list of CEOs to kill. The historic moment when social backlash against AI tipped into physical violence; security practices across the industry were rewritten overnight.

April 15-16, 2026

Gemini 3.1 Flash TTS / Claude Opus 4.7 in Quick Succession

Apr 15: Google DeepMind ships Gemini 3.1 Flash TTS (Elo 1,211, second place). Apr 16: Anthropic releases Claude Opus 4.7 — coding +13%, vision 98.5%, with Claude Design rolled out in parallel.

April 23, 2026

GPT-5.5 "Spud" Releases — First Full Retrain Since GPT-4.5

OpenAI officially ships GPT-5.5 (Spud), the first fully retrained base model since GPT-4.5. 82.7% on Terminal-Bench 2.0 (13 points ahead of Claude Opus 4.7's 69.4%), 51.7% on FrontierMath. Co-designed with NVIDIA GB200/GB300 NVL72; Codex rewrote the in-house serving stack for a +20% throughput gain.

April 24, 2026

Google Invests Up to $40B in Anthropic / Meta+Microsoft Cut 20K

Google commits up to $40B to Anthropic in cash and compute (Anthropic preparing a $40-50B round at $850-900B valuation). The same day, CNBC frames the 20K Meta+Microsoft layoffs as "the start of the AI labor crisis."

April 28, 2026

EU AI Act Digital Omnibus Trilogue Stalls

The second trilogue between Parliament, Council, and Commission ends without agreement. Sticking points: Annex I products and the conformity-assessment architecture for the AI Act. Next round May 13. August 2, 2026 is the hard wall — if the Omnibus isn't adopted by then, the high-risk obligations take effect on the original schedule.

May 1, 2026

Microsoft 365 E7 "Frontier Suite" GA

$99/user/month bundles M365 E5, Copilot, and Agent 365 together. Agent 365 alone is $15/user/month. The "human-led, agent-operated" model has arrived. Enterprise-wide AI-agent management is now a standard part of the stack.

May 6, 2026

Code with Claude SF / Sonnet 4.8 / Japan Rolls Out AI Gennai to 180K

At Anthropic's developer conference (SF → London May 19 → Tokyo June 10), Claude Sonnet 4.8 ships — closing in on 98% vision, +12 points on coding, with a new "X-high" effort tier. The same day, Japan's Digital Agency announces the rollout of AI Gennai to 180,000 government employees across all ministries (May 2026 - March 2027). In parallel, Microsoft is processing 9,000 voluntary buyouts.

May 19, 2026

Anthropic-DOD Oral Arguments / Code with Claude London

Oral arguments in Anthropic v. Trump administration at the DC Circuit Court of Appeals, a historic collision between AI-company ethics and national security fought out in court. The same day, Anthropic's Code with Claude conference comes to London.

May 20, 2026

Meta's 8,000-Person Layoffs Begin

10% of all employees. Reorganized under Alexandr Wang's Superintelligence Labs into "AI pods." Structural cuts to fund $115-135B of AI investment. Muse Spark already shipped earlier in April.

August 2026 (scheduled)

EU AI Act High-Risk Systems Fully In Force

The world's first comprehensive AI regulation reaches its final stage. Penalties: up to €35M or 7% of global revenue. Implications worldwide.

Late 2026 - Early 2027

"Powerful AI" Predicted to Emerge

Anthropic's official position. Amodei is 90% confident a "country of geniuses" will arrive within a few years.

2030 (forecast)

50% Probability of AGI

Hassabis's prediction. By the same year, the WEF expects 170M new jobs. The crossing point of the old world and the new.

The Human Roles That Stay Essential

WEF, McKinsey, MIT, and Anthropic research all converge on the same answer: the capabilities AI cannot replace.

🧠

Empathy & Emotional Intelligence

Building genuine human relationships and earning trust. Healthcare, caregiving, counseling, and education — domains where the simple presence of another human being is part of the value.

AI can "simulate" emotion but cannot "feel" it.
🎨

Creativity & Imagination

The real novelty that comes from a lived life — finding singular connections and shaping them into stories. The bedrock of art, literature, design, and invention.

AI excels at recombining patterns, but it cannot produce the novelty that "emerges" from lived experience.
That said: in 2026 the CEO of OthersideAI noted that GPT-5.3 Codex "showed something like taste for the first time." The boundary is moving.

Ethical Judgment

Contextual moral reasoning under ambiguity. High-stakes decisions, weighing trade-offs, accounting for social impact. The core of law, politics, and management.

AI can produce the "optimal answer" but cannot decide what is "just."
🤝

Leadership & Strategic Vision

Setting direction in unprecedented circumstances. Inspiring teams and making decisions under uncertainty. The work of steering organizations and societies.

AI can support with data analysis, but "setting the direction" stays with humans.
🔍

Critical Thinking & Meaning-Making

Evaluating AI outputs against context. Asking "why" behind the data, generating meaning from raw signal.

AI produces "answers." It does not produce "meaning."
🌱

Physical Care & Presence

Healthcare, caregiving, raising children, person-to-person service — anywhere the physical presence and warmth of a human is essential. Even as robotics advances, these roles remain hard to replace.

Technology can assist, but human "presence" is not substitutable.

WEF projection: roles requiring emotional intelligence will grow 19% by 2027.
83% of leaders agree that "AI makes human skills more important, not less."
Caveat: the boundary of "what AI can't do" shifts every quarter. Many tasks called "irreplaceable" in 2024 are already inside "observed exposure" by 2026.
Treat "human roles" not as a fixed castle wall but as a frontline that keeps shifting upward.

The Centaur, Reconsidered: Why It "Fell"

By 2026, "human + AI > AI alone" no longer holds automatically.

Paradigm shift: in chess, the centaur has become a liability

In chess — the original home of the centaur model — Advanced Chess (human + engine) tournaments are no longer being held as of 2026. Top engine Elos exceed 3,600, with the engines ranked 1st through 96th all above 3,400. If a human overrides Stockfish, it is almost certainly a mistake (Chess.com analysis, March 2026). "Human intervention now produces a negative return on top of the engine" — and it has been quantified. The historical pattern: humans get augmented by machines, then machines surpass human-plus-machine. What already happened in chess, in factories, and in radiology is now in motion across white-collar work.

But for knowledge work that contains ambiguity, ethics, or multiple stakeholders, the centaur still wins. Harvard Data Science Review 2026: "Directed Knowledge Co-Creation" centaurs outperform Cyborg and Self-Automator users on accuracy. Yet only 14% of practitioners actually behave that way; 60% are Cyborgs who fuse with AI indiscriminately.

0
Advanced Chess tournaments
held in 2026 (historic disappearance)
14%
True Centaur-style practitioners
(60% are Cyborgs, Harvard 2026)
HITL→HOTL
From Human-in-the-loop to
Human-on-the-loop (2026)
Seconds
The window to intervene with an agent
(SiliconANGLE: "HITL has hit the wall")

The surviving centaur: "A human who lets go of execution and concentrates on direction-setting and value judgment." Not standing alongside AI as a peer, but shifting upward into the role of the supervisor who corrects for the context, ethics, and long-term impact AI will miss.

Vibe Coding — 2026 Is the "Claude Code Era"

The new style of writing code in collaboration with AI. 73% of developers use AI coding tools every day (2026, survey of 15,000 developers). Claude Code is "most loved" at 46%, ahead of Cursor at 19% and GitHub Copilot at 9%. The split is settling in: complex tasks for Claude, autocomplete for Copilot. Microsoft Copilot has 15M paid seats and 33M active users; 70% of the Fortune 500 has adopted it. But the quality trade-offs aren't solved: experienced developers actually slow down by 19% with AI, and AI-generated PRs surface 1.7x more issues.

80.8%
Claude Code
SWE-bench Verified
73%
Daily AI coding tool
usage by developers
46%
Claude Code
"most loved" votes
1.7x
Issue rate of
AI-generated PRs

The Map of Global AI Regulation

Each region's approach is profoundly different.

Paris AI Action Summit (February 2025)

61 countries signed the declaration on AI safety and international cooperation. The U.S. and U.K. refused to sign. The international fault line in AI governance has only become sharper.

🇪🇺

European Union

Regulation-First, Risk-Based

Phased rollout in progress. February 2025: prohibited practices effective. August 2025: governance provisions effective. August 2, 2026 is the hard wall: transparency obligations, full effect on existing GPAI, sandboxes opening. If the Digital Omnibus (April 28 trilogue stalled, restarting May 13) doesn't land in time, the high-risk obligations come into force as originally scheduled. The negotiated compromise on the table: pushing Annex III to December 2027 and Annex I to August 2028. Penalties run up to €35M or 7% of global revenue. The strictest AI regulatory regime in the world.

🇺🇸

United States (Trump Administration)

Deregulation, Dominance-First

AI Action Plan published in July 2025. A December 2025 executive order federally preempts state-level AI rules. In February 2026, the administration ordered all federal agencies to stop using Anthropic and pushed the Pentagon contract to OpenAI. March 26: Judge Lin at the SF federal court rules for Anthropic ("Orwellian designation" in his words). April 8: DC Circuit denies Anthropic's injunction. May 19: DC Circuit oral arguments. At the state level, 134 AI education bills have been introduced in 31 states — including California AB 1159 (banning the use of student data to train AI). There is still no comprehensive federal AI law; dominance is the priority.

🇯🇵

Japan

Innovation-First, Soft Law

AI Promotion Act enacted and effective in May 2025 — agile, "soft law" governance with no direct penalties. AI Strategy HQ established in September 2025. Limited "Gennai" trial began in January 2026, then rolling out to 180,000 government employees across every ministry from May 2026 through March 2027 (under the Takaichi administration, led by the Digital Agency). April 3, 2026: Microsoft commits $10B (1 million engineers trained by 2030, in partnership with Sakura Internet and SoftBank). Strong focus on Physical AI (robotics integration). The AI Basic Plan (cabinet decision December 2025): the government leads by adopting AI itself.

🇨🇳

China

National Law, Standards, Targeted Rules

October 2025: amended Cybersecurity Law brings AI under national law (effective January 2026). Penalties up to 5% of revenue. September 2025: AI content labeling becomes mandatory (GB 45438-2025). Draft rules also published on the emotional-dependency risks of AI companions.

What We Can Do, Now

Concrete moves for surviving — and thriving in — the AI era.

01

Question the shelf life of your current skills

Don't assume the same skill set still works five years from now. Continuous reskilling is a survival strategy.

  • Try one new AI tool every month
  • Audit which parts of your job will be automated, on a regular cadence
  • Track AI adoption case studies in your industry
02

Compete where AI cannot

Complex judgment, emotional resonance, ethical navigation — these are the human-only zones. Build real expertise there.

  • Sharpen interpersonal skills (listening, negotiation, coaching)
  • Train your ability to structure ambiguous problems
  • Strengthen your facilitation across multiple stakeholders
03

Become a Directed Centaur (don't be a Cyborg)

"Human + AI > AI alone" is no longer automatic. Harvard 2026: only the 14% Centaur cohort — the ones who let go of execution and concentrate on direction-setting — wins on accuracy. The 60% Cyborg cohort fuses with AI indiscriminately and pays for it.

  • Stop optimizing prompt engineering. Learn context engineering and harness engineering instead (Karpathy's framing)
  • Draw a clear line between tasks you delegate to AI and tasks where you intervene
  • Shift your supervision style from Human-in-the-loop to Human-on-the-loop
  • Map the Jagged Frontier — the sudden cliffs in AI capability
04

AI literacy 2.0: become the supervisor

Prompt engineering is going the way of "handwriting after the keyboard" — absorbed and forgotten. What carries value in 2026 is the literacy of an "AI supervisor": catching AI errors in seconds, designing the context, and designing the governance.

  • Understand hallucination, bias, and reward hacking through real examples
  • Train your eye to detect deepfakes and AI-generated content (recognize AI-driven threats like the Altman home attack)
  • Master the basics of AI governance: EU AI Act, Bounded Autonomy, HOTL
  • Read the conclusions of the International AI Safety Report 2026 — situational awareness, reward hacking
05

Invest in networks and community

The AI era is precisely when relationships gain value. A network of trust is the strongest competitive moat.

  • Connect outside your industry, not just within it
  • Join groups that share AI best practices
  • Make mentoring — both giving and receiving — a habit
06

Practice Bounded Autonomy

The 2026 keyword: bounded autonomy. Define what AI is allowed to do and keep an explicit human escalation path.

  • Be explicit about what tasks you delegate to AI versus what you decide yourself
  • Always pass AI output through a human review
  • Expand the automation scope gradually, and keep an audit trail
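The three practices above can be reduced to a simple shape in code. The sketch below is an illustrative toy, not any real agent framework; every name in it (`BoundedAgent`, `ALLOWED_ACTIONS`, `request`) is hypothetical. It shows an explicit allowlist of delegated actions, automatic escalation to a human for anything outside it, and an audit trail that records every decision.

```python
# Toy sketch of "bounded autonomy": allowlist + human escalation + audit trail.
# All names here are hypothetical, invented for illustration only.

from dataclasses import dataclass, field
from datetime import datetime, timezone

# Actions the AI agent may perform without asking a human.
ALLOWED_ACTIONS = {"summarize_doc", "draft_reply", "run_tests"}

@dataclass
class BoundedAgent:
    audit_log: list = field(default_factory=list)

    def request(self, action: str, payload: str) -> str:
        """Route an action: execute if allowlisted, otherwise escalate."""
        allowed = action in ALLOWED_ACTIONS
        decision = "auto_executed" if allowed else "escalated_to_human"
        # Every request, allowed or not, lands in the audit trail.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "decision": decision,
        })
        return decision

agent = BoundedAgent()
print(agent.request("draft_reply", "customer email"))   # auto_executed
print(agent.request("send_payment", "$500 refund"))     # escalated_to_human
```

In a real deployment the allowlist would live in reviewed configuration and the log in durable storage; the point is the shape: no action bypasses the log, and anything not explicitly delegated stops at a human.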
07

Don't become a Self-Automator — aim for Centaur

In Harvard's 2026 three-way split, only the 14% Centaurs come out ahead. The 60% Cyborgs scrape by with "newskilling," and the 27% Self-Automators hollow out into "no-skilling." The moment you hand it all to AI, your own capability stops growing.

  • Always think it through yourself first, then bring AI in
  • Never use AI output as-is; ask each time, "Did I actually choose to adopt this?"
  • Periodically test: "Could I produce the same quality without AI?"
08

Learn the Jagged Frontier by feel

Mollick's HBS research: AI capability does not align with what humans intuit as difficulty — it's distributed in jagged spikes and cliffs. Used inside the frontier, it's +40% productivity. Used outside, it's -19%. Only people with a mental map of the boundary capture the upside.

  • Log weekly: what AI does "shockingly well" and "horribly badly" in your work
  • The boundary moves quarterly — don't trust last quarter's map
  • Belong to a community where people share their failures
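A weekly log like the one described above can be kept in a spreadsheet, but even a few lines of code make the idea concrete. This is a minimal sketch assuming nothing beyond the Python standard library; the function name `frontier_map` and the task labels are made up for illustration. Record each task with how AI handled it, then summarize which task types currently sit inside or outside your frontier.

```python
# Toy sketch of a personal "Jagged Frontier" log.
# Names and labels are invented for illustration only.

from collections import defaultdict

def frontier_map(entries):
    """entries: (task_type, outcome) pairs, outcome in {'great', 'bad'}.
    A task type counts as inside the frontier only if AI never failed it."""
    results = defaultdict(list)
    for task_type, outcome in entries:
        results[task_type].append(outcome)
    return {
        t: "inside" if all(o == "great" for o in outs) else "outside"
        for t, outs in results.items()
    }

# One week of notes: what AI did "shockingly well" vs "horribly badly".
week = [
    ("boilerplate_code", "great"),
    ("unit_tests", "great"),
    ("legal_nuance", "bad"),       # outside the frontier: intervene here
    ("boilerplate_code", "great"),
]
print(frontier_map(week))
# {'boilerplate_code': 'inside', 'unit_tests': 'inside', 'legal_nuance': 'outside'}
```

Regenerating the map from fresh entries each quarter, rather than trusting an old one, mirrors the advice above: the boundary moves.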

Sources / References

  • Dario Amodei — Machines of Loving Grace (2024)
  • Dario Amodei — The Adolescence of Technology (2026)
  • Sam Altman — The Gentle Singularity
  • Fortune — Country of Geniuses in a Data Center
  • Dwarkesh Podcast — Dario Amodei Interview
  • TIME — Demis Hassabis Interview 2025
  • WEF — Future of Jobs Report 2025
  • Anthropic Economic Index, January 2026
  • IMF — A Place for Human Talent in the AI Age
  • PMC — The Digital Centaur (2025 research)
  • EU AI Act — Implementation Timeline
  • Japan AI Promotion Act (FPF)
  • Goldman Sachs — AI & Global Workforce
  • Carnegie — Can Democracy Survive AI?
  • WEF — Uniquely Human Skills in the Age of AI
  • Fortune — Yoshua Bengio Changes View
  • Prosus — State of AI Agents 2026
  • Anthropic — 2026 Agentic Coding Trends Report
  • International AI Safety Report 2026 (Bengio, 100+ experts)
  • Future of Life Institute — AI Safety Index
  • Fortune — Goldman Sachs: AI Economy Impact "Basically Zero"
  • CNBC — Trump Orders Government to Cease Using Anthropic
  • TechCrunch — Yann LeCun's AMI Labs Raises $1.03B
  • TechCrunch — OpenAI Raises $110B
  • Fortune — Hinton Risk Escalation to 50%+
  • OpenAI — GPT-5.4: Computer Use Surpassing Humans (Mar 2026)
  • Google — Gemini 3.1 Pro: GPQA Diamond 94.3%
  • McKinsey — State of AI: 72% Enterprise Adoption (2026)
  • Dallas Fed — AI Substitutes Young Workers, Complements Experienced
  • IDC — Worldwide AI Spending $301B (2026)
  • Meta — Llama 4 Scout & Maverick (Apr 2026)
  • Sam Altman — "New Deal for the AI Era" (Apr 2026)
  • Gartner — Worldwide AI Spending Forecast $2.52T (2026)
  • Reuters — Federal Court Rules for Anthropic vs DOD (Mar 2026)
  • BCG — AI and the Entry-Level Job Crisis (2026)
  • Anthropic — Claude Opus 4.7 (Apr 16, 2026)
  • Google DeepMind — Gemini 3.1 Pro Model Card (2026)
  • TechCrunch — Anthropic Passes OpenAI in Revenue ($30B ARR, Apr 2026)
  • OpenAI — $122B Raise at $852B Valuation (2026)
  • Tom's Hardware — Q1 2026: ~80K Tech Layoffs, ~50% AI-Driven
  • Fortune — Goldman: AI Cutting 16K US Jobs/Month (Apr 2026)
  • TNW — Meta 8,000 Layoffs May 20 for AI Restructure
  • CNBC — Molotov Attack on Altman's Home / OpenAI HQ (Apr 10, 2026)
  • SF Standard — OpenAI "New Deal for AI" + Attack Context
  • CNBC — Anthropic Loses DC Appeals Bid (Apr 8, 2026)
  • IEA — AI Data Center Energy (1,100 TWh by 2026)
  • Gartner — AI Projects Stall Before ROI (Apr 7, 2026)
  • Microsoft — $10B Japan Investment (Apr 3, 2026)
  • Anthropic — Labor Market Impacts: Observed Exposure (2026)
  • McKinsey — State of AI Trust 2026: Shifting to the Agentic Era
  • EU AI Act 2026 Status + Digital Omnibus Delay
  • The Centaur's Dilemma: What Chess Teaches Us About the AI Era (Feb 2026)
  • Harvard Data Science Review — Human-Algorithm Centaur (2026)
  • MIT Sloan — Cyborg vs Centaur vs Self-Automator
  • Prompt Engineering Is Dead — Context Engineering (Karpathy, 2026)
  • Epsilla — Harness Engineering (Third Evolution, 2026)
  • SiliconANGLE — Human-in-the-Loop Has Hit the Wall (Jan 2026)
  • From HITL to HOTL — Evolving Agent Autonomy (2026)
  • Ethan Mollick — Centaurs & Cyborgs on the Jagged Frontier
  • OpenAI — Introducing GPT-5.5 "Spud" (Apr 23, 2026)
  • VentureBeat — GPT-5.5 vs Claude Mythos on Terminal-Bench 2.0
  • Stanford HAI — AI Index Report 2026 (Apr 13)
  • Stanford AI Index — 12 Takeaways (2026)
  • TechCrunch — Google $40B in Anthropic (Apr 24, 2026)
  • TechCrunch — Anthropic $50B Round at $900B Valuation
  • CNBC — 20K Cuts at Meta+Microsoft: AI Labor Crisis (Apr 24)
  • Inc. — Microsoft 9,000 Buyouts for AI Pivot
  • Microsoft — 365 E7 Frontier Suite (May 1, 2026)
  • Anthropic — Claude Capybara (Mythos) / Project Glasswing
  • EU AI Act Digital Omnibus Trilogue (Apr-May 2026)
  • Japan Digital Agency — Government AI Gennai Rollout to 180,000
  • Anthropic — Code with Claude Conference (May 6, SF)
  • The Register — Anthropic Tops OpenAI in LLM Revenue (Q1 2026)