Chapter 03
The Debate Index
A structured map of where experts genuinely disagree—from near-term bias and deepfakes all the way to existential risks such as instrumental convergence. No culture-war framing. No straw-manning. Just the cruxes.
Source: NotebookLM — Disalignment.com: A Field Guide (33 sources)
Positions mapped: 3 tiers · 10 issues · 3 core cruxes
Tone: Forensic, not polemical
Tiers: Near-Term (Now) · Medium-Term (5–15y) · Existential (15y+)
Camps:
Near-Term Focus — fix today's harms
Long-Term Focus — prevent existential risk
Contested — experts sharply divided
Well-Established · Near-Term Focus
Algorithmic Bias & Discrimination
Well-Established · Near-Term Focus
Deepfakes & Synthetic Disinformation
Well-Established · Near-Term Focus
Engagement Optimization & Polarization
Emerging Evidence · Contested
Labor Displacement
Core Cruxes — Where experts fundamentally disagree
A crux is a single factual or philosophical question where you and an opponent genuinely disagree—and where, if you changed your belief on that one question, you would update your entire position. These are the load-bearing walls of the alignment debate.
Core Crux #1
Is the primary risk from AI itself, or from humans misusing AI?
Core Crux #2
Will AI power-seeking emerge inevitably?
Core Crux #3
Is the existential narrative a distraction from current harms?
Changelog
2026-04-07 — Initial publication of Debate Index covering 10 risk areas across 3 tiers.
Reason: Mapping the discourse landscape without culture-war framing.
Source: NotebookLM — Disalignment.com: A Field Guide to AI Alignment Architecture