An open analytical project
Civic Blueprint is an open attempt to understand why critical systems keep failing, what makes reform so difficult, and where better design might actually change outcomes. The framework already exists, but it is not presented here as settled truth. It is a working set of claims that needs outside pressure, domain expertise, and evidence strong enough to prove parts of it wrong.
Start with one concrete example, then decide whether the broader framework is worth your time.
Quick context
Signal 1: Critical systems are failing in ways people can already feel.
Signal 2: AI is accelerating faster than governance.
Signal 3: Many of these failures share the same underlying causes.
Signal 4: If the diagnosis is right, we may be trying to fix the wrong things first.
Too many essential systems are harder to use, slower to improve, and easier to capture than they should be.
AI capability is accelerating faster than any governance system can adapt. Housing is scarce where it should be abundant. Healthcare is rationed by price and complexity. Public trust erodes when institutions cannot deliver visible competence.
These problems are usually treated as separate policy debates. The argument here is that they are connected — often through the same upstream failures in institutional capacity, democratic accountability, and the ability of public systems to execute at the speed the moment demands.
This project exists to map those connections and test whether understanding them changes what reform looks like.
For readers who want the full working documents, the repository remains the source of record.
Start With One Concrete Example
The point is to test whether the framework can produce a better read of real bottlenecks than generic policy summaries can.
That is why the first memo pairs two cases: housing permitting and AI governance.
Featured Artifact
AI governance is arguably the most urgent systemic challenge right now. Housing permitting is one of the most concrete. The memo applies the same analytical method to both and compares what they reveal together.
Key test questions
The memo is not just asking whether the comparison is interesting. It is asking whether the framework's directional claim holds up: that upstream institutional competence, matched to the speed and structure of the domain, may be one of the strongest places to look for real leverage.
If that claim adds nothing to either conversation, the project needs to know that. If it clarifies something that single-domain analysis misses, that is a better basis for deeper engagement.
Framework overview
The current framework has three main layers and one process layer.
PRINCIPLES.md defines the project's commitments: dignity, access to essential needs, accountable power, democratic oversight of AI, public-interest governance of critical systems, and openness to challenge.
PROBLEM_MAP.md describes where systems are stuck, why they stay stuck, who benefits from the dysfunction, and how recursive failure can spread across domains.
SYSTEMS_FRAMEWORK.md applies that diagnosis across fourteen domains — including housing, AI governance, healthcare, infrastructure, democratic process, and institutional capacity. It focuses on bottlenecks, dependencies, leverage, failure modes, and sequence.
The project also publishes its review methods. Its claims are meant to face adversarial review, coherence checks, and historical challenge rather than being treated as final answers.
Site purpose
The working documents live on GitHub. That is useful for drafting, version history, and structured contribution.
It is not a reasonable front door for many of the people this project most needs.
This site exists to make the work legible enough for serious outsiders to challenge it without first learning a repository structure.
The goal of this phase is not applause, branding, or broad awareness. It is to bring the framework into contact with serious outside critique, domain expertise, and evidence it does not yet have.
Feedback quality bar
This project is not mainly looking for encouragement. It is looking for pressure that improves the work.
The most useful input includes:
Domain expertise from people who know how a system actually works.
Historical parallels that support or challenge the framework's causal claims.
Implementation critique about sequencing, incentives, staffing, and execution.
Missing perspectives, especially from outside US and Western policy frames.
Direct disagreement with major claims, including the institutional-capacity hypothesis and the memo's directional claim about leverage.
Helpful feedback is specific. "This feels right" is less useful than "this causal link is weak because permitting delays in this case came from infrastructure finance, not zoning."
Response paths
If you think the framework is useful, incomplete, or wrong, say so directly.
Suggested response paths:
The contribution path should work for both GitHub users and people who would rather send a plain message first.
Closing
The point is not agreement. It is a harder question: does understanding these connected failures change what reform should look like, and what should be fixed first?
This site exists to make that question easier to answer in public.