The convergence is measured, not asserted
Three independent research threads — Stanford's Intelligence Per Watt programme, Microsoft's BitNet 1-bit LLM framework, and Dr Luci Attala's anthropology of water — converge on the same conclusion: distributed, low-power, community-governed compute infrastructure is both technically superior and ethically necessary. Net-Positive Data Centers (NPDC) is the architecture that answers all three.
Stanford Intelligence Per Watt
A research programme led by Avanika Narayan and Jon Saad-Falcon at Stanford, with PIs Christopher Ré, Azalia Mirhoseini, and John Hennessy. Public site: intelligence-per-watt.ai.
Key findings
- 77% of AI requests are practical tasks (emails, summarisation, search) that do not require frontier-scale models.
- Local models of ≤20B parameters accurately answer 88.7% of single-turn queries.
- Intelligence Per Watt improved 5.3× from 2023 to 2025 — efficiency is accelerating faster than raw capability.
- Hybrid routing (local-first, cloud-on-escalation) cuts energy, compute, and cost by 60–80%.
- The "Gross Domestic Intelligence" framework argues the US could boost effective inference capacity 2–4× by activating 70–80M AI-capable devices already deployed domestically.
Relevance to NPDC
Stanford's Gross Domestic Intelligence concept is the macroeconomic version of the NPDC MicroDC mesh. The Stanford team argues for distributed national compute infrastructure; NPDC is building it. Stanford's hybrid routing model maps directly to NPDC's federated mesh, in which 100 kW nodes handle local inference and escalate to cloud only when the workload demands it. The 77% finding is the threshold that makes the 100 kW node viable.
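The local-first, escalate-on-demand pattern described above can be sketched as a simple router. The confidence score, the 0.8 threshold, and the stub backends below are illustrative assumptions, not Stanford's or NPDC's implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Routed:
    answer: str
    served_by: str  # "local" or "cloud"

def hybrid_route(
    query: str,
    local_infer: Callable[[str], tuple],  # returns (answer, confidence)
    cloud_infer: Callable[[str], str],
    threshold: float = 0.8,  # assumed escalation knob, tuned per deployment
) -> Routed:
    """Local-first routing: serve from the on-node model when it is
    confident, escalate to cloud only when the workload demands it."""
    answer, confidence = local_infer(query)
    if confidence >= threshold:
        return Routed(answer, "local")
    return Routed(cloud_infer(query), "cloud")

# Hypothetical stub backends for demonstration only:
local = lambda q: ("local answer", 0.9 if len(q) < 80 else 0.3)
cloud = lambda q: "cloud answer"

print(hybrid_route("summarise this email", local, cloud).served_by)  # local
```

Under this pattern the 77% of practical queries never leave the node; only low-confidence requests incur cloud energy and cost.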
Microsoft BitNet b1.58
A 1-bit LLM framework released by Microsoft Research. Open-source reference implementation: github.com/microsoft/BitNet. Reference model: bitnet-b1.58-2B-4T on HuggingFace.
Key findings
- Ternary weights ({−1, 0, +1}) — not floating point, not post-training quantisation.
- 96% less energy than standard-precision peers.
- Runs on commodity CPUs — no GPU required, 0.4 GB memory (versus ~2 GB for comparable LLaMA).
- Outperforms LLaMA 3.2 1B on standard benchmarks at a fraction of the compute cost.
- 40% faster token processing than equivalent full-precision models; 2.37×–6.17× speed-ups on x86 CPUs with 71.9%–82.2% energy reduction.
- Ternary arithmetic enables future purpose-built silicon at radically lower cost than current GPU silicon.
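The ternary scheme can be illustrated with the absmean quantisation described in the b1.58 paper: scale each weight matrix by its mean absolute value, round, and clip to {−1, 0, +1}. A minimal numpy sketch (illustrative only, not Microsoft's implementation):

```python
import numpy as np

def absmean_ternary(W, eps=1e-8):
    """Quantise a float weight matrix to ternary {-1, 0, +1} values
    plus one scalar scale, per the absmean scheme."""
    gamma = np.abs(W).mean()                      # per-matrix scale
    Wq = np.clip(np.rint(W / (gamma + eps)), -1, 1)
    return Wq.astype(np.int8), gamma

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)).astype(np.float32)
Wq, gamma = absmean_ternary(W)

# The matmul x @ Wq needs only additions and subtractions; multiply by
# gamma once at the end to approximately recover x @ W.
assert set(np.unique(Wq)).issubset({-1, 0, 1})
```

Because every weight fits in under two bits and the inner loop is add/subtract only, memory and energy drop together, which is what makes CPU-only serving plausible.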
Relevance to NPDC
BitNet eliminates the GPU dependency that made hyperscale water-hungry. A 100 kW MicroDC node running BitNet-class models on commodity CPUs serves Stanford's 77% practical-AI tier without GPU cooling infrastructure, without NVIDIA supply-chain dependency, and without the power draw behind the projected 1.45 billion gallons/day of data-centre water demand by 2030. The argument that hyperscale is necessary to host competitive inference is settled: Microsoft has demonstrated it is not.
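A back-of-envelope sketch shows why a 100 kW node can plausibly serve this tier on CPUs alone. Every per-server figure below is an assumption for illustration, not an NPDC or Microsoft specification:

```python
# Back-of-envelope capacity sketch; all per-server figures are assumed.
NODE_POWER_W = 100_000    # 100 kW MicroDC node (from the text)
SERVER_POWER_W = 400      # assumed draw of one commodity CPU server
TOKENS_PER_SEC = 20       # assumed BitNet-class decode rate per server

servers = NODE_POWER_W // SERVER_POWER_W
node_tokens_per_sec = servers * TOKENS_PER_SEC
print(servers, node_tokens_per_sec)  # 250 servers, 5000 tokens/s
```

Even with conservative per-server numbers, a single node supports hundreds of concurrent practical-AI sessions with no GPU cooling loop.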
Attala — water as constitutive force
Dr Luci Attala's How Water Makes Us Human (University of Wales Press): uwp.co.uk.
Key thesis
Water is not a resource — it is a constitutive force that shapes human behaviour, identity, memory, and culture. Three case studies make the argument concrete:
- Wales (Tryweryn). The drowning of Capel Celyn for Liverpool's water supply turned water into political consciousness — Cofiwch Dryweryn.
- Kenya (Giriama). Water-carrying shapes gendered bodies and the community's spatial identity.
- Spain (Lanjarón). Glacial springs operate as health infrastructure and civic ritual.
Relevance to NPDC
If water makes us human, industrial-scale water extraction for data-centre cooling is cultural extraction, not environmental cost. This is the dimension Stanford and Microsoft do not touch and that hyperscalers structurally cannot answer. NPDC's tri-partite governance — investors, host communities, epistemic stakeholders — operationalises Attala's claim: the people whose water makes them human hold the binding vote on whether that water is taken at all.
The three-front argument
Each thread answers a different objection. Together they describe one architecture.
| Front | Source | What it rules out | NPDC answer |
|---|---|---|---|
| Efficiency | Stanford IPW | "You need massive centralised compute." | 77% of workloads run locally; hybrid routing cuts energy 60–80%. |
| Hardware | Microsoft BitNet | "You need GPU farms." | 1-bit LLMs run on CPUs at 96% less energy, 0.4 GB memory. |
| Human | Attala / Cogniosynthesis | "Water is just a cooling resource." | Water shapes identity; extraction is cultural violence. Tri-partite governance holds the vote. |
No hyperscaler can make this argument. Megacampus operators are structurally locked into the architecture their sunk infrastructure investments require.
Read the technical whitepaper (v1 in preparation)
Enquire about consortium membership