Digital Governance Paper Notes
A curated archive of short, practitioner-oriented reviews covering AI governance, regulation, digital public infrastructure, digital identity, and adjacent socio-technical systems.
Browse by Domain
- AI Governance (12)
- AI Safety & Evaluation (4)
- Digital Public Infrastructure (5)
- Public Sector Digital Strategy (1)
- Digital Identity (1)
- Trust Infrastructure (1)
- Privacy & Data Protection (2)
- Cybersecurity & Resilience (1)
- Law, Regulation & Liability (5)
- Socio-technical Systems (2)
- State Capacity & Administrative Systems (1)
| Date | Paper | Publication | Domain | Key Insight | Review | Source |
|---|---|---|---|---|---|---|
| 2026-03-18 | Large-scale online deanonymization with LLMs | arXiv | Privacy & Data Protection | LLMs do not need to exceed human investigative capability to collapse pseudonymity at scale — they only need to reduce its cost, and that cost reduction is now sufficient to make large-scale deanonymization a routine, automatable threat. | Source | |
| 2026-03-17 | Sandboxes for DPI: Co-creating the blocks of digital trust | Datasphere Initiative | Digital Public Infrastructure | The report’s most useful move is to treat sandboxes not as innovation theater, but as upstream governance capacity for testing trust, inclusion, interoperability, and institutional learning before DPI choices harden at population scale. | Source | |
| 2026-03-17 | Distributed Legal Infrastructure for a Trustworthy Agentic Web | arXiv | Law, Regulation & Liability | The paper’s real contribution is not the rhetoric of agent personhood, but the claim that legality for agents must be infrastructural: identity, constraints, evidence, adjudication, and portability have to travel with the system rather than be bolted on after harm occurs. | Source | |
| 2026-03-17 | AI Innovation, Effective Anonymization & the DPDP Act | Open Loop | Privacy & Data Protection | The report’s central insight is that India’s AI bottleneck is not merely lack of data, but lack of a usable legal-operational pathway for iterative model development, effective anonymization, and PET adoption under the DPDP regime. | Source | |
| 2026-03-14 | What India’s Push for Global Digital Repositories Tells Us About Its Tech Diplomacy | Tech Policy Press | Digital Public Infrastructure | India’s repository diplomacy is best understood not as neutral knowledge-sharing, but as a low-commitment instrument for projecting leadership while preserving strategic flexibility in a fragmented technology order. | Source | |
| 2026-03-14 | Open Problems in Technical AI Governance | Transactions on Machine Learning Research | AI Governance | The paper’s most durable contribution is showing that many AI governance debates are blocked not by lack of principles, but by missing technical capacities for assessment, access, verification, security, operationalisation, and ecosystem monitoring. | Source | |
| 2026-03-14 | MASFactory: A Graph-centric Framework for Orchestrating LLM-Based Multi-Agent Systems with Vibe Graphing | arXiv | AI Safety & Evaluation | MASFactory’s real contribution is not that it makes multi-agent systems easier to build, but that it reframes orchestration as a reusable governance surface where topology, context access, and human intervention can be made explicit, inspectable, and testable. | Source | |
| 2026-03-14 | Advancing Indigenous Foundation Models | White paper | AI Governance | The paper’s strongest move is treating indigenous foundation models as public-interest infrastructure, but it stops short of specifying the assurance, procurement, and lifecycle governance machinery needed to make that ambition operational. | Source | |
| 2026-03-12 | Digital Identities Across the World | PwC / Strategy& | Digital Identity | Digital identity succeeds not when a credential exists, but when governance, interoperability, trust, and everyday service relevance are engineered together as public infrastructure. | Source | |
| 2026-03-10 | Gene name errors: Lessons not learned | PLOS Computational Biology | Socio-technical Systems | A decade of documented warnings and nomenclature reforms have not reduced the rate of spreadsheet-induced gene name corruption in published genomics research, demonstrating that knowledge dissemination alone cannot change entrenched data practices — only structural interventions at the software, journal, and training levels can. | Source | |
| 2026-03-10 | From Future of Work to Future of Workers: Addressing Asymptomatic AI Harms for Dignified Human-AI Interaction | arXiv | AI Governance | AI systems can improve visible performance while gradually eroding the human expertise, intuition, and professional agency needed to detect and correct system failures. Governance frameworks must therefore treat human capability retention as a safety objective. | Source | |
| 2026-03-09 | Strategy for Artificial Intelligence in Healthcare for India (SAHI) | Ministry of Health and Family Welfare, Government of India | AI Governance | SAHI’s real significance is not that it celebrates AI in health, but that it tries to turn India’s health DPI into a governed deployment environment where risk tiering, interoperability, capacity, and procurement become the rails for responsible scale. | Source | |
| 2026-03-09 | Doot: The AI Agent for Every Indian Citizen | DigiDoot / India AI Mission White Paper | Digital Public Infrastructure | The next layer of digital public infrastructure may be citizen-owned AI agents that mediate interaction between individuals and complex administrative systems. | Source | |
| 2026-03-09 | Agents of Chaos | arXiv | AI Safety & Evaluation | The paper shows that once language models are wrapped in memory, tools, messaging, and delegated authority, the main governance problem is no longer just model error but insecure delegation across socio-technical systems. | Source | |
| 2026-03-07 | The Artificial in ‘Artificial Intelligence’: How Imagination Shapes AI Regulation | SSRN | Law, Regulation & Liability | AI regulation is being shaped not only by technical architectures but by metaphors that silently define where risk, responsibility, and accountability are presumed to sit. | Source | |
| 2026-03-07 | Evaluating AGENTS.md: Are Repository-Level Context Files Helpful for Coding Agents? | arXiv | AI Safety & Evaluation | Automatically generated repository context files often degrade coding-agent performance because they introduce additional constraints without improving task-relevant understanding. | Source | |
| 2026-03-07 | Digital Governance Stacks and the Infrastructure of Empires | Bot Populi | Digital Public Infrastructure | Digital governance infrastructure is not merely technical plumbing; its architecture can shape sovereignty, administrative autonomy, and geopolitical influence. | Source | |
| 2026-03-07 | Codified Context: Infrastructure for AI Agents in a Complex Codebase | arXiv | AI Governance | Persistent, machine-readable project context functions as a governance layer for AI coding agents, but the paper shows this through a single-project experience report rather than a comparative evaluation. | Source | |
| 2026-03-07 | Business Perspectives on Advancing AI | Business at OECD | AI Governance | AI adoption policy debates increasingly hinge on balancing innovation incentives and regulatory coherence with stronger accountability and public-interest safeguards. | Source | |
| 2026-03-07 | Advancing Open Source AI in India | Digital Futures Lab | AI Governance | The brief’s strongest contribution is showing that AI openness is not binary but component-specific, yet it remains more persuasive as policy architecture than as an operational governance framework for high-impact public deployments. | Source | |
| 2026-03-06 | Towards an Open, Resilient, Non-Aligned AI | Geopolitique.eu | AI Governance | Sovereignty in AI is not a branding posture but an operational capability built through portability, audit rights, egress drills, and enforceable redress. | Source | |
| 2026-03-06 | Toward Risk Thresholds for AI-Enabled Cyber Threats | UC Berkeley Center for Long-Term Cybersecurity | Cybersecurity & Resilience | Cyber-risk thresholds become real governance only when probabilistic assessment is tied to explicit baselines, trigger points, and mandatory actions. | Source | |
| 2026-03-06 | The Mythology of Conscious AI | Noema Magazine | Socio-technical Systems | Myths about conscious AI distract from the immediate governance problem: non-conscious systems already exercise power at scale without clear accountability. | Source | |
| 2026-03-06 | The Global Landscape of Environmental AI Regulation: From the Cost of Reasoning to a Right to Green AI | SSRN | Law, Regulation & Liability | Effective environmental governance of AI will require regulation to shift from facility-level reporting toward model-level transparency, especially around inference costs and reasoning-heavy systems. | Source | |
| 2026-03-06 | Sovereignty in the Age of AI: Strategic Choices, Structural Dependencies and the Long Game Ahead | Tony Blair Institute for Global Change | AI Governance | AI sovereignty is credible only when countries can operationally exit, audit, and tier dependencies rather than merely rebrand lock-in as strategic autonomy. | Source | |
| 2026-03-06 | International AI Safety Report 2026 | International AI Safety Report | AI Safety & Evaluation | The central AI-safety challenge is not diagnosis but translating uncertainty into default actions, decision thresholds, and enforceable consequences. | Source | |
| 2026-03-06 | Democratising AI: Towards Open, Decentralised AI Ecosystems | Observer Research Foundation | AI Governance | AI democratisation only becomes operational when "decentralisation" is broken into testable design choices, conformance rules, and incentives rather than treated as a feel-good umbrella term. | Source | |
| 2026-03-06 | AI Regulatory Capability Framework and Self-Assessment Tool | The Alan Turing Institute | Law, Regulation & Liability | Regulatory capability has to be evidenced, infrastructure-backed, and tied to authority at runtime, otherwise self-assessment becomes polished theatre. | Source | |
| 2026-03-06 | AI Maturity Framework for Public Administrations | UNESCO | Public Sector Digital Strategy | Public-sector AI maturity should be evidenced, risk-tiered, and vendor-aware, otherwise maturity models drift into self-assurance theatre. | Source | |
| 2026-03-06 | AI Governance in South Asia | Centre for Responsible AI, IIT Madras | Law, Regulation & Liability | South Asia’s pragmatic AI governance path will only become credible when soft-law coordination is backed by enforceable controls, sovereign infrastructure choices, and real labour protections. | Source | |
| 2026-03-05 | The Stack and the State: India’s Digital Governance Model as Technopolitical Power | Information Polity (IPP Journal) | State Capacity & Administrative Systems | DPI’s "power effects" are not vibes; they come from specific architectural and operational control points, so serious critique needs measurable indicators and design-level mappings. | Source | |
| 2026-03-05 | Exhaustibility Is Not an Optimization. It Is a First-Class Invariant | Medium (Paul Knowles) | Trust Infrastructure | For agentic systems, governance must shift from persistent identity-based permission to action-bound, exhaustible authority that produces verifiable provenance at the moment an effect occurs. | Source | |
| 2026-03-05 | DPI-AI Framework: Vision paper on Building AI-Ready Nations through Digital Public Infrastructure | Centre for Digital Public Infrastructure (CDPI) | Digital Public Infrastructure | Treat AI as modular infrastructure, but make legitimacy modular too: risk-tier workflows with signed, bounded, revocable authority and built-in redress. | Source | |
| 2026-03-05 | Build vs Buy in the Age of LLMs | arXiv | AI Governance | The build-vs-buy question is really a sovereignty dial: governments should optimize for control of data, risk, and upgrade paths—not for a romantic preference for in-house models. | Source | |
| 2026-03-05 | AI Agents and the Next Layer of India's Digital Infrastructure | Tech Policy Press | AI Governance | At population scale, an "agent layer" only becomes governance-grade when delegation is cryptographically bounded, discoverable, and revocable; otherwise you have just automated intermediaries and fraud. | Source | |
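Several of the entries above (notably the notes on exhaustible authority, the DPI-AI framework, and agent-layer delegation) converge on the same mechanism: authority that is bound to a single action, expires, is spent on first use, can be revoked, and leaves a provenance record at the moment an effect occurs. A minimal sketch of that pattern follows; the names (`Grant`, `Authorizer`) and the API are invented here for illustration and are not taken from any of the reviewed papers.

```python
import time
import uuid
from dataclasses import dataclass


@dataclass
class Grant:
    """A one-shot, action-bound authority grant (hypothetical model)."""
    grant_id: str
    action: str        # the single action this grant authorizes
    expires_at: float  # absolute expiry time, epoch seconds
    spent: bool = False  # exhausted after first successful use


class Authorizer:
    """Issues exhaustible grants and records provenance at use time."""

    def __init__(self):
        self._grants = {}
        self._revoked = set()
        self.provenance = []  # append-only log: (grant_id, action, timestamp)

    def issue(self, action, ttl_seconds):
        """Create a grant bound to one action with a bounded lifetime."""
        grant = Grant(grant_id=str(uuid.uuid4()), action=action,
                      expires_at=time.time() + ttl_seconds)
        self._grants[grant.grant_id] = grant
        return grant.grant_id

    def revoke(self, grant_id):
        """Revocation is effective immediately, even before first use."""
        self._revoked.add(grant_id)

    def use(self, grant_id, action):
        """Authorize an effect. All checks happen at the moment of use."""
        grant = self._grants.get(grant_id)
        if grant is None or grant_id in self._revoked:
            return False
        if grant.spent or action != grant.action or time.time() > grant.expires_at:
            return False
        grant.spent = True  # exhaust: a second use of the same grant fails
        self.provenance.append((grant_id, action, time.time()))
        return True
```

The design choice worth noting is that exhaustion, expiry, and revocation are all checked at use time rather than issue time, so they compose: a revoked or expired grant fails even if it was never spent, and the provenance log records only effects that actually occurred.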