AI-Driven eGovernment: The Principles
Agent-powered government flips apply→confirm: life events trigger help, not forms. Minimal data, human oversight, and transparency save time, boost equity, and rebuild trust.
The age of portals is ending. The future of e-government is agent-powered: small pieces of software that act for people and for public services, talking to each other to get things done. Instead of hunting for the right form on the right website, you’ll say what you need—“We had a baby,” “I’ve moved,” “I lost my job”—and your agent will coordinate the rest. Services shift from search & apply to offer & confirm, and the state moves from information hoarder to helpful orchestrator.
At the center is a citizen-owned civic agent—your AI, on your device—that understands your preferences, accessibility needs, and context. It holds your digital credentials (age, residency, qualifications) and uses selective disclosure to prove only what’s necessary, never dumping your whole dossier. You see who’s asking for what, why, and for how long; you approve or revoke with a tap. The result: maximum help with minimum exposure, and control that is real, not theoretical.
On the other side lives a mesh of government service agents. Each service exposes a standard endpoint that can explain policy, request a proof, and take an action—update a record, schedule an appointment, calculate an entitlement. These agents cooperate across agencies over secure rails, so one verified event can cascade safely: a birth updates benefits and health records; a move updates tax, schools, and permits; a bereavement closes accounts with dignity. It’s government that behaves like a team, not a maze.
This architecture changes daily life. Parents don’t enter the same facts five times; they confirm a pre-built offer. Patients face fewer avoidable delays: hospitals use AI to predict bottlenecks and free beds faster. Commuters don’t crawl through red lights; adaptive signals cut idle time and emissions. Students get clearer guidance on aid and admissions; workers get faster, fairer tax and benefits decisions. Most importantly, people who historically face the steepest “time tax” see the biggest gains—because services become multilingual, accessible, and proactive by default.
Crucially, help stays explainable and humane. Every automated step comes with a plain-language reason and an easy path to a human. High-impact systems are listed in public algorithm registers; risk assessments are published; appeals are simple. That keeps legitimacy high: the state shows its work, logs what happened in your case, and treats “revoke consent” as seriously as “grant consent.” Automation augments accountable public servants; it doesn’t sideline them.
Behind the scenes, the state upgrades its plumbing. Policy-as-code turns rules into official, machine-readable logic with open tests, so your agent can simulate outcomes privately and services give the same answer every time. Compute-to-data runs analytics where data already lives (or on your device), so insights flow but dossiers do not. Outcome-centric guarantees replace vanity metrics: time-to-benefit, first-time-right, effort saved, and equity are measured and published, and agents are tuned to optimize those outcomes.
The promise is a Software-3.0 state that helps more by seeing less. Life events trigger help rather than paperwork. Proofs replace PDFs. Consent is a living contract you control. Inclusion is ambient: your language, your device, your abilities—no side doors. Resilience is designed in: when the tech hiccups or your situation is atypical, there’s a staffed path that works and a clear record of how decisions were made.
For leaders, this is not sci-fi; it’s a program. Start by shipping digital identity wallets with selective disclosure, exposing service endpoints that accept verifiable proofs, wiring event subscriptions between authoritative registers, and standing up a shared national assistant layer. Publish algorithm records and outcome dashboards from day one. The prize is enormous: hours back to households, faster and fairer decisions, cleaner government, lower emissions, higher trust—a state that feels present when you need it and invisible when you don’t.
Summary
1) Citizen-owned civic agents
How it works: Every person has a personal, privacy-preserving “civic agent” (on their phone/PC) that knows their preferences and context and acts on their behalf—with explicit consent and a clear audit trail. Instead of repeatedly filling forms, the agent assembles the right evidence from secure local storage (and from trusted credential issuers like tax, registry, or university authorities) and presents only the minimum facts needed to complete a task. It reasons over machine-readable rules to simulate outcomes privately before any disclosure, and it can negotiate with government service agents via standard APIs to book appointments, pre-fill applications, or accept offers. Technically, it uses verifiable credentials, selective disclosure, and signed presentations so verifiers trust outcomes without seeing raw documents. Governance-wise, the agent is the citizen’s property: revocable permissions, transparent logs, and portability across devices and vendors.
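The disclosure flow described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical names (`CivicAgent`, `Credential`): the agent holds credentials locally, releases only the attributes a verifier requests, and appends every disclosure to the citizen's own audit trail.

```python
# Minimal sketch of a civic agent (hypothetical names): it discloses only the
# requested attributes, never the whole credential, and logs each disclosure.
from dataclasses import dataclass, field

@dataclass
class Credential:
    issuer: str
    claims: dict  # e.g. {"residency": "Tallinn", "birth_date": "1990-05-01"}

@dataclass
class CivicAgent:
    credentials: list
    audit_log: list = field(default_factory=list)

    def present(self, verifier: str, purpose: str, requested: list) -> dict:
        """Assemble only the requested attributes from locally held credentials."""
        disclosed = {}
        for cred in self.credentials:
            for attr in requested:
                if attr in cred.claims:
                    disclosed[attr] = cred.claims[attr]
        # Record who asked, what was shared, and why: the citizen's audit trail.
        self.audit_log.append({"verifier": verifier, "purpose": purpose,
                               "shared": sorted(disclosed)})
        return disclosed

agent = CivicAgent([Credential("population-register",
                               {"residency": "Tallinn",
                                "birth_date": "1990-05-01",
                                "marital_status": "married"})])
# The school registry needs residency only; birth date and marital status stay local.
proof = agent.present("school-registry", "enrolment", ["residency"])
```

In a real wallet the presentation would be cryptographically signed and the verifier would check the issuer's signature; here only the data-minimisation shape is shown.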
What it brings: Friction collapses for citizens (one tap instead of 20 screens), errors and rework plummet, and trust rises because the person—not the state—remains in control. For government, first-time-right rates climb, contact-center load falls, and inclusion improves because the agent can translate, simplify language, and remember accessibility needs. Security improves as well: fewer bulk documents are uploaded and fewer databases need to store personal dossiers.
2) Government as a mesh of service agents
How it works: Each public service exposes a secure, well-documented “agent endpoint” that can explain policy, request proofs, and execute actions (create a case, update a record, schedule an appointment). These government agents interoperate over a trusted exchange layer (with mutual authentication and auditable logs), so a single request can fan out across agencies. Rather than centralizing all data, services exchange verifiable proofs and event messages (“birth registered”, “address changed”) to coordinate. The mesh also includes a shared national assistant layer (text/voice) that plugs into these endpoints, so citizens encounter one coherent front door instead of a patchwork of bots.
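The standard endpoint shape, explain policy, request proofs, execute an action, can be sketched as follows. All names are illustrative; the point is that one verified proof can drive actions at several agencies without re-entry.

```python
# Sketch of a standard government "agent endpoint" (hypothetical interface):
# each service can explain its policy, declare the proofs it needs, and act.
class ServiceAgent:
    def __init__(self, name, policy, required_attrs):
        self.name = name
        self.policy = policy
        self.required_attrs = required_attrs
        self.records = {}

    def explain(self) -> str:
        return f"{self.name}: {self.policy}"

    def request_proof(self) -> dict:
        return {"attributes": self.required_attrs, "purpose": self.policy}

    def act(self, citizen_id, proof) -> dict:
        missing = [a for a in self.required_attrs if a not in proof]
        if missing:
            return {"ok": False, "missing": missing}
        self.records[citizen_id] = proof
        return {"ok": True, "action": f"{self.name} record updated"}

tax = ServiceAgent("tax-office", "update address for tax residency", ["address"])
school = ServiceAgent("school-registry", "re-district pupil", ["address"])

# One verified "address changed" proof cascades across the mesh:
proof = {"address": "12 Harbour St, District 4"}
results = [svc.act("citizen-42", proof) for svc in (tax, school)]
```

A production mesh would add mutual authentication and signed, auditable messages on the exchange layer; the sketch shows only the cooperative endpoint pattern.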
What it brings: Services stop being silos and start behaving like a coordinated team. Life-event journeys (“I’ve moved”, “We had a child”, “My parent died”) complete in minutes because the mesh can cascade updates safely behind the scenes. Agencies reuse each other’s capabilities, cut duplicate builds, and gain observability across end-to-end journeys. The public gets consistency, speed, and fewer “bureaucratic scavenger hunts.”
3) Event-driven, “no-stop” services (offer → confirm)
How it works: Real-world events in authoritative registers (birth, death, address change, reaching an age threshold, job loss) emit secure notifications. Subscribed services listen and automatically assemble the next right steps: pre-filled benefit offers, suggested appointments, or status changes—visible to the citizen (and their civic agent) for approval. The policy logic is machine-readable, so the system can explain eligibility plainly (“Under §X.Y you qualify for Z”) and show what happens if details change. Cross-border evidence can be fetched, with consent, from authentic sources to avoid re-uploading documents.
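The publish/subscribe pattern behind "offer → confirm" can be shown with a toy event bus (all names hypothetical): a register emits an event, a subscribed service pre-assembles an offer with a plain-language reason, and nothing is finalised until the citizen confirms.

```python
# Toy event bus: authoritative registers publish events; subscribed services
# assemble pre-filled offers that wait for the citizen's confirmation.
class EventBus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event_type, handler):
        self.subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        # Fan the event out to every subscriber; collect their offers.
        return [h(payload) for h in self.subscribers.get(event_type, [])]

def child_benefit_offer(event):
    # Pre-assemble the offer; status stays pending until the citizen confirms.
    return {"service": "child-benefit",
            "status": "awaiting_confirmation",
            "reason": "Under §X.Y a registered birth qualifies for benefit Z",
            "parent": event["parent_id"]}

bus = EventBus()
bus.subscribe("birth_registered", child_benefit_offer)
offers = bus.publish("birth_registered", {"parent_id": "P-1", "child": "C-1"})
```

Real deployments would deliver these events over secured, logged channels between registers and services; the sketch captures only the trigger-and-offer flow.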
What it brings: People stop applying for what they already deserve: the state offers and the person simply confirms. Time-to-benefit shrinks, non-take-up drops, and frontline staff focus on edge cases rather than clerical checks. The state earns legitimacy by being timely, predictable, and humane—especially in stressful moments like bereavement.
4) Minimum disclosure by design
How it works: Instead of asking for whole documents (passports, payslips, certificates), services ask for precise attributes (“over 18”, “resident of district X”, “income in band B”). The citizen’s agent produces cryptographic proofs from verifiable credentials; the service stores only the signed outcome, not the underlying personal data. Endpoints publish exactly which attributes are necessary and for what purpose, and proofs expire automatically unless renewed.
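The "store the signed outcome, not the data" idea can be made concrete with a small sketch. Signing is stubbed here with an HMAC for illustration only (real wallets use issuer-anchored digital signatures); the key point is that the verifier keeps a yes/no answer, never the birth date.

```python
# Sketch: the service asks for a predicate ("over 18"); the wallet returns a
# signed outcome. The verifier stores only the outcome, not the birth date.
import hashlib
import hmac
import json
from datetime import date

WALLET_KEY = b"demo-key"  # stand-in for the wallet's real signing key

def prove_over_18(birth_date: date, today: date) -> dict:
    # Exact age in whole years, accounting for whether the birthday has passed.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    outcome = {"claim": "over_18", "result": age >= 18}
    sig = hmac.new(WALLET_KEY,
                   json.dumps(outcome, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {**outcome, "signature": sig}

# The verifier receives and stores only this record: no passport scan needed.
proof = prove_over_18(date(1990, 5, 1), date(2025, 1, 1))
```

Expiry would be added as a signed `valid_until` field so stale proofs lapse automatically unless renewed.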
What it brings: Sharply reduced privacy risk and data-breach surface, faster decisions (no manual document inspection), and far fewer mistakes from mis-entered or inconsistent data. Citizens gain confidence because they reveal only what’s strictly needed, with a clear purpose and time limit every time. Agencies benefit from consistent inputs and auditable, machine-verifiable decisions.
5) Local-first intelligence; compute-to-data
How it works: Algorithms travel to the data—not the other way around. Sensitive analytics run inside accredited secure environments (or on the citizen’s device), with strict ingress/egress controls and public logs of what ran, when, and with what code. For cross-organizational insights, federated learning or enclave-based processing produces aggregate results or risk scores without exporting raw personal data. The citizen’s agent performs eligibility simulations locally and shares only a signed result when the person decides to proceed.
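The local-simulation half of this principle fits in a few lines. The rule and names below are purely illustrative: the eligibility check runs where the data lives, on the device, and only the boolean outcome ever crosses the boundary to a service.

```python
# Compute-to-data in miniature (illustrative rule): eligibility is evaluated
# on-device, and only the outcome is shared; raw income never leaves.
def simulate_locally(private_income: int, household_size: int) -> dict:
    # Hypothetical threshold rule, standing in for published policy-as-code.
    threshold = 1500 + 400 * (household_size - 1)
    return {"eligible": private_income < threshold}

def share_with_service(result: dict) -> dict:
    # Enforce the boundary: only the outcome field may be transmitted.
    assert set(result) == {"eligible"}, "raw data must not cross the boundary"
    return result

local = simulate_locally(private_income=1400, household_size=2)
sent = share_with_service(local)
```

The same discipline scales up: in a trusted research environment, analysts' code goes in and only vetted aggregates come out, mirroring this on-device pattern at institutional scale.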
What it brings: Useful insights without dossier sprawl: less copying, fewer breaches, simpler compliance. Health, tax, and social programs can do serious analytics with real accountability. Citizens see faster, safer services; government gets better evidence and a defensible privacy posture.
6) Consent as a living contract
How it works: Consent is granular (attribute-level), purpose-bound, time-boxed, and revocable. Requests clearly state who is asking, what will be shared, why, and for how long. The wallet and service both log the transaction; a common dashboard lets people review, pause, or revoke permissions later—and services must respect revocation immediately. Long-lived or repurposed use requires a fresh, explicit prompt.
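A consent grant with exactly these properties, attribute-level scope, purpose, expiry, immediate revocation, can be expressed directly as a data structure (names and the fixed demo clock are illustrative):

```python
# Consent as a living contract: granular, purpose-bound, time-boxed, revocable.
from datetime import datetime, timedelta

class ConsentGrant:
    def __init__(self, verifier, attributes, purpose, ttl_days):
        self.verifier = verifier
        self.attributes = set(attributes)
        self.purpose = purpose
        # Fixed clock so the example is reproducible; real code uses now().
        self.expires = datetime(2025, 1, 1) + timedelta(days=ttl_days)
        self.revoked = False

    def allows(self, attribute, now) -> bool:
        # Every check enforces scope, expiry, and revocation together.
        return (not self.revoked
                and now < self.expires
                and attribute in self.attributes)

grant = ConsentGrant("housing-office", {"address"}, "permit check", ttl_days=30)
now = datetime(2025, 1, 10)
assert grant.allows("address", now)       # in scope, in time
assert not grant.allows("income", now)    # out of scope: refused
grant.revoked = True                      # one tap
assert not grant.allows("address", now)   # revocation takes effect immediately
```

Because the check runs on every access, "revoke" is structurally as strong as "grant": there is no cached permission to linger past the citizen's decision.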
What it brings: Control the public can actually use. Citizens don’t have to “trust blindly”—they can check, adjust, and stop sharing. Agencies gain legitimacy because they can demonstrate lawful, proportionate use for every access, and because “revoke as easy as grant” is built into the user experience and back-office processes.
7) Explainable help, not hidden automation
How it works: Any automated step—triage, prioritization, selection, pre-assessment—produces a short, plain explanation, links to legal or policy basis, and a visible path to a human decision-maker. High-impact systems are recorded in a public algorithm register with purpose, data sources, oversight, and change history. Risk assessments are done before launch and updated as models evolve; subgroup performance is monitored and reported.
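The required explanation can travel as a structured payload alongside every automated outcome. Field names below are illustrative, not a standard:

```python
# Every automated decision ships with a plain-language reason, its policy
# basis, and routes to appeal or reach a human (field names illustrative).
def automated_decision(applicant_id: str, rule_id: str, outcome: str) -> dict:
    return {
        "outcome": outcome,
        "reason": (f"Under rule {rule_id}, the reported income is below the "
                   f"threshold, so applicant {applicant_id} qualifies."),
        "legal_basis": f"policy/{rule_id}",      # link target in the register
        "appeal": "/appeals/new",                # one click to challenge
        "human_contact": "/caseworker/chat",     # visible path to a person
    }

decision = automated_decision("A-7", "HB-12.3", "approved")
```

Making the explanation part of the response contract, rather than a separate document, is what keeps transparency a first-class feature instead of an afterthought.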
What it brings: People understand what happened and how to challenge it; officials remain accountable for outcomes; and trust grows because transparency is a first-class feature, not a compliance afterthought. Inside government, explainability clarifies responsibilities, reduces disputes, and accelerates safe adoption.
8) Outcome-centric service guarantees
How it works: The state defines and publishes guarantees around outcomes that matter—time-to-benefit, first-time-right rate, effort saved (minutes/hours), and equity of outcomes across groups. Journeys are instrumented end-to-end; teams iterate to hit targets, and automation is tuned to optimize these outcomes (with guardrails). Budgets, business cases, and post-implementation reviews report against these same outcomes, not vanity metrics.
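These outcome guarantees reduce to simple computations over journey telemetry. A sketch with invented sample data shows time-to-benefit, first-time-right rate, and an equity gap between subgroups:

```python
# Outcome KPIs from end-to-end journey telemetry (sample data invented):
# time-to-benefit, first-time-right rate, and equity across groups.
from statistics import median

journeys = [
    {"group": "urban", "days_to_benefit": 3, "first_time_right": True},
    {"group": "urban", "days_to_benefit": 5, "first_time_right": True},
    {"group": "rural", "days_to_benefit": 9, "first_time_right": False},
    {"group": "rural", "days_to_benefit": 7, "first_time_right": True},
]

def kpis(rows: list) -> dict:
    ftr = sum(r["first_time_right"] for r in rows) / len(rows)
    by_group = {}
    for r in rows:
        by_group.setdefault(r["group"], []).append(r["days_to_benefit"])
    medians = {g: median(v) for g, v in by_group.items()}
    return {"first_time_right": ftr,
            "median_days": medians,
            # The equity gap is itself a published, targetable number.
            "equity_gap_days": max(medians.values()) - min(medians.values())}

dashboard = kpis(journeys)
```

Publishing the gap as a number, not just the averages, is what lets teams be held to closing it.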
What it brings: Services compete to save citizens’ time and improve fairness, not to ship features. Leadership can see where bottlenecks and burdens really are and direct investment accordingly. People feel the difference: fewer steps, faster decisions, clearer status, and measurably fairer results.
9) Policy-as-code with public tests
How it works: Eligibility rules, thresholds, dates, and calculations are published as authoritative, versioned code alongside the human-readable policy. Open test suites cover typical and edge cases; services and third-party tools run the same tests to ensure consistent answers. The citizen’s agent uses the public rules to simulate outcomes privately; if they proceed, the service verifies the same rules server-side and returns an explanation and a signed decision.
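Policy-as-code with open tests can be shown end to end in miniature. The rule and thresholds below are invented for illustration; the pattern is what matters: a versioned rule function plus a public test suite that wallets, services, and third parties all run against the same cases.

```python
# Policy-as-code in miniature: a versioned machine-readable rule plus an open
# test suite, so every implementation returns the same answer.
RULE_VERSION = "child-benefit/2025-01"  # hypothetical scheme and version

def eligible_for_child_benefit(children: int, annual_income: int) -> bool:
    # Invented rule: at least one child, income at or below 60,000.
    return children >= 1 and annual_income <= 60_000

# Open tests shipped alongside the human-readable policy: typical + edge cases.
OPEN_TESTS = [
    ({"children": 1, "annual_income": 30_000}, True),
    ({"children": 0, "annual_income": 10_000}, False),  # no children
    ({"children": 2, "annual_income": 60_000}, True),   # boundary is inclusive
    ({"children": 2, "annual_income": 60_001}, False),  # just over the line
]

for case, expected in OPEN_TESTS:
    assert eligible_for_child_benefit(**case) == expected, (RULE_VERSION, case)
```

A citizen's agent runs exactly this function locally to answer "what if I change X?", and the service re-runs it server-side before signing the decision, so the two can never disagree.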
What it brings: Predictability and fairness: the same inputs yield the same outputs everywhere, with no “mystery math.” Implementations become faster and safer because tests catch regressions before they hit the public. Citizens get self-serve clarity (“what if I change X?”) without surrendering any data until they choose to apply.
10) Ambient inclusion (works for everyone by default)
How it works: Accessibility, multilingual support, and plain-language content are built in from day one. Services meet WCAG across channels; language support is automatic (with human review where stakes are high); voice, chat, and low-bandwidth modes are first-class. Assisted channels mirror digital flows so people can switch without starting over. Metrics track completion, errors, and satisfaction across languages, devices, and access needs—so gaps get fixed, not hidden.
What it brings: A government that works for all, not just the digitally fluent. Completion rises, complaints fall, and equity improves because barriers are addressed as product defects, not user failings. For teams, inclusive design reduces rework and policy noise by making the common path clearer and kinder.
11) Resilience & dignity (human in command)
How it works: In rights-affecting contexts (benefits, immigration, health, policing), automation is assistive; accountable humans make determinations. Systems are engineered to degrade gracefully: manual lanes at borders, staffed helplines for complex cases, and paper/phone fallbacks during outages. End-to-end logs show how automation was used in each case. Continuity plans, security drills, and red-teaming are routine.
What it brings: People aren’t trapped by edge-case failures or model quirks; there’s always a humane path that works. Public servants have clear authority and escalation routes. Trust and legitimacy rise because the state demonstrates prudence, reliability, and respect—especially when technology stumbles.
12) Continuous, democratic feedback loops
How it works: Participation and oversight are part of the product. Agencies publish and maintain public records for impactful algorithms; they run structured, multilingual consultations that cluster and summarize input with attribution; and they show “you said → we did” after decisions. Service pages link directly to their transparency records and feedback channels. Performance dashboards report outcomes and equity measures openly.
What it brings: A learning state that listens—and shows it. Citizens can inspect, question, and shape how AI is used; communities see their input reflected in product changes and policies. Internally, continuous feedback reduces blind spots, surfaces unintended impacts early, and builds the social license needed to scale AI across the public sector.
The Principles
1) Citizen-owned civic agents
Definition (two lines)
A privacy-preserving “civic agent” lives with the person (phone/PC), knows their preferences, and acts on their behalf—only when they consent.
It proves facts with digital credentials and shares the minimum necessary (not whole documents), under the user’s sole control.
What it means for citizens
Your agent fills forms, books slots, and assembles proofs; you review and approve. You disclose just the needed attribute (e.g., “over-18”, “resident of X”), not your entire dossier—made possible by the EU Digital Identity Wallet model of selective disclosure and electronic attestations of attributes (EAA/QEAA). EUR-Lex+1
How it will be implemented (practical steps)
Adopt the wallet model with selective disclosure. Implement the EU Digital Identity Wallet (EUDI) so people can store, combine, and present identity data and attributes with selective disclosure—the regulation explicitly requires wallets to support this, and defines qualified electronic attestations of attributes for high-trust proofs. EUR-Lex+1
Use the EUDI Architecture & Reference Framework (ARF). Build to the Commission’s ARF (latest public versions 1.0/1.1), which specifies how wallets, issuers and verifiers interoperate, including standards for credentials and presentations. Digital Strategy; EUDI Wallet; European Commission
Rely on open credential standards that support minimum disclosure. Use W3C Verifiable Credentials 2.0 and the IETF/OAuth SD-JWT family to let a holder reveal just specific claims when needed. (The eIDAS framework and W3C specs both point toward selective disclosure as the privacy-preserving baseline.) W3C+1; IETF Datatracker
Make consent a living contract. Wallet UX must show who requests what, for which purpose, and for how long—revocable at any time. (The regulation states data sharing is under the sole control of the user and wallets must not leak usage data to issuers.) EUR-Lex+1
Expose “simulate before you share.” Publish policy-as-code (machine-readable eligibility/rules) so a person’s agent can run local simulations (e.g., benefits/tax/permits) before any disclosure. (The ARF and toolbox work enable machine-readable flows.) European Commission
Plan for on-device / local-first AI. Where feasible, run drafting, extraction and “what do I qualify for?” on the user’s device; only send signed proofs (not raw data) to services. (This aligns with the regulation’s minimum-disclosure, purpose-limited ethos.) EUR-Lex
Provide a clean state front-door that welcomes the citizen’s agent. Offer a mobile app and web endpoint that can accept wallet-based presentations and guide people through identity checks (the UK’s One Login app shows an official, face-match onboarding flow). GOV.UK+1
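The selective-disclosure mechanics the steps above rely on can be illustrated in the spirit of SD-JWT. This is a simplified sketch, not the actual wire format: the issuer signs only salted hashes of the claims, the holder later reveals one chosen claim plus its salt, and the verifier recomputes the hash to check it against the signed list, learning nothing about the other claims.

```python
# Simplified selective disclosure in the SD-JWT spirit (not the real format):
# the credential carries salted claim hashes; disclosing one claim means
# revealing its value and salt so the verifier can recompute the hash.
import hashlib
import json
import secrets

def hash_claim(name, value, salt) -> str:
    payload = f"{salt}|{name}|{json.dumps(value)}".encode()
    return hashlib.sha256(payload).hexdigest()

# Issuer: salt and hash every claim; only the digests enter the signed credential.
claims = {"name": "A. Citizen", "birth_year": 1990, "residency": "Utrecht"}
salts = {k: secrets.token_hex(8) for k in claims}
signed_digests = {hash_claim(k, v, salts[k]) for k, v in claims.items()}

# Holder: choose to disclose exactly one claim (its value plus its salt).
disclosure = ("residency", claims["residency"], salts["residency"])

# Verifier: recompute the digest and check it belongs to the signed set.
name, value, salt = disclosure
assert hash_claim(name, value, salt) in signed_digests
```

The real SD-JWT specification additionally binds the digest list into a signed JWT and adds holder binding; the sketch shows only why a single claim can be proven without exposing the rest.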
Connected agendas (what you line up)
eIDAS 2.0 / EUDI Wallet (Regulation (EU) 2024/1183): selective disclosure; (qualified) electronic attestations of attributes; wallet trust marks. EUR-Lex+1
EUDI ARF + Toolbox: common architecture, protocols and formats for EU wallets and verifiers. Digital Strategy; EUDI Wallet
Open standards: W3C VC 2.0; IETF/OAuth SD-JWT (selective disclosure for JWT-based credentials); OpenID for Verifiable Presentations. W3C; IETF Datatracker; OpenID Foundation
Practical examples (actual programs & artefacts)
EU Digital Identity Wallet—law in force. The amended eIDAS Regulation (EU) 2024/1183 sets the European Digital Identity framework, requiring wallets that enable selective disclosure and (qualified) electronic attestations of attributes. EUR-Lex+1
EUDI ARF (specs). The Commission’s Architecture & Reference Framework is publicly available; updates in 2025 provide implementer guidance across standards. Digital Strategy; European Commission
GOV.UK One Login app. Official guidance and accessibility statements (June 2025) describe how the app performs face-to-ID matching to prove identity for government services. GOV.UK+1; Google Play
2) Government as a mesh of service agents
Definition (two lines)
Every public service exposes a standard agent endpoint (policies-as-code + actions).
A citizen’s agent talks to these gov-agents to get things done across agencies—using verifiable proofs, not repeated forms.
What it means for citizens
You tell your agent “I moved” and approve once. It coordinates updates to tax, health, schools, permits—by exchanging digitally signed proofs between back-office systems. No re-typing, no re-uploading PDFs. Estonia’s Bürokratt and the EU’s Once-Only infrastructure point squarely at this model. Interoperable Europe Portal; Internal Market and SMEs
How it will be implemented (practical steps)
Publish service APIs that accept verifiable credentials (VCs). Each agency offers endpoints that verify EUDI wallet presentations (identity + attributes) and act (update records, schedule appointments), logging purpose and consent. EUR-Lex
Use a secure data-exchange layer with audit trails. Connect systems via X-Road (or equivalent) so services can consume authoritative data with mutual TLS, time-stamped logs and distributed governance—without centralising all data. (Estonia’s X-Road is the backbone of 100% online public services.) e-Estonia
Adopt Once-Only for cross-border evidence. Plug into the EU Once-Only Technical System (OOTS) so, with user consent, a service in one Member State can request official “evidence” (proofs) directly from another state’s authentic sources. (Core OOTS infrastructure launched Dec 2023.) Internal Market and SMEs; Interoperable Europe Portal
Stand up a national assistant layer as shared infra (not 100 bots). Follow Bürokratt’s model: a reusable, interoperable assistant layer (text/voice) that agencies plug into—so citizens get one front door, not dozens of one-off chatbots. Interoperable Europe Portal
Instrument everything with explainability and recourse. Every automated step returns a plain-language “what we did and why,” with a link to appeal or talk to a human; publish entries in an algorithm register for high-impact systems.
Track outcome KPIs, not clicks. Publish time-to-benefit, errors avoided, and equity metrics for each service flow; prioritise friction removal where it saves citizens the most time.
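The audited exchange layer these steps depend on can be sketched as a toy, in the X-Road spirit (all names illustrative): every inter-agency query is time-stamped, attributed, and purpose-bound before it is answered, so "who asked what about whom" is always reviewable.

```python
# Toy audited exchange layer (X-Road-style in spirit, names illustrative):
# each query is logged with timestamp, requester, subject, and purpose.
from datetime import datetime, timezone

class ExchangeLayer:
    def __init__(self):
        self.log = []
        self.registries = {}

    def register(self, name, lookup):
        # A registry exposes a lookup function; data stays with its owner.
        self.registries[name] = lookup

    def query(self, requester, registry, subject, purpose):
        entry = {"ts": datetime.now(timezone.utc).isoformat(),
                 "requester": requester, "registry": registry,
                 "subject": subject, "purpose": purpose}
        self.log.append(entry)  # logged before the answer is produced
        return self.registries[registry](subject)

rails = ExchangeLayer()
rails.register("population", lambda pid: {"address": "12 Harbour St"})
answer = rails.query("tax-office", "population", "P-42", "annual assessment")
```

The real X-Road adds mutual TLS, message signing, and distributed governance between members; the sketch isolates the audit-trail discipline that makes federated exchange trustworthy.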
Connected agendas (what you line up)
Interoperability & secure exchange: X-Road-class backbone (audited logs, standard interfaces) connecting registries and services. e-Estonia
Once-Only / Single Digital Gateway: cross-border evidence exchange via OOTS to eliminate duplicate submissions. Internal Market and SMEs
National assistant strategy: shared conversational layer (Bürokratt-style) instead of duplicative bots per agency. Interoperable Europe Portal
Open service vocabularies & policy-as-code: machine-readable services and rules so agents can coordinate reliably.
Practical examples (actual programs & artefacts)
Bürokratt (Estonia). Official programme for an interoperable network of public-sector chat/voice assistants, designed as shared infrastructure for multi-agency services. Interoperable Europe Portal
X-Road (Estonia/Finland and beyond). Open-source secure data exchange; the 2025 e-Estonia guide cites ~2.7 billion queries/year via X-Road, underpinning 100% online public services. e-Estonia
OOTS (EU Single Digital Gateway). The Commission launched the core OOTS infrastructure in December 2023 to make cross-border paperwork “once-only,” enabling services to request evidence directly from authentic sources with the user’s permission. Internal Market and SMEs
U-Ask (UAE). A unified, generative-AI government assistant providing cross-agency guidance in Arabic and English; live on the UAE’s official portal and profiled by the OECD OPSI. U-Ask; Observatory of Public Sector Innovation
GOV.UK app / One Login direction (UK). Official pages show a mobile app for identity and access; reputable press report the new app’s roadmap with AI chatbot support to unify access across services. GOV.UK; Financial Times
3) Event-driven, “no-stop” services (offer → confirm)
Definition (two lines)
Real-world events (birth, move, bereavement, job loss) automatically trigger the right help across agencies.
Default UX flips from search & apply to offer & confirm—with clear reasons and consent.
What it means for citizens
When a birth is registered, the state drafts your child benefit, pre-books any needed appointments, and lines up ID steps; you just confirm. Moving home updates tax, schools and permits after one approval. When a loved one dies, you report it once and government handles the cascade. Cross-border evidence is fetched for you—securely, with your consent. Observatory of Public Sector Innovation; life.gov.sg; GOV.UK; Internal Market and SMEs
How it will be implemented (from plumbing to product)
Anchor on authoritative base registers + event subscriptions
Connect civil status, population, tax, benefits and identity registers through a secure exchange layer (e.g., X-Road) and publish events (e.g., “birth recorded”, “address changed”). Downstream systems subscribe and react; every call is logged and auditable. e-Estonia; x-road-document-library.s3.amazonaws.com
Use verifiable proofs, not bulk data copies
Instead of pushing raw records around, issue verifiable credentials/attestations (birth fact, residency, status). Services request the minimum attribute needed; the person (or their agent/wallet) consents to each disclosure. (This aligns with the EU Digital Identity Wallet model of selective disclosure.) European Commission
Make rules machine-readable (policy-as-code) and testable
Eligibility and business rules are published in open, machine-readable form so back-end agents (and citizens’ own agents) can simulate outcomes before any data leaves the person’s device. Tie this to service SLAs (time-to-benefit, error rates). European Commission
Orchestrate life-event journeys in one front door
Ship a mobile/web “moments of life” front door that assembles steps for the event (e.g., birth: registration → benefits → immunisation → preschool search) and explains “why you qualify” in plain language. LifeSG in Singapore is the pattern to copy. life.gov.sg+2; ICA
Automate cross-border evidence with Once-Only rails
For EU users, rely on the Once-Only Technical System (OOTS) so authorities can pull official “evidence” from other Member States—with the user’s permission—instead of asking the person to upload PDFs again. The Commission launched OOTS core infrastructure in December 2023. Internal Market and SMEs; European Commission
Design consent as a living contract
Every event-triggered step shows who needs what and why, for how long, with one-tap revoke. Wallets and verifiers must log purpose-bound use under the user’s control (eIDAS/EUDI baseline). European Commission
Build safety: explanations, appeals, human handoff
Any automated eligibility or scheduling must ship with a plain-language reason (“under rule X, you qualify for Y”), links to policy, and an easy appeal or human chat. Track appeal rates and reversals.
Measure outcomes, not clicks
Publicly report time-to-benefit, successful “first-time right” rate, and equity by subgroup for each event flow. Use these metrics to drive backlog relief and target UX debt.
Connected agendas (the main policy/tech rails to align)
eIDAS 2.0 / EU Digital Identity Wallet: selective disclosure and qualified electronic attestations of attributes for privacy-preserving proofs across services. European Commission
Single Digital Gateway & OOTS: cross-border “evidence” exchange so people submit information only once across the EU; core infrastructure live since Dec 2023. Internal Market and SMEs
Interoperability & secure exchange (X-Road class): auditable, federated, standardised inter-agency data exchange to wire up event subscriptions. e-Estonia
Life-event product design: “moments of life” front doors (e.g., LifeSG) that bundle tasks with clear language, timelines and eligibility explainers. life.gov.sg
Practical examples from other countries (what’s already real)
Estonia — Proactive family benefits (since 2019)
Parents of newborns no longer apply; the system checks eligibility and sends a digital offer to confirm—first rolled out in Oct 2019. (Government case write-ups; OPSI profile.) Observatory of Public Sector Innovation
Singapore — LifeSG “Having a child” flow
Parents register births via LifeSG, get a digital birth certificate, apply for Baby Bonus, and see immunisation records and parental-leave options in one journey. (LifeSG & ICA pages.) life.gov.sg+1; ICA
United Kingdom — “Tell Us Once” (bereavement)
Report a death once and most government bodies are notified automatically—an event-driven cascade that removes repeat paperwork at a difficult time. GOV.UK+1
New Zealand — SmartStart (birth bundle)
Register a baby online and, in the same flow, apply for the child’s IRD number and benefits such as Best Start—a practical “offer & confirm” birth bundle. smartstart.services.govt.nz; Govt NZ
European Union — Once-Only Technical System
The Commission confirms that OOTS core infrastructure is live (Dec 2023), enabling authorities to fetch official evidence directly—key to effortless cross-border life-event services. Internal Market and SMEs; European Commission
Interoperability backbone — X-Road (EE/FI)
Estonia’s secure, federated data-exchange layer is the “invisible” plumbing that lets event updates propagate with audit trails; it underpins the once-only experience and is now used across borders (Estonia–Finland). e-Estonia; x-road-document-library.s3.amazonaws.com
4) Minimum disclosure by design
Definition (two lines)
Replace “send us your whole dossier” with selective disclosure: prove only the specific fact a service needs (age ≥ 18, resident of X, income in band Y).
Use cryptography and verifiable credentials so officials get just enough to decide—nothing more.
What it means for citizens
You never upload full PDFs to prove a tiny thing. Your (or your family’s) civic agent shows only the required attribute(s), with an on-screen purpose and consent prompt. This aligns with the EU Digital Identity Wallet model, which explicitly calls for selective disclosure of attributes under the user’s control. EUR-Lex+1
How it will be implemented (practical steps)
Adopt wallets that support selective disclosure by default.
Implement the EU Digital Identity Wallet (EUDI) stack so people can present only specific attributes, backed by qualified electronic attestations of attributes. The regulation’s recitals and articles require the wallet to technically enable selective disclosure. EUR-Lex+1
Use open credential formats built for minimal data.
Issue and verify W3C Verifiable Credentials 2.0 and support Selective Disclosure JWT (SD-JWT) so a holder can cryptographically reveal just the claims a verifier requests. W3C; IETF Datatracker
Publish precise “ask lists” in service APIs.
Each service endpoint declares which attributes are lawful and necessary (e.g., “over-18”, “municipal residency”), not “send your full certificate.” Wallet/verifier UX must show purpose and request scope before consent. (EUDI Toolbox/ARF provides the common specs and patterns.) European Commission+1
Show consent as a living contract.
The presentation flow displays who is asking, what will be disclosed, and for how long; it logs consent and lets the user revoke later—consistent with the EUDI framework’s user-control requirements. EUR-Lex
Prefer proofs over document copies in back-office flows.
Replace bulk data pulls with verifiable presentations: the back office stores a signed yes/no or bounded value (e.g., “income in band B”), not the full document.
Make “simulate before share” a norm.
Publish policy-as-code for eligibility so a person’s agent can run a local simulation (no data leaves the device) to check likely outcomes before any disclosure. (ARF/Toolbox gives the rails for machine-readable flows.) European Commission
Connected agendas (what you line up)
eIDAS 2.0 / EU Digital Identity Wallet — legal basis and architecture for selective disclosure and qualified attestations; toolbox + ARF for implementers. EUR-Lex; European Commission
W3C VC 2.0 — web-native, privacy-respecting credentials. W3C
SD-JWT (IETF OAuth) — standard selective-disclosure mechanism for JWT claims. IETF Datatracker
Practical examples (actual programs)
EU Digital Identity Wallet (law + specs).
Regulation (EU) 2024/1183 states the wallet should technically enable the selective disclosure of attributes; Commission pages host the Toolbox/ARF implementer guidance and reference implementation. EUR-Lex+1; European Commission
EUDI Large-Scale Pilots (LSPs).
The Commission is running four large-scale pilots to test real wallet use-cases and feed the common Toolbox ahead of Member State roll-outs. Digital Strategy
TSA Digital ID (mDL) at U.S. airports.
TSA now accepts mobile driver’s licenses/digital IDs at 250+ airports and explicitly states it sees only the necessary information; travellers get an on-device consent screen summarising what will be shared. (This demonstrates selective, purpose-bound attribute sharing in production.) tsa.gov+1
5) Local-first intelligence; compute-to-data
Definition (two lines)
Move algorithms to the data (or to the citizen’s device), not data to algorithms.
Do the work in secure, audited environments and share only results or verifiable proofs.
What it means for citizens
Your data isn’t endlessly copied into silos. Eligibility checks, simulations and analytics happen where the data already lives (or on your phone). Services receive only a decision or a minimal proof (e.g., “income in band B,” “resident of X”), consistent with the EU Digital Identity Wallet’s selective disclosure rule.
How it will be implemented (practical steps)
Adopt secure “trusted research/processing environments” (TRE/SDE) across agencies.
Host sensitive datasets inside accredited environments; analysts run code inside, outputs are checked, and nothing leaves except approved results. (The UK ONS Secure Research Service is a national TRE; France’s CASD offers secure processing and has even earned GDPR certification.)
Follow the OpenSAFELY pattern in health: compute inside the data controller’s estate.
With OpenSAFELY, analyses run where NHS EHRs reside; all activity is logged; code and codelists are open by default—showing you can do large-scale public-good analytics without extracting raw patient data.
Use EU Health Data Space rails: secure processing over sharing.
The EHDS establishes governance so secondary-use analytics occur in secure processing environments, enhancing individual control while enabling reuse for research, policy and preparedness—again prioritising compute-to-data.
Make on-device wallets the default place for proofs.
The EU Digital Identity Wallet performs credential presentation and selective disclosure locally; services consume proofs (attestations) instead of raw documents. Design verifiers to accept attributes, not PDFs.
Deploy federated/remote algorithms for private-sector signals—with public governance.
In OPAL pilots, vetted algorithms were sent to telecom data in situ (Senegal, Colombia) to produce aggregate indicators for public good—no data extract, no raw access. Treat this as the model for any private data collaboration.
Harden the stack with confidential computing where needed.
For highly sensitive joins, run workloads in hardware-backed secure enclaves, ensuring code and data are protected while in use—and log who ran what, when. (Standard TRE/SDE guidance and enclaves support this posture.)
Publish code, keep data put.
Mandate open codebooks/codelists, public logging, and reproducible pipelines (the OpenSAFELY norm) so trust comes from transparency of methods, not data copying.
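The compute-to-data pattern above reduces to a simple discipline: the analysis runs next to the data, and only checked aggregates leave. A minimal sketch, assuming a hypothetical dataset and an invented minimum-cell-size threshold (real TREs apply richer statistical disclosure controls):

```python
# Minimal compute-to-data sketch: an approved analysis runs where the data
# lives; only aggregates that pass an output check leave the environment.
# The dataset, region names, and MIN_CELL threshold are all hypothetical.

RECORDS = [{"region": "north", "eligible": True} for _ in range(120)] + \
          [{"region": "south", "eligible": True} for _ in range(4)]

MIN_CELL = 10  # cells smaller than this could identify individuals

def run_in_tre(records: list[dict]) -> dict:
    """Count eligible people per region; suppress small cells on the way out."""
    counts: dict[str, int] = {}
    for r in records:
        if r["eligible"]:
            counts[r["region"]] = counts.get(r["region"], 0) + 1
    # Output checking: small cells never leave the secure environment.
    return {region: n for region, n in counts.items() if n >= MIN_CELL}

result = run_in_tre(RECORDS)
# "south" (4 people) is suppressed; only the safe aggregate is released.
```

The back office receives `result`, never `RECORDS`: the raw rows stay inside the accredited environment.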
Connected agendas (what you line up)
Trusted environments for official statistics & policy research: ONS SRS / UK Integrated Data Service; France CASD.
Health: European Health Data Space (secure processing environments; stronger individual control).
Identity & proofs: eIDAS 2.0 / EU Digital Identity Wallet with built-in selective disclosure and attestations.
Data collaboratives: OPAL “algorithms to data” governance for private-sector sources.
Practical examples from other countries (what’s already real)
OpenSAFELY (England) — “secure analytics platform for NHS EHRs,” running analyses across 58+ million records without copying data out; all activity publicly logged and code shared. This is compute-to-data at national scale.
ONS Secure Research Service (UK) — the country’s largest TRE; ~1,500 accredited researchers / ~600 projects / 124 datasets accessed under strict controls, often remotely via SafePods/Safe Rooms.
CASD (France) — hardware-anchored secure data access with strong identity controls (SD-Box), now GDPR-certified; partnerships include Banque de France to analyse granular economic data in situ.
European Health Data Space (EU) — Commission materials and Parliament studies underline the move to secure processing environments and better individual control, rather than broad data exports for secondary use.
OPAL pilots (Senegal, Colombia) — algorithms executed inside telecom environments to produce public-good indicators without exposing customer-level data—an operational proof of the model beyond health.
6) Consent as a living contract
Definition (two lines)
Consent isn’t a one-time checkbox; it’s granular, time-boxed, revocable and audited.
Every share shows who asks, what, why, for how long—and can be withdrawn as easily as given.
What it means for citizens
You see a clear prompt before any attribute is shared (e.g., “prove resident of X for school placement, 90 days”), you approve or deny, and you can revoke later. That matches EU law: wallets must enable selective disclosure under your sole control, and GDPR requires that withdrawing consent be as easy as giving it.
How it will be implemented (practical steps)
Put consent flows into the wallet presentation itself.
Use the EU Digital Identity Wallet (EUDI) so requests specify purpose and precise attributes; the wallet mediates selective disclosure (no full document upload), with user control baked in.
Make revoke-as-easy-as-grant a hard requirement.
Mirror GDPR Art. 7(3): a single tap in your account or app cancels an ongoing permission; services must stop using data immediately and log the change.
Expose a universal “consent & access” dashboard.
Give people one place to review what they’ve shared and who accessed it—like the UK NHS National Data Opt-out (opt in/out any time, including via the NHS App) and Estonia’s patient portal where citizens see who viewed their records.
Prefer proofs over data copies; expire by default.
Services request verifiable presentations (yes/no or bounded values) that expire after the stated purpose window; anything longer needs a fresh, explicit approval. (Fits the EUDI “user control & selective disclosure” model.)
Record purpose and provenance in audit logs.
Back-ends (ideally on an X-Road-class exchange) must log who asked for what, under which legal basis, when, and show that to the user on demand. Estonia’s approach—citizens can check access logs and challenge misuse—shows the value.
Aggregate complex consents via trusted intermediaries.
Where many sources exist, adopt human-centric operators that manage purpose-bound permissions coherently (the MyData model), so people aren’t juggling dozens of toggles.
Cross-border: fetch “evidence” only on request.
In the EU, tie cross-border data pulls to the Once-Only Technical System (OOTS) so an authority retrieves official evidence from another Member State at the citizen’s request—not by default. Core OOTS infrastructure went live on 12 Dec 2023.
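The lifecycle described in these steps — time-boxed grant, audited access, one-tap revocation — can be sketched as a small ledger. This is a hedged illustration under invented field names, not any EUDI or GDPR-mandated data model:

```python
# Sketch of a time-boxed, revocable consent record with an audit trail --
# the GDPR Art. 7(3) "revoke as easily as grant" pattern. All names here
# (ConsentLedger, field keys) are hypothetical, not from any specification.

from datetime import datetime, timedelta, timezone

class ConsentLedger:
    def __init__(self):
        self.grants = {}
        self.audit_log = []   # who asked, what happened, shown to the user

    def grant(self, grant_id, verifier, claims, days):
        """One action to grant: purpose-bound claims with an expiry window."""
        self.grants[grant_id] = {
            "verifier": verifier,
            "claims": claims,
            "expires": datetime.now(timezone.utc) + timedelta(days=days),
            "revoked": False,
        }
        self.audit_log.append(("grant", grant_id, verifier))

    def revoke(self, grant_id):
        """One action to revoke -- symmetric with granting."""
        self.grants[grant_id]["revoked"] = True
        self.audit_log.append(("revoke", grant_id,
                               self.grants[grant_id]["verifier"]))

    def is_active(self, grant_id):
        g = self.grants[grant_id]
        return not g["revoked"] and datetime.now(timezone.utc) < g["expires"]

ledger = ConsentLedger()
ledger.grant("g1", "school-placement", ["municipality"], days=90)
active_before = ledger.is_active("g1")
ledger.revoke("g1")
active_after = ledger.is_active("g1")
```

Expiry-by-default means a verifier must come back with a fresh, explicit request once the purpose window closes; nothing stays open silently.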
Connected agendas (what you line up)
eIDAS 2.0 / EUDI Wallet — wallet under the user’s control; selective disclosure of attributes; implementing acts and ARF define the technical baseline.
GDPR Art. 7 — consent must be withdrawable as easily as given; people must be told this upfront.
Single Digital Gateway / OOTS — cross-border evidence exchange upon user request, not blanket sharing.
Audit & access transparency — service logs visible to citizens (Estonia’s patient portal pattern).
Practical examples from other countries (what’s already real)
EU Digital Identity Wallet (law + toolbox).
Regulation (EU) 2024/1183 requires that the wallet technically enable selective disclosure so users share only necessary attributes and remain in control; the Architecture & Reference Framework operationalizes this.
NHS England — National Data Opt-Out.
People can set or change their choice at any time (web or NHS App) on sharing health data for research/planning—an operational example of revoke-as-easy-as-grant.
Estonia — access logs for health data.
Patients see their records and who accessed them, and can challenge inappropriate access—demonstrating auditable provenance in practice.
EU — Once-Only Technical System (OOTS) live.
The Commission confirms the go-live of OOTS core infrastructure (12 Dec 2023), enabling cross-border evidence exchange at the citizen’s request.
7) Explainable help, not hidden automation
Definition (two lines)
Every automated step comes with a plain-language “what we did and why,” a human contact, and an easy appeal.
Transparency is a service feature: people can see, question, and contest how automation affected them.
What it means for citizens
Instead of a black box, you get a short reason you can understand (“You qualify for X under rule Y; here’s the link”). If an algorithm helps triage your visa or flag your tax return, you can see that automation was used, what safeguards exist, and how to escalate to a human. In practice, that means public records of government algorithms and pre-launch risk assessments are published as a matter of course.
How it will be implemented (practical steps)
Make transparency mandatory and routine.
Require an Algorithmic Transparency Recording Standard (ATRS) entry for every impactful system (purpose, data, risks, human oversight, contacts). The UK guidance sets out the template and makes ATRS publication mandatory for central government bodies.
Publish a public algorithm register.
Run a searchable, plain-English catalogue (like the Netherlands) where each system’s page explains what it does, why it’s used, and links to docs/risk controls. Keep entries updated as systems change.
Do a risk/impact assessment before launch, then publish it.
Use a structured Algorithmic Impact Assessment (AIA) to classify risk and prescribe mitigations (human-in-the-loop, appeal paths, bias tests, logging). Canada’s AIA is mandatory under its Directive on Automated Decision-Making and publicly accessible.
Ship “reason & recourse” in the product.
For every automated determination or triage, return a one-screen explanation with the legal/policy basis, a link to the full policy, and a “talk to a person” option. Keep the human decision-maker accountable for outcomes.
Audit for fairness and publish results.
Track subgroup performance (false positives/negatives, error rates) and publish summaries. Where models affect rights or entitlements, align with the EU AI Act obligations for risk management, logging, transparency, and human oversight.
Log and show access/usage.
Maintain auditable logs of when an algorithm was invoked in your case; let citizens see this in their account, mirroring access-log patterns used elsewhere in digital government.
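A "reason & recourse" payload from these steps might look like the following. The field names follow the spirit of ATRS-style records but are invented for illustration; the URL and contact are placeholders:

```python
# Illustrative "reason & recourse" response for an automated determination.
# Every key here is hypothetical -- the point is the shape: automation
# disclosed, legal basis named, human contact and appeal path always present.

def explain_decision(rule_id: str, outcome: str,
                     policy_url: str, contact: str) -> dict:
    """Build the one-screen explanation every automated step should return."""
    return {
        "automation_used": True,          # never hidden
        "outcome": outcome,
        "legal_basis": rule_id,           # e.g. a regulation section
        "full_policy": policy_url,        # link to the complete rule text
        "talk_to_a_person": contact,      # staffed path, no dead ends
        "appeal": policy_url + "/appeal",
    }

payload = explain_decision(
    rule_id="Housing Benefit Reg. 12(3)",   # invented example
    outcome="eligible",
    policy_url="https://example.gov/policy/hb12",
    contact="benefits-team@example.gov",
)
```

Making this payload a required part of every automated response turns transparency into a product contract rather than a documentation afterthought.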
Connected agendas (what you line up)
ATRS (UK) — standardised public records of government algorithm use (purpose, data, risk, oversight).
Algorithm Registers (NL, cities) — public catalogues that strengthen trust and position citizens to ask informed questions.
Algorithmic Impact Assessments (CA) — mandatory, published risk assessments that drive mitigations and human oversight.
EU AI Act — legal duties for high-risk systems: risk management, documentation, transparency to users, and human oversight.
Practical examples from other countries (what’s already real)
United Kingdom — ATRS guidance and policy
The UK’s official guidance explains ATRS and the requirement for central government bodies to publish records of algorithmic tools in use, including the information that must be disclosed.
Netherlands — National Algorithm Register
A live, government-run register where ministries and agencies publish their algorithms, with explanations aimed at non-experts and an emphasis on trust and strengthening citizens’ position.
City of Helsinki — AI Register
A municipal-level, public directory that lets anyone examine each AI system the city uses and give feedback — a concrete pattern for local transparency and participation.
Canada — Algorithmic Impact Assessment (AIA) & published case
The AIA tool is mandatory under Canada’s Directive on Automated Decision-Making. Immigration, Refugees and Citizenship Canada has published the AIA for its “Advanced Analytics Triage of Overseas Temporary Resident Visa Applications,” documenting scope, risks, and mitigations — a strong precedent for visibility and recourse.
(Context) Transparency expectations rising
While mandates exist, scrutiny continues to ensure compliance — reinforcing why explainable help must be treated as a first-class product requirement, not an afterthought.
8) Outcome-centric service guarantees
Definition (two lines)
Design, measure and fund services around outcomes that matter to people (time-to-benefit, effort saved, equity), not pageviews or tickets closed.
Publish targets and performance, and let automation/agents optimize to those outcomes with human oversight.
What it means for citizens
You feel the state saving you time and stress: fewer steps, faster decisions, clearer status, and fair outcomes across groups. Governments commit publicly to customer-experience goals (e.g., reduce “time tax”), measure them consistently, and fix services that miss.
How it will be implemented (practical steps)
Adopt an outcomes yardstick (and make it public).
Use a standard CX framework that requires agencies to set and publish targets like time to complete, time to decision, task success, satisfaction, and burden (minutes/hours). The U.S. OMB Circular A-11 §280 and EO 14058 do exactly this and center the reduction of “time taxes.”
Instrument every high-volume journey with four core metrics.
Track completion rate, user satisfaction, cost per transaction, and digital take-up—the long-standing GOV.UK Service Manual standard—then add equity splits (e.g., by language, disability, age) and publish dashboards.
Set life-event SLAs (offer-to-confirm).
For events like bereavement, birth, moving, unemployment, commit to maximum “offer-to-confirm” windows (for proactive benefits, status updates, appointment booking) and monitor exceptions. OECD guidance encourages life-event framing for coherent outcomes.
Make outcome telemetry part of policy and budget.
Require CX and burden metrics (minutes saved, errors avoided) in business cases and post-implementation reviews; tie funding to measured improvements (A-11 §280 makes CX data collection an explicit expectation).
Close the loop with service design, not comms.
Use qualitative research and usability testing to find where people get stuck; then change rules, content, and flows. EO 14058 and A-11 mandate human-centred design and continuous research.
Treat administrative burden as a harm to reduce.
Catalogue paperwork/friction and remove it; A-11 §280 and related OMB memos instruct agencies to measure and reduce burden, especially in benefits access.
Bake outcomes into agent behavior.
Configure state and citizen agents to optimize for time-to-benefit and first-time-right, not raw throughput; require explanations and escalation when outcome targets are at risk.
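As a sketch of what outcome telemetry for one life-event journey could compute, the following derives a completion rate and a 90th-percentile time-to-decision, then flags a breach of an invented SLA target. Journey data, metric names, and the 14-day target are all hypothetical:

```python
# Outcome-telemetry sketch: completion rate and p90 time-to-decision for a
# life-event journey, checked against a hypothetical offer-to-confirm SLA.

def p90(values: list[int]) -> int:
    """Nearest-rank 90th percentile (simple variant for illustration)."""
    s = sorted(values)
    return s[min(len(s) - 1, int(0.9 * len(s)))]

# Invented journey records for one service.
journeys = [
    {"completed": True,  "days_to_decision": 3},
    {"completed": True,  "days_to_decision": 5},
    {"completed": False, "days_to_decision": None},   # user gave up
    {"completed": True,  "days_to_decision": 21},
]

done = [j for j in journeys if j["completed"]]
completion_rate = len(done) / len(journeys)           # 3 of 4 finished
p90_days = p90([j["days_to_decision"] for j in done])

SLA_DAYS = 14                                         # hypothetical target
breach = p90_days > SLA_DAYS                          # triggers review
```

Publishing numbers like `completion_rate` and `p90_days` per journey, with equity splits, is what turns "outcome-centric" from a slogan into a dashboard someone is accountable for.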
Connected agendas (what you line up)
Executive Order 14058 — “transforming Federal customer experience,” explicitly targeting time saved (“time tax”) and life-experience journeys.
OMB Circular A-11 §280 — standardized CX metrics, telemetry, surveys, and continuous improvement requirements for services.
GOV.UK Service Manual — canonical measures (completion rate, satisfaction, cost per transaction, digital take-up) and guidance on publishing performance.
OECD principles on life-event services — encourages outcome-focused, user-centred service ecosystems across providers.
Practical examples from other countries (what’s already real)
United States — CX policy stack (EO 14058 + A-11 §280).
The EO directs agencies to save people’s time and rebuild trust through human-centred design and measurable CX goals; A-11 §280 operationalizes metrics, telemetry, and burden reduction across designated services.
United Kingdom — Service performance metrics.
The GOV.UK Service Manual requires services to measure completion rate, satisfaction, cost per transaction and related KPIs—metrics used to drive redesign and de-burden journeys.
United Kingdom — “Tell Us Once” (bereavement).
A national, event-driven service that reduces repeat notifications after a death; widely cited by civil society as a way to cut administrative burden during a stressful life moment—a concrete example of outcome-centric design.
OECD — Life-event guidance (2024/2025).
The OECD’s digital-age service design principles reinforce life-event framing and integrated outcomes as good practice for public services.
9) Policy-as-code with public tests
Definition (two lines)
Turn rules (eligibility, calculations, deadlines) into official, machine-readable code alongside the human text.
Publish open test suites so anyone can verify that services apply the rules consistently.
What it means for citizens
You can simulate outcomes (benefits, taxes, permits) locally before sharing any data; when you proceed, every decision is consistent and explainable. If a rule changes, your agent and the service both update from the same source of truth.
How it will be implemented (practical steps)
Adopt a rules engine built for legislation
Use a mature, open standard/tooling (e.g., OpenFisca) to encode formulas, thresholds and conditions with provenance back to the statute/regulation. Expose a read API for simulators and agents.
Publish authoritative, versioned rule packages
Treat rules like code: semantic versioning, change logs, deprecation windows, and a public repository. Each change references the legal instrument and commencement date.
Ship public test suites
Provide canonical “edge cases” and regression tests (e.g., family types, income bands, residency quirks). Third parties can run them to confirm they’ll get the same answer as government.
Wire policy-as-code to wallets/agents
Let a person’s civic agent run local simulations against public rules (no data leaves the device). If they choose to apply, the service requests selective-disclosure proofs (age, residency, income band) instead of documents. (This aligns with EU Digital Identity Wallet selective disclosure.)
Bake rules into service endpoints
Each service exposes an “explain” endpoint that returns: the rule path, parameters used, and a human-readable reason (“Under §X.Y, you qualify for Z”). Tie this to appeal links and publish uptime/error budgets.
Governance
Create a small central Rules Office (policy + engineering) to steward standards, reviews, and testing, and to help ministries migrate complex chapters (tax/social protection first).
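The steps above can be compressed into a toy example in the OpenFisca spirit — plain Python, not the real OpenFisca API. The benefit, statute reference, amounts, and rule identifiers are all invented; the point is the pattern: a versioned rule with provenance, an "explain" result naming the rule path, and public regression tests anyone can re-run.

```python
# Policy-as-code sketch (OpenFisca spirit, not its real API). The rule
# carries provenance back to an invented statute, the result explains
# itself, and the asserts at the bottom act as a public test suite.

RULE = {
    "id": "child_benefit.base",
    "source": "Family Benefits Act §4(2)",   # hypothetical instrument
    "version": "2025.1.0",                   # semantic versioning
    "per_child": 60,
    "large_family_bonus": 40,                # from the third child onward
}

def child_benefit(n_children: int) -> dict:
    """Compute the entitlement and return the rule path and reason with it."""
    base = RULE["per_child"] * n_children
    bonus = RULE["large_family_bonus"] * max(0, n_children - 2)
    return {
        "amount": base + bonus,
        "rule_path": RULE["id"],
        "rule_version": RULE["version"],
        "reason": (f"Under {RULE['source']}: {n_children} children x "
                   f"{RULE['per_child']} + large-family bonus {bonus}"),
    }

# Canonical public regression tests: third parties run these to confirm
# they get the same answer as government.
assert child_benefit(1)["amount"] == 60
assert child_benefit(3)["amount"] == 220   # 180 base + 40 bonus
```

Because the rule and the tests live in one public, versioned package, a citizen's agent can simulate locally and a ministry's endpoint can answer "explain" requests from exactly the same source of truth.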
Connected agendas (what you line up)
Rules-as-Code / Better Rules — encode law and guidance so machines and humans share one source of truth (New Zealand’s programme).
OECD “Cracking the Code” — international framing for publishing machine-consumable rules alongside human text.
Identity & proofs — EUDI Wallet + selective disclosure so simulations become low-friction applications when you consent.
Practical examples from other countries (what’s already real)
France — OpenFisca powering benefits simulators
OpenFisca (supported on the EU’s Interoperable Europe portal) has been used by the French state to model social/tax rules and power multi-benefit simulators (e.g., Mes Aides, precursor to today’s unified entitlements tools).
New Zealand — Better Rules / Legislation-as-Code
NZ’s Better Rules programme documents how policy drafting, legislative text and machine-readable rules are developed in parallel, with prototypes across tax and regulatory areas.
Global practice & research — OECD & Gov networks
OECD’s Rules-as-Code report sets out the case and methods; public-interest labs (e.g., Georgetown’s Digital Benefits Network) are piloting RaC for U.S. benefits, underscoring momentum toward testable, computable rules.
10) Ambient inclusion (everyone can use it, by default)
Definition (two lines)
Services should work for everyone the first time—across languages, abilities, devices and connection speeds—without extra steps.
Inclusion is built in (accessibility, multilingual, plain language), not bolted on.
What it means for citizens
You can complete critical tasks in your language, on your device, using voice or text, with clear, plain words and assistive tech support. If you need help, assisted channels are available—without making you start again. Standards and laws require this—and they’re already in force in many countries (EU Web Accessibility Directive/WCAG 2.2; UK Public Sector Accessibility Regulations; Accessible Canada Act).
How it will be implemented (practical steps)
Meet and evidence accessibility standards as a baseline.
Build and audit to WCAG 2.2 (latest W3C Recommendation) and publish an accessibility statement for each service (as required in the EU/UK public sector regs).
Ship multilingual by default using public-sector MT.
Integrate the Commission’s eTranslation (and its API) for 24+ EU languages in back-office workflows and front ends, with human review where stakes are high.
Design for assistive tech and low digital skills.
Follow the GOV.UK Service Manual guidance: plan assisted-digital support from discovery, test with users who have access needs every sprint, and offer alternative channels without duplicative steps.
Use plain language, always.
Adopt plain-language standards (e.g., U.S. Plain Writing Act guidance) so content is understandable the first time—paired with readability checks and user testing.
Make inclusion measurable.
Track completion, satisfaction and error rates by assistive tech use, language, and channel; publish dashboards and fix gaps. (NHS and GOV.UK manuals provide practical testing checklists.)
Bake inclusion into procurement and change control.
Require WCAG 2.2 conformance in contracts; use accessibility statements and VPAT-style disclosures for major components. (UK regs mandate statements for public bodies.)
Connected agendas (the main agendas)
EU Web Accessibility Directive (2016/2102) — obliges public-sector sites/apps to be accessible and to publish accessibility statements.
WCAG 2.2 — current international standard for web accessibility (adds nine success criteria beyond 2.1).
UK Public Sector Accessibility Regulations 2018 — national implementation requiring conformance and statements.
Accessible Canada Act (2019) — goal of a barrier-free Canada by 2040, with federal duties to identify, remove and prevent barriers.
Plain Writing Act (US) — mandates clear government communication people can understand and use.
Practical examples from other countries (what’s already real)
European Commission — eTranslation
Official neural MT service for EU institutions and European public administrations, with API access for government workflows and sites—core plumbing for multilingual inclusion.
European Union — Web Accessibility Directive
In force for public-sector sites/apps; requires conformance (aligned to EN 301 549/WCAG) and a public accessibility statement per service.
United Kingdom — Public Sector Bodies Accessibility Regulations
GOV.UK guidance sets out duties (WCAG conformance, statements, testing). The Service Manual provides step-by-step accessibility and assisted-digital guidance used across government.
Canada — Accessible Canada Act
Federal law targeting a barrier-free Canada by 2040, with priority areas and oversight bodies—making inclusion a statutory, measurable responsibility.
NHS (England) — practical accessibility playbooks
NHS digital manuals detail how to research and test with users with access needs across product iterations—an operational template for ministries and agencies.
11) Resilience & dignity (human in command)
Definition (two lines)
Design services so automation augments, never replaces, accountable humans—especially for rights-affecting calls.
Build for graceful degradation: when tech fails or users don’t fit the template, there’s a humane fallback that works.
What it means for citizens
There’s always a staffed path. If an eGate won’t work for your family or a model flags your case, a human officer decides—and you can see how automation was used and challenge it. (NZ Customs explicitly excludes young children from eGates, requiring manual processing; Canada’s immigration analytics state officers always make the decision.)
How it will be implemented
Codify human oversight for high-risk AI. Align with the EU AI Act: risk management, documentation, user transparency, and human oversight for high-risk systems (health, migration, social benefits, etc.).
Guarantee staffed fallbacks in frontline flows. Keep manual lanes and service desks where automation has known limits (e.g., biometrics for kids/edge cases). Publish criteria and signage so people know their options. (NZ eGate age rules show why this matters.)
Engineer operational resilience. Meet NIS2 obligations: incident response, CSIRT cooperation, crisis liaison (EU-CyCLONe). Run continuity drills so core services keep working during outages.
Prove accountability with end-to-end logs. Use exchange layers (e.g., X-Road) that generate auditable logs for each transaction and configuration change; make relevant access logs visible to users.
Write “reason & recourse” into the UI. Every automated step returns a plain-language reason, a human contact, and an appeal path—no dead ends.
Test dignity, not just uptime. In service reviews, simulate failure modes (model offline, camera glare, accent/age variance) and measure how quickly users reach a competent human.
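The "automation recommends, a human decides, everything is logged" posture described above can be sketched as follows. The triage model is a stub, the failure mode ("camera glare") and all names are invented; the shape to notice is that the human sign-off and the audit entry happen on every path, including fallback:

```python
# Graceful-degradation sketch: automation may *recommend*, a named human
# always decides, and every invocation is logged. Model and field names
# are hypothetical stand-ins for a real triage system.

AUDIT_LOG = []

def triage_model(case: dict) -> str:
    """Stub model with a known limit: refuses low-confidence captures."""
    if case.get("camera_glare"):
        raise RuntimeError("low-confidence capture")
    return "routine"

def decide(case: dict, officer) -> str:
    try:
        recommendation = triage_model(case)
        route = "automated-assist"
    except RuntimeError:
        recommendation = None          # graceful fallback: staffed path
        route = "manual"
    decision = officer(case, recommendation)   # human always signs off
    AUDIT_LOG.append({"case": case["id"], "route": route,
                      "decision": decision})
    return decision

# A (hypothetical) officer who reviews every case, assisted or not.
officer = lambda case, rec: "approved"
decide({"id": 1}, officer)
decide({"id": 2, "camera_glare": True}, officer)
routes = [entry["route"] for entry in AUDIT_LOG]
```

Dignity testing then becomes measurable: inject the failure modes and assert that every case still reaches a logged human decision.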
Connected agendas
EU AI Act (human oversight, transparency); NIS2 (incident response & resilience); secure data exchange with auditable logs (X-Road class).
Practical examples (what’s already real)
NZ eGate — automated border control with explicit exclusions; staffed alternatives remain.
IRCC (Canada) advanced analytics — triage helps sort routine visas, but officers always decide; not used to refuse applications.
EU AI Act — risk-based obligations including human oversight for high-risk AI.
X-Road — documented audit log events and logging model for accountability.
12) Continuous, democratic feedback loops
Definition (two lines)
Make participation and oversight part of the product: people can see, inspect, and influence how AI is used—continuously, not just once.
Close the loop by showing how input changed decisions, and by iterating services in public.
What it means for citizens
You can inspect the AI systems your city or ministry uses, leave feedback, and read plain-English explanations. You can also contribute to big policy questions at scale (e.g., via structured, multilingual online consultations) and see how that input shaped outcomes. (Helsinki’s AI Register and the Netherlands’ Algorithm Register do the first; vTaiwan/Pol.is shows the second.)
How it will be implemented
Publish an algorithm register entry for every impactful system. Follow the Dutch model: purpose, data, oversight, contacts; keep it updated as systems change. Enable comments/feedback on each entry.
Adopt a standard transparency record. Mirror the UK’s ATRS pattern (purpose, data, risks, human role) and link it from the service page that uses the system.
Run pre-launch risk assessments and publish them. Use Canada-style Algorithmic Impact Assessments (AIAs) to classify risk and document mitigations; post the AIA publicly.
Scale consultations with evidence-preserving tools. Use platforms that cluster views (e.g., Pol.is) and publish attributed summaries showing majority and minority positions, plus “you said → we did” responses.
Instrument participation. Track participation breadth (language, region, disability), response time to feedback, and policy changes made—publish dashboards quarterly.
Make registers discoverable in context. Link the relevant algorithm entry from within each service journey (“How automation is used here”), not just in a separate portal.
Connected agendas
Municipal/national algorithm registers (Helsinki, NL); AIAs (Canada); structured mass-participation (vTaiwan/Pol.is); transparency norms (ATRS-style records).
Practical examples (what’s already real)
Helsinki — AI Register: a public, explorable directory of every municipal AI system with feedback mechanisms.
Netherlands — National Algorithm Register: central, government-run catalogue focused on impactful algorithms (incl. high-risk AI).
Canada — AIA (published): IRCC’s AIA for advanced analytics in visa triage documents scope, risk rating, and safeguards.
Taiwan — vTaiwan/Pol.is: national-scale consensus mapping used to inform policy; well-documented case studies.