Conditions under which algorithmic systems may influence, rank, score, classify, recommend, or determine outcomes within civic life.
Algorithmic systems are already present within civic life, whether seen or unseen. They rank, filter, score, classify, recommend, predict, and shape decisions across housing, education, health, benefits, safeguarding, transport, employment, policing, and public administration. Their presence is often obscured by software language, procurement layers, institutional routine, or the false assumption that technical mediation is neutral by default.
Where a system can meaningfully alter a person’s access, treatment, scrutiny, movement, opportunity, or standing, it is already exercising civic force. That force may not operate without declaration, accountability, review, and the possibility of refusal. Civic life may not be silently reorganised by systems the public cannot see, question, or meaningfully challenge.
This Charter establishes the principle that local communities, authorities, and institutions retain the right and responsibility to set conditions on algorithmic presence within civic life. It exists to protect dignity, fairness, legibility, due process, and human accountability at the point where computational judgment begins to shape lived reality.
This Charter governs a category in which invisibility is itself part of the harm. Because algorithmic systems often arrive through software layers, procurement chains, and routine administrative use, the conditions set here are weighted toward declaration, legibility, contestability, and review rather than physical containment alone.
The purpose of this Charter is to define the minimum conditions required before an algorithmic system may be lawfully, ethically, and operationally used within local civic environments.
This Charter applies at the point of civic effect. Procurement origin, vendor location, model origin, or central contracting arrangement do not displace local standing. Where a system produces or is capable of producing Meaningful Civic Effect within the jurisdiction, the deploying body within that jurisdiction is the accountable party and must either secure compliance with this Charter or refuse deployment.
No algorithmic system may be deployed or used within the jurisdiction unless all of the following conditions are met.
The deployer must clearly declare the function of the system, the civic process in which it will be used, the specific problem it claims to address, and the reason algorithmic mediation is necessary rather than merely convenient.
Every deployment must have a named human steward and a named responsible organisation. Responsibility may not be diffused across vendors, procurement chains, consultants, software providers, or abstract institutional ownership structures.
The system must be registered with the relevant local authority before use begins. Registration must include system type, operating context, named steward, vendor or internal builder, declared decision boundary, decision posture, update pathway, affected populations, and escalation procedures.
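For illustration only, a registration of this kind could be captured as a structured record such as the sketch below. The field names are hypothetical; the Charter fixes the required content, not the storage format.

```python
from dataclasses import dataclass, field

@dataclass
class RegistrationRecord:
    """Hypothetical shape for a pre-deployment registry entry.

    Field names are illustrative; the Charter specifies what must
    be registered, not how it is stored.
    """
    system_type: str            # e.g. "risk triage", "rules engine"
    operating_context: str      # the civic process the system enters
    named_steward: str          # the accountable human steward
    responsible_org: str        # the deploying body
    vendor_or_builder: str      # external vendor or internal team
    decision_boundary: str      # declared limits of what it may decide
    decision_posture: str       # assistive, advisory, etc.
    update_pathway: str         # how changes reach the live system
    escalation_procedures: str  # who is contacted, and how, when it fails
    affected_populations: list[str] = field(default_factory=list)
```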
A deployment-specific assessment must be completed before approval. This assessment must address purpose, proportionality, fairness risk, data quality risk, discrimination risk, error pathways, explanation standards, challenge pathways, human review design, update risk, and likely effects on vulnerable populations.
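The assessment dimensions listed above lend themselves to an explicit checklist, so that approval can be mechanically withheld until every dimension has been addressed. A minimal sketch, with invented names:

```python
from enum import Enum

class AssessmentArea(Enum):
    PURPOSE = "purpose"
    PROPORTIONALITY = "proportionality"
    FAIRNESS_RISK = "fairness risk"
    DATA_QUALITY_RISK = "data quality risk"
    DISCRIMINATION_RISK = "discrimination risk"
    ERROR_PATHWAYS = "error pathways"
    EXPLANATION_STANDARDS = "explanation standards"
    CHALLENGE_PATHWAYS = "challenge pathways"
    HUMAN_REVIEW_DESIGN = "human review design"
    UPDATE_RISK = "update risk"
    VULNERABLE_POPULATION_EFFECTS = "effects on vulnerable populations"

def assessment_complete(addressed: set[AssessmentArea]) -> bool:
    # Approval may only proceed once every area has been addressed;
    # a partially completed assessment is not an assessment.
    return addressed == set(AssessmentArea)
```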
The deployer must declare what the system can and cannot do, what data it uses, what outputs it produces, what decisions it influences, and what forms of adaptation, retraining, or learning may occur. No meaningful civic use may rest on an undeclared system boundary.
Where a system has a Meaningful Civic Effect, a real human review and override pathway must exist. The human reviewer must be competent, reachable, authorised to intervene, and not reduced to a ceremonial rubber stamp.
Where a person is materially affected by an algorithmic system, the deployer must provide clear notice that such a system is in use, the general function it serves, the nature of its influence on the outcome, and the route by which a person may seek explanation, review, or challenge. Explanation must be meaningful enough to support contestability. Mere disclosure that an automated system was used is insufficient.
The system must maintain sufficient records to reconstruct its use, inputs, outputs, interventions, model or rules version, and any material changes affecting outcomes. Model lineage must be preserved in a form sufficient for local audit, incident reconstruction, and public accountability where required.
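One possible shape for such records is an append-only entry created at each use, as in the hypothetical sketch below; the names and fields are illustrative, not prescribed.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class UseRecord:
    """Illustrative append-only record of a single system use.

    Enough to reconstruct inputs, outputs, interventions, and the
    exact model or rules version in force at the time.
    """
    timestamp: datetime
    system_id: str
    model_or_rules_version: str   # lineage: which version produced this
    inputs_ref: str               # reference to the inputs as presented
    output: str                   # what the system produced
    human_intervention: str       # override, confirmation, or "none"
    material_change_note: str     # any change affecting outcomes

def new_record(system_id: str, version: str, inputs_ref: str,
               output: str, intervention: str = "none",
               change_note: str = "") -> UseRecord:
    # Records are timestamped at creation and, being frozen,
    # cannot be mutated afterwards.
    return UseRecord(datetime.now(timezone.utc), system_id, version,
                     inputs_ref, output, intervention, change_note)
```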
Data used or generated through deployment must not be used beyond the declared civic purpose, operational audit, incident reconstruction, and lawful review unless separately and explicitly approved. Secondary commercial, profiling, training, or analytics use requires distinct approval.
A simple and accessible route must exist for people to challenge materially consequential outputs, request review, and seek correction where harm, error, or unfair treatment may have occurred.
The system must not impose a level of surveillance, opacity, categorisation, or procedural burden disproportionate to the civic purpose claimed. Greater civic consequence requires greater clarity, tighter boundaries, and stronger review.
The deployer must explicitly declare whether the system is assistive, advisory, prioritising, threshold-setting, or outcome-determinative. A system approved for one posture may not be used in another without renewed approval.
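Because the five postures form a closed set, a deployer could encode them directly and treat any posture mismatch as a hard stop pending renewed approval. A minimal sketch, with invented names:

```python
from enum import Enum

class DecisionPosture(Enum):
    ASSISTIVE = "assistive"
    ADVISORY = "advisory"
    PRIORITISING = "prioritising"
    THRESHOLD_SETTING = "threshold-setting"
    OUTCOME_DETERMINATIVE = "outcome-determinative"

def check_posture(approved: DecisionPosture,
                  requested: DecisionPosture) -> None:
    # A system approved for one posture may not operate in another;
    # any mismatch is a hard stop, not a logged warning.
    if requested is not approved:
        raise PermissionError(
            f"approved for {approved.value!r}, attempted use as "
            f"{requested.value!r}: renewed approval required"
        )
```

Raising rather than warning reflects the Charter’s position that posture drift is a compliance failure, not an operational detail.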
Every consequential deployment must carry a declared review date, after which continued operation requires renewed approval. Continued civic use may not be presumed indefinite.
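A sunset check of this kind is straightforward to make explicit. The sketch below is illustrative; the function name and parameters are invented.

```python
from datetime import date

def may_continue_operating(review_date: date,
                           renewed: bool,
                           today: date | None = None) -> bool:
    # Past the declared review date, continued operation depends
    # entirely on renewed approval; there is no default extension.
    today = today or date.today()
    if today < review_date:
        return True
    return renewed
```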
The following constraints apply to all deployments under this Charter.
Human dignity, fairness, and procedural clarity take precedence over administrative speed, efficiency claims, model confidence, or institutional convenience.
No person may be denied, downgraded, sanctioned, escalated, flagged, or deprioritised in a high-risk civic context solely on the basis of an algorithmic output.
Where a system meaningfully shapes a civic outcome, its role may not remain hidden from those materially affected by it.
A deployer may not combine datasets, proxy variables, behavioural traces, third-party data, or inferential layers beyond what has been locally declared and approved.
A system may not expand its predictive, inferential, adaptive, classificatory, or decision-shaping capabilities beyond what has been locally declared and approved.
A system may not alter its outputs, thresholds, classifications, or recommendations within a live civic interaction based on a person’s responses, behaviour, or inferred state unless such adaptation has been explicitly declared, bounded, and locally approved. Adaptive behaviour within a session is a declared capability, not a background feature.
A system may not rely on proxies, correlations, or hidden variables that produce materially discriminatory effects on protected or vulnerable groups, whether directly intended or operationally tolerated.
No significant update to model behaviour, rules logic, input structure, thresholding, or decision weight may be introduced without review where such change could materially alter civic outcomes.
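The gating rule can be stated mechanically: updates of a material type that could alter civic outcomes do not go live until review is complete. The sketch below is an invented illustration, with the change types drawn from the Charter’s own list.

```python
# Change types taken from the Charter's own enumeration; the gate
# itself is an invented illustration of "no significant update
# without review".
MATERIAL_CHANGE_TYPES = {
    "model behaviour",
    "rules logic",
    "input structure",
    "thresholding",
    "decision weight",
}

def update_permitted(change_type: str,
                     could_alter_outcomes: bool,
                     review_completed: bool) -> bool:
    # A material change that could alter civic outcomes requires
    # completed review before it may be introduced.
    if change_type in MATERIAL_CHANGE_TYPES and could_alter_outcomes:
        return review_completed
    return True
```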
Commercial confidentiality, vendor secrecy, or technical complexity may not be used to block local scrutiny, civic challenge, or audit of a system with Meaningful Civic Effect.
An algorithmic system may not be used to pressure, behaviourally steer, exploit confusion, or engineer compliance in ways that bypass ordinary civic consent and understanding.
Pilot use, advisory use, or administrative support use may not be gradually normalised into consequential decision power without explicit renewed approval.
An algorithmic system used in civic life may not present itself in ways that obscure the location of institutional authority, simulate human accountability where none exists, or induce a person to treat system outputs as if they were relational care, legal judgment, or civic deliberation.
The following categories require heightened review, explicit declaration, and independent justification before any civic use may be considered. Some may be unsuitable for local approval in principle.
These include systems that infer or operationalise emotional state, intent, trustworthiness, future compliance, or character traits. No such inference may be used in a civic context unless it has been explicitly declared, independently justified, and locally approved under heightened review.
A deployment must be refused, suspended, or withdrawn where any of the following apply: repeated opacity, unexplained classifications, harmful error, undeclared escalation, rubber-stamp review, or procedural unfairness. Refusal under this Charter does not require catastrophe; each of these grounds is sufficient on its own.
Algorithmic deployment within civic life must remain under visible, auditable human stewardship.
A designated local authority, committee, or review function shall retain the power to approve, constrain, suspend, or revoke deployment.
A local registry should be maintained for algorithmic systems with Meaningful Civic Effect. Registry entries should include the system name, deploying body, civic function, named steward, decision posture, declared decision boundary, and current standing.
Local oversight bodies must retain the right to inspect impact assessments, inputs, outputs, model or rules documentation, model lineage, version history, intervention records, challenge outcomes, and declared decision boundaries.
Material errors, harmful false positives or negatives, discriminatory findings, unexplained escalations, and challenge failures must be reported promptly. Serious incidents should trigger automatic review.
Where human review is required, the review process must itself be auditable. A human being placed in the loop without authority, time, institutional permission, relevant understanding, or practical capacity to depart from the system’s output does not satisfy this Charter. Formal presence is not the same as meaningful review.
No software agreement, procurement contract, vendor term, service dependency, or commercial arrangement may limit, delay, dilute, or supersede the local authority’s power to suspend, withdraw, or refuse approval. Contractual continuity does not override civic safety or procedural fairness.
Any consequential algorithmic system must have a documented suspension, rollback, or withdrawal pathway that can be enacted without unreasonable delay. Civic dependency on a system does not eliminate the obligation to remove it where fairness, safety, legality, or governability has broken down.
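As an illustration, the suspension pathway can be as small as a single recorded state change, with no contractual or technical precondition standing between the oversight body and enactment. The names below are hypothetical.

```python
from enum import Enum

class Standing(Enum):
    APPROVED = "approved"
    SUSPENDED = "suspended"
    WITHDRAWN = "withdrawn"

def suspend(registry: dict[str, Standing], system_id: str,
            reason: str, log: list[str]) -> None:
    # Suspension must be enactable without unreasonable delay:
    # one recorded step, effective immediately.
    registry[system_id] = Standing.SUSPENDED
    log.append(f"{system_id} suspended: {reason}")
```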
Where the local authority, council, trust, force, or public body is also the deployer, the review and approval function must be operationally independent of the deploying department. Self-certification does not satisfy this Charter.
This Charter affirms a simple principle: no algorithmic system has an automatic right to shape civic life.
Presence must be earned through clarity, accountability, proportionality, reviewability, and declared restraint. Local communities bear the consequences of invisible machine judgment first; they therefore retain the right to set the terms first.
Where conditions are not met, deployment does not proceed.
Where accountability fails, deployment is paused.
Where a person’s civic reality is being shaped by a system they cannot see, question, or meaningfully challenge, deployment is refused.
The burden of demonstration rests with the deployer. Necessity, legibility, contestability, governability, declared bounds, and meaningful human accountability must be shown, not assumed.
This Charter is designed for real institutional conditions, not hypothetical future systems. It applies to statistical scoring systems, machine learning models, rules engines, risk triage systems, LLM-backed civic interfaces, and hybrid decision architectures already entering civic processes through procurement, vendor tools, and administrative software layers.
Where robotic presence produces visible civic force, algorithmic presence often produces hidden civic force. This is why the Charter places greater weight on declaration, explanation, challenge, auditability, and independent review. Invisibility is not a neutral condition. In this category, invisibility is often part of the harm.
This Charter governs a category in which software language, vendor abstraction, and institutional habit can conceal the true location of power. That concealment must not be mistaken for neutrality.
Algorithmic systems are not merely administrative tools once they begin shaping people’s treatment, scrutiny, access, priority, or standing. They become civic actors in effect, whether or not anyone is willing to name them that way.
This is why local standing matters. The point of effect is the point of responsibility. The system may be bought centrally, built elsewhere, hosted remotely, or updated through opaque procurement chains; none of that removes the obligation of the body choosing to use it within civic life.