Paras Sharma

Abstract: The Indian administrative state is increasingly deploying automated and algorithmic systems to govern welfare delivery, policing, and access to public services. While these technologies are often justified on grounds of efficiency and leakage reduction, they relocate discretion from accountable public officials to opaque systems whose internal logic resists judicial scrutiny. This article argues that existing standards of judicial review, particularly Wednesbury reasonableness, are ill-equipped to evaluate algorithmic state action and risk becoming normatively hollow when applied mechanically. Through an analysis of Articles 14 and 21, the Digital Personal Data Protection Act, 2023, and comparative jurisprudence from the European Union and the United Kingdom, the article identifies a constitutional blind spot in the regulation of automated governance. It proposes a Doctrine of Technological Legibility, grounded in explicability, proportionality-based impact assessment, and individualized human responsibility, as a judicially workable framework for constitutional accountability.
I. Introduction: The Silent Shift in Governance
The Indian administrative state is in the midst of a quiet but profound transformation. From the automated de-listing of welfare beneficiaries via integrated databases to the deployment of facial recognition in localized policing, the traditional ‘Executive’ is increasingly being replaced by the ‘Algorithmic.’ While the State defends these tools as instruments of efficiency and ‘leakage reduction,’ they introduce a fundamental friction into our constitutional order: they redistribute discretion from accountable public officials to opaque, non-legible systems.
As we look toward the 2026 legal landscape, shaped by a data-driven welfare state and a maturing data protection regime, a significant gap has emerged. Our current standards of judicial review, designed to scrutinize human irrationality, are failing to penetrate the internal logic of automated systems. This article argues that the traditional Wednesbury test, when applied mechanically to algorithmic systems, risks becoming normatively empty. To preserve the guarantees of Articles 14 and 21, the Judiciary must articulate a ‘Doctrine of Technological Legibility’. Only by requiring that algorithmic governance be explainable, contestable, and tethered to human responsibility can we begin to ensure that automation remains a tool for constitutional justice, rather than a shield against it.
This article proceeds in six parts. Part II outlines the ‘Rationality Gap’ that emerges when traditional standards of reasonableness are applied to algorithmic decision-making. Part III examines the resulting constitutional friction under Articles 14 and 21, focusing on intelligible differentia, dignity, and proportionality. Part IV analyses the legislative blind spot within the Digital Personal Data Protection Act, 2023, and its insulation of automated state action. Part V situates the Indian challenge within comparative developments in the European Union and the United Kingdom. Finally, Part VI proposes a judicially workable Doctrine of Technological Legibility as a framework for constitutional review of algorithmic governance.
II. When Reasonableness Runs Out of Code: The Rationality Gap
The bedrock of Indian administrative law, the Wednesbury principle, postulates that a decision is reviewable if it is so ‘outrageous in its defiance of logic’ that no sensible person could have reached it. However, this test assumes a human ‘reasoning’ process that is linear and intelligible. Algorithmic systems, particularly those built on machine learning, operate instead on a ‘statistical rationality’: a process of correlation and probabilistic weighting that often defies intuitive causal logic.
This leads to what may be termed a ‘Rationality Gap.’ In a traditional writ proceeding, a judge can ask an official to produce the file noting that led to a specific decision. In an algorithmic proceeding, the ‘noting’ is embedded in a statistical model whose internal weighting is neither transparent nor intuitively intelligible to the court. If the Judiciary cannot perceive the internal logic of the classification, it cannot determine whether that logic meets the threshold of ‘reasonableness.’
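To make this contrast concrete, consider a deliberately simplified sketch in Python. The feature names, weights, and values below are invented for illustration and depict no actual government system; the point is only that the model’s ‘file noting’ is a vector of learned weights, not a narrative chain of reasons.

```python
import math

# Hypothetical learned weights over opaque, correlated data points.
# A deployed model may carry thousands of such features.
weights = {
    "days_since_last_transaction": 0.42,
    "pincode_cluster_17": -1.31,
    "sms_activity_percentile": 0.08,
}

def risk_score(citizen_features: dict) -> float:
    """Statistical 'reasoning': a weighted sum passed through a sigmoid.

    No single weight states a legally intelligible reason; the flag
    emerges from the joint effect of all the correlations at once.
    """
    z = sum(weights[f] * v for f, v in citizen_features.items() if f in weights)
    return 1.0 / (1.0 + math.exp(-z))

# The system produces a probability, but the 'why' is smeared across the weights.
print(risk_score({"days_since_last_transaction": 2.0,
                  "pincode_cluster_17": 1.0,
                  "sms_activity_percentile": 0.5}))
```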
Moreover, the integration of automation into governance often results in ‘Automation Bias.’ Frontline officials, overwhelmed by the scale of data, frequently defer to algorithmic outputs without meaningful independent review. This constitutes a subtle but fatal fettering of discretion. Under our existing jurisprudence, an authority must not abandon its judgment to a rigid formula or an external influence. When a human officer treats an automated risk score as a definitive verdict, the discretion mandated by law is effectively abdicated to a pre-configured system or an external vendor. This rationality gap is not merely an administrative inconvenience; it produces direct constitutional friction under Articles 14 and 21.
III. Constitutional Friction: Articles 14 and 21
Having outlined the Rationality Gap in administrative review, this section examines its constitutional consequences under Articles 14 and 21. When the State governs through algorithms, it changes how we understand ‘classification’ and ‘due process,’ often in ways that escape judicial view.
1. Article 14: The Erosion of Intelligible Differentia
Article 14 permits ‘reasonable classification,’ but it demands an intelligible differentia (a clear, logical distinction) bearing a rational nexus to the objective sought.
Algorithmic systems, however, often rely on correlations that fail the test of intelligibility. If a system flags a citizen as a ‘security threat’ based on thousands of opaque data points, the differentia becomes a statistical ghost: a phantom that exists in the mathematics but cannot be explained in a courtroom. As the Hon’ble Supreme Court observed in E.P. Royappa, arbitrariness is the ‘antithesis of equality.’ When a classification is opaque, it is inherently arbitrary because its logic is insulated from judicial scrutiny. It is important to distinguish complexity from opacity here; complexity in governance is not constitutionally suspect, but opacity that shields the State from the test of rationality is. When the differentia is buried within a model’s weighting, it is no longer ‘intelligible’ to the citizen or the Court, rendering the resultant discrimination unconstitutional.
2. Article 21: Dignity and the Right to a Reasoned Order
The guarantee of life and personal liberty in Article 21 includes the right to be treated with dignity and fairness. A crucial part of this fairness is the Right to a Reasoned Order. As observed in Maneka Gandhi, any procedure established by law must be ‘just, fair, and reasonable.’ When a machine-generated decision takes away a livelihood or a benefit, the absence of a human-understandable reason strikes at the heart of administrative due process.
This point is clearly expressed in the dissent by Chandrachud, J. in Puttaswamy II (Aadhaar), where he stated, “the dignity of an individual cannot be made dependent on algorithms.” A citizen should not have to accept a service denial without understanding the specific fault or criteria that caused it. Without a reasoned explanation, the right to challenge a decision, which is central to Article 21, becomes meaningless.
3. Proportionality in the Age of Automation
The modern test for any State intrusion into rights is Proportionality. It requires the State to prove that its chosen measure is the ‘least restrictive’ means to achieve its goal. In algorithmic governance, proportionality also includes the selection of technology itself.
If the State deploys an opaque neural network when an adequately accurate and more interpretable alternative, such as a decision-tree model, is available, the choice may fail the ‘least restrictive’ test. The use of high-risk, non-interpretable systems must be strictly necessary and narrowly tailored. If the same objective can be achieved through a system that allows for greater transparency, then the choice of a ‘black-box’ model is, prima facie, disproportionate. This makes technological legibility not merely a policy preference, but a constitutional requirement under proportionality review.
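A minimal sketch may illustrate the comparison; the eligibility criteria and thresholds below are invented, and the example assumes only that a rule-based or decision-tree model would be adequate for the task. Unlike the weighted statistical model sketched in Part II, every outcome here carries its own stateable reason.

```python
# A transparent decision path can state the exact rule that decided the
# outcome, satisfying the demand for a reasoned order. All thresholds
# are hypothetical.

def eligibility_decision(annual_income: int, land_holding_acres: float):
    """Interpretable decision path: every branch yields its own reason."""
    if annual_income > 250_000:                  # hypothetical cut-off
        return ("EXCLUDED", "annual income exceeds Rs. 2,50,000")
    if land_holding_acres > 5.0:                 # hypothetical cut-off
        return ("EXCLUDED", "land holding exceeds 5 acres")
    return ("INCLUDED", "all eligibility criteria satisfied")

decision, reason = eligibility_decision(annual_income=300_000,
                                        land_holding_acres=1.2)
print(f"{decision}: {reason}")  # legible to both the citizen and the court
```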
IV. The Legislative Blindspot: DPDP 2023 and Functional Insulation
While India’s transition to a comprehensive data protection regime is a milestone, the current legislative framework remains remarkably silent on the specific risks posed by algorithmic state action. In fact, the Digital Personal Data Protection (DPDP) Act, 2023, and subsequent drafts of the DPDP Rules (2025), arguably shield significant categories of state action from the accountability standards they impose on the private sector.
1. The ‘Legitimate Use’ Loophole and State Immunity
The DPDP Act introduces a ‘Notice and Consent’ framework for data processing, but Section 7 carves out broad exemptions for ‘certain legitimate uses.’ Crucially, Section 7(b) allows the State to process personal data without consent for ‘providing or issuing any subsidy, benefit, service, certificate, license, or permit.’
While this may be administratively efficient, it functionally exempts the most consequential algorithmic systems, namely those managing welfare and public services, from the Act’s transparency requirements. Because processing under ‘legitimate uses’ is not grounded in consent, the citizen’s right to be meaningfully informed about the ‘nature and purpose’ of processing (Section 5) is effectively bypassed. Consequently, the very systems that present the highest risk of ‘statistical exclusion’ are the ones least scrutinized by the Act.
2. The Absence of an Automated Decision-Making (ADM) Right
Unlike global benchmarks, the DPDP Act contains no explicit provision regulating automated decision-making. There is no statutory ‘right to an explanation,’ nor is there a right to opt-out of a purely automated process that has legal or significant effects.
The Act’s primary mechanism for accountability is the ‘right to correction’ (Section 12), which applies only when data is ‘likely to be used to make a decision.’ However, without a corresponding right to understand the logic of the decision-making model, a citizen cannot know which data point was inaccurate or how it influenced the outcome. In the absence of legislative clarity, the State’s algorithmic programs operate in a ‘blind spot,’ where the citizen bears the burden of detecting an error they cannot see.
3. Nascent State Efforts: The Karnataka Platform Workers Bill, 2025
A glimmer of legislative awareness can be found at the state level. The Karnataka Platform Based Gig Workers (Social Security and Welfare) Bill, 2025, represents India’s first attempt to mandate transparency in ‘automated monitoring and decision-making systems.’ It requires aggregators to disclose the ‘main parameters’ that determine work allocation and earnings.
While this is a commendable ‘proof of concept’ for algorithmic accountability, its scope is limited to a specific labor sector. It highlights a glaring inconsistency: why should a gig worker have a right to understand their earnings algorithm, while a citizen has no right to understand the algorithm that denies them a pension or flags them as a security risk? The ‘Karnataka Model’ demonstrates that technological legibility is feasible; however, it also underscores the urgent need for a unified Constitutional Doctrine to bridge the gaps left by a fragmented legislative landscape. In this legislative vacuum, the responsibility for articulating minimum standards of algorithmic accountability necessarily shifts to constitutional adjudication.
V. Comparative Blueprints: Navigating the Global Pulse
India’s struggle to reconcile automation with administrative law is not unique. As global jurisdictions move from ethical guidelines to hard regulation, two distinct models have emerged: the rights-based legislative model of the European Union and the judicially-led, principles-based model of the United Kingdom. These frameworks offer critical blueprints for how the Indian Judiciary might articulate its own missing doctrine.
1. The EU AI Act and Evolutionary Transparency
The European Union’s AI Act (2024) serves as a pioneer in risk-based regulation. Crucially, it classifies AI systems used by public authorities for ‘access to and enjoyment of essential public services and benefits’ (such as welfare eligibility) as ‘High-Risk.’ Under this framework, high-risk systems are subject to stringent transparency obligations, including robust requirements for explanation and information. This is more than just a procedural checkbox; it empowers affected individuals to understand the specific logic behind an automated decision that impacts their fundamental rights. By mandating structured human oversight and Fundamental Rights Impact Assessments (FRIA) before deployment, the EU model essentially codifies the very ‘Technological Legibility’ that India currently lacks. It demonstrates that transparency is not an impediment to innovation, but a prerequisite for its public legitimacy.
2. The UK Model: Judicial Assertiveness in Bridges
Where the EU relies on legislation, the UK has pioneered judicial intervention. In the landmark case of R (Bridges) v. Chief Constable of South Wales Police (2020), the UK Court of Appeal struck down the police’s use of automated facial recognition (AFR) technology.
The Court did not ban the technology; rather, it found its deployment unlawful because the legal framework left too much discretion unfettered. Crucially, the Court held that the police had breached their ‘Public Sector Equality Duty’ because they had never taken reasonable steps to satisfy themselves that the algorithm did not carry an unacceptable racial or gender bias. The Bridges ruling is a textbook example of ‘evidentiary assertiveness’: the Court refused to take the State’s technical claims at face value, insisting instead that the State prove the accuracy and fairness of its tools. This echoes the Indian principle of non-fettering, suggesting that even in the absence of a specific ‘AI law,’ existing administrative principles are robust enough to hold automated systems to account.
3. Lessons for the Indian Judiciary
These comparative perspectives teach us that technological complexity is no defense against constitutional scrutiny. However, these regimes emerge from distinct constitutional traditions and cannot be transplanted wholesale.
Nevertheless, the EU model proves that explicability is technically feasible, while the UK model proves that judicial review can successfully penetrate the algorithmic veil using existing public law tools. For India, the lesson is clear: we need not wait for a comprehensive AI statute to begin reining in automated harms. By adopting a standard of ‘Technological Legibility,’ as proposed in the following section, the Indian Judiciary can draw on these global developments to ensure that our own constitutional guarantees remain substantively intact in the digital age.
VI. The Proposal: A Doctrine of Technological Legibility
If the soul of administrative law is the ‘reasoned order,’ then an algorithmically generated decision without an accompanying explanation cannot ordinarily survive constitutional scrutiny. To bridge this gap, the Judiciary should adopt a tripartite test of Legibility, shifting the focus from the opaque process of coding to the observable outcome of governance.
1. The Requirement of Substantive Explicability
The State cannot hide behind ‘technical complexity’ to evade its constitutional duty to give reasons. Under this prong, any automated decision affecting a citizen’s rights must be accompanied by a non-technical summary of the logic applied.
For instance, if a welfare beneficiary is de-listed, the notice must specify the parameters that triggered the exclusion (e.g., ‘Income mismatch detected via integrated tax databases’) rather than a generic ‘System Error.’ This ensures that the decision remains contestable. Without explicability, the guarantee of audi alteram partem recognised in Maneka Gandhi becomes a mere formality, as the citizen has no intelligible basis upon which to mount a defense.
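What this prong demands is mechanically modest, as the following sketch suggests. The flag codes, notice wording, and thirty-day window are hypothetical; the point is only that a contestable, non-technical notice can be generated automatically once the triggering parameters are recorded.

```python
# Rendering machine flags as reasons a citizen can actually contest.
# Flag codes and notice wording are invented for illustration.

TRIGGER_EXPLANATIONS = {
    "INCOME_MISMATCH": "Income mismatch detected via integrated tax databases.",
    "DUPLICATE_ID": "The same identification number appears on another record.",
}

def draft_notice(beneficiary_id: str, triggers: list) -> str:
    """Translate system flags into a non-technical, contestable notice."""
    reasons = [TRIGGER_EXPLANATIONS.get(t, f"Unspecified flag: {t}")
               for t in triggers]
    lines = [f"Notice to beneficiary {beneficiary_id}:",
             "Your enrolment has been flagged for the following reason(s):"]
    lines += [f"  - {r}" for r in reasons]
    lines.append("You may contest these findings within 30 days.")
    return "\n".join(lines)

print(draft_notice("BEN-001", ["INCOME_MISMATCH"]))
```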
2. Pre-Deployment Impact Audits and ‘Bias-Tracing’
Under the Proportionality test established in Puttaswamy I, the State must prove that its chosen means are the ‘least restrictive’ way to achieve a legitimate goal. The doctrine of Legibility therefore requires the State to conduct, and upon judicial request produce, Algorithmic Impact Assessments (AIAs).
These audits must demonstrate that the training data used for the algorithm was not sourced from unrepresentative or biased historical datasets. If a surveillance tool shows a statistically significant disparity in false-positive rates for marginalized communities, the State’s use of that tool is no longer ‘rational’ under Article 14. By requiring ‘bias-tracing’ as a precondition for deployment, the Court ensures that efficiency does not become a Trojan horse for systemic discrimination.
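The core computation of such bias-tracing is straightforward, as the sketch below shows; the audit records are invented for illustration, and a real Algorithmic Impact Assessment would run the same comparison over the system’s actual logs.

```python
from collections import defaultdict

# Each record: (group, flagged_by_system, actually_ineligible).
# Invented audit data for illustration only.
audit_records = [
    ("group_a", True,  False), ("group_a", False, False),
    ("group_a", True,  True),  ("group_a", False, False),
    ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", True,  True),  ("group_b", False, False),
]

def false_positive_rates(records):
    """FPR per group: wrongly flagged / all genuinely eligible persons."""
    flagged_wrongly = defaultdict(int)
    genuinely_eligible = defaultdict(int)
    for group, flagged, ineligible in records:
        if not ineligible:
            genuinely_eligible[group] += 1
            if flagged:
                flagged_wrongly[group] += 1
    return {g: flagged_wrongly[g] / genuinely_eligible[g]
            for g in genuinely_eligible}

# A marked disparity across groups is the Article 14 red flag.
print(false_positive_rates(audit_records))
```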
3. Individualized Human Responsibility (The Accountable Officer)
Automation bias often leads frontline officials to treat algorithmic outputs as infallible. To counter this, the doctrine proposes a ‘Non-Delegable Duty of Review.’ For every high-stakes automated system, the State must designate a ‘Competent Human Officer’ who bears legal responsibility for the final decision.
This officer must have the authority to override the algorithm’s recommendation based on case-specific facts. In a writ proceeding, it is this officer, and not the software vendor, who must justify the decision. This preserves the rule against fettering of discretion, ensuring that the machine remains a tool of the administrator, rather than the administrator becoming an agent of the machine.
4. The Evidentiary Shift: Rebalancing the Burden of Proof
The greatest hurdle in challenging an algorithm is the ‘Information Asymmetry’ between the State and the citizen. Currently, the petitioner bears the near-impossible burden of proving how an opaque system was arbitrary.
To make the Doctrine of Legibility effective, the Judiciary must permit a conditional shift in the burden of proof. Once a petitioner establishes a prima facie case of systematic exclusion or unexplained denial, a ‘rebuttable presumption of arbitrariness’ should arise. The burden then moves to the State to demonstrate, through its audits and internal logs, that the decision-making process met the three prongs of Legibility. If the State cannot explain its own machine, it should not be permitted to rely on its output. Each element of this doctrine draws on existing constitutional principles, namely reasoned decision-making, proportionality, and the non-fettering of discretion, without requiring legislative intervention.
VII. Conclusion: A Call to Judicial Assertiveness
The transition to algorithmic governance does not mandate a retreat from constitutional values; rather, it demands a more assertive application of them. As this article has argued, the ‘Rationality Gap’ produced by opaque automated systems is a solvable crisis of administrative law. By articulating a Doctrine of Technological Legibility, the Judiciary can ensure that the Executive remains accountable, even when its discretion is encoded in a statistical model. In the absence of comprehensive legislative guardrails, this responsibility necessarily falls to constitutional adjudication.
The proposed tripartite test, focusing on Explicability, Impact Audits, and Human Responsibility, is not a radical departure from precedent. It is a necessary evolution of the principles laid down in Maneka Gandhi and Puttaswamy. It recognizes that in a digital democracy, the right to a ‘reasoned order’ must be protected against the silence of automated systems.
We stand at a critical juncture in India’s constitutional journey. If the Judiciary chooses a path of ‘technological deference’, treating algorithms as matters of expert policy beyond its ken, it risks allowing a new form of arbitrary power to flourish. However, by demanding that technology remain ‘legible’ to the Constitution, the Court can ensure that automation becomes a vehicle for empowerment rather than a tool for systemic exclusion. The choice is not between technology and the law, but between a law that is blind to technology and one that commands it to be fair. Ultimately, the continued vitality of Articles 14 and 21 in the 21st century depends on our ability to ensure that the ‘Code’ of the State remains subservient to the ‘Constitution’ of the People.
Paras Sharma is an Advocate practising before the Punjab and Haryana High Court, with interests in constitutional and administrative law and public policy.
