
A Cautious Approach – India Legal



Under former Chief Justice DY Chandrachud, the Supreme Court accelerated its adoption of Artificial Intelligence (AI), albeit incrementally and in specific areas. What progress has been made since then, and where has it fallen short?

On February 7, 2026, the Court issued comprehensive guidelines governing the use of AI tools in judicial administration. It adopted a studied approach, recognising AI’s “potential” to improve efficiency, accessibility, and case management, while cautioning that AI can only serve as an assistive tool and cannot replace judicial reasoning, discretion, or decision-making.

The guidelines permit AI-based applications in case listing and docket management, legal research, translation and transcription services, and data analytics to identify pendency trends. These applications are expected to help courts reduce delays, improve administrative efficiency, and enhance access to justice. Yet the numbers tell a different story: as of September 2025, case pendency in the Supreme Court reached a record high of nearly 89,000 cases, driven by filings consistently outpacing disposals. Despite operating at full sanctioned strength (34 judges) and achieving over 90 percent disposal rates for certain categories in specific months, the overall backlog has steadily grown. Clearly, AI has not yet produced any significant acceleration of judicial outcomes.

This is largely due to concerns that AI requires robust safeguards to prevent misuse. The Supreme Court has declared that AI-generated outputs cannot be treated as binding or determinative and must always be subject to human scrutiny. The apex court highlighted risks such as embedded bias, lack of transparency, and potential erosion of accountability. The guidelines also emphasise ethical and constitutional considerations, including data privacy, judicial record security, and fairness. Courts have been directed to ensure AI systems comply with principles of transparency, explainability, and non-discrimination.

Here lies the challenge: judges and court staff require adequate training to ensure informed and responsible use of AI tools. That will take time, resources, and external expertise—efforts that are still evolving. Even the use of SUPACE (Supreme Court Portal for Assistance in Court Efficiency) had clearly defined boundaries for technological adoption.

Barely a month earlier, while hearing a PIL seeking guidelines to check AI misuse in courts, Chief Justice of India (CJI) Surya Kant observed that judges “are very careful” in using AI. The bench, comprising the CJI and Justice Joymalya Bagchi, was hearing a petition against the “unregulated” use of generative AI in court proceedings. The CJI stated that judges were extremely conscious of risks arising from indiscriminate use of Generative Artificial Intelligence (GenAI) and would not allow robotic systems to take over judicial administration. “We are very conscious—perhaps over-conscious. We do not want Artificial Intelligence to overpower the judicial administration process,” he remarked.

The petition highlighted dangers of GenAI “hallucinations”, which could generate fictitious judgments or research material and perpetuate bias. The petitioner cited lower court decisions quoting non-existent Supreme Court judgments. The CJI responded: “Let that be a lesson to the Bar to verify everything they research. Judicial officers also have an equal responsibility to verify.” He noted that training camps for judicial officers were underway, but warned that opaque AI and machine learning use in governance could trigger constitutional and human rights concerns.

Exactly a year ago, the Artificial Intelligence Advisory Board (AIAB) finalised its report on AI use in the judiciary. Established in 2022, the AIAB drew upon the European Ethical Charter on AI in judicial systems (2018). Its findings included:

  • 125 AI tools identified globally to improve judicial efficiency and accessibility.
  • No fully autonomous AI systems currently operate independently within courts.
  • Many tools are embedded within internal IT systems and must adhere to transparency and accountability norms.

The Supreme Court’s Centre for Research and Planning (CRP) also released a White Paper titled “Artificial Intelligence and the Judiciary”, positioning AI as a critical tool to address structural burdens. With over five crore cases pending across Indian courts, AI-assisted systems are viewed as essential to accelerating workflows, easing judicial workload, and enhancing procedural transparency.

Indigenous AI solutions now include:

  • SUPACE: Automated document extraction and case summarisation.
  • SUVAS: Translation of judgments into 19 Indian languages.
  • TERES: Real-time transcription during Constitution bench hearings.
  • LegRAA: Generative AI trained on Indian case law.
  • AI-enabled e-filing systems for defect detection.

Yet, pendency remains stubborn. Lower courts lag in adoption, and caution persists. A 2024 UNESCO survey showed 73 percent of respondents favoured mandatory AI regulation in the judiciary. Key risks identified include accuracy failures, deepfake evidence, algorithmic bias, data breaches, and over-dependence on automation.

In December 2025, a bench of Justices Dipankar Datta and AG Masih expressed severe displeasure after AI-generated fake case citations surfaced in a commercial dispute. The Court termed it a "grave" and "terrible error". A similar controversy arose before the Bengaluru bench of the Income Tax Appellate Tribunal, which had cited non-existent judgments in an order.

Former Chief Justice BR Gavai observed last year that justice involves “ethical considerations, empathy, and contextual understanding”—qualities beyond algorithms. That perhaps explains why the judiciary remains cautious about rapid AI expansion.

On February 17, a bench of CJI Surya Kant and Justices Joymalya Bagchi and BV Nagarathna noted increasing instances of AI-generated pleadings being filed without verification. "We have been alarmingly told that some lawyers have started using AI for drafting," the CJI said. Justice Nagarathna recalled a fictitious judgment titled Mercy vs Mankind, which simply did not exist.

—The writer is former Senior Managing Editor, India Legal magazine and author of “Artificial Intelligence: The Coming Revolution”
