By Prof (Dr) S Surya Prakash
Humanity today stands divided between the AI-literate and the AI-illiterate. Increasingly, opportunity, efficiency, and influence flow towards those who understand how to work with intelligent systems. The AI Impact Summit 2026, held at Bharat Mandapam in New Delhi, has amplified this reality, marking a decisive shift in how artificial intelligence is perceived—not merely as a technological tool, but as an institutional force.
At the Summit, Union Minister Amit Shah described Sarvam AI as a symbol of India’s technological ascent, aligning AI development with the vision of Viksit Bharat. The message was clear: AI is no longer optional. It is central to governance, economic growth, and national competitiveness.
What makes this moment distinct is convergence. Rapid advances in generative AI, visible State endorsement, growing fears of automation-driven job displacement, and urgent calls for ethical safeguards have all collided. AI has moved from laboratories and boardrooms into courtrooms, classrooms, hospitals, and public administration.
For the legal profession, this transformation is particularly profound.
AI IN LEGAL PRACTICE: PROMISE AND PERIL
Among the most widely used AI systems is ChatGPT, increasingly deployed for summarising case law, drafting pleadings, preparing legal opinions, and interpreting statutory provisions. While it enhances efficiency, it demands vigilant human oversight to avoid inaccuracies or fabricated citations.
Google Gemini supports legal research and policy analysis by integrating search capabilities with document analysis. Microsoft Copilot automates contract drafting, compliance reporting, and document review within widely used office platforms. Claude, developed by Anthropic, is valued for analysing lengthy judicial and regulatory texts with contextual precision. Meanwhile, Sarvam AI represents India’s indigenous push towards multilingual and region-specific large language models.
These tools enhance productivity, accelerate research, and democratise access to legal resources. Yet, they also raise foundational questions: Can AI replicate human judgment? Who bears responsibility when AI systems err? And how can law regulate technologies evolving faster than legislation itself?
RESPONSIBLE USE: THE DISCIPLINE OF PROMPTING
AI outputs are only as reliable as the prompts that generate them. Clear instructions specifying jurisdiction, timeframe, and legal context are essential. However, effective prompting presupposes domain expertise. Without strong legal knowledge, users may frame flawed queries and accept misleading responses.
AI systems do not independently verify truth; they detect patterns in data. In legal practice, therefore, AI must remain an assistive instrument—not a substitute for professional reasoning. Verification, cross-checking, and ethical scrutiny are indispensable safeguards.
TECHNOLOGICAL VULNERABILITIES AND JUDICIAL ALARM
AI’s dependence on large datasets exposes it to risks such as data poisoning, misinformation campaigns, and adversarial manipulation. In the age of information warfare, deliberate insertion of false content into publicly accessible sources can distort AI-generated outputs.
This risk has already materialised. The Supreme Court has cautioned lawyers against uncritical reliance on AI-generated drafts after fabricated case citations appeared in petitions. In one instance, fictitious authorities such as Mercy vs Mankind were cited—cases that simply did not exist. Similar incidents have led courts in the United States, the United Kingdom and Australia to penalise counsel for submitting AI-generated false precedents.
Such episodes underscore a sobering truth: AI hallucinations are not hypothetical. When introduced into pleadings or judicial reasoning without verification, they threaten procedural fairness and institutional integrity.
GLOBAL REGULATORY RESPONSES
Governments worldwide are racing to regulate AI. The European Union has adopted the EU AI Act (2024), a comprehensive risk-based framework. The United States relies on sector-specific guidelines and executive action. The United Kingdom has adopted a principles-based approach, while China emphasises algorithmic accountability and data sovereignty.
India’s path remains adaptive—prioritising innovation, capacity-building, and ethical guidance over a standalone AI statute. Yet, global regulatory trends suggest that structured governance is inevitable.
A DARWINIAN MOMENT FOR THE LEGAL PROFESSION
The rise of AI in legal practice mirrors Charles Darwin’s theory of adaptation: survival belongs not to the strongest or most intelligent, but to those most responsive to change.
Lawyers who integrate AI responsibly into their practice—while preserving ethical judgment—will thrive. Those who resist technological literacy risk obsolescence.
Legal education must therefore evolve. The Bar Council of India, which regulates legal education and professional standards, faces mounting pressure to integrate technology literacy, interdisciplinary study, and practical digital skills into the curriculum. Without reform, legal education risks drifting away from professional reality.
Artificial intelligence is inevitable. But blind reliance is dangerous. The future of law lies not in replacing human intelligence with machines, but in combining technological capability with constitutional values, ethical responsibility, and informed judgment.
Those who learn to work with AI will flourish. Those who do not may find themselves left behind.
—The writer is vice-chancellor, National Law Institute University, Bhopal