New Delhi: On Monday, the Supreme Court of India, through a bench headed by Chief Justice Bhushan Ramkrishna Gavai, took up a Public Interest Litigation (PIL) seeking regulation of generative artificial intelligence (GenAI) in the judiciary.
During the hearing, the CJI remarked with pointed irony, “Yes, yes, we have seen our morphed pictures too,” thereby acknowledging that AI-generated fake images or videos of judges are circulating — a direct threat to the dignity and trust of the judicial system.
The petition argues that unlike conventional AI systems with pre-defined operations, GenAI can create new content — such as fake judgments, false precedents or AI-fabricated case-law — pointing to a “hallucination” risk. It contends that this unpredictability and lack of transparency may breach Article 14 (Right to Equality) and Article 21 (Right to Life and Personal Liberty) of the Constitution.
The bench adjourned the matter for two weeks after the CJI wryly asked the petitioner’s counsel: “You want it dismissed now or see after two weeks?”
This marks an early but significant signal that India’s apex court is moving toward regulating technology where it intersects with fundamental rights and justice delivery.
Key Details of the PIL on Regulating Generative AI in the Judiciary & the Court’s Response
The petition seeks:
- A national regulatory framework for GenAI tools used by judicial and quasi-judicial bodies in India.
- Ethical licensing and risk-based governance of AI systems capable of generating images, videos or audio (deep-fakes).
- Obligation on platforms (e.g., Meta, Google) to institute transparent grievance redressal mechanisms for AI-generated fake content.
- Constitution of an expert committee (government + jurists + technologists + civil society) to recommend standards for safe AI adoption.
Why the Court Took Notice
- The CJI’s remark confirms internal awareness of judicial imagery being manipulated via AI, signalling a threat to institutional credibility.
- GenAI’s “black-box” nature (i.e., non-transparent decision logic) may yield biased, fabricated outcomes — undermining fairness and equality under law.
- The judicial system already grapples with huge pendency; adding opaque AI tools without safeguards risks compounding delays, bias and injustice.
What It Means for the Judiciary
- The Supreme Court appears willing to oversee uniform governance rather than fragmented, case-by-case interventions by High Courts.
- The matter underscores that technology is not just a tool but implicates fundamental rights when deployed in justice-delivery.
- The Court’s early comments suggest a cautious adoption path for GenAI — acknowledging benefits but emphasising human oversight, transparency and accountability.
Why the Regulation of Generative AI in the Judiciary Matter is Crucial for the Legal-Tech Era
- Deep-fakes & morphed imagery: The remark by the CJI that even judges’ images have been morphed signals how quickly personal, institutional trust can be eroded in the digital age.
- Opaque AI decision-making: If GenAI is used in judicial or quasi-judicial workflows (for example, automated drafting, case-law search, prediction), lack of transparency can lead to unfair or biased outcomes, undermining Article 14 equality and Article 21 fairness.
- Pre-emptive governance: With AI usage proliferating globally, India’s apex court stepping in early could set a benchmark for responsible judicial adoption of technology.
- Global resonance: Jurisdictions worldwide (EU, US, Singapore, China) are advancing risk-based AI regulation — India may now signal its intent to align with this trajectory.
- Public confidence in justice: When courts are seen to rely on or be influenced by AI tools, transparency and safeguards become vital to preserve public trust in legal adjudication.
Next Steps & Likely Impact
- The Supreme Court bench has adjourned the matter for two weeks — stakeholders will watch closely whether the government or institutions propose a roadmap soon.
- Possible outcomes include appointment of an expert committee, interim guidelines for GenAI use in courts, or even a national AI regulatory authority as the PIL demands.
- Judicial training modules may expand to include awareness of AI risks, deep-fakes, bias and the need for human-in-the-loop systems.
- Lawyers and litigants may soon face new duties: verifying AI-generated citations, disclosing when AI tools are used, and addressing challenges to the authenticity of evidence that may be AI-generated.
- The tech industry (AI tool-makers) may need to ensure explainability, auditability and legal-compliance of systems marketed to legal bodies.