India’s Supreme Court affirms that judges remain vigilant about AI use in courts. Learn about AI hallucinations, the risks they pose to legal decisions, and why human oversight is essential in the Indian judiciary.
The Statement That Reassured the Nation
In a significant clarification that has captured widespread attention, India’s Supreme Court has firmly assured the public that judges are fully aware of the risks posed by artificial intelligence and will not allow it to dictate judicial decisions. Chief Justice Surya Kant made this powerful statement recently while hearing a public interest petition that sought strict regulations on AI use within the court system.
The Chief Justice’s words were direct and reassuring. He stated that judges use AI in a cautious and deliberate manner and firmly declared that they do not want artificial intelligence to overpower the judicial decision-making process. This statement comes at a critical time when courts worldwide are grappling with how to safely integrate advanced technology into their operations while maintaining the integrity of justice.
The Supreme Court’s position sends an important message: technology will serve judges, not replace them. AI will remain a tool in the hands of professionals who ultimately bear the responsibility for their decisions.
Understanding the Concerns: Why Did the Court Need to Respond?
The Supreme Court’s statement did not emerge in a vacuum. A petition filed by Kartikeya Rawal had raised serious concerns about the unregulated use of generative AI within the judicial system. The petitioner highlighted specific problems that have already appeared in courts across India and internationally.
The most alarming concern was something called AI hallucinations. This technical term refers to a peculiar problem with artificial intelligence systems where they generate information that appears credible and factual but is actually fabricated. In the context of courts, this means AI can invent court judgments that never existed or cite cases that are not in any legal database.
Multiple instances of this problem have already surfaced. Lower courts in India have cited non-existent Supreme Court precedents in their orders. In some cases, advocates submitting petitions have used ChatGPT to research and draft portions of their arguments, only to later discover that the AI had invented fake case references and misquoted legal statutes.
The petitioner also raised concerns about algorithmic bias, where AI systems could perpetuate discrimination based on patterns in their training data. There were worries about deepfakes being introduced as evidence, privacy breaches when confidential case information is uploaded to external AI platforms, and the fundamental question of whether judges might unconsciously trust AI outputs too much and lose their independent judgment.
These were not theoretical concerns. They were real problems already showing up in courtrooms. The petition essentially asked whether the judiciary had a plan to prevent such disasters from becoming routine.
Key Takeaways on AI Use in the Indian Judiciary
| Theme | Summary |
|---|---|
| Supreme Court’s Stand | Judges remain cautious with AI and keep full control over decisions. AI supports their work but does not influence judicial reasoning. |
| Why Concerns Arose | A petition highlighted fake case citations, algorithmic bias, privacy risks, and overreliance on AI. These issues had already appeared in some courts. |
| AI Hallucinations | AI can generate case laws or facts that look real but are false. If used without checking, this can harm legal outcomes and affect fairness. |
| Court’s Response | The Court acknowledged the risks but said solutions should come through training, policies, and careful administration rather than blanket bans. |
| Human Oversight | Judges and lawyers must verify all AI content. Responsibility cannot be shifted to the tool. |
| Benefits of AI | When used correctly, AI helps with research, transcription, translation, and managing heavy workloads without affecting judicial independence. |
How AI Hallucinations Could Affect Legal Decisions
To understand why the Supreme Court took this concern seriously, consider what happens when an AI hallucinates inside a courtroom. A judge relies on an AI-powered legal assistant to research precedents and summarize case law. The AI presents what looks like a legitimate Supreme Court ruling from two years ago that appears relevant to the case. The judge, working under enormous caseload pressure with thousands of pending matters, may unconsciously trust this information without independent verification and base the judgment partly on this non-existent precedent.
Later, when the judgment is appealed, the error becomes apparent. But by then, someone’s life or business has been affected by a decision founded on information that never existed. In criminal cases, this could mean an innocent person faces conviction based on false legal precedent. In commercial cases, it could lead to wrong rulings that harm legitimate businesses.
The problem extends beyond simple citation errors. AI hallucinations can affect how judges understand law itself. An AI system might misinterpret legal principles or create logical connections between cases that do not actually exist. These errors are particularly dangerous because they are not obviously wrong. They resemble genuine legal reasoning.
There is another concern called automation bias. This occurs when humans unconsciously place too much trust in computer-generated outputs. A busy judge might accept AI recommendations about case priority or legal relevance without the careful human scrutiny that the judicial process demands. Over time, this could shift judicial thinking away from independent reasoning toward following AI suggestions.
The Key Points in the Petition and Court’s Response
The petition raised five main arguments about why AI posed unique dangers to the judiciary. First, it argued that AI hallucinations could violate the constitutional right to equality and fairness because decisions would be based on fabricated law rather than actual legal precedent. Second, it warned that citizens have a constitutional right to understand the reasoning behind judicial decisions, but if AI influences that reasoning, the transparency required by law becomes compromised.
Third, the petition pointed out that AI systems reflect biases in their training data and could amplify discrimination against marginalized groups. Fourth, it raised concerns about confidentiality and data security when case information is uploaded to external AI platforms. Fifth, it argued that current judicial processes had no safeguards to prevent this misuse.
The Supreme Court acknowledged that these concerns were valid and important. However, the Chief Justice explained that these issues should be addressed through administrative measures rather than judicial orders. The Court noted that it has already issued a white paper on AI and the judiciary, which outlines principles for responsible use. The Kerala High Court has developed comprehensive AI policies for subordinate courts. The judiciary is establishing training programs for judges and lawyers on AI use.
Most importantly, the Court emphasized that responsibility falls on both judges and lawyers. Judges must cross-check AI-generated information and verify its accuracy. Lawyers must ensure that any material they submit to court has been carefully verified and is not based on AI hallucinations. This duty to verify is part of professional responsibility and cannot be avoided by claiming that the AI generated the information.
The Chief Justice invited the petitioner to submit suggestions and recommendations through administrative channels so the Supreme Court can consider them as it develops its policies further. The Court also made clear that it will not issue blanket judicial directives forbidding AI use, as this would be impractical and counterproductive. Instead, the judiciary will develop nuanced policies that harness AI’s benefits while minimizing its risks.
Why Accuracy and Human Oversight Are Non-Negotiable
The Supreme Court’s emphasis on human oversight reflects a hard-learned lesson from other jurisdictions. In the United States, multiple cases have resulted in lawyers facing sanctions and fines for submitting court documents filled with AI-generated fake citations. Courts have expressed frustration that attorneys apparently conducted no verification before submitting AI-generated content to the court.
In one notable case, Roberto Mata v. Avianca, an attorney submitted arguments supported by case citations that simply did not exist. The court was stunned to discover that the lawyer had apparently used ChatGPT to generate legal arguments without any independent verification. Similar incidents have occurred repeatedly since.
This pattern demonstrates a critical truth: AI is powerful at generating text that looks authoritative and professional but can be fundamentally wrong. The burden to verify falls on humans. There is no substitute for a trained legal professional checking facts, confirming that citations exist, and ensuring that AI-generated content actually makes sense in the context of the case.
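To make that verification step concrete, here is a minimal sketch of how a chambers or law-firm workflow might automatically flag citations that cannot be found in a trusted index before a filing goes out. Everything here is illustrative: the `VERIFIED_CASES` set, the simplistic citation pattern, and the sample draft are placeholders, and a real system would query an authoritative law report database rather than an in-memory set.

```python
import re

# Hypothetical index of verified, reported cases. In practice this would be
# a lookup against an authoritative legal database, not an in-memory set.
VERIFIED_CASES = {
    "mata v. avianca",
}

# A deliberately simple "Party v. Party" pattern, for illustration only;
# real citation formats (neutral citations, reporter references) are far richer.
CITATION_PATTERN = re.compile(
    r"[A-Z][A-Za-z.]+(?:\s+[A-Z][A-Za-z.]+)*\s+v\.\s+[A-Z][A-Za-z.]+(?:\s+[A-Z][A-Za-z.]+)*"
)

def flag_unverified_citations(draft_text: str) -> list[str]:
    """Return cited case names that are absent from the verified index.

    An empty result does not prove the draft is sound; it only means no
    cited case name failed the existence check. Human review still decides.
    """
    flagged = []
    for match in CITATION_PATTERN.finditer(draft_text):
        citation = match.group(0)
        if citation.lower() not in VERIFIED_CASES:
            flagged.append(citation)
    return flagged

draft = "As held in Mata v. Avianca and reaffirmed in Sharma v. Imaginary Corp, ..."
for citation in flag_unverified_citations(draft):
    print(f"UNVERIFIED: {citation} -- confirm against the official reporter")
```

Even with tooling like this, the check only establishes that a cited case exists; whether it actually says what the AI claims still requires a human to read it.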
Human oversight becomes even more important when one considers the nature of judicial decisions. A judgment affects people’s lives, freedom, and livelihoods. The law itself depends on following established precedent and reasoning from genuine legal principles. If a judge bases a decision partly on a non-existent precedent that an AI invented, the entire foundation of that decision is compromised.
This is why Chief Justice Surya Kant has stated clearly that human oversight is non-negotiable. AI may assist in researching authorities, generating drafts, or highlighting inconsistencies. But AI cannot perceive subtle elements of human experience that matter in law. It cannot hear the uncertainty in a witness’s voice. It cannot understand the anguish behind a petition from someone whose life is in upheaval. It cannot weigh the moral dimensions of a judgment. These fundamentally human aspects of law remain the domain of judges and lawyers.
The Potential Benefits That Justify Careful AI Use
Despite these dangers, the Supreme Court has not rejected AI. The judiciary recognizes genuine benefits that careful AI use can deliver. India’s courts face an enormous backlog of pending cases. Some courts have years of pending litigation. AI tools can help reduce this burden in specific ways.
AI-powered platforms like SUPACE (Supreme Court Portal for Assistance in Courts Efficiency) can summarize lengthy case files and extract key facts that matter for judicial reasoning. This frees judges from spending hours reading through volumes of documents to identify the essential information. Similarly, AI can improve transcription of court proceedings, replacing slow handwritten notes with digital records that can be searched and analyzed. When a case goes to appeal, parties can quickly locate the specific statements they need to review.
AI can also assist with translation. Many Indian courts handle cases involving multiple languages. AI translation tools can help ensure that parties in different language regions receive fair treatment without communication barriers. These are administrative and efficiency benefits that improve access to justice without affecting the core act of judicial reasoning.
The Kerala High Court has taken the lead by mandating an AI tool called Adalat.AI for recording witness statements across the state’s district courts. This tool creates accurate digital transcripts of depositions, reducing the need for manual note-taking that can introduce errors. The Kerala High Court’s policy carefully specifies what AI can and cannot do. AI can handle transcription and translation. AI cannot draft judicial orders or make predictions about how judges should decide cases.
When AI is used this way, it functions as a genuine tool that serves legitimate purposes without threatening judicial independence. The problem arises only when AI is allowed to drift into areas where judicial reasoning and ultimate decision making happen. This line must be carefully maintained.
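One way to hold that line in software is to make the permitted functions an explicit allowlist that the system itself enforces, rather than a matter of user discretion. The sketch below is a hypothetical illustration of that design, not the Kerala High Court’s actual configuration; the task names and the `run_authorized_tool` backend are invented for the example.

```python
# Hypothetical allowlist enforcing a policy of the kind described above:
# assistive tasks are permitted, adjudicative tasks are refused outright.
PERMITTED_AI_TASKS = {"transcription", "translation", "document_search"}
PROHIBITED_AI_TASKS = {"draft_order", "predict_outcome", "recommend_sentence"}

def run_authorized_tool(task: str, payload: str) -> str:
    """Placeholder for the actual transcription/translation backend."""
    return f"[{task} output for {len(payload)} characters of input]"

def dispatch_ai_task(task: str, payload: str) -> str:
    """Route a request to an AI tool only if policy explicitly permits it."""
    if task in PROHIBITED_AI_TASKS:
        raise PermissionError(f"Policy forbids AI use for '{task}'")
    if task not in PERMITTED_AI_TASKS:
        raise PermissionError(f"'{task}' is not on the authorized task list")
    return run_authorized_tool(task, payload)

print(dispatch_ai_task("transcription", "witness deposition audio reference"))
# dispatch_ai_task("draft_order", "...") would raise PermissionError instead.
```

Encoding the boundary this way means a prohibited use fails loudly rather than quietly drifting into practice.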
Moving Forward: The Framework Emerging From Courts
The Supreme Court’s white paper on AI and the judiciary, released in November 2025, outlines a framework for responsible AI use. The document recommends that courts establish AI ethics committees comprising legal experts, technology specialists, and policy makers to evaluate AI tools before they are deployed. These committees would set standards and ensure that only tools meeting rigorous requirements are used in courts.
The white paper recommends that courts develop their own AI systems rather than relying on external platforms. This reduces risks of confidentiality breaches and ensures that sensitive case information remains under judicial control. It also recommends mandatory training programs for judges, lawyers, and court staff on how AI works, what its limitations are, and how to use it responsibly.
Clear guidelines are being developed for different user groups. Judges are instructed to verify all AI-generated outputs before relying on them. Lawyers must disclose when they have used AI and must verify that any AI-generated content is accurate and not hallucinated. Support staff must understand which AI tools are authorized and how to use them properly.
The framework also recommends that all AI use in courts be disclosed and documented. When an AI tool generates information that a judge considers, there should be a record of what the AI was asked to do and what it produced. This creates accountability and helps identify patterns of AI errors that might require tool improvements or additional training.
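As an illustration of what such a record might look like, here is a minimal sketch of an append-only audit log, assuming a simple JSON Lines file as the storage format. The field names, the `ai_usage_audit.jsonl` path, and the example values are illustrative, not drawn from any court’s actual system.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG_PATH = "ai_usage_audit.jsonl"  # illustrative storage location

def record_ai_use(case_id: str, tool_name: str, prompt: str, output: str,
                  reviewed_by: str) -> dict:
    """Append one audit entry capturing what the AI was asked and what it produced.

    Hashing the full output keeps the log compact while still letting a later
    reviewer confirm that a stored transcript matches what the judge saw.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "tool": tool_name,
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "reviewed_by": reviewed_by,  # the human who verified the output
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# Example: logging a summarization request before the summary is relied upon.
record_ai_use(
    case_id="CIV-2025-0421",      # hypothetical case number
    tool_name="summarizer-v1",    # hypothetical authorized tool
    prompt="Summarize the plaintiff's written submissions.",
    output="(full AI-generated summary text)",
    reviewed_by="Registrar (Judicial)",
)
```

A log in this shape makes the accountability the white paper asks for auditable: each entry ties a prompt and its output to a case, a tool, and the person who verified the result.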
Conclusion
The Supreme Court’s firm statement that judges will not allow AI to overpower the judicial process is both a reassurance and a commitment. It reassures the public that India’s courts understand the risks of artificial intelligence and will not abdicate their responsibility to maintain the integrity of justice. It commits the judiciary to moving forward thoughtfully with technology, capturing its genuine benefits while protecting against its dangers.
The real challenge lies in implementation. The principles are clear, but translating them into daily practice across thousands of courts requires sustained effort. Judges and lawyers must remain vigilant about verifying information and maintaining their independent judgment. Court systems must invest in training and oversight mechanisms. Technology developers must build tools that enhance rather than replace human decision-making.
Ultimately, the Supreme Court’s message reflects a deeper truth about law and justice. These are profoundly human enterprises. Technology can serve these purposes, but it cannot embody the values that make justice meaningful. Only human judges, acting with wisdom and responsibility, can deliver the kind of justice that citizens deserve and that society requires. AI will support this noble work, but will never substitute for it.
Source: Judges caution against Artificial Intelligence in courts, flag ‘hallucinated’ citations & Truth, Trust, And Technology: Legal Profession In Age Of AI Hallucinations
Read Also: India Eases Deepfake Labeling Rules: A Win for Creative Industries & When AI Copies Your Face: The Fight for Personality Rights in the Digital Age