India exempts film and advertising from 10% deepfake labeling rule. Learn how the updated regulations balance creative freedom with deepfake control.
The Big Shift in India’s Deepfake Regulation
India’s approach to managing artificial intelligence-generated content just became significantly less restrictive for filmmakers, advertisers, and digital creators. The Ministry of Electronics and Information Technology announced that it will exempt the film, advertising, and allied creative sectors from the controversial 10% mandatory labeling rule that was originally proposed to combat deepfakes.
This decision represents a meaningful recalibration of India’s deepfake regulatory framework. Rather than applying blanket rules across all sectors, the government has decided to recognize the unique challenges faced by creative industries while maintaining protections against malicious synthetic media. The exemption shows that policymakers listened to industry concerns and adjusted their approach to balance transparency with practical production realities.
Why Were These Rules Created in the First Place
The deepfake crisis in India has become increasingly urgent. High-profile cases like the 2023 Rashmika Mandanna deepfake video, which went viral across social media platforms, prompted the government to take decisive action. Prime Minister Narendra Modi himself warned that deepfakes pose a new national crisis threatening public trust and individual dignity.
On October 22, 2025, the Ministry of Electronics and Information Technology released draft amendments to the Information Technology Rules, 2021, defining synthetically generated information for the first time under Indian IT law. The original proposal mandated that any social media platform distributing AI-generated content must affix a visible, permanent label covering at least 10% of the screen space for videos or 10% of the duration for audio content.
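To make the draft's thresholds concrete, here is a small illustrative calculation of what "10% of the screen space" and "10% of the duration" would mean in practice. The function names and the exact interpretation are my own assumptions, not language from the draft rules.

```python
# Hypothetical illustration of the draft thresholds: a visible label covering
# at least 10% of the screen area for video, or 10% of the running time for
# audio. All names and interpretations here are assumptions for illustration.

def min_label_area(width_px: int, height_px: int, fraction: float = 0.10) -> int:
    """Minimum label area in pixels for one video frame."""
    return round(width_px * height_px * fraction)

def min_label_duration(total_seconds: float, fraction: float = 0.10) -> float:
    """Minimum audible-disclosure duration in seconds for an audio clip."""
    return total_seconds * fraction

# A 1920x1080 frame (2,073,600 px) would need a label of at least 207,360 px,
# e.g. a full-width banner roughly 108 pixels tall; a 60-second audio spot
# would need about 6 seconds of disclosure.
print(min_label_area(1920, 1080))
print(min_label_duration(60.0))
```

Viewed this way, the industry's objection becomes tangible: a permanent banner of that size sits on top of every frame of a finished film or advertisement, regardless of how minor the AI assistance was.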
The intention was sound. These rules aimed to empower ordinary citizens to distinguish between authentic and manipulated content. The government wanted to hold social media platforms accountable and prevent deceptive synthetic media from spreading without clear disclosure. The concerns were legitimate. Deepfakes have been weaponized for political propaganda, financial fraud, defamation, impersonation, and harassment of public figures, particularly women and celebrities.
The Creative Industry’s Powerful Objections
Once the draft rules became public, the film, advertising, and creative sectors responded with significant pushback. Their concerns were not about blocking regulation. Rather, they highlighted how a blanket 10% labeling requirement would be impractical and harmful to legitimate creative work.
Industry leaders pointed out that modern filmmaking and advertising routinely use AI-assisted tools for entirely legitimate purposes. Post-production specialists use AI for noise reduction, color correction, voice dubbing, background replacement, and image restoration. These are standard industry practices, not deceptive deepfakes. Forcing a visible 10% label onto every frame of AI-assisted content would effectively require creators to re-render entire projects, incurring enormous costs without advancing transparency.
Rajan Navani, co-chairman of the CII National Committee on Media and Entertainment, explained that a visible disclosure requirement would force costly re-rendering and degrade visual quality, particularly problematic for visually intensive content. He suggested a dual compliance path: visible labels only where manipulation could mislead audiences, and machine-readable metadata for industrial use cases.
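Navani's machine-readable path could, in principle, resemble the provenance records used by content-credential standards such as C2PA. The sketch below is purely hypothetical: the draft rules do not specify any format, and every field name here is an assumption chosen for illustration.

```python
import json

# Hypothetical sketch of the "machine-readable metadata" compliance path for
# industrial use cases. The structure loosely mirrors provenance standards
# such as C2PA; all field names and values are assumptions, not rule text.

def build_ai_disclosure(tool: str, operations: list[str]) -> str:
    """Return a JSON disclosure record for AI-assisted post-production."""
    record = {
        "synthetic_content": True,
        "intent": "post-production",      # as opposed to e.g. impersonation
        "tool": tool,                     # which AI-assisted tool was used
        "operations": operations,         # which post-production steps it performed
        "visible_label_required": False,  # industrial use case, per the dual path
    }
    return json.dumps(record, indent=2)

print(build_ai_disclosure("ExampleDenoiser", ["noise_reduction", "color_correction"]))
```

The appeal of such a record is that regulators and platforms can verify it programmatically while the audience sees an unblemished frame, reserving visible labels for cases where the manipulation itself could mislead.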
Legal experts raised additional concerns. Ranjana Adhikari, partner at Shardul Amarchand Mangaldas and Co, noted that AI-assisted post-production and voice enhancement are now standard practices. Heavy-handed labeling could confuse audiences and undermine creative authenticity. Tanu Banerjee, partner at Khaitan and Co, pointed out that a tag covering 10% of the screen or audio would break the aesthetic flow of digital advertising and short-form content, two of the fastest-growing sectors in India’s creative economy.
Beyond practical concerns, the industry raised a critical psychological issue. A persistent AI-generated label could unfairly suggest that an entire work lacks originality, potentially reducing audience engagement and commercial value. This stigma could discourage innovation and experimentation in India’s rapidly evolving creative sector.
What the New Rules Actually Change
The exemption granted to film, advertising, and creative sectors represents a significant shift from the original draft. However, it is important to understand that this is not a complete abandonment of deepfake regulation. Rather, it reflects a recognition that not all AI-generated content requires the same level of visible disclosure.
The government has signaled that it will take a calibrated approach balancing regulation with the practical realities of high-end media production. This means the 10% mandatory labeling requirement will not apply to content created within professional creative workflows for films, advertisements, and similar formats. The exemption acknowledges that creators in these sectors use AI as a tool, not primarily as a means of deception.
However, the rules will remain in place for social media platforms and user-generated content where the risk of deceptive deepfakes is higher. The distinction is important: platforms must still require users to declare whether they are uploading AI-generated content and deploy verification tools to identify undeclared synthetic media. The specific visible labeling requirement was the point of tension, and that has been modified for professional creative work.
How This Affects Production and Post-Production
The exemption brings meaningful relief to film studios, advertising agencies, and digital content creators. Production workflows will not need to be completely redesigned to accommodate visible labeling requirements. Post-production specialists can continue using AI-powered tools for color grading, audio enhancement, and visual restoration without worrying that every frame will require a 10% visible tag.
This matters for cost efficiency. One analysis noted that the 10% labeling rule could have raised production costs by forcing creators and platforms to rework production workflows, build new verification systems, and continuously audit content. Studios operating on tight budgets would have faced substantial technical and financial burdens.
The exemption is also significant for creative control. Visual effects teams can deliver the aesthetic vision directors intended without persistent on-screen warnings that might distract viewers or reduce the perceived quality of the content. For advertising specifically, where every pixel matters in brand communication, the removal of mandatory visible labels preserves the creative integrity that advertisers depend upon to connect with audiences.
Additionally, animation studios and VFX companies will not face the same compliance overhead that social media platforms and user-generated content sites must navigate. This creates space for experimentation and innovation, particularly important for India’s growing digital content and animation sectors.
Balancing Regulation with Creative Freedom
Despite the exemption, the government has not abandoned deepfake regulation entirely. The approach reflects what officials describe as a consultative process that engages with industry stakeholders with an open mind. This is a meaningful distinction from purely regulatory mandates handed down without consideration of practical implications.
The shift also reflects broader global trends. The European Union’s AI Act and the US Federal Trade Commission’s approach both allow flexible formats rather than rigid display thresholds, recognizing that blanket rules can stifle legitimate innovation. India’s decision aligns with this more nuanced international thinking.
However, serious protections remain in place. The underlying concern about malicious deepfakes has not disappeared. Social media platforms with 5 million or more users still face significant obligations. They must obtain user declarations about whether uploaded content is AI-generated, deploy automated tools to verify these declarations, and ensure that confirmed synthetic content is clearly labeled and not distributed without proper disclosure.
This creates a tiered system. Professional creators working within studios and production houses face fewer restrictions, recognizing that their work is generally not designed to deceive. Simultaneously, platforms where ordinary users upload content face stricter requirements because the risk of deliberate deception is higher on these channels.
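The tiered logic described above can be sketched, very roughly, as a routing decision. This is a conceptual illustration only: the class and rule names are my own assumptions, and the final notified rules may draw the lines differently.

```python
from dataclasses import dataclass

# Conceptual model of the tiered obligations: professional creative work is
# exempt from visible labeling, while platforms must act on user declarations
# and on their own automated verification. Names here are assumptions.

@dataclass
class Upload:
    ai_generated_declared: bool   # the user's declaration at upload time
    detector_flags_ai: bool       # the platform's automated verification result
    professional_exempt: bool     # produced under an exempt creative workflow

def required_action(u: Upload) -> str:
    """Return the platform's obligation for a given upload."""
    if u.professional_exempt:
        return "no visible label required"
    if u.ai_generated_declared or u.detector_flags_ai:
        return "apply visible AI-content label before distribution"
    return "distribute; keep verifying"

# A user-declared AI clip on a large platform still gets a visible label.
print(required_action(Upload(True, False, False)))
```

The interesting edge case is the second branch: even if a user falsely declares content as authentic, a positive result from the platform's own verification tools would still trigger labeling, which is what keeps the declaration requirement from being an honor system.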
The Broader Implications for India’s Digital Economy
This regulatory adjustment matters beyond just production and post-production workflows. It signals that India’s government is capable of refining policy based on industry feedback rather than rigidly applying one-size-fits-all approaches. This flexibility could affect how India regulates other emerging technologies in the future.
For India’s creative industries, the message is that growth and regulation can coexist. The animation sector, visual effects companies, and digital content creators operate in a highly competitive global market. Excessive domestic regulations could have pushed companies and talent to other countries. The exemption preserves India’s competitive position while still addressing genuine concerns about malicious deepfakes.
The decision also reflects recognition that AI-generated content has legitimate uses. Not every AI-assisted edit is a deepfake. Not every use of synthetic media represents deception. By distinguishing between these cases, the government has created space for beneficial innovation while maintaining guardrails against genuine harms.
What Happens Next
The final rules are expected to be notified shortly. While the specifics of how the exemption will be codified remain to be seen, the direction is clear. Film studios, advertising agencies, and creative production companies will have significantly more freedom than social media platforms when it comes to disclosing AI-generated content.
The challenge now is implementation. Government agencies, platforms, and creators must work together to define exactly which sectors qualify for the exemption and what compliance looks like for those still subject to the rules. This requires ongoing dialogue and clear technical standards.
Industry bodies continue to emphasize the importance of balanced consultations as rules are finalized. They stress the need to preserve creative freedom and commercial viability while still tackling malicious deepfakes. This collaborative approach, rather than purely adversarial regulation, appears to be the path forward.
Thoughts
India’s decision to exempt creative industries from the 10% mandatory deepfake labeling rule reflects a maturing approach to AI regulation. Rather than treating all synthetic content as suspicious, the government has recognized that context matters. AI-assisted color correction is not a deepfake. Voice dubbing is not deception. Professional creative work operates under different standards than user-generated social media content.
This calibrated approach preserves both protection against genuine harms and space for legitimate innovation. It shows that policymakers can listen to concerns, adjust course when appropriate, and still maintain meaningful safeguards. For India’s creative industries, it is an important recognition that growth and responsibility can advance together. For the broader regulatory landscape, it demonstrates that flexibility and nuance can sometimes serve the public interest better than rigid, uniform rules.
Conclusion
India’s shift on deepfake rules marks a more mature stage in its approach to AI governance. By carving out an exemption for filmmakers, advertisers, and other creative professionals, the government has acknowledged that not all synthetic media carries the same risk. This gives legitimate creators room to work without intrusive labels that would undermine quality, inflate costs, or confuse audiences. At the same time, platforms that host user-generated content still face firm obligations to detect, verify, and label manipulative media. That balance matters. It protects people from harmful deepfakes while keeping India’s creative and digital sectors competitive and open to innovation. The decision also signals a more responsive regulatory mindset, one that values consultation and practical implementation. As final rules take shape, collaboration between policymakers, platforms, and industry will be essential. If done well, India can set a thoughtful model for managing synthetic media in a fast-changing digital world.
Sources: "As India looks to mandate AI content labelling, examining the growing menace of deepfakes" and "What are India's new deepfakes & AI-content guidelines?"