Discover how India’s new AI governance guidelines promote safe innovation through the “Do No Harm” principle, balancing technology growth with accountability and user protection.
A Framework for Responsible Innovation and Digital Growth
India has taken a major step toward becoming a global leader in artificial intelligence by introducing comprehensive AI governance guidelines in November 2025. Released by the Ministry of Electronics and Information Technology (MeitY), these guidelines set the stage for safe, responsible, and inclusive adoption of AI technology across all sectors. The framework is built on the core principle of “Do No Harm” and focuses on protecting citizens while encouraging innovation. As India prepares to host the AI Impact Summit 2026 in New Delhi, these guidelines position the country as a pioneering voice in ethical AI development, especially for developing nations.
Why India Needed These AI Rules
The government recognized that artificial intelligence presents both tremendous opportunities and significant risks. India is now the second-largest user of AI tools like ChatGPT after the United States, making proper governance essential. The rapid growth of AI has brought challenges including deepfakes, misinformation, algorithmic bias, data privacy concerns, and threats to national security. Without clear guidelines, these risks could harm individuals and society.
India witnessed several incidents that highlighted the need for regulation, from AI-generated fake news to privacy violations. The technology was advancing faster than legal frameworks could adapt. To address this gap, the government formed an expert committee led by Professor Balaraman Ravindran of IIT Madras to develop a balanced approach. The goal was to create rules that maximize the economic and developmental gains from AI while protecting democratic values and individual rights.
The Seven Core Principles
The guidelines are anchored in seven fundamental principles called “sutras,” a Sanskrit term chosen to reflect India’s cultural heritage. These sutras form the ethical foundation for all AI development and deployment in the country.
1. Trust forms the foundation of the entire framework: AI systems must be reliable and earn public confidence.
2. People First ensures that technology serves human needs rather than the other way around.
3. Innovation over Restraint reflects India’s commitment to encouraging technological progress without imposing excessive restrictions.
4. Fairness and Equity requires AI systems to treat all users justly, without discrimination or bias.
5. Accountability establishes clear responsibility for AI outcomes, backed by a graded liability system based on risk levels.
6. Understandable by Design means AI systems should be transparent and explainable to users.
7. Safety, Resilience and Sustainability ensures AI development protects both people and the planet.
Managing Risks and Ensuring Safety
The guidelines propose an India-specific risk assessment framework grounded in real-world evidence of harm. Rather than copying regulatory approaches from other countries, India will classify AI risks according to its own social, cultural, and economic context. A National AI Incident Database will track actual harms caused by AI systems, helping policymakers understand emerging threats.
The framework emphasizes techno-legal solutions, where compliance is built into system design rather than enforced only through laws. This includes watermarking of AI-generated content, privacy-preserving data architectures, and mandatory labeling of deepfakes on social media platforms. High-risk applications, and those affecting vulnerable groups such as children, will carry additional safety obligations. The government has established the AI Safety Institute under the IndiaAI Mission to conduct safety research, develop testing standards, and collaborate with international partners.
How The Rules Affect Government and Private Companies
For government agencies, the guidelines require integrating AI with Digital Public Infrastructure such as Aadhaar and UPI to deliver personalized public services. Ministries and sectoral regulators, including the Reserve Bank of India, the Securities and Exchange Board of India, and the telecom authority, will oversee AI applications in their respective domains. This sector-specific approach allows tailored regulations rather than blanket technology restrictions.
Private companies developing AI systems are encouraged to adopt voluntary ethical frameworks and publish transparency reports. They must establish grievance redressal systems for citizens affected by AI-related harms. Firms seeking government funding or integration with public platforms under the IndiaAI Mission must adhere to the seven sutras as normative standards. The framework proposes dual-layer obligations: companies follow MeitY’s principles-based guidance while meeting binding requirements from sector-specific regulators. Importantly, India does not plan to create a standalone AI law at this stage, instead relying on existing legislation such as the Digital Personal Data Protection Act and the Information Technology Act, with targeted amendments.
Building Infrastructure and Protecting Privacy
A central goal is expanding AI computing infrastructure. The government has already deployed over 34,000 graphics processing units through the IndiaAI Mission, creating one of the world’s largest publicly accessible AI computing facilities. This infrastructure is made available to startups, researchers, and students at subsidized rates, democratizing access to expensive technology. The target is to continue scaling this capacity to support indigenous AI model development.
Increasing data availability while safeguarding privacy is another priority. The IndiaAI Dataset Platform will provide access to high-quality, anonymized datasets for research and innovation. Legal updates are being proposed to address copyright issues related to AI training data and AI-generated content. User privacy is protected through the Digital Personal Data Protection Act, which requires explicit consent for data processing and gives citizens rights to access, correct, and delete their information. Significant data fiduciaries must conduct algorithmic audits to detect bias and ensure AI systems do not pose risks to individuals.
Why Clear AI Rules Matter for India’s Future
These guidelines represent a pragmatic approach that balances opportunity with responsibility. By prioritizing innovation over heavy regulation, India aims to attract global investment and retain talented developers. The framework positions India as an attractive hub for AI development while ensuring responsible deployment through flexible governance mechanisms.
Clear rules build trust among citizens and businesses, which is essential for widespread AI adoption. They give companies certainty about what is expected, reducing compliance confusion. For the Global South, India’s model demonstrates how developing nations can govern transformative technologies without copying Western regulatory approaches. As artificial intelligence reshapes everything from healthcare to agriculture, having governance structures in place now prevents future crises. India’s emphasis on safety research, inclusive development, and ethical principles sets a foundation for long-term sustainable growth in the AI economy.
Conclusion
India’s new AI governance framework marks a defining moment in the country’s digital evolution. By combining ethical responsibility with innovation, it lays the groundwork for safe and inclusive technological progress. The emphasis on trust, accountability, and transparency ensures that AI serves people while protecting their rights and privacy. This balanced, forward-looking approach positions India not only as a global leader in responsible AI but also as a model for other developing nations. With clear principles and robust infrastructure, India is building an AI ecosystem that fosters growth, safeguards democracy, and empowers every citizen in the digital era.
Source: India AI Governance Guidelines: Empowering Ethical and Responsible AI & What govt’s AI guidelines mean for tech regulation in India