AI Regulation in India: Between Innovation and Accountability
Prachi Deva
5/26/2025 · 7 min read
I. Introduction – The Invisible Hand of AI
Think about your last online purchase, the route your ride-hailing app chose, or even the news that popped up in your feed. Chances are, AI had a hand in it. We’re living in a world increasingly shaped by algorithms—some helpful, others eerily opaque. Globally, we’ve seen AI diagnose rare diseases, help fields flourish with smart irrigation solutions, and even generate hyper-realistic deepfakes. All this sounds exciting, promising, and, yes, a little unnerving. India is no bystander in this revolution. With more than 900 million smartphone users, a digital-identity infrastructure entwined with everything from welfare to finance, and an ambitious pipeline of AI startups and government pilots, the subcontinent is waking up to a powerful, data-driven future. But here’s the catch: as India embraces AI, it’s playing leapfrog—jumping straight into deployment without laying down solid guardrails. The question isn’t if AI will shape India’s future, but whether India can shape AI so it serves people while preserving rights and dignity.
II. India’s Rapid AI Adoption – Innovation with Blindspots
India launched its exploration of AI with a bang. In 2018, NITI Aayog’s “AI for All” strategy set the tone, highlighting agriculture, health, and education as key use cases while stressing ethical deployment. Fast forward to 2023: the Digital Personal Data Protection Act (DPDP) brings personal data under a dedicated legal regime, and the 2025 budget allocated ₹10,372 crore for the IndiaAI Mission, aiming to build foundational models and state-backed GPU clusters. Meanwhile, states like Odisha and Telangana have begun piloting AI officers in government departments and deploying facial recognition technology in policing.
On the rural front, agritech is taking off. Platforms offering tailored crop advice, based on satellite imagery and local weather patterns, have helped farmers cut pesticide and water use by about twenty percent. That’s not a marginal tweak—it’s an agricultural leap. In healthcare, AI tools help screen for diabetic retinopathy in underserved communities. In lending, AI-driven credit scoring has brought formal credit within reach for millions at the grassroots.
Yet, this speed comes with tradeoffs. In finance, the Securities and Exchange Board of India (SEBI) recently floated a five-point rulebook for AI use in trading platforms and portfolio risk analysis. It calls for mandatory disclosures, bias safeguards, model governance, testing norms, and data security. Promising, but still in consultation mode—meaning firms are not yet required to comply.
And nothing exposes the gap between technological adoption and citizen protection more than the Delhi Police’s use of facial recognition technology (FRT). When asked under the Right to Information Act, authorities revealed their FRT systems operated at around 80 percent accuracy—vastly improved from an alarming two percent, but still far from reliable for judicial use. Yet matches above 80 percent are treated as “positive,” even though that means false positives can trigger intrusive questioning or arrests without a clear appeal mechanism. Communities such as Muslims and lower-income groups report being disproportionately flagged—one recent investigation reported more than 1,900 matches during the Delhi riots, some of which were evidently mistaken.
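To see why a fixed cutoff is so troubling, consider a minimal sketch of threshold-based matching. This is illustrative Python, not the actual police pipeline; the names, scores, and the 0.80 cutoff are assumptions chosen only to mirror the figures above:

```python
# A minimal, hypothetical sketch of threshold-based face matching.
# Not Delhi Police's actual system; shown only to illustrate how a
# fixed similarity cutoff turns graded scores into hard labels.

from dataclasses import dataclass

@dataclass
class Candidate:
    person_id: str
    similarity: float  # score in [0.0, 1.0] from some face-embedding model

MATCH_THRESHOLD = 0.80  # mirrors the "80 percent" cutoff discussed above

def classify(candidates: list[Candidate]) -> list[Candidate]:
    """Return everyone the system would flag as a 'positive' match."""
    # Note what is missing: no confidence band, no mandatory human
    # review, no record of near-misses. 0.81 and 0.99 are treated alike.
    return [c for c in candidates if c.similarity >= MATCH_THRESHOLD]

gallery_hits = [
    Candidate("person_A", 0.97),  # very likely a true match
    Candidate("person_B", 0.81),  # barely over the line, flagged all the same
    Candidate("person_C", 0.79),  # just under the line, silently discarded
]

for hit in classify(gallery_hits):
    print(f"Flagged {hit.person_id} at similarity {hit.similarity:.2f}")
```

The point of the sketch is that a score of 0.81 and a score of 0.99 produce identical hard labels, while near-misses vanish without a trace. Nothing in the pipeline records uncertainty, and nothing gives a flagged person a way to contest the call.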
These examples illustrate a pattern: India’s AI ambitions are bold and accelerating, but accountability frameworks are murky, fragmented, or absent. That’s a systemic risk—not just for individual freedom and trust, but for innovation itself.
III. Cracks in the Foundation – The Regulatory Vacuum
When AI-powered systems misfire, cause injustice, or erode privacy, who cleans up the mess? Currently, no one—at least not clearly. Our existing legal scaffolding is fragmented across data privacy (the DPDP Act), sector-specific regulations in a handful of industries, and aspirational ethical notes. But there’s no cohesive accountability architecture—no right to explanation, no audit requirement, no registration of high-risk AI. There’s no defined penalty when algorithmic errors hurt people, and no accessible grievance mechanism to challenge a bad credit decision, wrongful biometric match, or misdiagnosis.
Take education: if an AI-driven educational tool rejects your child’s math assignment, there’s no mandate requiring the platform to reveal why. Take health: if an AI misreads a retinal scan and misses risk markers for glaucoma, there’s no legal regime prescribing recourse. Take surveillance: facial recognition is used widely—124 government-authorized FRT projects have been documented—but without transparency or oversight, these systems risk becoming tools of mass surveillance rather than public safety.
Sector-by-sector fragmentation is also a problem. SEBI’s rulebook may eventually oversee finance, but what about law enforcement under the Police Act? Or education tools purveyed by ed-tech platforms governed by the IT Act? Or healthcare AI under the jurisdiction of the Clinical Establishments Act? This patchwork leads to glaring regulatory gaps and jurisdictional confusion.

Nor is this merely theoretical. A study tracking four major FRT systems trained on Indian faces found error rates as high as 14 percent for gender classification and up to 42 percent for age estimation, especially among darker-skinned women. This isn’t just technology failing—it risks entrenching gender and caste-based bias, deepening distrust in public systems. The alarm bells are loud.
The core of the problem: we’re racing ahead with AI applications, but treating the legal framework and public safeguards as optional extras.
IV. India’s Road Ahead – Learning from the World, Leading with Intention
India stands at a compelling crossroads. We are rapidly innovating with AI across sectors—agriculture, education, health, finance, and law enforcement—but doing so in an environment that’s largely unregulated. The road ahead demands bold imagination and clear institutional design. And we don’t need to start from scratch—because across the world, other nations have made moves we can learn from, adapt, and even improve upon.
Take the European Union, for instance. Its AI Act, adopted in 2024, is arguably the world’s most comprehensive framework. It classifies AI systems by risk level—banning some, tightly regulating others, and applying only light-touch compliance to minimal-risk systems. High-risk AI tools, like those used in policing or recruitment, must undergo pre-deployment audits, register on a central EU database, and offer meaningful human oversight. Fines for non-compliance go as high as 7% of global turnover. These are powerful levers. India need not replicate the bureaucracy of the EU, but the logic of tiered risk, mandatory registration, and explainability for high-impact systems makes sense—and can be scaled to our regulatory strengths.
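To make the tiered-risk idea concrete, here is a hedged Python sketch of how a regulator might encode tiers and the obligations attached to them. The tier names follow the EU Act’s broad logic, but the specific use-case mappings and obligation descriptions are simplified assumptions, not the Act’s legal text:

```python
# An illustrative sketch of tiered risk classification, loosely modeled
# on the EU AI Act's logic. Tiers and obligations are simplified
# assumptions for exposition, not legal definitions.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "pre-deployment audit, registration, human oversight"
    LIMITED = "transparency duties (e.g., disclose that AI is in use)"
    MINIMAL = "no additional obligations"

# Hypothetical mapping from use case to tier; a real statute would
# define these categories with far more precision.
USE_CASE_TIERS = {
    "social_scoring_by_government": RiskTier.UNACCEPTABLE,
    "facial_recognition_in_policing": RiskTier.HIGH,
    "recruitment_screening": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    # Default to HIGH when a use case is unclassified: cautious by design.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in USE_CASE_TIERS:
    print(obligations_for(case))
```

The design choice worth noting is the default: an unclassified use case falls into the high-risk tier until someone argues it down, rather than escaping scrutiny by omission.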
The United States offers a contrasting model. Its Blueprint for an AI Bill of Rights is not law but an aspirational rights-based framework. It speaks to the need for safe and effective systems, algorithmic transparency, protections against discrimination, and human alternatives when automated decisions go wrong. While it lacks enforceability, it places people—especially vulnerable communities—at the heart of AI governance. India could incorporate such values into a publicly declared Citizen’s AI Charter, while giving legislative teeth to selected guarantees in critical sectors.
Then there's China, where regulation has taken a more directive shape. The Chinese model mandates real-name verification for algorithmic services, registration of generative AI tools, and even watermarking for AI-generated content. Their system is tightly centralised, which India needn’t replicate—but the structural ideas of algorithm registration, impact assessments, and model disclosures are highly relevant. Imagine if Indian platforms were required to publicly file the datasets and logic behind high-risk models—say, those used in credit-scoring or predictive policing. That would be transformative.
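What might such a public filing look like in practice? Below is a purely hypothetical disclosure record for a high-risk credit-scoring model. Every field name, value, and the schema itself are invented for illustration; no Indian regulator currently prescribes any such format:

```python
# A hypothetical "model disclosure" filing for a high-risk AI system.
# Every field below is invented for illustration; no regulator
# currently prescribes this schema.

import json

disclosure = {
    "model_name": "example-credit-scorer-v2",   # hypothetical
    "deployer": "Example Lending Pvt Ltd",      # hypothetical
    "purpose": "retail credit scoring",
    "risk_tier": "high",
    "training_data": {
        "sources": ["credit bureau records", "repayment histories"],
        "collection_period": "2019-2024",
        "known_gaps": "thin-file and first-time borrowers underrepresented",
    },
    "decision_logic_summary": "gradient-boosted trees over 42 features",
    "bias_audit": {
        "last_audited": "2025-01-15",
        "auditor": "independent third party",
        "finding": "6% approval-rate gap across gender; mitigation pending",
    },
    "human_review": "all rejections reviewable on request",
    "grievance_contact": "grievance@example.in",
}

print(json.dumps(disclosure, indent=2))
```

Even a modest filing like this would let journalists, researchers, and affected citizens ask pointed questions that are impossible today: what data trained the model, when it was last audited, and whom to contact when it gets things wrong.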
Back home, promising signals are emerging—but they need coherence. The SEBI draft rulebook for AI in the securities market is a strong start: it proposes model governance, bias mitigation, data transparency, audit trails, and mandatory testing. These should be formalised into binding regulations—not just guidance—and serve as templates for other sectors. The Digital Personal Data Protection Act, 2023, creates a foundation for consent and purpose limitation, but lacks provisions for algorithmic accountability or explainability—these must be layered on, either through amendments or a complementary AI Regulation Bill.
We should also empower sectoral regulators. The National Medical Commission could mandate certification of AI diagnostic tools, just as the EU now requires for health-tech under its Medical Devices Regulation. The National Education Policy should be expanded to include governance of AI-driven ed-tech platforms—requiring them to disclose training data and assessment algorithms, and to provide grievance mechanisms. In policing, courts must review FRT use, and judicial approval should be necessary for any biometric surveillance beyond defined thresholds.
And India’s federal structure can be a strength here. States like Odisha are leading the way with their AI policy, which mandates training for government officers, AI skilling for schoolchildren, and deployment of ethical standards in public procurement. Telangana, too, has begun mapping AI interventions in agriculture and citizen grievance redress. These pilots could serve as testbeds for regulatory sandboxes—just as the EU has encouraged through its innovation-friendly model.
All of this, however, requires an anchor institution. India must create an independent AI Commission of India—a multi-disciplinary body that registers high-risk AI tools, oversees audits, receives complaints, issues fines, and coordinates with existing regulators like SEBI, RBI, UIDAI, NMC, and state agencies. This body should also publish an annual "State of AI Accountability" report, akin to how the U.S. FTC or the UK ICO reports on digital practices.

Beyond institutions, the public needs to be engaged. AI literacy should be embedded in school curricula, just as Odisha is planning, and in bureaucratic training modules. Citizens need to know when an AI has made a decision, and be able to contest it. A dedicated AI Bill of Rights—perhaps as a schedule to the future AI Act—could guarantee explainability, fairness, non-discrimination, redress, and transparency.
Finally, India can lead internationally—not just follow. Hosting the Global Partnership on Artificial Intelligence (GPAI) summit gave us a voice, but we can shape the agenda by introducing a Model AI Law for the Global South, grounded in inclusion, cost-sensitivity, and democratic checks. We can propose an international registry of harmful AI uses, voluntary codes for low-resource countries, and collaborative research hubs that pool open data under ethical terms.
V. Conclusion – A Future Built with Responsibility
India is racing towards an AI-powered future. That future can bring clean water allocation, personalized education, precision farming, rapid diagnostics, scalable justice, and inclusive governance. But it must not leave behind fairness, privacy, equity, or trust. An AI framework that couples national law with sectoral enforcement, institutional capacity, and public literacy can turn AI from a blind driver into a guided force. India doesn’t have to reboot regulations from scratch—it can innovate responsibly, learn from global peers, and lead from a uniquely Indian context. That means grounding AI in civil rights, embedding explainability in everyday apps, and ensuring that when things go wrong, people aren’t left unheard or uncompensated.
In the next two years—through legal reform, public awareness, and regulatory muscle—India has the chance to be the first large democracy to deploy AI responsibly at scale. Not because it’s easy, but because it’s essential. Let’s build a future where AI empowers India’s billion voices, without silencing a single one.