
On March 1, the Ministry of Electronics and Information Technology, specifically its Cyber Law and Data Governance Group, released an advisory instructing intermediaries and platforms to exercise due diligence over the content they platform, to ensure that it does not run afoul of the IT Rules 2021. It asked “intermediaries or platforms to ensure that use of Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s) … does not permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content as outlined in the Rule 3(1)(b) of the IT Rules or violate any other provision of the IT Act”. The advisory is therefore not limited to generative models that create synthetic content, as many commentators are misreading it, but applies to all AI models, including the classification and recommendation systems every platform uses to decide which content to push on your feed. Boiled down, this instruction holds intermediaries responsible both for any content they host and for any content they promote via recommendations.
A second ask of the advisory is that “The use of under-testing/unreliable Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s)… must be done so with explicit permission of the Government of India”. This demand ignores the physical reality of AI systems: all such systems are “unreliable” in the mathematical sense. Machine learning (ML) systems constitute the vast majority of AI and are by definition stochastic, which means that, regardless of claims of responsibility and trustworthiness, all ML systems make errors. Moreover, most ML systems, by virtue of their mathematical complexity, are not auditable, and AI “transparency” remains a distant ambition. Read literally, this part of the advisory therefore means that every single AI model requires the explicit permission of the GoI. And AI systems are not limited to the overhyped Large Language Models; they include a plethora of machine learning systems that touch every aspect of our lives, made ubiquitous by the industry in a policy vacuum.
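To make the point about inherent error concrete, consider a minimal illustrative sketch, assuming Python with scikit-learn (the dataset and model here are arbitrary choices for illustration, not anything referenced in the advisory): even a carefully trained classifier on a standard benchmark misclassifies some held-out examples, and no amount of engineering drives that residual error to zero.

```python
# Illustrative sketch (assumes scikit-learn is installed): a standard classifier
# trained on a benchmark dataset still misclassifies some held-out examples,
# which is the mathematical sense in which every ML system is "unreliable".
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

# Accuracy is high, but strictly below 1.0: some digits are still misclassified.
print(f"Held-out accuracy: {accuracy:.3f}")
print(f"Misclassified: {round((1 - accuracy) * len(y_test))} of {len(y_test)} examples")
```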
The first and second demands, quite predictably and reasonably, have elicited cries of overreach and vagueness from the AI industry, academics and the technology media. The first demand demolishes safe harbour in all forms, and the second is physically impossible and unenforceable. The commentary is also coalescing around the idea that this demonstrates that all regulation of AI is harmful, “will harm innovation”, and will thus harm the public good. I agree with the commentators from the AI industry that the advisory makes ill-defined, unreasonable and unenforceable demands. Where I disagree is with the conclusion that the field itself should remain free of political oversight and intervention on the grounds that AI is maths and maths cannot be regulated. Maths can and must be regulated if it is being used to decide who goes to jail, who gets a loan, who is deemed trustworthy, and so on. These are socio-technical systems, and the industry cannot be naïve to the fact that, through its deployments, it has been doing a significant amount of quasi-regulation and policymaking of its own, imposed on a society that has no vote to reject those policies. The question is what needs to be regulated and how to regulate well, not whether regulation of AI should be on the table.
First, we must dismantle the marketing term “artificial intelligence”, which serves neither a technical nor a social purpose, and be specific. Specific use cases of machine learning impact society, and society, in turn, should shape them. No one can, and no one should, attempt to regulate models themselves; this distinction is important. The “mathematical models” also lose their innocence because they are trained on vast amounts of data, and neither the human rights nor the political-economic aspects of data are a settled matter. There should be loud debates on actually existing machine learning use cases, and bright red lines designed from first principles to ensure that those uses do not violate human or economic rights. Beyond regulation, outright bans are imperative for fraudulent use cases (e.g., “emotion detection”) and for harmful ones that violate dignity and human rights (e.g., facial recognition technology in public and work spaces). To be model-centric is to play a farcical game of regulatory whack-a-mole; enforcing existing rights is easier.
Second, any policymaking should keep in mind what it is trying to address. For example, if the concern, quite legitimately, is that social media platforms may subvert democracy and poison the well of political discourse, we must separate a platform’s ability to selectively promote content, which arguably makes it a “media house” that may be regulated as one, from its ability to simply host political speech, which makes it an intermediary and should never be denied. A mandate stopping social media companies deemed public squares from algorithmically promoting selected content can be debated, but that is different from holding intermediaries responsible for all hosted content, which creates a chilling effect. Of course, attempting to regulate every recommendation system is absurd. Finally, giving either the state or a private platform the mandate to decide what counts as misinformation is counter to democracy.
The false binary around regulating AI is a consequence of a lack of specificity, both social and technical. Reaching that specificity requires significant work involving academics and practitioners, both of AI and of the society it impacts, work that should start with transparent and rigorous technical and public consultation and end in robust laws. This hard work can begin once the advisory is recalled.
The writer is an assistant professor working on AI and policy at the Ashank Desai Centre for Policy Studies, IIT Bombay