Introduction
In the dawn of the AI era, nations around the globe are grappling with the challenges and opportunities presented by this transformative technology. As a burgeoning powerhouse in AI research and application, India stands at a crossroads, poised to shape its regulatory trajectory. This article examines India's evolving stance on AI regulation, juxtaposing it against the European Union's horizontal approach and China's vertical approach. Against the backdrop of the "Brussels Effect," whereby the EU's regulations influence global standards owing to its market size and regulatory power, we also consider how other jurisdictions are responding and adapting. As India seeks to craft a regulatory framework tailored to its unique socio-cultural fabric and economic aspirations, we explore the intricacies of striking a delicate balance between fostering innovation and safeguarding individual rights.
India's Dilemma: Striking the Right Balance in AI Regulation
In recent years, India has emerged as a significant global player in research and innovation, particularly in the deployment of cutting-edge technologies like AI across sectors such as health, agriculture, and education. This growth is attributable to the collaborative efforts of state governments, research institutions, leading private-sector players, and a dynamic AI start-up ecosystem.
The discourse on AI ethics and governance is evolving globally, with numerous sets of AI ethics principles proposed by multilateral organizations, private entities, and nation states. India, too, needs to establish frameworks that implement these principles across the public sector, private sector, and academia. Striking a balance between promoting AI benefits and mitigating risks specific to Indian society is crucial. The emphasis on Responsible AI suggests the need for ethical principles in the design, development, and deployment of AI, with hopes that these principles will influence India's policymaking.
The legal landscape regarding the use of copyrighted material for training AI models remains uncertain. Existing intellectual property laws may not adequately address creations and works generated by AI, even when prompted by humans. Despite this ambiguity, there is a consensus, as reflected in the Gen AI Report, that generative AI regulations should prioritize protecting individuals from harm. Such harms include privacy violations, breaches of data protection rights, discrimination in accessing services, and exposure to false or misleading information. Addressing second-order harms, such as violations of intellectual property rights, likewise requires clear rules within any AI regulation. There is thus a genuine need for regulation, but such regulation should not stifle growth and innovation in AI, especially in a developing nation like India.
The Global AI Regulatory Landscape: EU's Horizontal vs. China's Vertical Approach
With the rapid growth of artificial intelligence across domains and platforms, several jurisdictions and organizations, both national and international, have begun regulating AI, each using different approaches and methods. The European Union's AI Act, agreed in December 2023, adopts a horizontal, risk-based approach. China, by contrast, has taken a vertical approach, enacting regulations targeted at specific applications. India, meanwhile, has displayed a rather vacillating approach to AI regulation. The Indian government once declared that it would not regulate artificial intelligence, lest regulation curtail potential innovation, but the Ministry of Electronics and Information Technology soon indicated that it would bring in a Digital India Act to regulate AI in the country. Until that Act is passed, AI regulation in India remains extremely scattered, following a low-risk, no-harm approach with no central law governing AI.
The AI Act passed by the European Union classifies AI systems into categories based on the risk they pose, with different obligations for providers and users at each level. Systems posing an unacceptable risk, that is, a significant threat to people, such as cognitive behavioural manipulation, social scoring, and certain forms of biometric or facial identification, are banned outright. High-risk systems are those that can negatively affect people's rights and therefore attract stringent obligations. Limited-risk systems carry transparency obligations: providers must make users aware that they are interacting with AI and that AI may be used to generate or manipulate content. This is a horizontal approach, in which a single piece of legislation covers the impacts of AI comprehensively.
China, on the other hand, has issued separate regulations for different facets of AI: rules on recommendation algorithms in 2021, rules on deep synthesis in 2022, and rules on generative AI in 2023, with different regulators responsible for rolling each of them out. China is thus laying down individual laws to set the groundwork for a national AI law, a vertical approach that governs different aspects of AI through separate instruments. In contrast to both of these countries stands India, which has hardly any regulation of AI. The only exception is the Digital Personal Data Protection Act, 2023, which regulates some aspects of AI insofar as misuse and breaches of personal data are concerned, but the Act itself has many shortcomings. The major question before Indian policymakers is therefore what regulatory approach would adequately scrutinize existing platforms without becoming outdated as AI evolves.
Both the EU's horizontal approach and China's vertical approach have drawn criticism. Business groups have argued that the EU's approach will have an adverse impact on innovation, while China's approach is criticised as overly rigid and geared towards controlling information. A further concern with China's regulations is that some terminology is so vaguely drafted that it risks losing relevance in the long run. The lesson is that neither approach can stand entirely on its own: India needs to incorporate elements of both in a way that suits its own economy, culture, and legal context. The challenge for India remains to find a framework that strikes this balance, avoiding overregulation while still protecting data privacy and fundamental rights.
Conclusion
In the rapidly advancing domain of artificial intelligence, India stands at a pivotal juncture. While the European Union and China have charted contrasting paths in AI regulation, each with its own set of challenges and criticisms, India must carve out a distinct approach that resonates with its unique socio-cultural landscape, technological aspirations, and economic imperatives.
The European Union's horizontal approach, though comprehensive, has drawn concerns about potentially stifling innovation. China's vertical stance, while focused, risks becoming outdated due to its rigidity and vaguely drafted terminology. For India, a country with a burgeoning AI ecosystem and aspirations of global leadership, striking a balance is paramount.
The Digital Personal Data Protection Act, 2023 was a step in the right direction, addressing some facets of AI regulation. Yet, as the nation stands on the cusp of a digital revolution, there is a pressing need for a cohesive, forward-looking AI regulatory framework. This framework should be adaptive, fostering innovation while safeguarding individual rights, especially data privacy. India's strength lies in its diversity, adaptability, and innovative spirit. As it navigates the intricate maze of AI regulation, it has the opportunity to set a global precedent: one that harmonizes technological advancement with ethical considerations, ensuring a prosperous and equitable digital future for all its citizens.
Author & Co-Author: Nalin Arora and Dhanishtha Arora
University & Year: Jindal Global Law School, 3rd Year