Shardul Amarchand Mangaldas & Co. Releases Report Calling for Comprehensive Reform of India’s AI Liability Regime

New Delhi, Feb 28: Shardul Amarchand Mangaldas & Co. (SAM) has released a comprehensive report titled “Reforming India’s AI Liability Regime: Report on Artificial Intelligence and Legal Responsibility in India”, calling for reforms to India’s legal framework governing liability for harms caused by artificial intelligence (AI) systems.

As AI technologies become increasingly embedded across critical sectors such as healthcare, finance, transportation, and public services, the report underscores that India’s existing liability laws were not designed to address the unique characteristics of AI systems. These include opacity in decision-making, self-learning and adaptive behaviour, and the involvement of multiple actors across the AI value chain. 

The report highlights the growing urgency for a modernised liability framework that can respond to the risks posed by AI while continuing to support innovation and adoption.

Key Findings

The report identifies three fundamental gaps in India’s current legal framework:

  • Ambiguity in AI Value Chain Participation: Existing statutes lack precise taxonomies for entities across the AI lifecycle, including designers, data providers, developers, and deployers. This absence of clarity complicates judicial determination of liability when AI systems cause harm.
  • Anachronistic Definitions of “Product” and “Defect”: The Consumer Protection Act, 2019, remains largely oriented towards tangible goods and does not explicitly account for intangible, adaptive software or algorithmic failures, such as statistical bias and autonomous decision-making errors.
  • Evidentiary and Procedural Barriers: The “black box” nature of many AI systems creates significant information asymmetry. Unlike emerging international norms, Indian law currently lacks the procedural mechanisms – such as rebuttable presumptions – that would enable claimants to establish causation in complex technical disputes.

Comparative Insights

Drawing on comparative analysis of legal frameworks in the European Union, United States, Australia, and Japan, the report identifies international best practices that India could consider adapting:

  • The European Union’s revised Product Liability Directive and AI Act explicitly recognise the multiplicity of actors within the AI ecosystem and allocate responsibility based on degrees of control and influence.
  • Jurisdictions such as the European Union and Australia have expanded legal definitions to include software, digital services, and evolving AI systems.
  • The European Union has introduced procedural innovations, including rebuttable presumptions of causation and evidence disclosure rights, to ease the burden of proof for claimants.

Principal Recommendations

The report proposes a principled reform agenda for India, including:

  1. Adoption of a Control-Based Liability Framework: Legal responsibility should correspond to the degree of control and influence exercised by actors at various stages of the AI lifecycle, rather than relying solely on traditional categories such as manufacturer or service provider.
  2. Clarification of Legal Definitions: Statutory definitions of “product” and “defect” should be updated to explicitly encompass AI systems, software, and adaptive digital products.
  3. Introduction of Procedural Safeguards: Mechanisms such as evidence disclosure obligations, presumptions of causation, and access to technical expertise should be incorporated to address the evidentiary challenges unique to AI-related disputes.
  4. Consideration of Safe Harbour and Public-Private Models: Certification-based safe harbours and multistakeholder regulatory organisations should be explored to balance innovation with accountability.
  5. Phased Introduction of Specialised Dispute Resolution: The report recommends beginning with dedicated AI benches within existing High Courts, supported by technical experts, with a view to evolving specialised forums as the regulatory ecosystem matures.

Leadership Perspective

Commenting on the report, Dr Shardul S. Shroff, Executive Chairman, Shardul Amarchand Mangaldas & Co., said: “Artificial intelligence is no longer a future concern—it is already shaping critical decisions across the economy. India’s legal framework must evolve to address the distinctive risks posed by AI systems while continuing to foster innovation. This report seeks to contribute constructively to that evolution by identifying principled, comparative, and implementable pathways for reform.”

Pallavi Shroff, Managing Partner, Shardul Amarchand Mangaldas & Co., said: “AI-driven systems challenge some of the most settled assumptions in liability law—from causation and foreseeability to responsibility across complex value chains. India now has an opportunity to develop a forward-looking framework that protects individuals from harm while providing legal certainty to businesses deploying AI at scale. This report is intended to inform that balance with comparative insight and practical recommendations.”

Akshay Chudasama, Managing Partner, Shardul Amarchand Mangaldas & Co., added: “As AI becomes integral to commercial decision-making and public infrastructure, the absence of a clear liability regime creates risk for all stakeholders—developers, deployers, users, and consumers alike. A principled, control-based approach to liability, supported by procedural safeguards, is essential to ensure accountability without stifling innovation. We hope this report contributes meaningfully to India’s evolving technology governance discourse.”

Neel Achary