India AI Summit

India hosted the fourth global AI summit in New Delhi from 16th to 21st February 2026, following Bletchley Park (2023), Seoul (2024), and Paris (2025) — the first in this series to be hosted by a developing nation. India framed the summit around impact, putting People, Planet, and Progress at the centre and organising proceedings through seven thematic working groups — an intentional shift away from a risk-focused approach and a diplomatically charged pivot to operationalise AI as a tool for national development and global equity. It signalled that India was prepared to redirect the terms of global AI governance toward questions that have mattered to its own development agenda: inclusion, access, economic growth, and digital sovereignty.

The summit drew delegations from over 100 countries, more than 20 heads of state, and leading technology figures including Sam Altman, Sundar Pichai, Dario Amodei, and Demis Hassabis. Its headline outcome — the AI Impact Summit Declaration — was endorsed by 91 countries and international organisations. Another notable outcome was the New Delhi Frontier AI Commitments, a set of agreements drawn up by the Indian Government and endorsed by 13 leading AI companies. Furthermore, $250 billion in AI-related investments were announced, including $110 billion pledged by Reliance Industries. Though non-binding, these commitments serve as market signals of optimism.

This commentary provides overarching context for the summit and homes in on India’s approach to governance, trust, and safety at a moment when AI is rapidly taking hold across industries and societies worldwide.

India’s Strategic Motivation

Hosting this summit was, for India, as much an instrument of diplomacy as of governance and economics. Being the first developing nation to host the series, and placing emphasis on economic and social outcomes, carries a message of leadership: that India, along with other Global South nations, is creating its own seat at the table in shaping AI cooperation.

Furthermore, the summit is India’s strategic assertion of agency in a rapidly fragmenting global AI order. On one side stands the EU’s binding, risk-tiered AI Act. On the other, the United States rejects the notion that AI governance should be global at all, as declared by White House official Michael Kratsios. India positions itself in the space between the two — though, as this commentary argues, its disproportionately larger focus on opportunity over risk, and its reluctance to promote binding regulation, place it closer to the US model than to Europe’s rights-protective approach. That said, India’s position remains distinctive in its strong emphasis on society and the realities of a developing nation: a focus on human capital, the democratisation of AI foundations for equitable innovation capacity, and resource efficiency.

This middle ground may reflect domestic realities. For a populous country with vast linguistic diversity and persistent development gaps, AI governance is about economic advancement, quickly scaling human-centric AI with safeguards from the get-go, expanding access to services and public infrastructure, delivering welfare at scale, responsible resource use, and global competitiveness of domestic firms. 

This is also a play for balance of power in a rapidly concentrating technological landscape. The Brookings Institution noted that the summit championed “middle powers” as a third path of influence, breaking what it called “path dependency from the old world order.” India is aware that the alternative to shaping global AI norms is having them shaped for it — by a handful of AI companies headquartered in the US or by a regulatory architecture designed primarily for European market conditions. Hosting the summit was a way of codifying and asserting India’s own approach.

On the economic front, the summit also deepened strategic supply chain partnerships, with India formally joining the US-led Pax Silica coalition — a key moment signalling a shared commitment to securing the ‘silicon stack’ and preventing overconcentration in global supply chains. Furthermore, India has committed to expanding national compute capacity beyond the 38,000+ GPUs already provisioned under the IndiaAI Mission, reinforcing its position as a key producer and consumer in the AI ecosystem.

Governance as Development Agenda

The summit’s architecture — seven interconnected thematic areas covering human capital, inclusion, trust, resilience, science, democratising resources, and social good — reflects India’s view of AI governance: a horizontal network of collaborative action areas that countries can engage with selectively, according to their own capacities, rather than a regulatory regime. Safety, on this view, is to be integrated with development rather than treated as a separate domain.

This modular, opt-in approach has advantages for a developing-country convener. It allowed nearly 100 countries to engage without requiring agreement on a single binding text and produced several co-created outputs in addition to the main declaration: the Equitable AI Transitions Playbook with the ILO, the Charter for the Democratic Diffusion of AI signed by 22 countries, and the Alliance for Advancing Inclusion Through AI endorsed by 20 countries and UNICEF. Additionally, the Human Capital Working Group produced Voluntary Guiding Principles for Skilling and Reskilling endorsed by 24 countries – practical tools for nations to respond to AI-driven labour disruption.

Governing AI as a Commons

The most substantive governance outcome from the Safe and Trusted AI Working Group is the Framework for the Trusted AI Commons — an attempt to operationalise the principle that responsible AI should be governed as a shared societal resource, rather than a private technology or a regulated product.

The framework’s starting diagnosis is this: while there is broad alignment on responsible AI principles, ‘their translation into practice remains constrained by limited mechanisms for implementation as well as uneven access to resources, technical support, and institutional capacity.’ The Commons proposes to address this gap through an open, federated repository of technical AI safety resources — evaluation tools, benchmarks, red-teaming frameworks, auditing instruments, and bias-mitigation methods. It is accessible to all but designed especially for resource-constrained countries.

Its architecture is explicitly modular. Rather than prescribing a single regulatory model, it allows countries to access, adapt, and deploy resources within their own governance frameworks without demanding convergence. Implementation is through a light-touch Secretariat hosted by India, an informal semi-annual Steering Group, and voluntary contributor networks. As of February 2026, the Commons had been endorsed by 22 countries and UNICEF — including Brazil, Canada, France, Japan, Nigeria, Singapore, and the UK. The US was absent from the endorsement.

It is important to note that this is an explicitly commitment-oriented model: it mandates no compliance and establishes no liability. The core motive appears to be capacity-building — lowering the barriers to entry for developing nations seeking the technical infrastructure of responsible AI deployment, which is currently concentrated in a handful of well-resourced nations. For a country like Nigeria or Tanzania, access to shared benchmarks for evaluating a government procurement AI system for fairness is a real capacity gain. The Trusted AI Commons’ value, as it exists today, lies in making governance possible where it would otherwise be absent.

Implications: The Credibility Test

India’s ambition to serve as a governance anchor for developing countries’ engagement with AI is significant. But several factors will determine whether that ambition is credible.

The limits of soft law: The New Delhi Declaration is non-binding. The Trusted AI Commons is voluntary. The Frontier AI Commitments made by 13 leading developers are pledges, not contracts. Per Amnesty International, AI summits have failed to advance meaningful action towards creating a digitally safe future, instead producing ‘techno-solutionist narratives and soft governance instruments.’ If the Commons remains a repository with no binding mechanism to audit actual deployment, the risk is that these efforts will fall short of producing the kind of innovation that people and societies can trust.

The US veto: Washington’s rejection of global governance, combined with its parallel promotion of US-centric AI sovereignty through the American AI Export Program, represents a structural constraint. India’s vision of AI governance for humanity is undermined if the world’s most powerful AI producers aggressively deepen their presence in India through local infrastructure investments and partnerships while operating from a contradictory governance posture at home.

The antitrust blind spot: The governance frameworks created are designed to work with, not against, the existing structure of the AI industry. But that structure is today concentrated in a few companies and highly susceptible to market failure. Linda Griffin, Vice President of Global Policy at Mozilla, argued that it is not ‘sovereignty for a select few companies to own and control AI,’ warning that ‘dependency-oriented partnerships are not true partnerships.’ Mozilla was one of the few organisations to run a competition panel — underscoring how underrepresented these questions remained in the formal agenda. Given that concentration of market power is more likely to constrain AI’s benefits than the cost of compliance is, a model relying primarily on voluntary, market-based mechanisms may not reliably incentivise the safe and trustworthy deployment of AI in India.

Missing voices: A structural absence at the summit was civil society. TechPolicy.Press notes that the summit’s prominent forums, such as the CEO Roundtable and the Leaders’ Plenary, brought heads of state and tech executives together to set the agenda and make important decisions, while offering no equivalent high-level platform for civil society, labour leaders, human rights defenders, or marginalised communities. This platform disparity is a concern. Given the monumental impact AI will soon have across sectors and in people’s lives, civil society must be involved beyond mere participation — in shaping the agenda, outputs, and decision-making — if AI is to be adopted and trusted by citizens. Ultimately, India’s credibility will be tested by how it responds to harmful deployments within its own borders. Ethical guidelines do not always translate into actual protection of human rights. If AI is genuinely for the people and the planet, as the summit’s vision declares, people must actively shape the technological futures they want, and not simply be recipients of the futures that industry and government negotiate on their behalf.

Conclusion

The India AI Impact Summit 2026 was a diplomatic achievement from an industry and state perspective. The Trusted AI Commons, in particular, is an innovative instrument for closing the capacity gap that prevents most countries from participating meaningfully in responsible AI governance. But the gap between what was proclaimed and what was committed — between a non-binding declaration and enforceable standards, between voluntary principles and mandatory rights — remains wide.

India’s credibility in the AI race will not be established at summits alone. It will be established through continued diplomatic influence, success cases of responsible AI, rights protections for communities affected by AI deployment, and discerning regulation of non-negotiables (e.g., human intervention in autonomous, agentic AI). The UN global forum on AI in July 2026 and the Geneva summit in 2027 will be the next tests. The wager India has made — that a development-first, inclusion-centred model can earn legitimacy to become a genuine governance architecture — may have successfully sent the message that India is a serious player at the table. But beyond declarations, there is a gap in accountability to the people for whom AI is ultimately intended.


Varsha Radhakrishnan is a Research Fellow (Technology Policy and Artificial Intelligence), at the Centre for Public Policy Research (CPPR), Kochi, Kerala, India.

Views expressed by the authors are personal and need not reflect or represent the views of the Centre for Public Policy Research (CPPR).


Varsha leads Analytics & Growth initiatives for an e-commerce platform at IBM, focusing on user engagement, adoption, and retention through choice architecture. Previously, she developed AI-driven solutions using predictive technologies, robotic process automation, and recommendation engines.

She holds a BSc in Economics from University of Warwick and an MSc in Public Policy & Management from Carnegie Mellon University, specializing in Behavioral Economics. Her research interests include digital privacy, cybersecurity policy, international education, and cross-cultural socio-economic and technological developments.

