Minister for Electronics and Information Technology Ashwini Vaishnaw made a straightforward argument at the World Economic Forum: most real-world AI problems don’t require the massive models that Silicon Valley and Beijing have been racing to build. Ninety-five per cent of use cases, he said, can be handled by models with 20 to 50 billion parameters.
It is a pragmatic position that cuts against the industry’s emphasis on scale, and one that is gaining relevance as Western companies face mounting pressure to prove their AI investments actually deliver returns.
The Western ROI problem
Last year’s numbers tell a cautionary tale. According to Kyndryl’s latest readiness report, while 54 per cent of organisations reported seeing positive returns on AI investments, 62 per cent still haven’t advanced their AI projects beyond the pilot stage.

Only 26 per cent of companies have working AI products in production, according to Harvard Business Review, and just 4 per cent have achieved meaningful returns.
The deployment challenge is stark. MIT research shows that 95 per cent of custom enterprise AI solutions never reach production. Meanwhile, 50 per cent of generative AI budgets go to sales and marketing rather than solving actual problems.
The gap between investment and returns has created a “show me the money” moment for AI in 2026. Enterprises will need to demonstrate concrete returns on their spending, and countries will need to show meaningful productivity gains to justify continued AI investment.
India’s approach
Vaishnaw’s strategy for India centres on deploying smaller, cost-effective models across multiple sectors rather than competing on sheer scale. He described India’s AI development in five layers: application, model, chip, infrastructure, and energy, with the emphasis on the application layer, where returns are realised.

Vaishnaw pointed to practical examples, including a public-private partnership that has deployed 38,000 GPUs as shared compute infrastructure, available to students, researchers, and startups at roughly one-third the cost of access elsewhere. India is also training 10 million people in AI skills and providing a suite of pre-built models across sectors.
He argued that large models don’t automatically confer geopolitical advantage. The real economic value, he said, comes from deploying efficient solutions that solve immediate problems. With custom silicon becoming more widely available globally, the concentration of AI power through massive models is less inevitable than it might appear.
This approach aligns with industry trends. IBM’s Asia-Pacific AI Outlook for 2025 identified growing adoption of “Rightsizing AI”—smaller, purpose-built systems requiring less training data—as companies shift from experimental pilots to returns-focused deployments.
India’s own sovereign model yet to be seen
While Vaishnaw positioned India in the “first category” of countries that develop AI capabilities, India has not yet built its own large foundation model, though that is changing quickly.
In January 2025, the government invited proposals from startups and enterprises to develop indigenous large language and multimodal models, offering financial backing and subsidised computing access.
Speaking to Business Today at Davos, Vaishnaw said India expects to run most of its AI work on homegrown models within a year. A top Ministry official said in October 2025 that India’s sovereign AI model will be ready before the AI Impact Summit scheduled for February 2026.
Vaishnaw’s efficiency argument, then, while economically sound, also reflects a current constraint: until a sovereign model arrives, India’s advantage lies in demonstrating that scale isn’t essential.
EY analysis of India’s AI landscape shows that cost reductions from open-source and smaller models have made AI deployment affordable at scale. According to the analysis, enterprises in India can deploy meaningful AI for a few thousand rupees a month, shifting the calculus entirely.
Vaishnaw’s emphasis on smaller models and practical deployment reflects a different approach to AI development—one focused on near-term application rather than long-term model dominance. Whether this strategy delivers better returns than the large-model approach will likely become clear as Western enterprises continue reporting on their AI investments.