
Nashik: ESDS Software Solution Limited today announced one of its most significant service-portfolio expansions, a Sovereign-grade GPU as a Service offering, during the company’s 20th Annual Day Mega event. The event was hosted at Sula Vineyards, Nashik, in the presence of Mr. Piyush Prakashchandra Somani, Promoter, Managing Director and Chairman of ESDS, along with Executive Council members, the Board of Directors and other esteemed guests. The move is architected to fuel the exponential rise of AI/ML, GenAI and Large Language Model (LLM) workloads across key enterprises, research institutions, BFSI and government sectors.
This landmark launch positions ESDS as a provider of the complete spectrum of cloud, managed services, data centre infrastructure and software solutions, while demonstrating its capability as a sovereign-grade managed GPU provider delivering high-performance AI compute at global scale. With global spending on AI-optimised servers, including GPUs and accelerators, projected to hit roughly US $329.5 billion in 2026, the need for deterministic, high-throughput compute environments has never been greater. ESDS now enables enterprises, BFSI, research institutions and government bodies to run mission-critical AI workloads on purpose-built GPU SuperPODs designed for consistent performance, secure operations and low-latency distributed training. ESDS has evolved its expertise into a fully managed GPU infrastructure stack that helps organizations confidently scale AI on the right architectural foundation.
Piyush Somani, Promoter, Managing Director and Chairman of ESDS, said, “This is a strategic leap forward to address the surging demand for large-scale AI infrastructure across industries. With global AI value projected to approach nearly US $15.7 trillion by 2030, and nearly 80% of that investment directed toward GPUs, the urgency for dependable, high-performance GPU ecosystems has escalated to an entirely new level. For far too long, organizations have wanted to scale AI but were held back by the complexity, ambiguity and prohibitive cost of GPU infrastructure. With this launch, we are democratizing access to large-scale GPU clusters and SuperPODs, making them straightforward, transparent and purpose-built for enterprises with AI ambitions. Our GPU SuperPODs fundamentally change that narrative by delivering predictable performance, stability and scale. To empower customers even further, we created the SuperPOD Configurator tool, which lets businesses choose their GPU model, design their cluster and instantly gain visibility into the architecture and cost.”
At the core of this launch is a powerful lineup of cutting-edge GPU systems, including NVIDIA’s DGX and HGX B200, B300, GB200 and the revolutionary NVL72 architecture, along with AMD’s MI300X platforms and more. These systems enable enterprises to train extremely large models, accelerate inference speeds, run simulation workloads and manage massive clustered data operations with unprecedented performance. ESDS’s GPU SuperPODs are engineered with high-bandwidth NVLink connectivity, unified memory pools, intelligent scheduling, enhanced thermal management and AI-optimized orchestration, ensuring predictable performance at any scale. ESDS’s full-spectrum GPU service portfolio includes advisory and design consultancy for captive GPU clusters, end-to-end supply, deployment and operations of GPU environments, dedicated GPU infrastructure-as-a-service, hybrid CPU+GPU cloud options, and an on-demand managed GPU cloud. Enterprises can now deploy isolated, compliance-ready AI environments; run large-scale distributed training workloads; or spin up GPU power like a utility, with ESDS managing everything from rack engineering to network optimization, container orchestration, performance tuning and 24×7 monitoring with AI/ML ops service options.
As part of this launch, ESDS has also introduced its unique SuperPOD Configurator, a simple tool that allows enterprises to design their AI infrastructure with precision. Users simply choose their preferred GPU model and customize compute density, memory profiles, storage tiers and interconnect options; the configurator then automatically builds a fully optimized SuperPOD architecture tailored to their workload. The system instantly generates performance estimates, recommended configurations and transparent cost projections, empowering organizations to plan, scale and budget their AI environments with complete clarity before deployment.
The impact of ESDS’s GPU as a Service platform is illustrated by one of its research-laboratory clients. The lab was struggling with fragmented infrastructure that stretched the training of a 50-billion-parameter model to over 40 days and significantly inflated operational costs. By transitioning to NVL72-based rack-scale GPU infrastructure, supported by optimized containers, high-speed NVLink bandwidth and managed MLOps, the lab reduced training time to just 10 days, cut costs by 60 percent and achieved 30× faster inference with 4× shorter iteration cycles. This result underscores the real-world capability of ESDS’s AI-centric infrastructure to deliver breakthrough performance.
ESDS emphasizes infrastructure that matches global AI performance standards yet is conceived, constructed and optimized in India. Trusted by over 1,300 clients across enterprise, BFSI and government sectors, ESDS offers transparent pricing, flexible consumption models, deep compliance capabilities and a unified cloud services layer that integrates managed services, security, SaaS, PaaS and colocation. By fusing technological innovation with operational excellence, ESDS aims to enable organizations to build, train and scale AI systems with confidence, cost efficiency and enterprise-grade reliability.
This marks the beginning of a new era where enterprises can unlock transformational AI capabilities, reduce time-to-results and operate on infrastructure built from real-world deployment expertise. ESDS invites enterprises, researchers and innovators to engage with its AI infrastructure specialists and explore how high-performance GPU environments can power their next leap in AI-driven transformation.


