Jeff Tatarchuk, Co-founder & CEO of TensorWave, discusses how his company is disrupting the AI compute market by exclusively deploying AMD GPUs, offering a cost-effective and sustainable alternative to NVIDIA. He shares insights into building modern data centers and the future of AI infrastructure.
Key takeaways
- TensorWave is addressing the AI compute shortage by focusing exclusively on AMD GPUs, providing a competitive alternative to NVIDIA's dominance.
- Building data centers today is heavily bottlenecked by access to power, requiring innovative solutions and strategic partnerships to overcome.
- While NVIDIA's CUDA has been a significant advantage, frameworks like PyTorch and TensorFlow allow for seamless code porting to AMD GPUs, reducing the barrier to entry.
- TensorWave aims to be the 'AI utility company,' providing resilient, secure, and performant cloud infrastructure that customers can rely on for their AI training and inference needs.
- Fine-tuning models at the enterprise level presents a significant opportunity, as businesses increasingly seek to integrate AI into their specific use cases.
Who this episode is for
- AI/ML engineers
- Data scientists
- Cloud computing professionals
- Startup founders in the AI space
- Investors interested in AI infrastructure
Nataraj welcomes Jeff Tatarchuk, Co-founder & CEO of TensorWave, to discuss the evolving landscape of AI compute and the rise of new cloud strategies. TensorWave is positioned as a key player in providing AI compute, exclusively working with AMD GPUs to address the growing demand.
From FPGAs to AMD GPUs: A Cloud Evolution
TensorWave started as an FPGA cloud business (VMXL), gaining valuable experience in deploying cloud infrastructure and managing data centers. Recognizing the shift towards AI and the GPU shortage, they pivoted to focus on AMD GPUs.
The decision to go all-in on AMD was driven by customer demand for alternatives to NVIDIA and by belief in AMD's vision and roadmap. TensorWave aims to be the premier destination for AMD support, making it easy for customers to deploy AMD chips at scale.
The Challenges of Building Modern Data Centers
Access to power is the major bottleneck in building data centers today. The demand for power far exceeds the available supply, requiring creative solutions and strategic partnerships.
Building quickly and deploying rapidly is crucial, as customers are willing to pay a premium for fast access to compute. However, challenges such as permitting issues and workforce availability can impact timelines.
Financing and the Competitive Landscape
Raising capital is essential for building data centers, but it's only one piece of the puzzle. Cost of capital, customer credit ratings, and strategic partnerships are all critical factors.
The AI cloud market is becoming crowded, but TensorWave differentiates itself through its technical expertise, deep integration with AMD, and focus on solving the challenges of deploying AMD GPUs at scale.
Overcoming the CUDA Moat and Driving AMD Adoption
While NVIDIA's CUDA has been a significant advantage, most AI engineers use frameworks like PyTorch and TensorFlow, which allow for seamless code porting to AMD GPUs.
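The portability point can be sketched in a few lines. On a ROCm build of PyTorch, the familiar `torch.cuda` API targets AMD GPUs, so device-agnostic code written for NVIDIA hardware runs unchanged (a minimal sketch; a ROCm-enabled PyTorch install is assumed for the AMD path, and the code falls back to CPU otherwise):

```python
import torch

# On a ROCm build of PyTorch, torch.cuda.is_available() reports True on
# AMD GPUs, so the usual CUDA-style device selection needs no changes.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# An ordinary model and batch, placed on whatever accelerator is present.
model = torch.nn.Linear(16, 4).to(device)
x = torch.randn(8, 16, device=device)
y = model(x)
print(y.shape)  # torch.Size([8, 4])
```

The same script runs on NVIDIA, AMD, or CPU without modification, which is the "seamless porting" argument in practice.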
TensorWave is actively building an ecosystem around AMD, partnering with researchers and engineers to showcase the capabilities and performance of AMD GPUs. They also launched the 'Beyond CUDA' summit to promote innovation outside the CUDA ecosystem.
AMD GPUs offer architectural advantages, such as more memory (VRAM), making them suitable for hosting larger models. AMD's chiplet architecture also allows for greater flexibility and efficiency.
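The VRAM advantage can be made concrete with back-of-the-envelope arithmetic (a sketch assuming fp16 weights at 2 bytes per parameter and weights only, ignoring activations and KV cache; 192 GB and 80 GB are the published HBM capacities of AMD's MI300X and NVIDIA's H100):

```python
# Rough check of whether a model's weights fit in a single GPU's memory.
def weights_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Memory needed for model weights, in GB (fp16 = 2 bytes/param)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

llama70b = weights_gb(70)   # 140.0 GB of weights alone
print(llama70b <= 192)      # fits on one 192 GB MI300X -> True
print(llama70b <= 80)       # needs sharding across 80 GB H100s -> False
```

A 70B-parameter model's fp16 weights fit on a single high-memory AMD part but must be sharded across multiple 80 GB cards, which is the hosting argument in a nutshell.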
The Future of TensorWave: An AI Utility Company
TensorWave aims to be a resilient, secure, and performant cloud provider that customers can rely on for their AI training and inference needs. They want to abstract away the complexities of managing GPUs, allowing customers to focus on their core business.
Success for TensorWave means providing viable options for customers to buy compute, fostering a competitive market, and playing a significant role in democratizing access to AI infrastructure.
