Category: Podcast Episode

The podcast episodes from Startup Project podcast, hosted by Nataraj Sindam.

  • Jeff Tatarchuk of TensorWave: Pioneering the AMD GPU Cloud Revolution

    Jeff Tatarchuk, Co-founder & CEO of TensorWave, discusses how his company is disrupting the AI compute market by exclusively deploying AMD GPUs, offering a cost-effective and sustainable alternative to NVIDIA. He shares insights into building modern data centers and the future of AI infrastructure.

    Key takeaways

    • TensorWave is addressing the AI compute shortage by focusing exclusively on AMD GPUs, providing a competitive alternative to NVIDIA's dominance.
    • Building data centers today is heavily bottlenecked by access to power, requiring innovative solutions and strategic partnerships to overcome.
    • While NVIDIA's CUDA has been a significant advantage, frameworks like PyTorch and TensorFlow allow for seamless code porting to AMD GPUs, reducing the barrier to entry.
    • TensorWave aims to be the 'AI utility company,' providing resilient, secure, and performant cloud infrastructure that customers can rely on for their AI training and inference needs.
    • Fine-tuning models at the enterprise level presents a significant opportunity, as businesses increasingly seek to integrate AI into their specific use cases.

    Who this episode is for

    • AI/ML engineers
    • Data scientists
    • Cloud computing professionals
    • Startup founders in the AI space
    • Investors interested in AI infrastructure

    Nataraj welcomes Jeff Tatarchuk, Co-founder & CEO of TensorWave, to discuss the evolving landscape of AI compute and the rise of new cloud strategies. TensorWave is positioned as a key player in providing AI compute, exclusively working with AMD GPUs to address the growing demand.

    From FPGAs to AMD GPUs: A Cloud Evolution

    TensorWave started as an FPGA cloud business (VMXL), gaining valuable experience in deploying cloud infrastructure and managing data centers. Recognizing the shift towards AI and the GPU shortage, they pivoted to focus on AMD GPUs.

The decision to go all-in on AMD was driven by customer demand for alternatives to NVIDIA and the belief in AMD's vision and roadmap. TensorWave aims to be the premier AMD cloud partner, making it easy for customers to deploy AMD chips at scale.

    The Challenges of Building Modern Data Centers

    Access to power is the major bottleneck in building data centers today. The demand for power far exceeds the available supply, requiring creative solutions and strategic partnerships.

    Building quickly and deploying rapidly is crucial, as customers are willing to pay a premium for fast access to compute. However, challenges such as permitting issues and workforce availability can impact timelines.

    Financing and the Competitive Landscape

    Raising capital is essential for building data centers, but it's only one piece of the puzzle. Cost of capital, customer credit ratings, and strategic partnerships are all critical factors.

    The AI cloud market is becoming crowded, but TensorWave differentiates itself through its technical expertise, deep integration with AMD, and focus on solving the challenges of deploying AMD GPUs at scale.

    Overcoming the CUDA Moat and Driving AMD Adoption

    While NVIDIA's CUDA has been a significant advantage, most AI engineers use frameworks like PyTorch and TensorFlow, which allow for seamless code porting to AMD GPUs.
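As a concrete illustration of that portability claim, here is a minimal sketch (assuming a ROCm build of PyTorch, where AMD GPUs are exposed through the same `torch.cuda` API, so framework-level code needs no changes to move between vendors):

```python
import torch

# Device-agnostic code: on ROCm builds of PyTorch, AMD GPUs answer to
# the same torch.cuda API, so this line selects an AMD or NVIDIA GPU
# identically (falling back to CPU), with no source changes.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny model and batch, placed on whatever accelerator was found.
model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(4, 128, device=device)
y = model(x)
print(tuple(y.shape))  # (4, 10)
```

Because the hardware is hidden behind the framework's device abstraction, "porting" to AMD is typically a matter of installing the ROCm build rather than rewriting model code.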

    TensorWave is actively building an ecosystem around AMD, partnering with researchers and engineers to showcase the capabilities and performance of AMD GPUs. They also launched the 'Beyond CUDA' summit to promote innovation outside the CUDA ecosystem.

    AMD GPUs offer architectural advantages, such as more memory (VRAM), making them suitable for hosting larger models. Their chiplet architecture also allows for greater flexibility and efficiency.

    The Future of TensorWave: An AI Utility Company

    TensorWave aims to be a resilient, secure, and performant cloud provider that customers can rely on for their AI training and inference needs. They want to abstract away the complexities of managing GPUs, allowing customers to focus on their core business.

    Success for TensorWave means providing viable options for customers to buy compute, fostering a competitive market, and playing a significant role in democratizing access to AI infrastructure.

  • Eilon Reshef on How Gong Leverages AI to Revolutionize Revenue Operations

    Eilon Reshef, Co-Founder and CPO of Gong, discusses the company's journey from conversational intelligence to a comprehensive revenue AI operating system. He shares insights on product-market fit, the impact of AI, and the future of enterprise software.

    Key takeaways

    • Gong initially targeted a narrow niche of mid-sized SaaS companies selling in English over video conferencing to achieve early product-market fit.
    • AI's effectiveness in sales coaching lies in moving the average performers up the curve, rather than turning everyone into top performers.
    • The future of AI in enterprise software involves carving out specific tasks for AI agents, allowing them to augment human capabilities rather than replace them entirely.
    • Gong's evolution into a revenue AI operating system involved expanding its capabilities to include pipeline management, forecasting, and enablement, creating a unified platform for revenue teams.
    • The most impactful AI applications for executives often revolve around improving communication, including refining terminology and organizing narratives.
    • Orchestration, combined with AI-driven insights, enables companies to rapidly implement and personalize changes across their revenue teams, driving significant improvements in performance.

    Who this episode is for

    • Sales leaders and professionals
    • Product managers in the SaaS space
    • Entrepreneurs interested in AI applications
    • Revenue operations professionals
    • Anyone interested in the future of enterprise software

Nataraj welcomes Eilon Reshef, Co-Founder and CPO of Gong, to the Startup Project. Eilon's background includes co-founding Webcollage, a SaaS platform acquired in 2013, showcasing his experience in the tech industry. Gong leverages AI to analyze customer interactions and sales conversations, helping teams boost productivity and drive efficient growth. The conversation explores Gong's origins, product-market fit, and the transformative impact of AI on their business.

    The Genesis of Gong and the Power of Data

    Eilon explains that Gong's vision has remained consistent since its founding in 2015: to bring data-driven workflows to the revenue space, which was traditionally treated as an art. The core idea was that AI could make sales processes more efficient, but only with access to quality data. Gong started by capturing conversations with customers, integrating with platforms like WebEx (later Zoom, email, and CRMs) to create a revenue graph. This graph would then be used to apply logic and help teams be more productive and leaders gain better intelligence.

    In the early days, recording calls wasn't common, and many VCs were skeptical that people would want to be recorded. Gong had to develop its own recording technology, including a bot that joins calls. Initially, transcription accuracy was poor, but Gong quickly moved to a homegrown system. The focus was on building a robust, cloud-first system to capture and analyze these conversations.

    Achieving Product-Market Fit

    Eilon emphasizes the importance of focusing on a small niche to achieve product-market fit. Gong initially targeted software-as-a-service companies in North America selling in English over video conferencing. This narrow focus allowed them to refine their product and meet the specific needs of their early adopters. Even though transcription wasn't perfect, early adopters found value in search capabilities. This allowed them to track mentions of competitors or specific topics within conversations.

    Collaboration tools were also key, allowing salespeople to easily tag colleagues and share relevant moments from calls. A pivotal moment occurred when 11 out of 12 design partners paid for the product after a beta period, signaling strong product-market fit. This early success validated Gong's approach and paved the way for future growth.

    From Conversational Intelligence to Revenue AI Operating System

    Gong has evolved from analyzing single conversations to providing a comprehensive revenue AI operating system. This includes pipeline management, forecasting, sales engagement, coaching, and enablement. By expanding its capabilities, Gong now serves a wide range of customers, from small companies to large enterprises, across various industries. The goal is to help everyone along the customer journey, from prospecting to post-sales, and even product managers.

    Eilon discusses the idea of AI-driven coaching and training. While AI can't turn everyone into a top performer, it can move the average performers up the curve. Gong's AI trainer simulates customer interactions and provides coaching based on actual conversations, helping sales reps improve their skills and effectiveness.

    The Future of AI Agents and Enterprise Software

    Eilon believes that AI agents will gradually take on specific tasks within sales workflows, augmenting human capabilities rather than replacing them entirely. These agents can handle tasks like walking customers through proposals or answering common questions, freeing up sales reps to focus on more complex interactions. Gong is developing tools that allow companies to train AI agents based on historical data, ensuring they are effective and aligned with business goals.

    Eilon also touches on the trend of commoditization of software and the exaggerated claims surrounding AI's capabilities. While AI can improve engineering productivity and enable rapid prototyping, it won't eliminate the need for skilled developers or comprehensive software solutions. He emphasizes the importance of focusing on real-world applications and delivering tangible value to customers.

    Fundraising and Product Leadership

    Eilon shares his approach to fundraising, emphasizing the importance of starting with a product and demonstrating product-market fit. Gong followed a traditional route, raising capital to build a team and acquire customers. The company also took advantage of favorable market conditions in 2021 to raise additional funds for future growth.

    As Chief Product Officer, Eilon prioritizes customer centricity and close collaboration with design partners. He believes that understanding customer needs is crucial for developing valuable products. Gong's product development process involves extensive customer feedback and iteration, ensuring that every feature adds value.

    Trends and Exciting Developments

    Eilon discusses the importance of embracing AI to improve individual job performance. He encourages professionals to identify tasks that can be enhanced with AI, rather than fearing job displacement. He also highlights the potential of AI to transform enterprise software, but notes that the industry is still in the early stages of figuring out how to best leverage AI's capabilities.

    Gong is launching an orchestration product that automates tasks and guides revenue professionals, helping them prioritize opportunities and drive change. Eilon envisions a future where companies can use AI to analyze their business processes and implement changes with the click of a button, enabling rapid optimization and personalization.

  • Treating Every Customer Like the Only One: Andrew Bialecki on Building Klaviyo

    Andrew Bialecki, CEO of Klaviyo, discusses the company's journey from a database solution to a marketing automation powerhouse. He shares insights on product-market fit, capital efficiency, and the evolving role of AI in customer engagement.

    Key takeaways

    • Focus on building a product so good that customers can't help but talk about it, driving word-of-mouth marketing.
    • Delay fundraising as much as possible to maintain control and build a capital-efficient business.
    • Embrace AI to automate marketing processes, allowing businesses to personalize customer experiences at scale.
    • Encourage a culture of entrepreneurship within the company, fostering innovation and independent thinking.
    • Retail brands are evolving into value-added services, offering expertise and support beyond just products.

    Who this episode is for

    • E-commerce business owners
    • Marketing professionals
    • Startup founders
    • CRM and marketing automation enthusiasts
    • Anyone interested in AI-driven customer engagement

    Nataraj welcomes Andrew Bialecki, CEO of Klaviyo, to discuss how the company has reshaped customer connections for B2C brands. Klaviyo started as an email marketing company and evolved into a platform integrating marketing attribution, SMS marketing, and AI capabilities. Today, Klaviyo is a public company valued around $9 billion, serving 180,000 brands globally and generating impressive revenue with substantial year-over-year growth.

    From Database to Marketing Powerhouse

    Andrew Bialecki explains Klaviyo's mission: to enable businesses to treat every customer as the most important one. Initially, Klaviyo started as a database business, aiming to replicate human thinking and information storage. The company then expanded into marketing automation after realizing customers were using the database for marketing purposes.

    Klaviyo's platform now extends to customer service with its customer agent, aiming to empower businesses to deliver exceptional experiences at scale through technology. The goal is to help businesses act as their best sales representative and product expert through technology.

    Finding Product-Market Fit and the Shopify Partnership

    Andrew Bialecki knew Klaviyo was on the right track when a haberdashery customer promised to triple their spending if Klaviyo expanded into marketing. A pivotal moment was integrating with Shopify, which opened up a significant opportunity in the e-commerce space.

    Klaviyo focused on a partner model for customer acquisition, emphasizing word-of-mouth, marketing agencies, and platform integrations. Shopify's forward-thinking approach to an app store allowed Klaviyo to help businesses with marketing and customer experiences.

    Capital Efficiency and the Decision to Go Public

    Klaviyo was notably capital-efficient, raising around $400 million but spending only a fraction before going public. Andrew Bialecki advises founders, especially those with technical backgrounds, that they often need less capital than they think. He emphasizes building a great product and acquiring customers, which attracts the best investors.

    While many companies go public to fundraise, Klaviyo's decision was driven by factors like providing liquidity to investors and telling its story more directly. The company wanted to maintain its long-term vision and customer focus, believing its strong culture would withstand the transition.

    The Rise of AI and the Future of Customer Engagement

    Klaviyo had a successful holiday season, processing billions of data points and powering billions of customer experiences. A key trend is the evolution of retail brands into value-added services, offering expertise and support around their products.

    Klaviyo is leveraging AI through its marketing and customer agents. The marketing agent automates the entire marketing process, from identifying trends to generating creative content. The customer agent uses customer context to answer questions and provide personalized recommendations.

    Andrew Bialecki emphasizes that AI is increasing the overall amount of marketing and customer service engagement. He believes that AI will enable software to run itself, allowing businesses to focus on higher-level strategies.

    AI's Impact on Company Culture and Hiring

    Klaviyo's bootstrapped beginnings instilled a DNA of generalism and automation. The company seeks employees with a wide range of skills and encourages them to automate repetitive tasks. AI is seen as another iteration of this philosophy.

    Andrew Bialecki emphasizes the importance of curiosity and an AI-first mindset. He encourages everyone to integrate AI into their workflows and values those who are actively building AI applications.

  • Ilya Levtov of Craft: Building Resilient Supply Chains with AI-Powered Intelligence

    Ilya Levtov, CEO of Craft, discusses how his company is revolutionizing supply chain management with an AI-powered platform that provides 360-degree visibility and risk mitigation. Learn how Craft helps governments and enterprises proactively avoid supply chain disasters by monitoring and analyzing vast amounts of data.

    Key takeaways

    • Craft's platform uses over 2,000 data points across 14 dimensions to provide a 360-degree view of a company, going beyond traditional financial metrics to include operational, environmental, social, governance, and cybersecurity insights.
    • Cross-correlation of data points, such as employee sentiment and cybersecurity vulnerabilities, can provide predictive insights into potential supplier risks, enabling proactive mitigation strategies.
    • AI-driven tools, like Craft's Intelligent Workspace, can significantly reduce the time required to generate comprehensive risk reports, from eight person-hours to just 45 seconds.
    • A multi-source data validation approach, using multiple providers for the same data type, improves data integrity and flags potential inaccuracies for deeper investigation.
    • Companies should prioritize building resilient supply chains that are closer, more controlled, and less reliant on brittle, just-in-time models, especially in the face of geopolitical instability and evolving tariff landscapes.

    Who this episode is for

    • Supply chain professionals
    • Procurement leaders
    • Risk management professionals
    • Government agencies
    • Enterprise leaders

    Nataraj welcomes Ilya Levtov, CEO of Craft, to the Startup Project podcast. The episode dives into the complexities of global supply chains and how Craft is helping governments and enterprises build more resilient systems. Ilya's background includes stints at Goldman Sachs and Venrock, providing a unique perspective on the challenges and opportunities in the supply chain space.

    The Evolving Landscape of Global Supply Chains

Ilya notes that the pandemic exposed the brittleness of global supply chains, which had become overly reliant on outsourcing and just-in-time inventory management. While the pandemic brought attention to the issue, the focus has since shifted to longer-term structural concerns, as leaders recognize the inherent complexity and dynamism of global supply chains.

    He emphasizes that we are still in the early stages of truly mapping and understanding supply chains with data and intelligence.

    Key Concerns and Risk Dimensions

    Ilya highlights several key risk dimensions that are top of mind for supply chain leaders. These include cybersecurity, geopolitical risks (such as weaponization of the supply chain and foreign ownership concerns), ESG considerations, and traditional financial stability concerns.

    He emphasizes that large enterprises often struggle to integrate disparate data sources and build comprehensive risk governance capabilities. Craft aims to solve this by delivering a holistic approach to supplier management and risk mitigation.

    Ensuring Data Integrity and Proactive Risk Mitigation

    Ilya explains that Craft addresses data integrity by using an “all of the above” strategy, incorporating multiple data sources for validation. This allows for comparison and flagging of potential inaccuracies, enabling targeted investigation.

    Craft's platform helps companies identify and mitigate cyber risks by providing scores and qualitative attributions from leading cyber scanning companies. This allows companies to quickly identify vulnerable suppliers and implement mitigation efforts. Continuous monitoring ensures that even suppliers that appear safe are immediately flagged if their risk profile changes.

    Craft's AI-Powered Intelligent Workspace

    Ilya discusses how Craft leverages AI to enhance its platform, particularly through its Intelligent Workspace. This AI-driven tool provides procurement professionals with prioritized alerts on critical supplier events, such as financial distress, cyber vulnerabilities, and labor violations.

    AI also significantly accelerates the creation of risk reports, reducing the time from days to seconds. Craft utilizes a range of existing LLMs, including OpenAI, Anthropic, and Google Gemini, and is exploring the development of its own supplier risk models to better cater to specific customer needs and risk tolerances.

    Craft's Journey and Future Outlook

    Ilya shares Craft's initial go-to-market strategy, which involved offering free company profiles to attract organic traffic and generate leads. This approach led to early adoption by government agencies and large enterprises.

    Looking ahead, Craft aims to expand its supplier intelligence layer across the entire procurement and supply chain function, fostering collaboration and reducing silos within enterprises. The company also plans to enhance data egress and ingress, enabling seamless integration with other enterprise systems.

  • Andrew Feldman of Cerebras on the Future of AI Compute and Sovereign AI

    Andrew Feldman, CEO of Cerebras, discusses the company's innovative approach to AI compute, challenging NVIDIA with its wafer-scale engine and open-source strategies. He shares insights on sovereign AI, the shift to open-source models, and how Cerebras is redefining AI infrastructure.

    Key takeaways

    • Cerebras is focused on building systems, not just chips, to provide complete solutions for AI compute, offering both on-premise and cloud-based options.
    • The company's wafer-scale engine allows for more on-chip memory, significantly increasing the speed and efficiency of AI inference compared to traditional GPUs.
    • Inference is becoming increasingly important as AI applications move from novelty to practical use, driving the need for faster and more efficient compute solutions.
    • Cerebras is strategically partnering with sovereign institutions like G42 in the UAE to build AI infrastructure and develop models tailored to specific regions and languages.
    • The shift to open-source models is creating new opportunities for smaller companies to innovate and deliver AI-powered applications without relying on proprietary technologies.
    • Ease of use is critical for widespread AI adoption, and Cerebras is making it simple for developers to switch to their platform with minimal code changes.

    Who this episode is for

    • AI founders
    • AI operators
    • AI investors
    • Deep tech enthusiasts
    • Individuals interested in the future of AI compute

    Nataraj interviews Andrew Feldman, CEO of Cerebras, about the company's mission to revolutionize AI compute. Cerebras is challenging established giants like NVIDIA with its wafer-scale engine, designed for unprecedented AI compute power and efficiency. The conversation explores the current AI landscape, the shift to open-source models, and the future of sovereign AI.

    The Genesis of Cerebras

    Founded in 2015, Cerebras aimed to address the emerging need for specialized AI compute. Feldman and his team, with a background in building chips and systems, recognized the potential for a new computer architecture tailored to AI workloads. They predicted that AI would usher in a new era of compute, similar to how cell phones and networking equipment created new demands and opportunities.

    Despite NVIDIA's dominance at the time, Cerebras believed it could build a new type of computer that would be orders of magnitude faster. This vision led to the development of the wafer-scale engine, a groundbreaking innovation in AI hardware.

    Building the World's Largest AI Chip

Cerebras embarked on a mission to build the largest chip in the history of the computer industry. This demanded fundamental design work, creativity, engineering, and invention. After three years and half a billion dollars, the company created its first wafer-scale engine.

    The larger size allowed Cerebras to keep more data on the chip, reducing the need for frequent data movement and minimizing power consumption. This resulted in significantly faster performance compared to traditional chips. The initial period was challenging, with 15 months of high spending and no working chip, but the team persevered and eventually achieved a breakthrough.

    The Advantage of On-Chip Memory

    Cerebras's architecture utilizes SRAM, a fast type of memory, to overcome the limitations of traditional memory solutions. By stuffing the chip with SRAM, Cerebras achieved both capacity and speed, resulting in significantly faster performance. This is particularly beneficial for inference, where large amounts of data need to be moved quickly to generate results.

    The memory bandwidth of GPUs is a known bottleneck, and Cerebras's design addresses this by placing the SRAM right next to the compute core. This allows for much faster data movement, leading to significant improvements in inference speed.

    Product Strategy and Market Approach

Cerebras sells complete computer systems, not just chips, to provide comprehensive solutions for AI compute. These systems can be deployed on-premise or accessed through the cloud, either through Cerebras's own cloud or through partner marketplaces like AWS Marketplace and Microsoft Azure Marketplace.

    The company also offers forward-deployed engineering services to help customers accelerate the delivery of AI solutions. This comprehensive approach sets Cerebras apart from other chip companies and allows it to cater to a wide range of customer needs.

    The CUDA Challenge and Open Source

    While CUDA has been a significant barrier for new chip companies, it is becoming less relevant in the inference space. Developers can easily switch to Cerebras's platform with minimal code changes, making it an attractive option for those looking to improve inference performance.

    The rise of open-source models is further weakening CUDA's dominance, as developers can leverage these models without needing to know CUDA. This is creating new opportunities for innovation and making AI more accessible to a wider range of users.

    Sovereign AI and Geopolitics

    Cerebras is strategically partnering with sovereign institutions like G42 in the UAE to build AI infrastructure and develop models tailored to specific regions and languages. This reflects the growing importance of sovereign AI, as nations seek to build domestic AI capabilities.

    Feldman believes that the US should sell chips and AI systems to its allies but not to its adversaries. This highlights the geopolitical considerations that are shaping the AI landscape and the importance of aligning AI development with national interests.

  • How AI Is Unlocking Materials We’ve Never Been Able to Build | Radical AI

    Discover how Radical AI is revolutionizing material science using self-driving labs.

    About the episode:

Nataraj hosts Joseph Krause, CEO of Radical AI, to explore how they’re speeding up material R&D by combining AI, engineering, and robotics. Joseph shares his journey from material science to venture capital, highlighting Radical AI’s mission to create a self-driving lab that autonomously designs, tests, and discovers new materials. The episode dives into Radical AI’s materials flywheel concept, their open-source engine, and how they’re attracting funding to drive innovation in material science. Discover how Radical AI is set to revolutionize industries from aerospace to energy with cutting-edge material discovery.

    What you’ll learn

    • Understand the traditional challenges hindering the commercialization of new materials and how Radical AI is overcoming them.
    • Discover the materials flywheel concept and how it accelerates the speed of material discovery.
    • Learn about the types of customers who are seeking new materials and the diverse applications across various industries.
    • Explore the role of AI in simulating and experimenting with materials, and the importance of experimental validation.
    • Understand the types of AI models Radical AI uses, including machine learning, generative AI, and computer vision.
    • Identify Radical AI’s hiring strategy to build an interdisciplinary team across machine learning, software engineering, robotics, and material science.
    • Comprehend the importance of experimental data in materials science and how self-driving labs capture and utilize this data.
    • Learn about Radical AI’s stepwise approach to focus on customer-driven problems and enabling technologies.

    About the Guest and Host:

    Guest Name: Joseph Krause, Co-founder and CEO of Radical AI, aiming to revolutionize material science with AI, engineering, and robotics.

    Connect with Guest: 

    → LinkedIn: https://www.linkedin.com/in/josephfkrause

    → Website: https://www.radical-ai.com/

    Nataraj: Host of the Startup Project podcast, Senior PM at Azure & Investor. 

→ LinkedIn: https://www.linkedin.com/in/natarajsindam/

→ Substack: https://startupproject.substack.com/

    In this episode, we cover  

• (00:01) Introduction to Radical AI and Joseph Krause
    • (01:15) Joseph’s diverse background and how it led to Radical AI
    • (05:01) Traditional ways preventing commercialization of new materials
• (09:06) Radical AI’s product: novel materials for aerospace, defense, and energy
    • (11:36) Customers seeking new materials and the advantage of speed in the materials flywheel
    • (13:39) Challenges in digital research and the importance of physical experimentation
    • (16:18) How Radical AI picks directions for new material discovery
    • (23:48) The AI part of Radical AI: hiring and AI models used
    • (27:13) Predicting crystal structures with AI
    • (31:57) Why New York is the best place for Radical AI
    • (33:37) Joseph’s best AI use case for personal research
    • (37:35) Material research happening at Apple

Don’t forget to subscribe and leave us a review/comment on YouTube, Apple, Spotify, or wherever you listen to podcasts.

    #RadicalAI #AI #MaterialScience #Robotics #DeepTech #Innovation #VentureCapital #Aerospace #Defense #Energy #NewMaterials #SelfDrivingLabs #MachineLearning #GenerativeAI #OpenSource #Podcast #Startup #Technology #Research #NVIDIA

  • How Decagon Built Human-Level AI Support: Ashwin Sreenivas on customer obsession, early traction, enterprise complexity, and the AI concierge future

    Unlock the secrets to Decagon AI’s $1.5 billion valuation and AI-powered customer support.

    Ashwin Sreenivas is the co-founder of Decagon AI, a company revolutionizing enterprise customer support with AI agents. Founded in 2023, Decagon has rapidly grown to a $1.5 billion valuation, automating support workflows for brands like Duolingo and Notion. Ashwin, previously co-founder of Helio (acquired by Scale AI), shares insights into Decagon’s product-market fit, secret sauce, and tangible business impact, revealing how AI is transforming customer interaction. If you’re curious about the future of AI in enterprise solutions, this episode is a must-listen.

    Listen now YouTube | Apple | Spotify

    Quotes from the episode

    • Traditional chatbots relied on rigid decision trees, leading to frustrating customer experiences, but Decagon’s AI agents are trained like humans, enabling fluid, natural conversations.
    • Decagon’s AI agents follow Agent Operating Procedures (AOPs), which are similar to human SOPs, and this allows them to handle customer interactions across chat, phone, SMS, and email.
    • The key is to focus on building AI agents that can follow instructions effectively, allowing businesses to offer personalized customer concierge services and seamless user experiences.
    • Instead of predicting what customers want, AI should learn customer preferences and remember them, making interactions more seamless and efficient, enhancing overall satisfaction.

    What you’ll learn

    • Understand how Decagon AI is transforming customer support by using AI agents that can handle conversations across various channels.
    • Learn about Agent Operating Procedures (AOPs) and how they enable AI agents to follow instructions and interact with customers like humans.
    • Discover how Decagon AI helps businesses expand their support offerings, leading to higher retention and happier customers through increased support access.
    • Explore the importance of solving customer problems quickly and seamlessly, regardless of whether the interaction is with a human or an AI agent.
    • See how Decagon AI is expanding beyond customer support to offer customer concierge services, enabling personalized and friction-free interactions.
    • Learn how focusing on customer needs and building something people will pay for can simplify early-stage company challenges.

    Takeaways

    • Decagon AI’s agents use Agent Operating Procedures (AOPs) to mimic human-like interactions, which contrasts with older chatbot tech that relied on rigid decision trees.
    • Unlike traditional approaches, Decagon AI focuses on creating a single agent adept at following instructions, improving onboarding and iteration for customers.
    • Training smaller, fine-tuned models can outperform larger models on specific tasks, providing better performance and lower latency for customer interactions.
    • Customer support is evolving into a brand differentiator, with companies like Amazon and American Express setting the standard for excellent service and customer trust.
    • By making support more affordable, businesses can reinvest savings into providing more extensive support, leading to higher customer retention and satisfaction.
    • Early customer acquisition requires manual effort, including networking, cold emailing, and LinkedIn messaging, with a focus on charging for the software from day one.
    • Concentrating on building solutions that customers are willing to pay for within a short timeframe helps to validate business models and weed out unpromising ideas.

    Don’t forget to subscribe and leave us a review/comment on YouTube, Apple, or Spotify.

    It helps us reach more listeners and bring on more interesting guests.

    Stay Curious, Nataraj

  • From Forbes to Founder: Alex Konrad on new media, creator economy, AI tools for journalists, Midas List secrets, and why traditional media is losing to independent voices

    From Forbes to Founder: Alex Konrad on new media, creator economy, AI tools for journalists, Midas List secrets, and why traditional media is losing to independent voices

    Discover the evolving roles of traditional vs. new media in tech and gain insights into effective content creation strategies.

    About the episode:

    In this episode, Nataraj hosts Alex Konrad, founder and editor of Upstart Media, to explore the shifting dynamics of tech media. Alex shares his experiences at Forbes, defines the roles of traditional and new media, and discusses the challenges and opportunities for new tech publications. He also dives into storytelling trends, content creation strategies, and the impact of AI on the media landscape. Learn how new media publications can thrive in a decentralized ecosystem and why staying close to the "engine room" of innovation is crucial for success.

    What you’ll learn

    • Identify the key differences between traditional and new media in the tech landscape.
    • Understand the challenges and opportunities for new tech publications in a decentralized media ecosystem.
    • Discover effective content creation and distribution strategies for building a successful media brand.
    • Learn how to build a pipeline of stories and get sources as a media startup without the brand of a larger publication.
    • Explore the impact of AI on the media landscape and its potential to empower independent creators.

    About the Guest and Host:

    Alex Konrad: Founder and editor of Upstart Media, a tech publication focused on the startup ecosystem.

    Connect with Alex:

    → LinkedIn: https://www.linkedin.com/in/alexrkonrad/

    → Website: https://www.upstartsmedia.com/

    Nataraj: Host of the Startup Project podcast, Senior PM at Azure & Investor.

    → LinkedIn: https://www.linkedin.com/in/natarajsindam/

    → Substack: https://startupproject.substack.com/

    In this episode, we cover

    • (00:01) Introduction to Alex Konrad and Upstart Media
    • (01:24) Alex’s experience at Forbes and its role in today’s tech media
    • (03:43) Defining traditional vs. new media and the rise of independent content creators
    • (06:25) The challenges and differences between working at a large publication vs. running a startup media company
    • (07:01) The components of a new tech publication in 2025: Substack, YouTube, podcasts, and events
    • (08:41) Content strategy: balancing consistency, quality, and multiple platforms
    • (11:05) Admired content creators and their successful practices in the media space
    • (13:12) The importance of an existing brand and network for new media ventures
    • (15:49) The creator economy’s power law and the challenges of standing out
    • (16:56) Distribution strategies: leveraging LinkedIn, X, and Substack recommendations
    • (18:56) The evolving landscape of social media and the rise of Threads
    • (22:16) Bandwidth challenges and the need for AI-powered tools for content creation and distribution
    • (25:05) The Midas List: its methodology, significance, and controversies
    • (30:41) Other influential lists and their impact on the tech industry
    • (32:45) Identifying potential niches and innovative approaches in the media space
    • (36:04) Alex’s insights on AI-driven workflows and automation in various industries
    • (38:48) Nataraj’s perspective on AI as a transformative force and its potential impact
    • (41:11) The importance of being close to the "engine room" of innovation
    • (42:55) Building a pipeline without a big brand name and leveraging word-of-mouth

    Don’t forget to subscribe and leave us a review/comment on YouTube, Apple, Spotify, or wherever you listen to podcasts.

    #Startup #TechMedia #NewMedia #ContentCreation #AI #VentureCapital #Startups #Forbes #Substack #Podcast #SocialMedia #Threads #MidasList #Innovation #Entrepreneurship #MediaStrategy #TechTrends #DigitalMedia #ContentMarketing #UpstartMedia

  • AI Hype, Future Trends & Research Trends with Best Selling Author of The Master Algorithm Pedro Domingos

    AI Hype, Future Trends & Research Trends with Best Selling Author of The Master Algorithm Pedro Domingos

    This week we are republishing one of our favourite conversations that didn’t get much visibility when it first came out.

    About the episode:

    Nataraj hosts Pedro Domingos, a distinguished figure in AI and machine learning, to discuss the current state of AI, hype cycles, and future trends. Pedro shares insights from his early career, his widely-read book "The Master Algorithm," and his satirical novel "2040." He offers a critical perspective on LLMs, the AI safety debate, and what truly drives progress in the field, while providing guidance on navigating the complex information landscape and choosing impactful research problems. The discussion dives deep into the societal impact of AI, the importance of critical thinking, and the future of AI research, offering a unique blend of technical insights and thought-provoking commentary. Why should you care? Understand the reality behind the AI hype, identify future trends, and learn how to navigate the complex world of AI research.

    What you’ll learn  

        – Understand the reality of AI progress, separating it from the hype and misconceptions surrounding LLMs and AGI.

        – Learn about the history of AI, including Herb Simon’s Nobel Prize and the evolution of machine learning as a subfield of AI.

        – Discern the truth about AI, including the importance of machine learning, reasoning, and other AI fields.

        – Identify the key factors driving investment in AI and the potential risks of the current AI bubble, including how technological progress can be shaped using S-curves.

        – Gain insights into the future of AI research, exploring the limitations of transformers and the need for diverse research directions.

        – Explore the concept of "The Master Algorithm" and how it provides a comprehensive view of AI, beyond narrow slivers of research.

        – Learn practical tips for navigating the information landscape, including identifying reliable sources, being a critical consumer of information, and maximizing your "Sharpe ratio" in terms of impact.

        – Discover the importance of mentorship and community in AI research, including attending conferences, engaging in discussions, and learning the empirical method of machine learning.

    About the Guest and Host:

    Guest Name: Pedro Domingos is a professor emeritus of computer science and engineering from the University of Washington and a leading expert in artificial intelligence.

    Connect with Guest: 

    → LinkedIn: https://www.linkedin.com/in/pedro-domingos-77b183/

    Nataraj: Host of the Startup Project podcast, Senior PM at Azure & Investor. 

    → LinkedIn: https://www.linkedin.com/in/natarajsindam/  

    → Twitter: https://x.com/natarajsindam

    → Substack: ⁠https://startupproject.substack.com/⁠

    → Website: ⁠⁠⁠https://thestartupproject.io⁠⁠⁠

    In this episode, we cover  

        (00:01) Introduction and Guest Introduction

        (01:26) Pedro’s early career and why he chose machine learning

        (03:51) Nobel Prizes and AI

        (07:12) AI vs Machine Learning

        (08:53) LLMs and the current AI hype cycle

        (14:17) Comparing Models to Human Intelligence

        (16:56) Investment in AI and progress

        (21:22) Thoughts on OpenAI

        (25:56) Investing in talent

        (29:05) AI Safety

        (35:10) Master Algorithm

        (40:27) Jensen Huang and NVIDIA’s Pivot

        (43:59) AI Chip Projects

        (47:05) 2040

        (52:19) How AI Will Change Society

        (55:12) Recommendation Systems

        (59:56) Sources of Consumption

        (01:07:50) What Pedro is consuming

        (01:09:59) Advice to those interested in AI research

        (01:12:41) Mentors

        (01:15:47) Advice to Researchers

    Don’t forget to subscribe and leave us a review/comment on YouTube, Apple, Spotify, or wherever you listen to podcasts.

    #AI #MachineLearning #DeepLearning #ArtificialIntelligence #MasterAlgorithm #2040 #PedroDomingos #Research #Innovation #Tech #Technology #Podcast #Startup #Entrepreneurship #VentureCapital #LLMs #AGI #AISafety #FutureofAI

  • Offline AI, personal AI assistants, edge computing revolution, compression of neural networks, privacy, compute efficiency, AI infrastructure startups | David Stout, Founder of webAI

    Offline AI, personal AI assistants, edge computing revolution, compression of neural networks, privacy, compute efficiency, AI infrastructure startups | David Stout, Founder of webAI

    Discover how webAI is revolutionizing AI by enabling powerful models to run directly on devices.

    About the episode:

    Nataraj chats with David Stout, founder of webAI, about bringing AI infrastructure to edge devices. David shares his journey from a Michigan farm to pioneering techniques for compressing AI models onto phones, laptops, and even airplanes. They discuss the importance of data privacy, real-time processing, and the shift from cloud-based AI to a web of interconnected, specialized AI systems working across devices. Learn about webAI’s vision for the future of AI and how they achieved a $700 million valuation.

    What you’ll learn

    – Understand the limitations of cloud-based AI and the advantages of edge computing for data privacy and real-time processing.

    – Discover how webAI compresses and optimizes large AI models to run efficiently on everyday devices like phones and laptops.

    – Learn about the potential of a decentralized “web of models” and how it could revolutionize various industries.

    – Explore the key factors driving webAI’s success, including their focus on horizontal technology and support for diverse AI models.

    – Gain insights into the future of AI hardware and software.

    About the Guest and Host:

    Guest Name: David Stout, Founder of webAI, pioneering AI infrastructure for edge devices.

    Connect with Guest:

    → LinkedIn: https://www.linkedin.com/in/davidpstout/ 

    → Website: https://www.webai.com/

    Nataraj: Host of the Startup Project podcast, Senior PM at Azure & Investor.

    → LinkedIn: https://www.linkedin.com/in/natarajsindam/

    → Substack: ⁠https://startupproject.substack.com/⁠

    In this episode, we cover

    (00:01) Introduction and Guest Introduction

    (01:35) David’s Journey in AI and Machine Learning

    (03:51) Bringing AI Models to Devices

    (05:49) Applications of AI on Devices

    (08:10) The Thesis for Starting webAI

    (11:16) Optimizing for Specialized Models

    (15:02) webAI’s Customers and Use Cases

    (17:00) Hardware Optimization for webAI

    (19:12) Lightweight Personal Models

    (22:55) The Trajectory of Foundational Model Companies and AGI

    (31:48) Emergent Behavior and Intelligence

    (35:17) Prompts Don’t Pay Bills

    (41:22) Meta’s Approach to Recruiting and Spending

    (44:51) Form Factors in AI

    (51:04) Surprising AI Products

    (52:42) Breakthroughs in AI Research

    (56:31) xAI’s Models

    (01:03:24) The Future of Google and Search

    (01:06:39) Finding David and webAI

    Don’t forget to subscribe and leave us a review/comment on YouTube, Apple, Spotify, or wherever you listen to podcasts.

    #AI #EdgeComputing #MachineLearning #DataPrivacy #WebAI #ArtificialIntelligence #DecentralizedAI #AIInfrastructure #Startup #Technology #Innovation #Podcast #Entrepreneurship #TechStartups #DeepLearning #ModelCompression #AIModels #Inference #NatarajSindam #StartupProject