Category: Podcast Episode Transcript

Full transcripts of the Startup Project podcasts.

    Transcript: Jeff Tatarchuk, Co-founder & CEO on TensorWave | Startup Project

    This page contains the readable, near-verbatim transcript from this Startup Project episode.

    • Guest: Jeff Tatarchuk
    • Company: TensorWave

    Full Transcript

    Nataraj (00:02.214) Hello, everyone. Welcome to Startup Project. Today on the show, we have Jeff. Jeff is the co-founder of TensorWave. With so much AI compute spending, I think we're in an interesting time where, for the first time ever, we're seeing a bunch of new neoclouds evolve around different strategies in order to provide more of the AI compute that we need. And TensorWave is one such company. So today, I'm going to talk about

    how TensorWave is solving the AI compute problem, why they're exclusively working with AMD, what it takes to build modern-day data centers, and a lot more. With that, Jeff, welcome to the show.

    Jeff Tatarchuk (00:47.79) Alright, it's good to be here, man. Thanks for having me.

    Nataraj (00:49.924) Yeah, so a couple of years back, if you asked me, like, you know, there were smaller cloud companies. You know, Cloudflare was a smaller cloud company. They were trying to be a larger cloud company. There's DigitalOcean, which is an even smaller cloud company. But I always wondered, why are more cloud companies not coming up? And then once, like, the whole

    AI compute spending cycle started, then we've seen a lot of neocloud companies doing different things. Some are focused on exclusively bringing more infrastructure online; some companies, like VAST Data, are bringing whole new architectures. I think every time we have a big wave of new innovation, there's always a new architecture innovation that comes up, and then we see companies built around that. And I think you are one such company as well.

    So, for those folks who have never heard about TensorWave, can you give a little bit of an introduction of, you know, what is TensorWave, what do you guys do, and how did the whole idea of TensorWave come together?

    Jeff Tatarchuk (01:58.05) Yeah. So TensorWave is, simply put, a neocloud that only deploys AMD GPUs. And how it started: my other co-founder, Darrick, and I initially had launched an FPGA cloud business about eight years ago. And we were solving one of the harder problems first. FPGAs are usually more of an edge chip that's used for really low-latency use cases.

    And we decided to make them available at scale in the cloud. And we were one of the largest FPGA clouds previously. So, yeah. VMAccel was the company. And then we were working with a lot of the chip providers, Xilinx at the time, and Intel, who had acquired Altera and then recently kind of decoupled Altera. And once they realized how challenging FPGAs can be,

    Nataraj (02:34.007) What was the company itself?

    Jeff Tatarchuk (02:56.21) So we started with that. And so we were mostly focused on video transcoding and weather modeling, and not really focused on AI at the time. But it taught us what we needed to know to deploy cloud infrastructure, set up data center infrastructure, and create as easy an experience as possible for the end user.

    And so we were doing that for some time, and we realized that, you know, as soon as the market shifted all into AI after ChatGPT was launched, all the attention shifted from any resources that were going into FPGAs into GPUs. And, you know, in 2022 and 2023, there was huge demand for GPUs. Nvidia had supply shortages. You couldn't get access to anything. And we had a friend come up to us asking, saying,

    And, you know, knowing that we were doing cloud, you know, we were kind of AI adjacent, asking if we could help him and his portfolio companies get access to some GPUs. And we said, you know, we're not really focused on GPUs right now, but we do work with AMD. And we had worked primarily with Xilinx, and Xilinx had gotten acquired by AMD about four-plus years ago at the time. And then our company,

    We were working with AMD. We became kind of their support internally with our FPGA cloud. They would send us all of their latest and greatest FPGAs, and we'd co-develop and work and get them debugged for them in the cloud. And so we had already kind of built with and were embedded into AMD. And then when they announced their GPU offering, it made sense for us to make the shift to go all in on deploying

    their GPUs at scale. And so when our friend, my VC buddy, came up to us saying, hey, can you help us get access to some GPUs? We said, would your portfolio companies consider going after AMD? And he said something that I never forgot. He said, if it works, we will definitely encourage them to use AMD. And the light bulb went off. The next day, frankly, we created TensorWave

    Jeff Tatarchuk (05:03.566) and called AMD and said, hey, we need a significant allocation of GPUs, and we're going to go all in to be the first and best to deploy your GPUs at scale. And we were announced in December 2023 as one of AMD's official launch partners for their first kind of data center chip, the MI300X. And we've been off to the races ever since. So it started with, you know, the supply shortage, and then the frustration that the customers were having around,

    you know, the pricing and the profit margin that Jensen and Nvidia were demanding. And we kept having people coming to us saying, hey, we're tired of giving our, you know, profit and margin to Jensen. We need another solution. And so we're helping bring optionality to the market. With our experience with AMD, and seeing, you know, Lisa Su's vision and roadmap on where they plan on going, we knew that AMD was the next best solution

    to go all in on and be that for them. And so we are the premier AMD supporter. If you are considering AMD, we're your best option to make it as easy as possible to deploy their chips at scale. So that's how it got started. And we've been off to the races ever since.

    Nataraj (06:21.318) Talking a little bit about the development of the FPGA cloud, what was that business like? Who were your customers? Because I think that will help us understand how that helped in transitioning towards building TensorWave.

    Jeff Tatarchuk (06:37.868) Yeah, so, I mean, with FPGAs, there are a lot of significant challenges around them. There aren't compilers written, or if there are, they're not very good. And everybody was trying to solve that problem. And one of the things with an FPGA is, yes, theoretically, it can be more flexible and more efficient and performant than a GPU. But the amount of complexity it takes to squeeze out that extra performance,

    we found out very quickly, wasn't worth the squeeze. So even if we could get an extra 10 to 15% performance boost, the amount of extra work you had to do wasn't worth it. And so that's one of the lessons we learned very early on, as we were one of very few companies working in this space, and working with a small ecosystem.

    Nataraj (07:32.262) Were you the only ones trying to build a cloud for FPGAs, or were there others?

    Jeff Tatarchuk (07:35.694) There were a couple, but not very many. But it was more so the ecosystem that we were working with, working with Xilinx and Altera, and there were a few other researchers that were also trying to solve this problem as well. So everybody was kind of leaning in. I'll never forget being at Intel, and Intel asking us the same question, like, who are your customers? Like, who's actually buying these things? And we did have a number of customers that

    were building various products, like, what did I say, weather modeling products and video transcoding products, a lot of simulation products. So it was rough, because there weren't a lot of tools available. And then the developers needed, you know, because, for those that don't know, an FPGA is a field-programmable gate array that is, you know, very, very fast, and

    you can, you know, reprogram an FPGA. But before, you had to be an electrical engineer to go in and reprogram it, you know, in Verilog or whatever. But now, the engineers that can reprogram an FPGA and create the bitstreams necessary to do it, they're very, very expensive and specialized. And so we ran into a lot of just challenges early on as we were, you know, putting all this together. But, you know,

    it taught us everything we needed to know as we were deploying the data center infrastructure to support this, building our relationships with our OEMs, and building a cloud platform on top to make sure that it's as easy as possible. Our initial goal was to create a cloud platform that completely abstracted away what was underneath the hood. And so they could just say, this is what we need,

    and we can spin it up and give them access to it. So working on some of those problems early on is what gave us the experience to do it. And it was funny, you know, when we first went into doing the GPUs with AMD, everybody was like, you know, these could be very challenging. You know, the first batch of AMD GPUs, there'd be a lot of problems. And we're like, man, you don't know the kind of problems we were dealing with previously. The problems we're dealing with on the GPU side are nothing in comparison.

    Jeff Tatarchuk (10:02.784) So yeah, so building out the infrastructure, supporting customers, and then making a platform as easy for the customers to use as possible with a very complex chip was our first kind of strike at this.

    Nataraj (10:17.476) And for the FPGA cloud, is it similar to building a regular cloud, where you're working with different data center vendors? Who were your partners in that effort?

    Jeff Tatarchuk (10:31.158) Yeah, so we actually started off colocated. When we first built, we just needed access to power, and we needed to do it quickly. So we actually got set up in Cheyenne, Wyoming. And then quickly.

    Nataraj (10:44.676) Like co-located and this is for the audience, like there's already working data centers and we basically take some space in a working data center and put your stuff on infrastructure there.

    Jeff Tatarchuk (10:55.288) That's right. That's right. And so, yeah, they'd already built the data center. They were managing it. All we had to do was bring our servers and deploy them. Yeah, they had a team working there around the clock that would manage and maintain it. And if we had any issues, they would go in and help us with it. And so we started with that, and then we were able to actually build our own data center. And it's where we were able to get really creative and work on some new and exciting efficiencies that made it

    even better for us to do it ourselves. We were actually able to save a lot more money and learn along the way. We learned the whole stack, from, you know, acquiring the power, to building out the data center infrastructure, to managing and maintaining the servers, to building out the software and cloud stack to support it, with a very small tiger team at the time. So, yeah.

    I still have PTSD from those days, but it was a great experience, and we learned a lot of lessons about when you start a company. There were a lot of incentives that Cheyenne, Wyoming had given us to move out there. And they'd given us some grants and different things, and all of that was contingent upon us hiring people for…

    you know, to fulfill those grants. And if you're in a town or a place where people don't want to live, it's very hard to recruit. And so we were trying to recruit from the coasts into a town that didn't have access to the infrastructure, or housing with amenities, or resources. It became a challenge. And so we thought, hey, this is a place where we can, you know, be a big fish in a small pond. We can grow, we can work with the city and the government and make all these different moves, and they're going to give us money.

    But sometimes when people give you money, it comes at a cost. And so you have to count that cost when you're starting a company. That was one of the things we definitely learned early on.

    Nataraj (13:00.55) So in some sense, you were right time, right place, with the right experience, deeply embedded into the infrastructure space and how to build data centers. And you basically were, in some ways, positioned to start TensorWave, if you look back at it. So let's talk about TensorWave. So you're now building data centers with

    AMD GPU clusters. And on top of that, you offer a cloud or platform on which I could just deploy my own compute clusters, do inference, training, all that stuff. Is that the right way to describe TensorWave?

    Jeff Tatarchuk (13:46.776) That's right. So we, yeah, we will do the whole thing, from identifying power, building the data center, retrofitting older data centers to support these GPUs, and then building out the whole stack for customers that need access to compute for both training and inference in production.

    Nataraj (14:07.598) What does building a data center today look like? How long does it take? What are the challenges that you're seeing? How much of the challenge is what you see in the news about getting access to power, and how quickly people want to get these things running? How much of that is hype, and how much of that do you see on the ground?

    Jeff Tatarchuk (14:31.468) No, it's a real problem. Power is the commodity that is the bottleneck right now. They can make the chips. The supply chain is fine for now. But there isn't currently enough developed critical power to deploy the GPUs. And so there are a lot of companies out there that have access to power.

    that are trying to sell it, but you need to be able to get your substations and everything built and set up, and the water and all the other pieces necessary in order to build your data center on top. And those timelines take a lot longer than people usually anticipate. And so, yes, getting access to power is a major concern. There isn't enough power available. I think, in order to hit the current demand over the next few years, we need like 30 gigawatts. And if you think,

    it takes one nuclear power plant to produce one gigawatt of power. You know, how fast are we putting up nuclear power plants today? It's just not fast enough. So one of the things that we focus on is, you know, we have a pipeline of opportunities for us to build, and they could be, you know, completely open greenfield opportunities that allow us to, you know, build on top and do all of those things. But for us, we have to optimize for speed. We have to be able to build

    quickly and deploy quickly. We have customers that want it now, so we optimize for speed. They're willing to pay a premium for being able to deploy fast, because this is a race, and the big AI labs need access to as much compute as they possibly can get. And yeah, as I mentioned, the biggest bottleneck is getting access to power. And then, what we're seeing today, there's a lot of stranded power out in the middle of Texas.
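    Jeff's back-of-the-envelope power math, roughly 30 gigawatts of near-term demand against about one gigawatt per nuclear plant, works out as follows. A minimal sketch using the figures as quoted in the episode, not independently verified:

```python
# Figures as quoted in the conversation (rough estimates, not verified).
demand_gw = 30        # estimated new compute power demand over the next few years
gw_per_plant = 1      # rough output of a single nuclear power plant

# How many plants' worth of new generation that demand implies.
plants_needed = demand_gw // gw_per_plant
print(plants_needed)  # 30 plants' worth of generation
```

    At current nuclear build rates, that gap is why Jeff frames power, not chips, as the binding constraint.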

    And we have some of these other larger projects that we're seeing where, yes, you can get access to power and you can build these data centers, but getting the people out to those data centers to build and support them is the challenge. 'Cause it takes thousands of plumbers and electricians and, you know, everybody else to support and build it. So if it's out in the middle of nowhere, you know, you almost have to build your own little town around it to support it. So that comes with its own individual challenges, and

    Jeff Tatarchuk (16:56.738) there are some towns, as we're seeing, that are running into permitting issues, or the towns or cities, you know, protest having data centers in their town. They don't want, you know, AI taking over their jobs. And so there are a lot of moving parts that have to go into identifying data centers and making sure that the data centers and power and builds all happen within the proper timelines for the end user. That's,

    you know, really what's most important to them. So yeah, there are a lot of moving parts, and then it's the financing of those things that has to go into it as well.

    Nataraj (17:33.991) Yeah, that was my next question. I mean, you guys raised 100 million in VC funding. It looks like a large amount, but when you're thinking about constructing a lot of data centers, it still looks like a small amount, considering, I think, big tech is spending about 600 billion in 2026 on capital expenditure for building data centers. So how's, and you guys have built, I'm assuming, I think, two data centers already

    that are up and running, or is the second one already up and running? But you have two data centers up and running, which is quite a cost. So how does the financing today work, and how are you sharing costs?

    Jeff Tatarchuk (18:19.214) Yeah, it is a lot. It's a lot of money. When we got the hundred million dollars, it just comes into the bank and right out of the bank. I always tell people that we're still living on ramen, even though we've raised, you know, mid nine figures currently for our data center, you know, GPU deployments. And so luckily, we do have, you know, great partners when it comes to financing with our investors. Our lead investor

    is Magnetar. Magnetar is the fund that took CoreWeave from seed to IPO and backed their financing. And they're able to help bring in some of the other larger banks to also help with the financing as well. So cost of capital is an important factor in this. We are a new company. We're only two years old. And so,

    you know, being able to, number one, get access to customers that have a great credit rating, and the better the credit rating, the better our cost of capital is, and then we're able to lower the price of our deployment for the end user. So we are in a great position with funding to do everything we need. But again, it still comes down to, you know, lining up, or playing air traffic control to line up, your

    power with your customer and with the financing, and making sure all of those things land and can be built and deployed at the time necessary. Coordinating all of that is really the challenge behind it. One of the things you mentioned early on is that there are a lot of people attempting to do cloud. Before, it was just the hyperscalers and then a couple of others. And then all of a sudden, one of my friends at Nvidia said they have 150,

    you know, neoclouds that are like that. I don't even feel like there are 150 AI labs, you know, in this ecosystem. And so how is an ecosystem with 150 neoclouds actually able to stay alive? And I do think some people saw, you know, the profit opportunity to make money, where people think all you need is a data center, some servers with some GPUs in them, you plug them in, and you rent them back to

    Jeff Tatarchuk (20:42.606) open AI and you're fine when that's there's a lot more that goes into it that a lot of people don't take into consideration as they are as they're doing this. And so I think some people just thought as a financial arbitrage and I think those that saw it as such will find out the hard way that there's a lot more that goes into it.

    Nataraj (21:07.406) I think the difference would be companies who, on paper, can just go to a company like Equinix that builds data centers and just partner with someone like that. And then if you can arrange financing and buy GPU clusters from Nvidia, then put that in the same data center and create a cloud on top of it and rent it out. That seems like a doable version, given you're good at raising certain amounts of capital on paper.

    I think there are a lot of companies like that, but I would not consider them as, you know, neoclouds, because there is still a technical challenge of bringing a new type of GPU into the data center and building the racks, the cooling systems, the compilers, you know, Nvidia's CUDA. Now you have to make sure that if I'm a large AI lab, I'm

    running my models on both Nvidia and AMD. So that means there's a technical challenge of how do you make that happen for your customer, who is running a large cluster and already has Nvidia in another one, in an easy way, to run on your cluster. So there's a technical challenge of building the actual rack and making it available to a customer through your cloud. I think if we count that way, there are not that many neoclouds.

    But I think the neocloud market definition is slightly different, because on paper a lot of companies can look like neoclouds which are not actually neoclouds, because there's no technical differentiation, or there's no technical challenge that is being solved as a company, right?

    Jeff Tatarchuk (22:46.402) No, no, all it is is spreadsheets and financial arbitrage. And if you put all of that together, you can, you know, make it happen, but the rubber meets the road when you actually have to deploy it. GPUs can be finicky. And, as you mentioned, there are a lot of challenges at stake. So

    we love it, though, because we worked on some of the harder problems first in the FPGA world. Seeing some of the opportunities that we're able to work on with AMD to truly bring this to market is really exciting for us.

    Nataraj (23:24.474) So, I mean, you obviously talked a little bit about your role. I mean, you're the chief growth officer. You're trying to grow this thing. Talk a little bit about customers, right? One thing is, all the top AI labs want more and more compute. That's pretty obvious. But whenever I think of training workloads, there's, I think, a larger market for fine-tuning. I think a lot more companies are doing fine-tuning with either smaller models or larger models.

    But how many companies are going after big clusters, and how do you see that demand shaping up?

    Jeff Tatarchuk (24:00.226) Yeah.

    You're right. I do think, especially at the enterprise level, is where we see fine-tuning as the bigger opportunity. And there are a lot of companies out there that are working specifically with enterprises, where they can, you know, take one of the larger models and then fine-tune it to the enterprise's specific, you know, use case or needs. And that being done at scale, I think, is a significant opportunity. And still, I do feel like the enterprise is still trying to navigate the AI,

    you know, landscape and how they're going to integrate and implement it into what they're doing. And so I do see a lot of pent-up demand on the other side of enterprise that hasn't fully broken yet, but there are a lot of people trying to solve that. On the other front, yes. I mean, you asked how many. Again, I don't think there are more than a hundred that are doing significant, like, hero training runs that need, like, thousand-plus-GPU clusters for years at a time.

    I could be wrong. Maybe there are a lot more hiding underneath the bushes, but I can't imagine there being more than that. And, you know, the primary focus is on being able to support the top 10, you know, hyperscalers and AI labs that need access to compute, for both training and for inference. Now with AMD, AMD started out optimizing their GPUs for inference, and

    that was their first use case, and making it optimized for that was really important to hit the ground running. While Nvidia was focused more on training, AMD was able to capture a lot of the inference market. Meta announced that they host their Llama models on AMD. Obviously, Azure has AMD in their platform. OpenAI just announced,

    Jeff Tatarchuk (25:55.906) A few months back that they're doing a significant deployment. They six gigawatts of AMD will be deployed in the future. And so we're seeing a lot more of the AI labs interested and focused and they're still their primary focuses on inference. But as of recently, we are seeing more like if you look at our 8,000 GPU cluster that we deployed of the MI325s, it was built as a training cluster.

    And so Lisa Su had given us the challenge and the mandate to create the most performant AMD training cluster. And we did, in record time. And so we've been able to focus on that. And so from my perspective as the chief growth officer, or as I like to joke, the chief GPU officer, as I'm meeting with the labs that need access to compute, they have a lot of people banging down their door.

    The amount of customers I have that, you know, announced that they've raised 50 to over a hundred million dollars, they're like, man, we're getting so many steak dinners from all of these AI clouds and getting flown on private jets all over the place. And they have a lot to choose from when it comes to the Nvidia world. But if they want to make a bet on helping to diversify and democratize, you know, what they're using, they have to look for an AMD solution. And, you know, we are the best at that.

    When we first started, I mean, there was the challenge of people saying, I had no idea AMD did that. And so it was coming up with as many different creative ways to get the attention of the market, to make sure that they knew that AMD, yes, has a GPU; yes, it can do inference; and yes, it also can do training, and giving people options to try it. And so I'll never forget, when we first started the company, like,

    three months in, we started in December, and then GTC in 2024 was in March. And we had just raised our first, like, 40 million bucks, and we decided to get one of those LED trucks, and we would circle the San Jose Convention Center during GTC. And we had, you know, a comparison of the H100 and the MI300X on the back, showcasing all the different specs and showing that the MI300X definitely is better. And at the end, there was this robot

    Jeff Tatarchuk (28:21.07) that had a red pill in its hand and it went like this to the audience, to the thing. And everybody loved it. It blew people's minds. knew exactly what we were trying to communicate. And people were taking selfies in front of it. And everybody at a lot of the big companies that were at GTC at the time were like, yeah, you guys were all in our Slack channel as a way of getting the attention. So that people just needed to know, right? It was kind of a grassroots guerrilla marketing.

    Nataraj (28:27.088) The Matrix.

    Jeff Tatarchuk (28:49.25) that needed to be done to just kind of shake people, like, there's a new player that is viable on the market and we should consider it. And so we got a ton of leads coming in. A lot of people were just curious. You know, initially we had a lot of people coming who had worked for the national labs, which, you know, a lot of those supercomputers are built using AMD. And so they were already familiar with AMD. That was kind of our initial audience that was coming over. And then we have some

    clients that have never bought an Nvidia GPU, like, intentionally, just because they love AMD, they love what they stand for, and they're committed to it. And so we still see that kind of transfer over from the consumer side to those that are now developing in the AI world who want only AMD. And then, you know, we still have to go out and prove ourselves, that it does work, that it's just as good, if not more efficient, than an Nvidia GPU,

    and that we can show that we are the best at supporting them through that process at scale. So we typically will bring in a customer, and we'll analyze what are they doing, what's their use case, what frameworks are they building on. And then our internal AI/ML team will go and validate everything that they're using, and make sure that there aren't any gotchas or bugs or any issues that they're going to have

    when they run it. And once we've validated it, and sometimes there are issues where we have to go back to AMD, or go back to one of the frameworks and fix something, before we let the prospect on. And then once we have all that ironed out, we get them on a POC to let them, you know, take a test drive, sit behind the wheel for themselves, and show them that, yes, it does truly work, and it's just as efficient as, and better than, Nvidia in a lot of ways.

    Nataraj (30:44.294) Are there any specific architecture advantages people get by using AMD? Like, is there a differentiation, something that AMD does better than Nvidia?

    Jeff Tatarchuk (30:44.45) But even.

    Jeff Tatarchuk (30:54.786) Yeah. And so like right now AMD still has an advantage and landed with the advantage of having more, more memory. They have more VRAM significantly than, the, than Nvidia does. And so if you need to host some of the larger models, like for instance, if you're hosting like a 70B model, you need, you know, two.

    GPUs in order to do it on an H100, which has only 80 gigabytes of VRAM, versus the MI300X, which has 192 gigabytes of VRAM. You're able to host more without having to split it up across more GPUs, effectively. And then AMD has their chiplet architecture that Lisa bet on early on. And as a result of the chiplet architecture,

    yeah, it's starting to pay off, showing that now they can break a chip down into more pieces, and they can take advantage of a chip even more so than what you can do on Nvidia.
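    The VRAM arithmetic Jeff is describing can be sketched roughly like this. A simplified sizing estimate, assuming weights only at 16-bit precision with no allowance for KV cache or activations, so real deployments need headroom beyond this:

```python
import math

def gpus_needed(params_billions: float, vram_gb: float) -> int:
    """Minimum GPUs needed just to hold a model's weights at 16-bit
    precision (2 bytes per parameter). Ignores KV cache, activations,
    and framework overhead."""
    weight_gb = params_billions * 2  # 70B params -> ~140 GB of weights
    return math.ceil(weight_gb / vram_gb)

print(gpus_needed(70, 80))   # H100 at 80 GB VRAM  -> 2 GPUs
print(gpus_needed(70, 192))  # MI300X at 192 GB VRAM -> 1 GPU
```

    The same estimate is why a 70B model that must be sharded across two H100s fits on a single MI300X, with correspondingly less inter-GPU communication.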

    Nataraj (32:07.142) One of the biggest advantages that Nvidia has is obviously CUDA. It provides libraries, compilers, and debugging tools for GPUs, sort of like their proprietary system designed to run on Nvidia GPUs, and that has long been argued as one of the biggest Nvidia moats out there. How does that affect customers

    Jeff Tatarchuk (32:27.15) Mm-hmm.

    Nataraj (32:34.096) for trying to run out of AMD. Like, the development team might be already familiar with, you know, building on top of Nvidia. And this is sort of like a, in some sense, it's not a unique problem. I think in some sense, like, when AWS, Azure, and GCP have, like, if your organization has AWS as a strength, then you hire more AWS people. That's sort of like compounds, and that's one of the reasons why AWS is so popular in startups, because the early talent knows AWS in every startup.

    sort of builds on AWS, right? They have the small-company advantages. So, talking about those challenges of having CUDA as a big moat, how does that change what you guys are building? Because now you're sort of like one of the earliest clouds that is building AMD clusters. What does that look like for AMD?

    Jeff Tatarchuk (33:21.17) Yeah, so that was, you know, the initial kind of gut response: you know, how is the software? And what we found is, yes, there are some folks that have built on some of the CUDA-specific libraries that are proprietary, and if they were to have to switch, you know, if they were doing, like, cuBLAS or cuDNN or

    something like that, they would have to take some time porting it from that to AMD. So, Nvidia has CUDA, and then AMD has ROCm. But what it simply means is, what we're finding is most AI engineers are using, you know, PyTorch, TensorFlow, or JAX. And if you are building with any of those frameworks,

    you can port your code over from Nvidia to AMD seamlessly. This was actually one of the things that we used to raise our first initial money. There was an article that was done by Databricks and the MosaicML team at the time, with Naveen and Abhi over there. They were simply showcasing, on an MI250, that you can,

    you know, take your code from NVIDIA on CUDA and run it straight out of the box on AMD, and it works. And that was when the light bulb went off for us. This was in, I believe, late 2022 or, sorry, 2023. And it revolutionized everything. Like, we didn't even know it was possible to do that. And so, yeah, yes, there are challenges, but
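    The portability Jeff describes comes from the framework layer: PyTorch's ROCm build exposes AMD GPUs through the same `torch.cuda` API that the CUDA build uses, so framework-level code typically needs no changes at all. A minimal sketch of what "runs out of the box on either vendor" means in practice (the tiny `nn.Linear` model is just an illustration, not anything from the episode):

    ```python
    import torch
    import torch.nn as nn

    # Device selection is the only "porting" step framework-level code needs.
    # On the ROCm wheel, HIP is mapped onto the torch.cuda namespace, so
    # torch.cuda.is_available() reports an AMD GPU exactly as the CUDA wheel
    # reports an NVIDIA GPU; on a machine with neither, we fall back to CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Linear(128, 10).to(device)   # toy model; stands in for any nn.Module
    x = torch.randn(4, 128, device=device)  # a batch of 4 dummy inputs
    logits = model(x)                       # runs on whichever accelerator was found
    ```

    The same script, unmodified, is what the MosaicML-style demos run on both an NVIDIA and an AMD box; only the installed PyTorch wheel differs.
    
    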

    every day AMD has done a lot to catch up on the software side, and things are becoming more and more efficient. And one of the things that we've done at TensorWave is my team and I launched a summit called Beyond CUDA. And so the first year we did the LED truck; the second year we did Beyond CUDA during GTC. We did it the Monday of GTC.

    Jeff Tatarchuk (35:39.534) And we brought together the best researchers, founders, and engineers who were building things outside of the CUDA ecosystem to showcase what they were doing, and that it was actually possible and easy and just as performant to do. And we had about 400 people show up, and it had so much energy and excitement around it that the momentum has continued to this day. And so,

    continuing on from that, you know, AMD's done a lot to support all of the other inference engine frameworks, from Triton to SGLang to vLLM and the others. And so there's a lot of people working on some really cool projects that are trying to create kind of a heterogeneous, you know, ability to switch from a TPU to an NVIDIA GPU to an AMD GPU or whatever. So for us, you know,
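    One concrete way to see the "switch between backends" idea is to inspect which backend a given PyTorch wheel was built against. This is an illustrative helper, not a TensorWave or AMD API, and it reports the installed build, not whether a GPU is physically present:

    ```python
    import torch

    def gpu_backend() -> str:
        """Report which GPU backend this PyTorch wheel targets.

        The ROCm wheel sets torch.version.hip to a version string; the CUDA
        wheel sets torch.version.cuda; a CPU-only wheel sets neither. Both
        GPU wheels expose the same torch.cuda.* API, which is why user code
        rarely has to care about the answer.
        """
        if getattr(torch.version, "hip", None):
            return "rocm"
        if torch.version.cuda:
            return "cuda"
        return "cpu"

    print(gpu_backend())
    ```

    Higher-level "heterogeneous" projects build on exactly this property: the application code stays the same, and only the wheel (and the hardware underneath) changes.
    
    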

    one of my things is I want to build an ecosystem of people that are doing these sorts of things, solving these sorts of problems, and working on them together. And so we just had a customer the other day, as we were talking to them, you know, because the number one thing we usually hear people say is about the CUDA moat. And this particular customer said, you only need CUDA if you don't understand, you know, how to really do low-level development on the GPU. And for them, CUDA,

    they don't need it at all. Like, they know what to do with the hardware and how to get it to do what it needs to do. So, one of the original guys that developed CUDA, his name is Greg Diamos, he's on our team. He's worked for NVIDIA and Intel and AMD and did his own startup that was built on AMD GPUs. And now, you know, his main focus is on this, and he now has this, this

    Nataraj (37:33.734) I think he's the founder of the open-source ScalarLM, right?

    Jeff Tatarchuk (37:37.358) Yeah, that's right. Exactly. He's great. And so ScalarLM is a unified training and inference stack that he's been developing. One of the quips that he says is that CUDA was always built with the purpose of going beyond CUDA and, you know, growing and going beyond the ecosystem. And so that's one of the things that we partner very closely with AMD on, to

    support the projects that are out there, to help build the ecosystem and strengthen it so that it is more robust and people have the resources they need to be supported. So yes, CUDA is no longer a moat. AMD works out of the box. And yeah, if you think otherwise, we'd love to let you try it for yourself and we'll show you.

    Nataraj (38:32.27) And so that means anyone like OpenAI can just easily change, or quickly adapt, and deploy their training cluster on any AMD-based cloud, essentially.

    Jeff Tatarchuk (38:34.862) Thank

    Jeff Tatarchuk (38:45.344) Exactly. Yep, exactly.

    Nataraj (38:47.789) What does success for TensorWave look like, you know, three years down the line? You know, what would TensorWave being successful look like?

    Jeff Tatarchuk (38:59.852) Yeah, I guess I'll start with me. What success would look like from my perspective is that the customer has viable options for them to buy compute, and they don't have to be dictated to by the demands of Jensen or NVIDIA, and that there is a true, you know, competitive market available for compute. And from TensorWave's

    perspective, we are able to play a significant part in that by providing the best optionality and providing the best AMD GPUs for the market to be able to consider those options. And so for us, we want to make sure that we have a resilient, secure, performant cloud that they can rely on. We think of ourselves as the AI utility company. We want to make it just as

    reliable, you know, just as secure, as flipping a switch in your house. You're just as confident in your cluster running behind the scenes as you're doing your training and then running your inference in production. And so for me, I would see success as, you know, our customers are happy, because the compute decision that they have to make is the largest, you know, purchase they're going to make

    as a company, where they're spending hundreds of millions of dollars, if not billions of dollars, on these GPUs. And if they have to spend a lot of time messing with them, and fixing them when a GPU dies, and when there's downtime, and working on all of the extra stuff, they're wasting time that they should be spending on their customers, their product, and solving the problems that they should be focused on. So I would consider TensorWave a success

    if we've completely abstracted that away and we have a great, resilient product that is able to provide for their AI training and inference needs at scale. And yeah, we are the ultimate go-to for that on the AMD side.

    Nataraj (41:12.102) Yeah, I think that's a good note to end the conversation on. This has been a very fascinating conversation. And thanks for coming on the show.

    Jeff Tatarchuk (41:19.17) Hey, thanks for having me.

  • Transcript: Andrew Bialecki, Chief Executive Officer, Co-Founder, and Chairperson of Klaviyo | Startup Project

    Transcript: Andrew Bialecki, Chief Executive Officer, Co-Founder, and Chairperson of Klaviyo | Startup Project

    This page contains the readable, near-verbatim transcript from this Startup Project episode.

    • Guest: Andrew Bialecki
    • Company: Klaviyo

    Full Transcript

    Nataraj (00:03.73) Hello everyone, welcome to Startup Project. Today we have an incredible guest who's not only built a multi-billion-dollar company, but also fundamentally reshaped how businesses connect with their customers. My guest is Andrew Bialecki, co-founder and CEO of Klaviyo, an AI-powered CRM platform that has become indispensable for B2C brands worldwide. Whether it's Liquid Death or Skims or Mattel,

    they all use Klaviyo to stay in touch with their customers and market to them. Klaviyo initially started as an email marketing company in 2013 and later integrated advanced marketing attribution, optimization, SMS marketing, and recently cutting-edge AI capabilities with the launch of Klaviyo AI. Klaviyo's impact has been undeniable. They are a public company worth around $9 billion, and about 180,000 brands globally use them.

    As of Q3 2025, that is an impressive $300 million in quarterly revenue, expected to reach $1.2 billion in revenue for this fiscal year, with an impressive 32% year-over-year growth rate. So we'll talk about Andrew's entrepreneurial journey, the decisions that changed his growth trajectory, his perspectives on the evolving landscape of data and AI,

    and how they're approaching building products using AI. So get ready for an interesting and high value conversation. With that, Andrew, welcome to the show.

    Andrew Bialecki (01:34.328) Yeah, thanks for having me.

    Nataraj (01:36.05) So I want to set the stage for the audience. Anyone who is in e-commerce or anyone who's selling something on the internet, I think is familiar with Klaviyo. But if the audience is not familiar with Klaviyo, can you just give a brief description of what Klaviyo does and what are the products that you have?

    Andrew Bialecki (01:55.182) Yeah, well, that was a good description upfront. Oh man, so we started Klaviyo 13 years ago. And our dream was, whether you were an entrepreneur just starting out or you're an iconic, you know, multinational business, we felt like you ought to be able to treat every single one of your customers like they were the most important, the only customer in the world. And, you know, we found that a lot of businesses,

    when they really want to deliver an exceptional experience, like just a stunning customer experience, they really rely on people to do that. And our feeling was, well, in the future, it's going to be less, you know, white glove through, you know, you or I getting on the phone with somebody, and it's going to be much more delivered through software. I think that's probably a take that, I don't know, I guess feels like it's aging pretty well, given all the stuff that's, you know, happening with AI and LLMs these days. But anyways, we wanted to make that

    possible. And so we actually started out as a, you know, as a database business. You know, because we felt like, if you're going to do that, then you're going to have to replicate the way a human thinks and stores information. And then we actually, after we built that kind of brain, this kind of, you know, very low-latency, you know, database that could also handle some, like, deep analytical queries,

    you know, we gave that to a bunch of customers and they, you know, they said, okay, this is awesome. And we asked them what they were hooking it up to or what they were doing with the results. And a lot of them said, hey, we're plugging that into our marketing system. And so we actually, for a while, tried hard to partner up with a bunch of those other marketing, you know, software platforms. And then we just pretty quickly realized that, you know, it's a little bit like, you know, if you took a human brain without it being connected to, like, I don't know, some arms, some legs, a voice, it's just actually not that useful.

    We realized it was actually pretty hard to integrate this with some of the older tech that was out there. And that got us into marketing, like you mentioned. And then this past, you know, couple of months ago, we expanded the surfaces, as we say, that Klaviyo supports, to go from marketing to now customer service. We have our customer agent, as well as extending onto people's websites and their mobile apps with our customer hub products. And yeah, I mean, ultimately for the 180,000 brands that we serve,

    Andrew Bialecki (04:16.118) our goal is that if you want to treat every single one of your customers like the only one that matters, and you want to be, yourself as an entrepreneur or anyone at that business, your best sales rep, your best product expert, you can do that at scale and you can do all that through technology.

    Nataraj (04:31.516) What was that, maybe initially, like what was that year or moment when you felt like, okay, we hit some kind of product-market fit here?

    Andrew Bialecki (04:41.442) Yeah, so, well, I knew we were on the right track when, you know, so we started out, and I had this experience before Klaviyo where I had all these little side projects. And, you know, I remember building this race search engine, like a road race search engine, because I ran a lot of, like, 5Ks and half marathons and things like that. And I remember, you know, the business there was, like, I would go talk to these race organizers, right? And I'd pitch, you know, this search engine that we built.

    And I remember doing that over and over and over again. I was like, man, this is really repetitive. It'd be awesome if there was, like, software that could do this for me, but do it the same way I was doing it. The same enthusiasm, right? The same, like, you know, level of depth of what the product could do. And so anyways, we built this. I remember one of our first customers was this haberdashery. So for folks that don't know, haberdashery means they make, you know, kind of these bespoke, you know, custom-tailored suits. You know, in their case, most of their clients are men, but some women as well. And

    I remember talking to that business and they're like, yeah, yeah, this personal touch really matters. And we built this kind of database engine that just knew everything about every single one of that business's clientele. And I remember talking to them and saying, you know, hey, we're thinking about getting into marketing and messaging and not just being this database and source for analytics. What do you think? And they said, man, if you do that,

    we'll triple the amount we spend with you. And I remember calling my co-founder after that and saying, hey, Ed, I think we're onto something here. Like, this idea of being the brain of a business and being the way that they actually, like, talk to their customers. You know, for this company that really, really cared about every little detail of the customer experience. You know, that's, I think, when we knew we were onto something. I think after that, you know, I had a friend, probably maybe six months after that, that had,

    you know, had just started his own business. And he was selling, you know, quilts, selling quilts online, doing e-commerce, and he'd just spun up. He's like, hey, there's this thing called Shopify. Have you ever heard of it? And I said, no, but I looked at it and I was like, oh, I was really impressed by, you know, their APIs and the product itself. I said, hey, you know, I think we could probably integrate with this, both to, like, kind of, you know, pull in all this information into our kind of brain that we built,

    Andrew Bialecki (07:06.158) and then also to, like, you know, integrate onto their website and help sell for them. And anyways, I remember he's like, yeah, I would use Klaviyo if you guys could support them. And man, I remember we built that. You know, we were connecting with a lot of different SaaS products in, like, 2012, 2013. But I remember that just took off like crazy within, like, you know, six months, 12 months. And then we're like, okay, I think we found, you know, this really good use case in

    retail and commerce and that became, you know, kind of where we started.

    Nataraj (07:39.842) Were you an app on the Shopify marketplace at that point in time, or what did the integration look like at that point?

    Andrew Bialecki (07:46.479) Yeah, it's one of the really interesting things. Like, you know, this idea that platforms have kind of an app store, you know, a little bit modeled off of Apple. And I remember, you know, Toby and the Shopify team were really early into thinking about this. The funny part was, I mean, they might have called it an app store, but there weren't, you know, hundreds of, you know, apps or integrations in there at the time. And so one of the things, actually, we're very strong believers in

    when you're building a business is we really like this partner model of how do customers find you. And we had this thesis that, hey, look, there are really, like, three things that you should aim at if you want to get discovered, three ways to think about, you know, what's normally, like, marketing and doing demand generation. And the first of those three is, like, peers, like word of mouth. Like, you want people to talk about you. So if your product's so good that people... like, our litmus test was always, like,

    hey, if people are at, like, a cocktail party or they're hanging out with their friends and, you know, maybe they all do something similar and somebody says, oh, hey, like, what's interesting? You want to build products that are so good people just can't help but talk about them, right? So, you know, rely on, like, word of mouth and peers. And then, you know, partners. For us, you know, we work with over 3,000, you know, marketing agencies, and we kind of felt like they were the experts that everybody's turning to. So if they were recommending us, that was great. And then the third was platforms.

    And we had this thesis that when people were building businesses, everybody obviously wants to know who their customers are and wants to treat them well. But we could get to a lot of those businesses faster if we could partner up with some of the platforms. And so we thought about, what are the platforms that people are building businesses on? And this is around 2010, 2012. We thought that a lot more of that would be software as a service. I mean, I think that's also, like, as I think,

    come to be true. But at the time, there were a lot of people that were building things on their own, or they were self-hosting, using open source. And we kind of felt like more and more businesses were just going to say, no, no, I don't want to deal with any of that. Just point me to the platform that's a winner. So we talked to dozens of these platforms, and Shopify was just so forward-thinking about this. You know, they had this approach. They're like, look, you know, we're going to be great at this part of the entrepreneurial journey.

    Andrew BIalecki (10:09.202) And, you know, we cared a lot about, you know, entrepreneurs and folks just starting out too. Because it was in my co-founder and I's, you kind of background and roots. And, you know, we said, Hey, look, we can, I think we can help solve some of like, you know, these businesses, not just, you know, you know, setting up a website, and, you know, actually transacting on the internet, doing all the hard things that resolve around payments and fraud. I think we can be that source of truth of who your customers are and then help.

    help them do better marketing, better experiences, which in turn will drive, you know, more growth for them. You know, we should partner up. And yeah, they were just, they were awesome. And I think back then, I've talked to a bunch of folks that have talked about how to think about these, like, app stores and platforms. Now, it's gotten a lot more maybe competitive, or a little more crowded. But back then, there actually weren't a lot of folks thinking about it this way. And there were very few, like Shopify, that were actively trying to curate, you know, developers and software companies

    to kind of help them, you know, flesh out or, you know, make their overall product better.

    Nataraj (11:10.108) Yeah, one of the things that I always think about Shopify is it's sort of like they looked at WordPress and removed all the complexity for e-commerce.

    They basically copied what WordPress can do and used to do, but removed all the complex parts of WordPress, making it easier for an entrepreneur to not think about hosting WordPress, to not deal with plugins. Plugins evolved into the App Store in Shopify, but it's a much better marketplace, much better integration. It's such a simple idea of abstracting away things, but it became sort of like this big opportunity when you just focus on e-commerce.

    In the internet stack of things, we have these broad use cases. One is, if you want to get customers, you use Facebook or Google Ads. So they found their product-market fit there. If you want to just host anything, then you have the three clouds. And I feel like there's a small space where, if you want to talk to your customers consistently once you know who your customer is, then you need to have Klaviyo or a Klaviyo-like product. I feel like in that map of the internet tech stack, Klaviyo has its base.

    Andrew Bialecki (12:21.55) Yeah, so yeah, I totally agree. You know, a product principle we have is you can learn a lot from the folks that are maybe the early adopters, the most advanced users. So we have this product principle that's like, hey, first build things to make it possible, so that the folks that are pushing the limits can at least accomplish what they want to accomplish. So I remember for us, it was interesting when we first started. Yeah, we built this database. We built all these APIs.

    And we pretty quickly realized we could actually expand our reach if we would just build some of these, like, connectors, these data connectors, because there were so many folks who were like, I'm technical, but I don't have time to build an integration into this. And so we shipped a bunch of, like, libraries: Ruby, Python, Node, PHP, et cetera. Then we realized, if we actually just connect to some of these platforms, it actually would take a lot of the cognitive load and a lot of the work off for folks.

    And then obviously you've got a whole bunch of people that just were not technical at all. And they were like, why, by the time I go find a developer and build this thing, I'll have already given up. So this idea of, like, make it possible first, maybe with those APIs, and then make it really easy. So then you add kind of the, you know, almost like the sugar layer on top, right, that makes it fast. And then, yeah, I think any great product company, you know, you look at, hey, what is the abstraction, or, we use this engineering term, what's the primitive, what's this, like, Lego block that you're putting out in the world?

    Because you're right. Because then, when people are, you know, building any project, right, whether it's a business or a startup, they just think about, like, who has the best, you know, infrastructure and kind of building blocks. And, you know, there's a really powerful thing when you start to become almost like a default, right? One of the crazier things that's happened at Klaviyo as we've grown is, I used to talk to our customers, and I'd ask them, you know, hey, why did you pick us? And they had these very technical reasons: I tried you, I baked you off against these other products.

    And increasingly, it's a little jarring, they're just like, well, why would I start anywhere else? I mean, you know, Klaviyo is just the best. I think about that and I'm like, man, that's, like, not founded in any kind of logic. I mean, that's great, but it's become almost like a brand identity of what you're good at. So we have to work to keep that. And then also, I think for some of the things we're building now, we have to basically become the building block or the primitive for those other use cases.

    Nataraj (14:45.138) One of the headlines that really struck me when Klaviyo went public was how capital-efficient you were. I think you raised around $400-plus million, but you only spent, like, a fraction of it by the time you went public. Talk a little bit about your fundraising journey and the approach to being capital-efficient, which seems to be a little bit of a rare thing in startups these days.

    Andrew Bialecki (15:12.494) Yeah, it's interesting people talk now about this. Hey, you know, who were the first company with maybe, you know, I don't know, 10 folks, or even a single person, you know, to achieve, I don't know, a million dollars, right, 10 million or 100 million in revenue? That was kind of our method, our thinking, from the start. So my belief, my advice to founders, especially if you're technical, is you actually don't need as much capital as you think. I mean, it depends a little bit on, you know,

    what industry or problem you're after. But in general, I think people overweight, you know, how much fundraising and sort of the boost that, you know, it gives you, and really it just boils down to, you just got to build and you just got to, you know, go nail it for some customers. And the interesting part is, once you do that, I found a lot of the best investors, the best VCs, they're sort of following, you know, customers and markets anyways. I mean, a lot of our investors over the years have told me the reason they know about Klaviyo is because they were just going out.

    They looked at what they felt were large market opportunities. They then went and talked to those customers. They did their own primary research and they said, like, what products do you love? And that's how, you know, we got on their radar. And I always thought that was, like, a wonderful way to go about it, because you're really building it up from first principles. So the trick for us was, you know, when we started, we said, we just want to aim at markets that are really big. And, you know, this idea of, like, hey, we can deliver... you know, if you're a business, it doesn't matter whether you have a

    thousand or a hundred million consumers, we can deliver every part of the customer experience, you know, as if it was the founder or, you know, the best product person, the best salesperson talking to them. We just felt like, you know, not only did we think of that as, like, the CRM market, but we just felt like it was a massive opportunity. And then, you know, when we even looked at that, and this actually was a big source of confusion for a lot of folks, you know, not our customers, but really, like, maybe some investors,

    where people said, like, hasn't CRM been done? I mean, isn't that, you know, what HubSpot or Salesforce do? And, you know, what we said is, like, no, no, no, we're going to do it in a very automated way. So we're going to aim at businesses that frankly need to use software to connect with their customers. That's why we talked about building a B2C CRM. And we said, look, we think in the future, more and more of the customer experience is going to be more autonomous. Like, software is going to decide what experience to deliver to which person.

    Andrew Bialecki (17:33.231) It's not going to be like you're setting up a human to have a conversation. So that's what we're going to go after. And we think this is going to be a monster category, because if you look at, like, just world GDP, two-thirds of it is consumers interacting with, you know, small and large businesses. And yet they sort of don't have the tools to do it. So we're going to build the data infrastructure, where we started, and then we're going to build the marketing infrastructure. And then, hey, by the way, there are these adjacencies, you know, we're going to get into customer service and our customer agent product. And we're just going to build the entire stack

    that allows these businesses to operate. And because they need software to do this, to drive customer engagement and therefore revenue, we're going to become very sticky and a must-have. So when we showed that vision, along with some of the early traction we had, I think that frankly made our experience with fundraising actually quite pleasant, right? I mean, we mostly got to spend time talking to folks that we felt would really add value to our thinking, would level us up.

    And yeah, so my advice to any founders is always, I think you should try to delay fundraising as much as you can. You know, not everybody can do that. But in general, you know, two rules we had at Klaviyo: we would never celebrate fundraising milestones. Like, we looked at those as non-events. We'd celebrate a lot of customer milestones, maybe revenue milestones, but we never celebrated fundraising milestones. And the other thing is we never celebrated, you know, how many people worked at Klaviyo. I talked to some other founders and they celebrate, yeah, I've got a hundred people

    or I've got a thousand people. And we always looked at that as, like, well, that's, I mean, that's cool, but wouldn't it be better if you were, like, a smaller team? Like, wouldn't it feel more intimate? You know, wouldn't the communication tax be less? So I think that's something we've tried to stick to, even now that we're a public company: you know, we don't celebrate fundraising milestones, which to me is like, you don't stare at the stock price. And we also just don't look at, like, hey, how many people work at Klaviyo. We think, like, hey, smaller is better. And that's why we really believe in, like, you know, the power of small teams.

    Nataraj (19:33.299) I mean, you said you wanted to delay fundraising, which is a little bit counterintuitive to the idea of going public, because most companies go public to fundraise. Obviously, that might not be the reason for you. So talk to me about that position. Because I used to work for this company called Epic Systems. I don't know if you know about it. It's the healthcare software company.

    Andrew Bialecki (19:51.085) Yeah.

    It's an incredible company, incredible business.

    Nataraj (19:56.243) Incredible business. Pretty much 70 to 80% of US individuals' healthcare records are managed by that company. The founder, Judy Faulkner, still runs the company. It started in 1979.

    So, incredible company, but she had this thesis of never going public. And I think she also ensured that the company will not go public even posthumously. So she had a very strong point of view. And now we are seeing, like, Stripe doesn't go public. There are some other crazy-good companies that don't go public for different reasons. What was your thought process when the time came to decide whether you should go public or not?

    Andrew Bialecki (20:35.064) Yeah, I did. I had studied, we felt like, all of the all-time great companies, software companies, technology companies. And the vast majority of them, but not 100%, because to your point, Epic was one of the examples there, and there are a couple of others, we look at these very large private technology companies, but most of them went public. And I kind of looked at that as, like, okay, it's a bit of a rite of passage.

    There are some other things, like it gives liquidity to investors and obviously, like, folks at Klaviyo that have worked very hard for many years. This was before the private markets were maybe a little more liquid. I also think that public markets give you a chance to just tell your story more directly. I guess there's nothing stopping a private company from doing press releases or sharing their numbers, which I know a bunch do. But I also felt there's a bit of trust you can build up when you say, like, here, just look at our books. You can tell how healthy our business is.

    There's going to be some accountability every quarter, every year. So it's interesting, like, now that the private markets have gotten to where they are, I think they're almost operating as these, like, pseudo-public markets. You know, I don't think it changes our calculus, but I think things are almost merging a little bit. But for us, the most important thing was, once we knew, and I felt like we would go public at some point for all those reasons,

    we just didn't want to change the way we were going to operate. And that meant, you know, having a real strong, like, long-term point of view, being very, like, product- and customer-focused. And, you know, I don't know, frankly, we just felt like, because of the strength of our culture, there just wasn't a lot of risk, you know, in doing that. We had a saying back in 2020 when we were thinking about, like, hey, when would we go public, and would we, and all this stuff. We said, look, there are some all-time great companies, like, if you look at, like, Microsoft or Apple, you know, these companies,

    I think everybody knows them as these, like, you know, kind of standard candles for tech. And yet if you said, like, hey, what year did they go public? Everybody probably has a vague sense of when it was, but nobody knows the date and nobody really cares, because a lot of their best work has come since. So clearly they have, like, you know, cultures and a way of working that has just stood the test of time.

    Nataraj (22:53.2) I mean, yeah, from the perspective of a retail investor, I think it makes more sense for more companies to go public early. You could invest in Amazon very early, but you can't do that now with OpenAI or Anthropic, right? Which probably will never go public.

    Can you talk a little bit about, I think we're now in December, so we just passed probably the biggest holiday cycle. We often call it BFCM, Black Friday, Cyber Monday. Can you talk a little bit about how things have been this BFCM for Klaviyo, and in general, any trends you notice in e-commerce or in the broader economy?

    Andrew Bialecki (23:34.169) Yeah, yeah. So we just had a great season. For folks that don't know, in retail and e-commerce, the reason they call it Black Friday is there's a whole bunch of retail businesses that literally go into the black: this is the time of year where they turn profitable for the full year, because sales really ramp up in October and November, and then a lot of businesses run promotions leading up into the holidays at the end of the year.

    So it's a critical moment. I think something like 10 or 20% of our customers will do something like 30 or 40% of their total annual revenue in these eight weeks. So it's a crazy high percentage. We talk about it as a bit of a Super Bowl moment for our customers, and therefore for us. So we had a great holiday season. I think our stats were:

    At peak, we were ingesting more than 10 billion of these consumer touchpoint data points into our database in real time. That was our peak on a given day. And over the whole holiday weekend, we sent more than 20 billion messages, or powered more than 20 billion experiences. That ranges from our marketing products to our customer service and customer agent products, the agents that fuel websites,

    or that you can email or text with, that will give you advice or help you troubleshoot problems. So we're really proud of that. And then most importantly, we've always looked at Klaviyo as a revenue engine for both small businesses and some of the world's leading brands. Across a five-day stretch, we generated, I think the number was $3.8 billion, in sales that were directly tied back to experiences that Klaviyo powered, that either AI or our users defined.

    And that's an incredible number. I think it's about 5% of all retail sales that happened. That's a global number, but we also cut it down to the US. So when you think about it, that's a huge chunk of revenue that your business otherwise just wouldn't have. And one of the things we love is that it's highly profitable revenue, because it comes from businesses and their relationships with consumers. So it was a really great

    Andrew Bialecki (25:58.607) week. Brands did a great job, our partners, our customers, our team did a great job. I think the most exciting trend, though, to your question, is really the rise of the retail brand as not just a place to go buy products, or in some cases services, but those companies evolving into value-adds because of the expertise and services they offer around their products.

    So I'll give you a couple of examples of things we've seen in the last couple of weeks. We work with a lot of apparel brands, and there's a swimwear brand I was talking with, and they were saying, yeah, now is one of their two busiest times of year, this and the summer. And they were telling me how they were using our customer agent, this chat agent that you can put on your website, and customers were conversing with it, not just about which swimsuits to buy,

    but literally having conversations about how to plan a great vacation. Because there were a lot of folks buying for the holidays, or maybe, if you're here in the US, for a kind of spring break trip they were going to take. And they were asking questions like, okay, great, I'm into these patterns, what swimwear do you think I should buy? And by the way, I'm going away for a week, should I buy one swimsuit or two? And by the way, here are a couple of locations I'm going to.

    Do you have a perspective on the style, depending on whether I'm going to Florida, the Caribbean, or Hawaii? I mean, it's crazy, but people were having real conversations with these agents that these businesses are providing. And all of a sudden, the experience or the value-add these brands are offering is not just, hey, we make really high quality products that look great. It's not just, hey, what's a great swimsuit to buy? It's, how do I have a great vacation? So we're seeing this

    everywhere. There's an education company I've known for a long time that used to use Klaviyo, which caters to helping kids learn reading and math. And one of the things they built in Klaviyo was this awesome sequence of almost worksheets and coloring pages for kids, say between the ages of five and ten. I remember they built these in Klaviyo, and they generated all this content by hand.

    Andrew Bialecki (28:20.908) And then they'd set it up in Klaviyo to automatically go to folks as they became customers, or consumed products, or maybe when they subscribed. And now they're doing the same thing, but it's all personalized based on context they have inside of Klaviyo. Using our marketing agent, we'll literally send different content to each person based on, I don't know, if you're like my kids: hey, are you into Paw Patrol? Are you into the K-pop demon hunters that are really popular right now?

    And all of a sudden these are themed, even at the level of what that individual student wants. They've transformed themselves from purely, hey, we'll provide workbooks for your kids to practice with, into literally acting almost as a pseudo teacher and lesson planner. So I think it's really interesting; this is the first year we've seen a lot more of that show up. And obviously one of the cool things for us is we think increasingly those experiences

    rather than our customers, whether they're marketers, or product folks, or folks in operations, having to point and click through software to define those experiences, increasingly folks are saying, hey, look, I have a goal, right? My goal is to delight my customers, to have them be engaged. I know that translates into how much time they spend with my products or maybe how much they buy. They're giving that goal to AI and just saying, hey, why don't you come up with the ideas?

    And then letting AI, with some of the brand controls and checks that we have, just test that stuff out. And it's just, I think it's a total mindset shift, right? In terms of, one, how businesses work, in terms of how much autonomy or agency they're giving to these algorithms. But then also this idea that like, it's not enough to just be products on a shelf. You need to be value add in and around the products and services that you offer.

    Nataraj (30:16.218) That idea can extend even further: after a point, even the ideas themselves are coming from AI, based on the years of data that a brand has in Klaviyo. The AI can just say, at this point of the year, this might work for you, and basically come up with an idea and suggest to the customer that

    for this idea, these campaigns will work. So you could almost say the customer doesn't actually have to come up with ideas, and Klaviyo AI will come up with those ideas. I think that's a good segue to talk more about what you're doing with Klaviyo AI. I think you also launched a marketing agent. Can you talk a little bit about that?

    Andrew Bialecki (30:51.106) Yeah, sure. It's a great example. So when we think about artificial intelligence and machine learning inside of Klaviyo, we have these two agents that we launched back in September. One is our marketing agent, which will help you do marketing. But critically for us, we think our marketing agent is so good that for some use cases it can just do the entire process of marketing for you. I'll talk about that in a second.

    And our second is our customer agent, which uses all of the context you have about each of your individual customers. So when you or I land on the website and it knows it's you, that customer agent is ready and primed to answer questions based on your own personal context. In retail, it might be things like product recommendations: hey, you've seen my past purchase history, what would you recommend for me? Or it might be helping with things operationally.

    Hey, I ordered something, but I actually got the address wrong. It's a gift for somebody, so I need to change it from my address to somebody else's. And our agent can handle those types of queries. So for our marketing agent, one thing that we really lean on, if you're building agents, is this idea of mapping out the actual process of whatever task you're doing. One thing I think we have a unique advantage on is we've been studying how people think about marketing.

    It's this great mix: it's brand, it's feel, it's creativity and the kinds of campaigns you run, but it's also just pure math and performance. How do I know whether somebody likes this? I can run qualitative surveys, but also, how do I actually see what people are voting for with their feet or their clicks? So what our marketing agent does is it breaks down the process of marketing into a couple of steps. First, it'll go through all the information stored in Klaviyo, and even some external sources, and

    Figure out trends and patterns, we call them data insights, where there's something interesting that might be happening. So I'll give you a couple of examples. It could be things like, hey, we noticed that you just added a bunch of new products, say, into inventory. We should generate a campaign around that. OK, that's probably pretty obvious. People would know that anyways. We're also good at finding things like, hey, here's a bunch of products that aren't selling through. Maybe you should try to clear that inventory because it's taking up space in your warehouse or you're paying for it.

    Andrew BIalecki (33:14.494) And hey, the seasons are shifting, and it's time to move that inventory along. Or we can find things like, hey, here's customers that they loved the starter version of your product. For example, take our educational company. Hey, you sold them a workbook about math that's maybe geared towards first or second graders. But hey, we actually found an opportunity where those kids have probably grown up. It might be time to offer them some of your content that's geared towards older kids.

    So it'll go through and find all of those insights. We then convert those into what we call marketing briefs, this idea of, how would you actually build a campaign around this? It almost writes it down, a little bit like a product requirements doc; it writes down what it thinks it should do. And then we have this really great set of algorithms and agents that will do the actual creative generation. We're used to AI these days being able to generate copy and imagery.

    But we do way more than that. We help define larger structure: what does it mean to design, say, a whole email, or an email plus a text message plus a WhatsApp message? And in WhatsApp you can have all these cool carousels, so it can do some of these higher order objects. It'll also go do incremental research where it needs to create content, so it's smart about what it's offering. And then finally, we actually use AI to be a little adversarial and say, okay, now that we've created all this content,

    hey, rank it, and also make sure it passes all these brand guidelines we have. Some of these are basic checks around content safety and toxicity, but also just correctness. And we can actually pull together a brand guide for every business based on things like their website and internal documentation they have. You can imagine all of that happening, and literally what we present to our users is: okay, here are all the ideas we came up with.

    You now get the chance to go approve them. Which of these are ready to go? And literally in a couple of clicks, we have customers saying, wow, you created a full launch campaign for me. Hey, I might make a couple of tweaks, but then yes, go schedule that and let's see what happens. And then, using reinforcement learning, we watch which of these campaigns, which of these marketing experiences, convert best, and we can feed that back into our algorithms. So I think this is

    Andrew Bialecki (35:37.571) really the future, and it solves one of the number one pain points I hear from our customers: hey, I feel like Klaviyo is really great software, but I just don't know what to do with it. I have some ideas, but I wish I had more ideas. And now, with LLMs, we're able to basically automate the entire process. Then our customers can decide; I think today they want a high degree of agency, a lot of checking of the content we're creating.

    But what we're finding is, as the quality gets better and better and as they build trust, we're starting to see them almost auto-approve. We have some campaigns where users will look for maybe 10 or 15 seconds, and then they'll send out to hundreds of thousands of people. So there's a lot of trust being built up quickly. And frankly, I think this is not just the future of our category in this consumer CRM space, but of all software: a lot more software is just going to be able to run itself, and then you can choose how much you want to be in the flow.
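    The pipeline described above (data insights, marketing briefs, creative generation, adversarial review, human approval) can be sketched in a few lines of Python. This is purely illustrative: none of the names, data shapes, or stand-in rules below come from Klaviyo's actual system; they are assumptions to make the stages concrete.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Insight:
        summary: str   # e.g. "clear slow-moving stock: swimsuit"
        source: str    # where the pattern was found

    @dataclass
    class Campaign:
        brief: str
        content: str
        score: float = 0.0

    def find_insights(store_data: dict) -> list:
        """Step 1: scan stored data for patterns worth a campaign (stand-in rule)."""
        insights = []
        for product, units_left in store_data.get("inventory", {}).items():
            if units_left > 100:  # crude stand-in for "not selling through"
                insights.append(Insight(f"clear slow-moving stock: {product}", "inventory"))
        return insights

    def write_brief(insight: Insight) -> str:
        """Step 2: turn an insight into a short, PRD-like campaign brief."""
        return f"Goal: act on '{insight.summary}'. Channel: email. Tone: on-brand."

    def generate_content(brief: str) -> str:
        """Step 3: placeholder for the creative-generation step (copy, imagery)."""
        return f"[draft campaign for brief: {brief}]"

    def adversarial_review(campaign: Campaign, banned_words: set) -> Campaign:
        """Step 4: score the draft against brand/safety checks before a human sees it."""
        violations = [w for w in banned_words if w in campaign.content.lower()]
        campaign.score = 0.0 if violations else 1.0
        return campaign

    def run_pipeline(store_data: dict, banned_words: set) -> list:
        """Insights -> briefs -> drafts -> review; only passing drafts reach approval."""
        approved_queue = []
        for insight in find_insights(store_data):
            brief = write_brief(insight)
            draft = Campaign(brief=brief, content=generate_content(brief))
            if adversarial_review(draft, banned_words).score > 0:
                approved_queue.append(draft)
        return approved_queue
    ```

    In a real system each stand-in function would be an LLM call, and the reinforcement-learning step mentioned above would feed conversion results back into how insights are ranked.
    
    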

    Nataraj (36:34.098) Are there any particular metrics that are interesting, that tell you whether all these initiatives are successful? In the example you gave, it could be just how much time it took to create a new campaign. Do you track specific measurements of how successful these agents or other AI products you're launching are for customers? How do you think about that?

    Andrew Bialecki (37:01.272) Yeah, I'll give you two examples. First is on quality. One thing we're just obsessive about, and we know this, is that when we first built our marketing agent, we said, it's okay, it's like a good marketing intern. And that's no disrespect to all the very highly motivated folks breaking into marketing, but we just felt like, okay, it's not that knowledgeable about best practices yet; we're still building that in. What we feel is, hey, it's going to go from that intern level to

    maybe a senior marketer, to a really advanced marketer. And eventually, if you think about some of the media types we're working with, text messaging, email, mobile, social, I mean all of these are only like 20 years old, we think it's soon going to be able to beat somebody who's spent the last two decades doing nothing but this. So the way we measure quality there: one is we evaluate, hey, how much do the folks we're effectively pitching our ideas to, our customers, like them? How much are they adopting them?

    But then, how are they performing? And what's really cool is we're seeing, in a bunch of cases, the content we generate outperforming in terms of engagement. You can think about click rate, but also literally downstream engagement in terms of purchase rate or conversion to customer. So that quality metric is something we really obsess about. And we want to show it to folks so they can feel confident they're not losing anything; they're actually gaining by sharing their ideas with us or adopting some of the ideas our AI is producing.

    And then the other part is straight up: how much are people starting to use our marketing agent and our customer agent as part of their everyday workflow? The analogy we make is when mobile phones, starting with the iPhone, really got real web browsers, when Safari for the iPhone came out, there were really interesting charts that said, okay, let's look at internet usage and how much of it is coming from traditional desktop versus mobile.

    And if you look at those graphs, something really interesting is that it wasn't that desktop usage went away. I mean, it's probably declined a little bit since, but it was really that mobile usage exploded. Overall internet usage went up. So the other thing we measure is, hey, when you look at the number of ideas and marketing campaigns people are running, or in the case of our customer agent, how many conversations a business is having with its customers: we track both of those, and how much of those are, say,

    Andrew Bialecki (39:27.596) human generated, human powered, or human initiated, versus how many are AI initiated. And what we're finding is actually the exact same thing: it turns out that when you introduce AI, it increases the overall amount of marketing, or the number of customer service conversations businesses are having with consumers. And it has this interesting property because you can do this hybrid model, where you use AI to either make your idea better,

    or you use AI as the starting point, and then the actual human can go edit from there, right? We're seeing that both with marketing and with customer service. In customer service, we see these great things where AI can do the first bit, collecting all the information that's relevant to solving somebody's problem, and rather than that back and forth of, okay, can you remind me of your email address, and can you remind me what you're talking about, all of that just gets automated. We're finding that's driving up the quantity

    of both marketing experiences and customer service experiences. And that's another thing we're really bullish on: over time, it's not that AI replaces or displaces what human thinking is doing. It's literally just growing the overall pie, and with it the amount of engagement that brands get with their end consumers.
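    The "growing the pie" measurement described above can be sketched as a simple split of interaction events by initiator. The event schema here is hypothetical, made up only to illustrate tracking the AI-initiated share alongside total volume:

    ```python
    from collections import Counter

    def engagement_split(events: list) -> dict:
        """Count events by initiator and report the AI-initiated share.

        Each event is a dict like {"initiator": "human"} or {"initiator": "ai"};
        the shape is an assumption for illustration.
        """
        counts = Counter(e["initiator"] for e in events)
        total = sum(counts.values())
        return {
            "total": total,
            "human": counts.get("human", 0),
            "ai": counts.get("ai", 0),
            "ai_share": counts.get("ai", 0) / total if total else 0.0,
        }

    # Like the desktop-vs-mobile charts: compare total volume before and after
    # AI was introduced, not just the AI share on its own.
    before = [{"initiator": "human"} for _ in range(40)]
    after = [{"initiator": "human"} for _ in range(38)] + [{"initiator": "ai"} for _ in range(22)]

    print(engagement_split(before)["total"])  # 40
    print(engagement_split(after)["total"])   # 60: the overall pie grew
    ```

    The point of tracking both numbers is that a rising AI share with flat totals would mean substitution, while rising totals mean net new engagement.
    
    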

    Nataraj (40:46.066) I think there's a term for this: it's called the Jevons paradox, where when you make things cheaper, the overall usage goes up. And I think you're basically making it less costly in terms of how much time I spend to create a

    campaign, so I can quickly create campaigns and just verify, okay, this campaign looks like what I want to experiment with, giving more time back to the customer, and that's why they're using it more. The other question I had about AI is: did anything change in terms of how you organize the company, what types of roles you're hiring for, or how you're managing people, because of AI? Because pretty much every startup founder says, we're either hiring fewer people, or

    we're doing things differently, or we're hiring a different set of people: we want more generalists, or we want specialists. How has Klaviyo changed? Or has it not changed, basically, with AI?

    Andrew Bialecki (41:44.271) Yeah, so it's really interesting. It's funny, because we bootstrapped our business, like we talked about when we started, which meant we didn't raise any capital until we were already profitable. And again, not every business can do that, but I'd say, oh man, you should really try to run lean. Why do that? It's not just a financial construct. We found embedded in our DNA a couple of really great attributes. One, everybody had to be a bit of a generalist. You had to be broad.

    There's a saying that when you're hiring people, you look for people with this kind of T-shaped skill set: they're really good at something, and kind of okay at others. We tell people that we like people who have a very wide T, almost like a fat T shape. They're really good at a bunch of things, right? They're not a superstar at one and just okay at others; they're actually pretty good or great at a couple of skills, and that intersection makes you even more dangerous. So when it comes to, say, engineering,

    when I talk to engineers, I say, look, it's important to me that you care about design, that you think about the products you're building, and by the way, that's a way more interesting way to live life. If you just want to build really scalable systems, that's sort of not interesting all the way, right? So all of our engineering team is into talking to customers and nerding out on that, and I actually think you build better products that way. So generalists matter. And two, the constraint we had from bootstrapping was that every job was expensive. I remember in the early days,

    my workflow was: I'd wake up in the morning, I'd spend anywhere from 30 minutes to a few hours in our help desk, just our inbox, answering customer questions. And I wasn't allowed to start building, what I thought of as making real forward progress, until we had finished helping all of our customers. And that really forced you to focus on, okay, what are the real pain points that

    users are having, and how can I automate? In a lot of cases, people were just looking for better documentation, so it got us to write better documentation. It just forced this idea: how do you automate yourself? How can you give yourself more scale, more leverage, so you don't spend your time on tasks that are very repetitive, and get to the higher order bits? I think AI is just another incarnation of this. So those two attributes have stayed very true to how we work and felt very natural with AI. And we do a little bit of this too: we oftentimes will force constraints

    Andrew Bialecki (44:04.802) on teams and projects. We'll say, look, we're going to ask this small team to do something really big, and it feels hard. I think a lot of other companies might say, well, that's impossible. But we just have this track record of seeing it work, so we're more willing to do that. The one thing with AI, I think, is we had to force ourselves to say, okay, we've been pretty successful as this software-as-a-service, internet technology company. But

    AI is just going to change a lot of workflows. I'll take myself: I just got so used to it. I mean, I love building, I love programming. But if I like writing the code so much that I'm not willing to try my hand at letting an AI do it, that's probably a mistake. And hey, we really need to lean into that. So one of the things we say is, look, everybody needs to force using LLMs into their life. And

    this is actually kind of annoying because, as we've seen, LLMs are not great at everything. They make mistakes. They used to hallucinate a lot; they still do in some cases. I remember in the early days of the internet, when I was building websites and web apps, there were a lot of things that were hard to do: building a highly responsive website, whether with Ajax or modern CSS. It was hard, but we just had to figure out some of the frameworks and build some of the tooling.

    I think AI is the same way. So we've sort of forced everybody to say, hey, look, if you're just not naturally the kind of person that wants to go after building LLMs into your life, then that's probably not a good fit. The analogy we use is: imagine it's the mid-90s and you're Amazon, and you're hiring people to join Amazon. Imagine they said, yeah, I don't really use the internet, I don't really know what a browser is. You'd say, probably the wrong company, right? And

    even if somebody showed up to an Amazon interview and said, well, yeah, I use Amazon because I heard about it on the news, you'd probably say, okay, but a lot of people use Amazon. So the analogy for us is, well, a lot of people use ChatGPT; that's actually not that novel. What we want, in the 90s example, would be somebody who built their own website, even maybe a personal website, because that was hard to do back then. Now it's, hey, are you building LLM applications? And by the way, you don't even need to be an engineer to do that anymore. You can just go sign up for

    Andrew Bialecki (46:25.038) one of the many, many tools out there that let you basically, WYSIWYG, vibe code something. If you're the kind of person that does that naturally, awesome, can't wait to work with you; we're going to have a great time. If you're not the kind of person that does that, then maybe you're not really ready to give this a go. So I think that's been important: having a density of this kind of curiosity and thinking AI first.

    Nataraj (46:50.172) I think the point you made about automating yourself really hit the nail on the head for me, because that's the biggest obsession I have right now:

    How do I automate the things I do repeatedly? And AI has sort of opened that up; you can 10x the amount of automation you can do as an individual, and that's really amazing. It's literally that quote: give me a lever and a place to stand, and I can move the earth. That's the scenario right now, because you can do so many things

    as just one person, because of so many tools. And if you know a little bit of coding, you become so much more powerful than you ever were. Regarding going generalist this way, one of the things I found out is that Klaviyo really encourages people to have their own side gigs, and hires people who are influencers or have their own small businesses. Tell me about the logic of that.

    Andrew Bialecki (47:51.407) Yeah, well, this founder mentality, or entrepreneur mentality: first of all, I don't think you have to literally have gone and started your own business to earn that credibility, to earn that title. It's really a mindset. If you do strike out on your own, and by the way, if you're going to start something, my big recommendation is do it nights and weekends, because then it's hard, right? It's extra time, you're

    tired, you maybe have other things you want to do in your life. And if you really want it, and you put in the hours, then you know that you're ready to do it full time.

    Nataraj (48:28.09) Michael Lewis, the author of all the famous finance books, said the same thing about becoming an author: you should write on weekends and see if it actually works.

    Andrew Bialecki (48:40.974) Totally. I mean, before we started Klaviyo, one of the things I did to kind of steel myself was, I said, okay, I'm going to try building these little side projects. And look, it's no sweat, right? I'm still getting paid at my current job. But if I can't find the 10 or 20 hours a week, and I'm not that excited about it, that's probably not a good sign, because it's only going to be harder when it's the only thing. So we do look for folks that have these entrepreneurial tendencies.

    And anyway, it's a great signal if you've actually done it. Oftentimes people will show me a side project: hey, here's a GitHub repo, something I'm working on; hey, here's this thing I'm doing in my community that I didn't have to take on. I think entrepreneurial density is really, really important. Founders tend to take a high degree of ownership. There is nobody else to help them, because it's their thing, right? If they don't solve it, nobody's going to come solve it for them. And so we've extended that. So one, we try to make

    Klaviyo a great place for entrepreneurs, either aspiring or folks that have been entrepreneurs in the past. I was talking to the head of our customer agent products, the guy that leads all of product development for that, who is himself a multiple time successful entrepreneur. And he said, he's like, yeah, one of his favorite kind of recruiting pitches is like, hey, Klaviyo is a great place for aspiring or former entrepreneurs that just want to do the same thing.

    but they just want to do it inside of another company, right? And so we've tried to architect Klaviyo around this. I was a physics and astronomy major, so I've always liked this analogy: the best companies, as they scale, feel like a constellation of smaller companies. That means a lot of autonomy for teams, which is a big responsibility, but it also means you give them room. Maybe we're all bound by the same gravitational field, so we're all generally going in the same direction, but some stars may be moving in slightly different

    orbits, and that's okay. So we give teams a lot of autonomy, and we say, hey, let's agree on what the goal is and where we're aiming, but after that, you be the entrepreneur. And I think this surprises some folks, but over the course of Klaviyo, I've probably had a dozen, two dozen folks come to my desk and say, hey, Andrew, I have some bad news: I'm quitting, because I really want to go try building this product, starting this company.

    Andrew Bialecki (51:06.19) And a lot of those are software technology companies, but a couple years ago we had somebody that was early at Klaviyo and he said, hey, I really want to try my hand at running a cranberry bog. I said, wow, that's crazy. You want to run a farm, and, you know, specifically for cranberries? That's awesome. He's like, yes, I just have this passion. I really want to try it. Right. And of course he was super entrepreneurial. Right.

    And every time, you know, some folks are like, man, I can't believe it. You know, why would you do that? But I have a hard time feeling that way, because I literally worked at other companies and then eventually left those to go start something. Right. And I think frankly, in the world, we just need more entrepreneurs. You know, we talk about creators, people that are just creating products, services, content. We literally need more people that are willing to put in that hard work. We need more people that want to take those shots.

    And so we think about Klaviyo as a culture, as like, okay, we want to help develop those future entrepreneurs, founders, leaders that think very independently. And then obviously our product mission is, yeah, literally our mission statement is to empower more creators to own their destiny. We literally want to give more businesses the tools to be independently successful. So this idea of like entrepreneurs, it's just like everywhere throughout our culture.

    Nataraj (52:23.826) I think that's a good note to end the conversation on. I think, you know, Klaviyo has been... anyone I asked about Klaviyo when I was researching this, everyone has good things to say, and I think your NPS scores must be really high. But, you know, Andrew, thanks for coming on the show and sharing all the interesting things about Klaviyo, and I'm looking forward to seeing where Klaviyo goes next.

    Andrew Bialecki (52:49.176) Cheers, thanks for having me.

  • Transcript: Eilon Reshef on Gong | The Startup Project

    Transcript: Eilon Reshef on Gong | The Startup Project

    This page contains the readable, near-verbatim transcript from this Startup Project episode.

    • Guest: Eilon Reshef
    • Company: Gong

    Full Transcript

    Nataraj (00:01.87) Hello everyone, welcome to Startup Project, where we deep dive into the minds of innovators and entrepreneurs that are shaping the future of tech. My guest today is Eilon Reshef, the co-founder and chief product officer of Gong.io. Before co-founding Gong in 2014, he was already a seasoned entrepreneur, having co-founded and led WebCollage, a successful SaaS platform, which was acquired in 2013.

    Gong leverages advanced AI to analyze customer interactions and sales conversations, and enables teams to boost their productivity, deliver revenue predictably, and drive efficient growth. Under Eilon's leadership, Gong has evolved from its initial conversational intelligence offering into a sophisticated AI revenue operating system. It unifies customer insights through the proprietary Gong revenue graph, derives actionable intelligence, and automates critical workflows.

    They have over 5,000 customers, including prominent names like DocuSign and PayPal. Gong's impact is undeniable. It helps companies achieve outcomes like 57% higher win rates and saves thousands of operational hours. Today, we'll explore the genesis of Gong, how they achieved product market fit, how AI changed their business and product, and a lot of other interesting things. With that, Eilon, welcome to the show.

    Eilon Reshef (01:26.306) Thanks for having me.

    Nataraj (01:28.155) So I think the first question I wanted to ask was, what was that initial problem that caught you and your co-founders' attention and led to Gong?

    Eilon Reshef (01:40.768) It's a good question. And actually Gong is one of those boring companies where not much has changed in terms of the overall vision from when we started until today, despite all of the revolution that happened within the technology world, obviously LLMs and whatnot. When we started, that was about 10 years ago, 2015, and at the time we were looking at the revenue space.

    And what we noticed was, it was treated like an art, right? Sales is an art. And we felt strongly that if you could introduce, at the time we didn't even call it AI, it was data science, data-driven workflows, whatever you want to call it, you could make things much more efficient. And again, today everybody's talking about productivity and AI, same idea. And then we realized that in order for quote unquote AI to make sense and help,

    first of all, you have to have quality data, which hasn't changed. AI is only as good as the data it gets. So where we started as a company was, let's capture the core information, which some people might think is the CRM, but in reality it's actually the conversations that people have with customers. So it could be sales, could be post-sales, could be sales engineers, SDRs, whatever. And the idea is, if AI has access to those conversations, then we can start making really, really good decisions, recommendations, and actually carry out actions for you. So that's exactly what we started.

    At the time, we kind of hooked up to WebEx. That was our first video conference system. Zoom was barely starting. And then later we did email and text messages and other data sources, and of course, connecting to CRMs. But the core genesis was let's bring information, put it in some sort of, and we call it a revenue graph, some sort of a graph system, and then apply logic to it to help people be more productive and leaders get more intelligence.

    Nataraj (03:21.805) I think in some sense a lot of these ideas are now a little bit more common.

    But I think back then it was not that common. Getting all the call data and transcribing it was not really native to any of these tools. If you were using Zoom or anything, transcription was not native. Recording calls was native, but we didn't transcribe calls. Even if you used Teams or Google's stack, transcription and getting that data and analyzing that data was not really common. Now there's a lot more competition in that space. What was the tech stack when you were trying to solve this problem?

    Eilon Reshef (04:01.282) It's a very good point. Even the idea of recording at the time... people did not want to invest in Gong back in 2015. I mean, obviously we did raise money, but the majority of VC firms did not want to invest in Gong because their hypothesis was that people would not want to get recorded. Which was of course wrong. Now we're entering 2026, and obviously it almost feels ridiculous at this stage, but it wasn't obvious at the time. And what we had to do was actually invent, or develop, the core recording technology, because most providers

    could not record calls, and even if they did, it was some clumsy process where maybe a user had to manually click record, which of course wouldn't work. So we started with a technology stack that's basically developing a bot that joins the calls. Now it's so common that sometimes you're in a meeting with like four people and there are AI bots or whatnot. But at the time... we still have a patent on a bot joining calls. So we developed this back in 2015, a bot that joins the calls, the automation around starting the recording, capturing the screen, whatnot, and of course bringing it to the back end.

    Of course, it was a robust, cloud-first kind of system to begin with, a modern 2015 stack. And then in the beginning, we used a third-party transcription engine, which really, really sucked at the time. It wasn't because of the system, it was just because the technology in 2015 was at something like a 30% word error rate, which means three out of 10 words are actually wrong, so you can't even read the transcript. In the first versions of Gong, we essentially hid the transcript behind like four clicks so people wouldn't find it. You could still search it and do statistics and high-level topic detection,

    but reading it was really, really hard. And then we very quickly moved to a homegrown system, which was better. And of course nowadays we still use a homegrown system, but it's much, much easier to just, I don't know, take Whisper or any of the major providers' transcription off the shelf, use it, and get pretty good results.
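    To make the "30% word error rate" figure concrete: WER is conventionally computed as the word-level edit distance between a reference transcript and the ASR output, divided by the number of reference words. A minimal sketch, not Gong's actual pipeline; the sample sentences are invented:

    ```python
    def wer(reference: str, hypothesis: str) -> float:
        """Word error rate: word-level edit distance / reference word count."""
        ref = reference.split()
        hyp = hypothesis.split()
        # Levenshtein distance over words via dynamic programming.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[len(ref)][len(hyp)] / len(ref)

    # Three of ten reference words wrong -> 30% WER, the 2015-era ballpark.
    ref = "we would like to schedule a follow up call tomorrow"
    hyp = "we wood like to schedule a fall up cold tomorrow"
    print(f"WER: {wer(ref, hyp):.0%}")  # -> WER: 30%
    ```

    At a 30% WER, roughly one word in three is substituted, dropped, or inserted, which is why keyword search over a transcript still works while reading it end to end does not.
    
    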

    Nataraj (05:50.319) Who were the early adopters? You mentioned transcription was not that great, but which type of customers really leaned in on this?

    Eilon Reshef (06:01.778) Yeah, so I'm a big believer in crossing the chasm. Me and my co-founder are both big believers in the crossing-the-chasm model, which means you want to start with a very small niche. And so when we started, we basically said, who's adopting technology fast? Technology companies, good. Which technology companies are more likely to use video conferencing? Guess what, software companies, because they don't want to travel to

    a destination, because it's easier to sell software online than physical goods. And then, so we pretty much said, hey, enterprise we couldn't sell to, because we didn't have a way to sell to enterprise. We didn't have security, we didn't have scalability. So we said, let's start with software-as-a-service companies in North America, selling in English over video conferencing, midsize companies selling midsize ticket items. Because if you're just selling, I don't know, $5 items, the conversation is probably not as

    important to you. Maybe the whole sales cycle is more like a B2C type of thing. And then if you're selling, I don't know, if you're buying and selling airplanes, my guess is you're probably going to have much more relationship selling and in-person selling, which at the time we couldn't support. So the idea was, focus there.

    And then afterward we said, hey, it's not only video conferencing, it's phone conferencing. And then you bring in email, and you can start doing more relationship understanding. And then, now we have in-person recording and all sorts of other things. But the idea at the time was focus, focus, focus. There's probably 10,000 companies in the world that fit this category.

    But look, to get from seed round to A round or B round, whatever, you only need like a dozen, two dozen customers. As long as you know there's enough companies in the future, we were very satisfied with just starting with a very narrow customer base.

    Nataraj (07:45.007) What kind of early insights was Gong providing at that time, like in 2015, '16, to whoever was adopting it? Because transcription itself was not accurate, what did users actually find valuable?

    Eilon Reshef (08:00.49) Yeah, so if you think about what can you do with this transcription that is not accurate, it almost lends itself to what are the killer apps for this. And one killer app was Search.

    So even if some words are missing, you search for a competitor, you still find them. So we invented this idea called a tracker, which by the way is still available in every Gong, I don't know, wannabe, or even the big players. Actually even Microsoft has some sort of conversation intelligence product that even uses the word tracker inside it. And the idea was, the tracker was like a saved search or a saved keyword list.

    And then what you could do as an organization is say, hey, I want to focus on those conversations that actually bring up competitor X or challenge Y or technology Z or whatever. And then you can program the system for this to drive many, many workflows. One is just: I'm a product manager, I want to know what's happening in the field. Or, a very common use case: I'm a sales manager, I want to coach people, but I don't want to coach them on every call, I want only the calls that talk about pricing. Or, I want to help my team

    position against a competitor, and I want to find only conversations that are about certain competitors. So search was a phenomenal use case; narrowing down, filtering this

    very big graph, is sort of super important. The other one is more of a collaboration use case, which is: I'm a salesperson, I got asked a question, I need to bring more people into the loop, and I can start tagging people, having a very convenient interface where you can bring more people to collaborate. There's a chat window, you can, you know, mark minute 27, and it's making it very easy for people to kind of

    Eilon Reshef (09:44.254) sell as a team, and that was a very big use case. So these are maybe the two main ones, one more AI, one less AI maybe.
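    The tracker idea described above, a saved keyword list matched against call transcripts to surface relevant conversations, can be sketched in a few lines. This is a hypothetical illustration, not Gong's implementation; the `Tracker` class, the keyword list, and the sample calls are all invented:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Tracker:
        """A saved search: a name plus a keyword list to watch for."""
        name: str
        keywords: list

    def match_calls(tracker: Tracker, calls: list) -> list:
        """Return the ids of calls whose transcript mentions any tracker keyword."""
        hits = []
        for call in calls:
            text = call["transcript"].lower()
            # Simple case-insensitive substring match per keyword.
            if any(kw.lower() in text for kw in tracker.keywords):
                hits.append(call["id"])
        return hits

    pricing = Tracker(name="Pricing", keywords=["pricing", "discount", "quote"])
    calls = [
        {"id": 1, "transcript": "Can we talk about pricing tiers?"},
        {"id": 2, "transcript": "Let's review the integration plan."},
        {"id": 3, "transcript": "They asked for a 10% discount."},
    ]
    print(match_calls(pricing, calls))  # -> [1, 3]
    ```

    Note that keyword matching like this tolerates an imperfect transcript: a 30% word error rate still leaves most mentions of "pricing" or a competitor name intact, which is why search worked as a killer app before transcripts were readable.
    
    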

    Nataraj (09:51.375) And when was that sort of tipping point where you thought, okay, we've achieved product market fit? Was it very early on, or a couple of years into the development of the product?

    Eilon Reshef (10:03.106) Yeah, that's one of our funnier stories in Gong. So this is going to be maybe two different answers. One is when we should have realized that we had product market fit, and then maybe, you know, us being a little bit slow about when we actually realized it, right?

    So, the point when we should have realized it: we raised money in October of 2015. I brought on some of the team members who worked with me in my previous life. So by January, three months in, we had a running prototype that could record many calls and transcribe them and do all of the things we just discussed. So we started giving it to customers, again, alpha, beta customers, whatever, just SaaS companies kind of our size. I mean, not like five people, but whatever, 500 people, maybe 1,000 people. And then we gave it to 12 design partners and we

    asked them, could you please give us feedback? And they started complaining and we fixed things. Of course, nothing worked in the beginning. And then at some point they stopped complaining. And when they stopped complaining, we were like, why have you stopped complaining? And we watched their behavior, and they're like, we're using it. It's fine. Why would we complain? And then me and my co-founder basically said, if they are not complaining and are using it, maybe we should start asking for money, right? So we had 12 design partners. We called them and said the beta is over. There wasn't any beta, right? We had just given them the software to try out.

    And then 11 out of the 12 paid, and that was like May 2016, which is six, seven months into the company, right? And 11 out of the 12 actually went ahead and paid. And we weren't cheap at the time. We were charging initially maybe $750 per individual per year. Like, the price now is higher, but, you know, for a young startup with a dozen employees, that's not small, right? And 11 out of the 12 paid, and the 12th actually paid a year later. I mean, bought a year later; their CRO changed jobs and they couldn't buy.

    And I think that's probably the point where you should stop and be like, shit, this thing is actually working. Because 11 out of 12 is unreal, in a way. But we were like, okay, that makes sense. Let's maybe, I don't know, figure out why the 12th isn't buying, and maybe it's time to hire a sales rep. But that was definitely the moment where, had we been more attentive to the process, we would have said, hey, this is a product market fit moment.

    Nataraj (12:08.526) The first time I actually encountered Gong was in 2018-19. I was doing a pitch deck for an Indian company trying to do software for customer support, trying to do similar things: collecting all the transcription and trying to improve the efficiency at a call center level, primarily targeting, you know, food delivery companies.

    And then I was evaluating who the bigger players internationally were who were already doing some version of this. And that was my first encounter with Gong. Since then, I've kept track of what Gong is doing. Can you talk a little bit about what types of different customers are using Gong? Because I can easily imagine all the enterprise sales organizations,

    but what are the types of customers, and how would you generally categorize them?

    Eilon Reshef (13:07.115) I would say these days, obviously, Gong has evolved from where it began.

    We started with kind of conversation intelligence, and then revenue intelligence, where we added more capabilities. I'll touch on this a little bit in a second. And now we call it an AI OS for revenue teams. And obviously, as the capabilities strengthened, the types of customers that you can serve also grew. The types of capabilities we added are pipeline management, forecasting, sales engagement, which is prospecting, coaching, enablement, these sorts of things. So as you expand those, suddenly more and more companies have a need. They might not need the whole shebang, they might not be doing

    forecasting using software, but they might still be prospecting over software or coaching using software. So nowadays, I'd say we serve companies anywhere from a small company of like 50 people all the way up to the world's largest organizations. Five out of the top Fortune 10 companies are Gong customers. Cisco is one of our kind of, you know,

    public references. They are deploying it to 20,000 sellers, which I think is the largest revenue AI deployment in the world, I don't know, but obviously large scale as well. So I would say the industries we serve are much more diverse now. There's financial services companies, there's healthcare companies,

    of course technology companies, even telecommunication companies, AT&T and such. And nowadays we also cover... people think of Gong sometimes as selling, but we really try to help everybody along the customer journey: anybody who's creating pipeline, prospecting, in tech it's called SDRs, the selling team, pre-sales, solution architects, whatnot, implementation, post-sales, people responsible for retention and expansion. And then sometimes it's

    Eilon Reshef (14:56.003) at the more strategic level, even product managers and sort of non-revenue-related roles. So if you look at these types of personas, almost every company in the world has them. Still, a lot of our business is in North America, just as this is where we started. And a lot of our business, maybe 50%, is still tech or tech-related, just because this is where we started. But very diverse nowadays.

    Nataraj (15:17.966) What do you think about this idea that anyone can build with vibe coding or stuff like that? We're sort of seeing the commoditization of writing code in this era. Getting an MVP version of Gong might be

    easier than it would have been eight or nine years back when you started. And we are seeing transcription software companies everywhere; there's so many of them. But my general question is, how do you look at the commoditization of software in that scenario? What is the edge? What is the moat? How do you approach just building companies? You are an established player, but someone who's starting now, how would they think?

    What are your general thoughts on this?

    Eilon Reshef (16:08.097) Yeah, maybe I'll answer that a little bit at a zoomed-out level, and maybe even a little bit provocatively, right? I think, as a universe, we're a little bit in a sort of post-truth world. And what post-truth means also is you get a lot of incentive for just coming up with very bold, not necessarily true claims, because they get you publicity and sometimes recognition. And the press loves this because it gets them, whatever you call them, clicks, right?

    I think some of the discussion around like, we're going to vibe code everything kind of comes from.

    There's the idea of, what if I just told you AI is going to make your engineers 30% more effective? That's boring, right? Who cares about 30% more effective, right? So now people are much more excited about talking about, maybe engineers are going to go away, maybe the PM can code, maybe there's no need for SaaS. I think these are wildly exaggerated. I think AI is phenomenal for engineering productivity. I don't think anybody is saving even 50% these days, but I think 50% is within reach. 10% people are getting today,

    I think maybe even more. And I think it's really good for software companies. It's good for the universe, because you can get more value by getting more software. Yes, you can definitely use Lovable or any other tool to prototype very, very quickly. But once you want to get real software, with all of the infrastructure, security, functionality, iterations, enterprise-quality software,

    yes, it could be cheaper, but I don't see the majority of functions going away. And then you also need salespeople to sell it, and you need marketing people to market it. I'm 100% sure you can do it more effectively, but I don't think that fundamentally changes. I also don't think that organizations should be bothering with,

    Eilon Reshef (17:52.384) I don't know, coding their own software. They'll get into the same cycle of, I gotta maintain it, I gotta change it, it's not working, who's gonna support it? The same challenge that people have had for maybe 40 years, I don't know, 30 years for sure. Which, to me, doesn't make any sense.

    Nataraj (18:07.468) Yeah, mean, think the people who try web coding is you can get into production, but…

    once you get into production and people start asking, want this, I want that, and I want to improve this, and then you don't know what you have written, then maintenance and improvement really becomes a challenge. But that's actually probably what he said is right in terms of the post-truth one. It's easy to make an exaggerated claim and discuss it and promote it. I think it has a morality inherent built in it.

    and which everyone is craving for.

    Eilon Reshef (18:49.215) Yeah, I'll even say more. Now I'm going to insult my VC friends a little bit here, but, I mean, we are on record, and they're not going to beat me up for it. I sometimes tell our marketing people, when we read some of those very, very bold claims, look at whether they're coming from VCs, for example.

    I mean, Gong does content marketing, right? We tell people about how sales should work, because we assume that if people read about how sales should work, eventually they're going to look and see who Gong is, and then maybe they come to us as customers or as prospects, right? Everybody's got their marketing. VCs' content marketing is used for deal flow, right? It's basically: come up with very, very bold claims about AI, because that gets you the very eager entrepreneurs. So I think you should also reverse engineer who is saying something: are they actually writing objectively, or is there a goal behind it?

    We should all be more particular about how we interpret what's written out there in the media, because that's the 2026 world, right? We can't change it, we can just be more aware of it.

    Nataraj (19:47.535) Talk to me about this journey from conversational intelligence to... now you call yourselves an AI revenue operating system. What is the difference? What are the features that make it different?

    Eilon Reshef (20:03.881) Yeah, so when we started, the whole notion was we were going to start with analyzing a single conversation. And there's lots of stuff to be said about a specific conversation. Did you set up next steps?

    Did you just talk yourself to death? Gong is probably the inventor of measuring the talk ratio for the rep, right? It should be obvious, but as people say, there's a reason why we have two ears and one mouth; you should listen more than talk. There's so much value you can bring by just focusing on the conversation. But very quickly down the road, we said we don't want to be the experts in how to handle a specific conversation. That might be good for consumer support over the phone. We want to help people really realize their full potential in terms of being

    revenue professionals and revenue organizations. So we started understanding what the key workflows are that people have within a revenue organization, and started rethinking them in what I would now call AI ways; before, it was just data and data science. So I'll give you an example. In every revenue organization on earth, there's some sort of cadence where somebody reviews their pipeline and decides what to do next. Sometimes it's the rep, sometimes it's a one-on-one meeting, sometimes it's a big forecast call, right? So we said, what does this process look like in an AI-centric world?
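    The talk ratio metric Eilon mentions above can be computed from diarized utterances, i.e. who spoke when. A hypothetical sketch; the utterance format and the sample call are invented for illustration, not Gong's actual data model:

    ```python
    def talk_ratio(utterances: list, speaker: str) -> float:
        """Fraction of total talk time attributed to one speaker.

        `utterances` is a list of (speaker, start_sec, end_sec) tuples,
        as might come out of a diarized transcript.
        """
        total = sum(end - start for _, start, end in utterances)
        spoken = sum(end - start for who, start, end in utterances if who == speaker)
        return spoken / total if total else 0.0

    # A toy 150-second call: the rep speaks for 90 of 150 seconds.
    call = [
        ("rep", 0, 60),
        ("customer", 60, 90),
        ("rep", 90, 120),
        ("customer", 120, 150),
    ]
    print(f"rep talk ratio: {talk_ratio(call, 'rep'):.0%}")  # -> rep talk ratio: 60%
    ```

    A metric this simple is robust to transcription errors, since it only depends on speaker segmentation and timestamps, not on the words themselves, which made it usable even in the high-WER era described earlier.
    
    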

    AI actually shows you which deals are more relevant than others. It shows you, have you had a conversation, what was said in the conversation, when an event came up. You can ask a question about that deal or account or whatnot. And then it summarizes for you and helps you do that job. So we gradually built more and more workflows. So revenue intelligence, and then the revenue AI OS, is basically pipeline management. And then we grew this to forecasting: what if AI can help you forecast where you are, which is super important.

    And then we took another key workflow, which is making people better. So we have an enablement product that says, hey, I'm going to actually help you coach the team. And of course, nowadays AI can actually coach for you to a certain degree, right? Score calls, understand the facets. And now we're launching an AI trainer, which is going to talk with you, simulate the customer, and coach you, right? So this is another angle. And then we looked at how you prospect. And the idea is, what if you can prospect to people but actually leverage your history with the accounts? AI is going to write the emails for you. It's going to do the

    Eilon Reshef (22:09.123) majority of the boring work for you, right? So as you look at all of those things together, if you're a revenue organization, you're like, yeah, I want a single platform that does all of these things for me. I don't want reps to go between systems, and different data layers that I need to reconcile, and then have several contracts and whatnot. So Gong nowadays is in the position where we're like, yeah, we're a single OS for everybody. You're still gonna need a CRM. You might still need some sort of data from somewhere, and other things. I don't think there's a single company

    in the world where you could just use that company's products and nothing else. But it is a sort of central place where revenue professionals and leaders can do the majority of their high-quality work nowadays. And of course, more is coming.

    Nataraj (22:54.126) You talked about AI training, or coaching. You also mentioned at the start of our conversation that sales is sort of an art. Does the coaching really make or give you the best salesperson?

    Do you see that pattern as an outcome, or are you saying that if someone is 50% effective, now we're making them 80%, but the last 20% is still hard? What is your take on that?

    Eilon Reshef (23:24.865) Yeah, so there's effectiveness and efficiency; both contribute to productivity. Efficiency, AI does this all the time: it helps you write an email, and all of these things just take time off. Effectiveness is more subjective in some ways, because how can you prove that somebody's better? I think traditionally Gong has been able to show that you can move the curve.

    You're not going to make the excellent people excellent-plus-plus. Yes, you can move them around 10% for sure, but they're already excellent. You're never going to make the C players A players. That's not going to happen. But you can make the C player a C-plus, the C-plus a B, and the B an A, kind of thing. So you see the whole curve moving, and you see it pretty consistently. They just learn new skills, things they weren't aware of, and become better. I don't think you can expect everybody to be an A player. By the way, you shouldn't always replicate the A players either. If you try to shoot a three-pointer from way out

    like Steph Curry, you're just not gonna make it, because, you know, he shoots from, I don't know, wherever, like weird places. You can do better than what you're doing right now, but the A, like the A-plus players, sometimes have such a unique pattern that you don't even want to replicate it.

    Nataraj (24:28.846) I think Gong is also at the intersection of this, I would assume, like some sort of AI agent phenomenon that's happening. I've seen a lot of demos and products out there where AI is taking a call for the customer and producing better, or sometimes the same quality, output.

    What is your general take on that, when AI is actually talking to customers and AI agents are basically taking over real responsibilities? Where are we on the curve of adoption of those kinds of form factors?

    Eilon Reshef (25:11.841) We're getting close. I don't think those AI agents are going to replace a B2B seller. Most of our customers are B2B sellers. There's an element of relationship. There's an element of knowledge. There's an element of just continuously understanding what's going on with your customer. Customer-facing AI can do

    sometimes a good job, especially in one-and-done B2C phone calls, you know, anywhere from a glorified IVR to just, hey, let me qualify you a little bit and sell you something. This is already happening. I think the equivalent in B2B might be inbound leads, especially ones that you don't have capacity to deal with. And the other thing, this is kind of where I think the market will be heading, is

    outsourcing specific tasks from your day. So let me give you an example, right? I'm a salesperson and I want to walk my customer through my proposal, right? As an example, right?

    You can send the quote-unquote agent to do this for you. If the agent's been trained enough to understand what your contract looks like, and trained enough to understand what the customer needs, it can probably do a reasonable job in walking them through the contract. Saves you like 30 minutes, and to be honest, the customer's probably gonna be happier, because now they can do it whenever they want, like 6 a.m. Pacific, whatever, some time when the rep is not even available. So I think it's gonna be carving out those tasks, making sure that you can train

    the agent to do a good job for this particular task, versus everything, and chop away pieces from the rep's work. The way we're looking at it, what's really nice is, when we're starting to provide those things, including the trainer, by the way, we train the system based on actual conversations. So we have a tool called AI Builder, which basically says, hey, let me look at historical patterns, identify what's working, and build me something. It could be a document for humans. It could be a script for agents.

    Eilon Reshef (27:10.945) and we like take based on that, which is really huge because if you started like program this from scratch, you're probably not going to get much. And then how are you going to conceive of all of the issues? It's going to take a month and a month of training. Whereas if you have all of this history, you can very quickly like iterate.

    Nataraj (27:24.908) I mean, it's also possible, like, if I'm purely dealing with customer support calls, you know, hey, I'm trying to do a refund, or a very specific small problem. I think it's

    much easier for someone like you, who has all the training data available, to create an agent targeting a specific problem. And every time a customer asks some type of question, you can route it towards that particular agent. I think that's a pretty plausible scenario that I can see playing out. Do you have any thoughts on how this AI agent space will evolve?

    Eilon Reshef (27:57.088) Big what, sorry?

    Eilon Reshef (28:03.137) How will it evolve in the future? I think it will gradually take away, start from…

    taking away very specific tasks, like you and I mentioned, right? Just start taking away those tasks that are very specific, you know, like you said, asking for a refund, or help me understand how my account works, or maybe helping with bookings, for all I know, and then gradually take more and more and more responsibilities. It might take away some of the…

    maybe kind of more junior jobs, which, you know, is usually where AI shines. And then outsource pieces of the more advanced jobs. So even if you're a B2C seller, B2B seller, sorry, and you're selling very high-end equipment, you might still use an AI seller as like a sidekick. Go for it. Hey, go do this for me, especially if it's, you know, certain tasks and chatbots and whatnot, for sure. But even like, go talk to the customer about this particular thing and then send them off. It's not a high,

    It's not a critical conversation and it's focused and it's like repeatable. So you were able to train some of the AI on it. You'd probably be able to do it with AI, right?

    Nataraj (29:18.68) Do you see… so one of the patterns that I've seen is companies that are really benefiting from,

    you know, post-ChatGPT, sort of the breakout of, you know, LLM usage: companies which have been in play targeting a specific sector, they already, you know, have great software. Now they can use AI to build on and compound that, which you sort of fit into. Is that what happened when you first saw LLMs becoming popular? I'm assuming you were much closer to the ecosystem. But what was your thought when, I think in 2022 or

    whenever ChatGPT came out, like, did you realize, okay, now we have a really advantageous position? What were your thoughts when that happened?

    Eilon Reshef (30:05.227) So first of all, we had been using GPT-3 even before ChatGPT; we had our own models to generate next steps. Can you believe it, we had the next steps model before, and all of the things that are nowadays just a simple prompt to, I don't know, even a very, very weak model, right? So there are basically two options, right? Option one, which is the one you mentioned, which is the one we believed in, is…

    AI needs to be embedded in workflows, which typically tend to be implemented using software. And this is kind of where real efficiency comes in. And then as an incumbent, you can come in and be like, hey, I own these coaching workflows. In 2016 there was a manual scorecard; in 2020-something you can have an automated scorecard. Or, you know, same idea with forecasting, right? You had to input a number, and AI is going to help you input a number or a field in the CRM or whatnot, right? That was our belief and hope.

    And our thought process was, this is where the world is heading. In all fairness, it was so new that we also had some percentage where we said, hey, weird stuff could happen. So yeah, I mean, you asked about vibe coding; maybe vibe coding is going to happen in a way that I didn't think it would. Or maybe it's just going to be easier for new players to come in, just because they can rethink software, in the sense of, maybe you don't need a UI or whatever the thing is.

    I think more and more it became clear that, yeah, you still need a UI, you still need data, you still need a database, you still need connectivity, integrations, and reporting and whatnot. And AI could be an important piece or a critical piece or just a piece, depending on the domain or the vertical. But you still need to have the whole shebang. And if you do need to have the whole shebang, Gong positioned itself, I think, like you mentioned for other companies, in a place where we had so much IP already in this revenue operating system that it's very, very hard for a new player to come in and be like, hey, I'm going to do pipeline

    management, I'm going to do forecasting. And you can't just prompt your way into this, because, you know, when you review your book of business, you don't want to see a chatbot; you want to see visually what's happening.

    Eilon Reshef (32:09.629) And the other thing that happened for Gong, which is nice, of course, is there's much more demand right now for AI. So every CEO in the world is like, hey, how can I do AI? And then they ask the CRO, how can I do AI? And then they look around and they see us. But it was definitely not obvious when ChatGPT came out. We were like, yeah, we're going to bet that this is the right direction. But we had no certainty that this is where the world would go.

    Nataraj (32:32.194) Can you talk a little bit about fundraising and how you guys approached?

    fundraising? You've been an entrepreneur before, and Amit has been a two-time entrepreneur. What was your approach to fundraising? You guys have raised significant amounts of money over the course of eight years. What was your general approach, and what was the fundraising journey like? It is generally easier for repeat entrepreneurs, at least, to raise the first round. What was the overall

    thesis or approach towards fundraising.

    Eilon Reshef (33:06.953) Yeah, when people ask me for advice, and generally I'm not a big fan of just asking people for advice, I usually tell them my biggest advice is: start with the second gig. It's always much, much, much easier. You learn from all the mistakes in the first gig. But I think we did a pretty traditional route. We basically said, hey, we need to start with a product. You know, the goal of a seed round is to get you to an A round. So we need to hire this size of a team and then get to a point where, as I mentioned before, we have a decent number of customers and conviction about product-market fit. So we did a spreadsheet and said, hey, we need

    $6 million. We went and raised $6 million, we hired however many people, and got to a point where I think we had like $2 million of revenue, and then we said, now we need an A round, and so on and so on. There's probably some exception where, I guess in, what was it, maybe 2021-ish, when valuations were at an all-time high, we basically said, we don't need money,

    but given how the market was valuing companies. We were a very high-growth company, and we still are, but at the time it was really booming. We basically said, since the market is valuing companies of our profile in a very good way, we'll raise a little bit more money, so we have money for rainy days. We didn't really have rainy days, but, you know, a couple of years later the market obviously fell, and it was good that we had as much money as we ever needed

    to make mistakes, right? It's good to have spare room for making mistakes. Now we're cash-flow positive, so obviously we're doing nothing with the money, but I think it's always good, if you can afford it and you run into an era where money is relatively cheap, to buy yourself some insurance.

    Nataraj (34:46.126) You've been the Chief Product Officer. What's your approach to general product leadership? Like, how do you look at your own job and responsibilities as a product leader?

    Eilon Reshef (34:57.889) Yeah, product is a very multifaceted role, or function, in the organization. I don't like the term CEO of the product, but it's basically saying, hey, you've got to be a little bit of everything.

    Obviously, you've got to own the roadmap and what's being developed. You've got to be close to engineering, close to marketing, close to customers and the business and whatnot, all of that stuff. At Gong, we tend to be maybe more focused on customers than anything else. Some product people tend to be close to engineering: let's build and manage the development lifecycle. Our engineers are…

    well, they tend to be pretty autonomous. I mean, of course they work with product, but where I ask my team to be at their A game, and also how we hire, is a product manager who can talk to customers and reverse-engineer the customer's problem. Because one of our company's mottos is raving fans. We try to have customers be truly supportive of the company, not just customers. And for this to happen, you want to have product managers who are very close to customers. We have tons of design partners at every point in time; every feature

    we launch, everything we do, is close to design partners. And if you are working closely with design partners, it's also like an insurance policy, right? You're not going to go wrong. Maybe you go slow, maybe you're going to do stuff that's like, meh, but you're not going to be developing stuff that nobody cares about. For us, every feature we launch usually has at least a couple of dozen design partners.

    So it gives you confidence that what you're developing actually adds value. I would say that what both I and the people I hire look for in the team is customer centricity. Sometimes people call it customer empathy; I don't necessarily like that particular term. And sometimes I err on the side of favoring this over things that are also important for product people, like coming up with a great solution or being a great project manager. Because I know that if you know the customer's needs and you really, really internalize that, even if you can't do a great

    Eilon Reshef (36:52.403) job on execution and whatnot, you're still going to get to where you need to be. Whereas if you don't know that, you're just going to execute flawlessly in the wrong direction.

    Nataraj (37:00.758) Yeah. Do you have any thoughts on trends, whether in AI or in enterprise in general, that you're looking at and think will affect either Gong or the industry at large?

    Eilon Reshef (37:17.417) and many, course, AI is like, you know, everybody talks about AI. think AI morphs every single job. and each person should be thinking about, you know, how should, what are the things I'm doing right now that I can do better with AI? And I think just like, again, we talked before about the hype in the media, just ignore the hype in the mini like, Hey, AI is going to replace you. is going to be like, yeah, there's not going to be no more, whatever product managers, engineers, whatnot. But do spend your time thinking where it can do your job better. So obviously for products like, I can create a prototype.

    I don't need anybody's help to do a prototype. That's an important thing. It can help me write documents; just please don't write generic documents and expect them to be the MRDs and PRDs of the world, right? It can help me as a buddy for my thought process. It's really hard as a product manager to be like, yeah, I need to learn a field and iterate on a solution, right? It can help me come up with naming and terminology for things. Just don't rely on it, but it's a good thought partner. So I think every piece of your work should, of course, be influenced by AI.

    I think the software products themselves, it's a big question. I don't think the world has yet figured out where does AI…

    play out within software applications. Every vendor has got its own take. You know, there's Microsoft Copilot and there's Google's version and, of course, other vendors. I think every piece of software is going to have some component of an AI assistant within it. But even beyond this, I think this is just V1; the question is where does AI really help, beyond being the superficial chatbot layer that everybody's going to have? And I don't think the world has cracked this just yet. I think this is probably one of the more fascinating problems in

    software right now, beyond the whole "we're gonna replace your people here."

    Nataraj (38:55.854) Are you guys doing anything internally to encourage employees to leverage AI effectively?

    Eilon Reshef (39:07.509) We're doing lots of things; I don't think anything that people haven't heard before. So on the engineering end, we actually have an evangelist team who owns the developer experience and evangelizes using tools like, of course, the Claude Codes of the world.

    I just heard that our bill for AI coding tools went up, I don't know, over 50% last month, which I guess should be good, but nobody can measure the impact. So for now we're just paying money and hoping it gives us effectiveness. On the product end, we're doing more of these, I guess, regular meetings where people present findings and learnings and what they've done with AI. But I think also,

    I think we try to practice what we preach, which is I would much rather bring in software that uses AI and have it solve the job for me. So we just brought in a small startup called Bagel, and they do request management using AI. Basically, if something comes up on a Gong call, they bring it in. If a ticket comes in, they bring it in, and many, many other things. And it basically uses LLMs to cluster it and assign it and give you some insight around what our customers are asking for now.

    We don't just develop what customers are asking for; maybe they're asking for crazy stuff. But it really takes away a lot of the manual labor we would need to put in to organize the data, and of course you can look at much more data. So this is using AI, but we aren't using quote-unquote AI in the sense of chatting your way through things. We just bought an AI-powered piece of software that solves a pain for us. And the more people come up with these things, I would much rather have them than train people to copy and paste

    things into ChatGPT. Or I'm hearing people asking people to build MCP servers and all of this stuff, like an engineering job, and I'm like, come on, just give me the software that's already been built. So we're definitely looking for things like creating help centers more efficiently, and many, many kinds of pieces of software delivery, GenAI, those kinds of things in the product.

    Nataraj (41:10.67) What do you think are the most overhyped things, either in AI or in general, right now?

    Eilon Reshef (41:20.161) I think the most overhyped thing is agents replacing people. Obviously, some agents will replace some people. I actually had a conversation with a customer today that replaced very junior inbound salespeople with AI. But people saying, hey, lawyers are going to get replaced, engineers are going to get replaced, I feel like, again, to our point before, it's a media thing. It's like, why would you even care? So it's like,

    let's say you have 10 engineers. Why would you want to replace five of them? Just make them twice as effective. So yes, if you've got nothing else to develop, maybe you could reduce your workforce, but it's not going to be an AI

    developer; it's going to be an AI sidekick who's going to make every one of your developers faster. So the thought process of replacing 100% of your job, to me, doesn't make any sense. Just think about what is the 50% you can replace, and 50% is quite a bit. I mean, usually it takes a decade to reduce 50% out of any job. And then if you insist on the last

    5%, you're going to spend a lot of time on it. You know, in the case of engineering it's, how do I ensure quality? In the case of product, how do I provide insights? In the case of salespeople, how do I create a relationship? Whereas if you focus on just augmenting people, you're going to get much more bang for the buck and leave people to do what they're actually good at, versus trying to take the whole thing and automate it, which again, to me, doesn't make any sense.

    Nataraj (42:41.358) In some sense, it's similar to the self-driving problem. You can get most of the way there easily and then spend a decade solving the rest, the 10%, 5%, and 2%.

    Eilon Reshef (42:51.937) Exactly, exactly, exactly. And I think it mirrors that exactly. Obviously there are already self-driving pilots in various cities in the United States, but it took them a couple of decades. And maybe for cars there's a good reason, because you either own a car or you don't own a car; there's a big binary thing around it. And then you can kill people. Yeah.

    Nataraj (43:12.064) There's a real danger, a real physical danger.

    Eilon Reshef (43:17.921) The idea is exactly the same; the dynamics might be different. Look, if I'm an organization, I've got whatever, a hundred people, and I can reduce 50% of the workload, I'm not looking to replace individuals. I'm actually trying to make everybody more effective. And yes, if my business doesn't justify it, if I've got nothing else to do with those people, sure, I'm going to replace some of the people. But many, many businesses on earth have much more to do than they actually

    get to do. And probably the other thing that people overlook, and this may be interesting for the listeners here: I think people overlook the fact that the bar also changes with technology, right? When I started engineering, when I was programming as a kid, it was the 1970s or 80s, and the expectation was a terminal that's going to ask you questions.

    Then software became this GUI thing, and expectations grew further. Now you're expecting to see animations and screens and help and all this wizardry that software does today. And in essence, writing software is not even cheaper now; you just write much better software. You could write software in four weeks in the past and you can still do it now, except it's a different animal, right? I think for many, many, many domains, the bar is just going to get higher. So in the domain we're in, revenue, or sales,

    I think the expectations from salespeople are going to be much higher. Now we're going to expect you to actually know the product you're selling, to come more prepared for meetings, to create more detailed presentations and documents. So yes, we've taken away 50% of your work, but we replace it with things that add value to the customer, versus things salespeople do today like filling in CRM fields, which is complete drudgery, right? It doesn't add value to anybody.

    Nataraj (44:55.288) Yeah. I mean, it also means that companies like yours, your generation of SaaS startups, can potentially get to profitability and higher margins much faster. Because if you're operating with a smaller headcount, you're not really replacing humans, but you don't need more humans

    to get to the same amount of revenue and the same amount of profit. I think that's one of the undertold stories: what will happen to companies which already have some sort of product-market fit, and you add AI on top, and you're still growing, you still have product-market fit, you still have new customers coming in? Would that mean you'll go IPO at a much better margin than you could have otherwise?

    Eilon Reshef (45:43.06) I agree. I think our generation can really leverage those efficiencies.

    And I think the expectation for companies like Gong is to reroute some of those efficiency savings into higher growth. If you can't, you still get the efficiencies, but actually what all of us would want is higher growth. The nice thing about the market today is there's lots of demand for AI software. So for us, if we can get more efficiencies, we're always going to try to route them towards meeting that demand, as long as there is very, very high demand for AI software. But definitely

    the factors change. If the expectation was, you're an engineer, you can do X, now you can do X plus 30, X plus 40. It makes a big difference.

    Nataraj (46:28.398) What are your favorite AI use cases, ones that you personally use, or that you find interesting or unique?

    Eilon Reshef (46:40.277) Yeah, personally, you know, as an executive, our lives are pretty boring because we don't do much, so we don't have specific tasks AI can take over. But I use AI a lot, and I hear other executives do this as well, because a lot of our work as executives is communication. Not necessarily formal communication, like the email to the company, but, hey, coming up with the right terminology, coming up with the right name, organizing certain narratives, whether it's around stuff we're building or

    products we're looking at and whatnot. So my main use case for AI is this thought partner around positioning, wording, communication, and such. I think it's very typical for executives. Other people may not be spending as much time on this kind of engagement.

    Yeah, that's going to be my main use case for sure.

    Nataraj (47:33.166) What are the things you're most excited about going into the next couple of years?

    Eilon Reshef (47:41.986) It's a little bit along the lines of what we discussed. I think being an AI OS means being able to do more and shave off more pieces of revenue professionals' work. For example, we're launching

    an orchestration product, and what the orchestration product does is two things. It automates tasks, of course. But the other thing is it also orchestrates people. So it tells them, Tuesday morning: what about this upsell opportunity that you kind of forgot about? Or, this customer churned three months ago; what if you reached out to them again, and here's an email you can send them, just hit the approve button, right? So the idea is to cover more and more of the revenue professional's work, both from a productivity perspective, which is what I just mentioned, as well as from an intelligence perspective. Intelligence meaning, as a

    CRO, I want to understand what's going on. I go to the AI and I'm like, hey, tell me what's working. Why are we not converting from stage two to stage three, which is industry revenue jargon for steps of the sales cycle? What are we doing wrong? In the past, these things used to take months. Now you can do it in minutes, sometimes seconds, and then with that intelligence I can go orchestrate a change.

    And as part of this cycle, as a leader, you can act on that intelligence. I think the orchestration piece is also maybe underestimated, because in most organizations, not just revenue organizations, change management is super, super hard. You want to do something, it takes forever, right? You've got to make a decision, you've got to tell everybody, blah, blah, blah; it takes forever. But if you're using a system such as Gong, any system that essentially drives your workforce,

    what we're starting to do, and I think this is super exciting as the next phase, is we're gonna let companies kind of vibe code their business, in the sense of, you know, tell me, what are the reps doing in Australia that's different from New Zealand? And be like, okay, so that's what they should be doing. You click a button, and suddenly it rolls out different plays to the people in New Zealand.

    Eilon Reshef (49:42.306) Otherwise it would have taken you months to understand what's going on and months to activate it. And you're also limited in your capacity, because you have to train the people; there's a whole motion around how many initiatives you can run in a month. So the idea of AI learning from what works and what doesn't, and the idea that you as the operator can vibe code your business…

    I call it vibe coding; it's not exactly vibe coding, it's using AI to build those plays, right? But you don't have to go to a UI and click, do this, do this, do this; with the click of a button you roll them out to the field. And it's not just that revenue is exciting for me; it makes this whole optimization cycle super, super fast. And then it can help you hyper-personalize, because now we can roll a specific play out to Chicago versus Denver, right? Why would you do that? I don't know. But you just cannot do this today, because of scale, right?

    There's no way for you to go do that.

    Nataraj (50:34.552) I think what you're basically saying is some version of: every platform, be it Gong or some other product, every product is now sort of almost…

    logically forced to have orchestration plus some version of an app platform created on top of it, because now it's easier, and it allows a lot more people to create things that are useful and share them with other people. I think that's basically, you know, a meta trend that is happening. And you already see it across Microsoft products and some other AI products; everyone has some version of orchestration, which is obviously needed because it empowers the product

    ecosystem a lot.

    Eilon Reshef (51:18.301) Exactly. And, not every company does, but we have a revenue graph, which is really, really deep. If you have data around historical performance and what works and what doesn't, not just numbers, numbers alone are usually hard to learn from, but if the data layer is rich enough and you can have AI mine it for what the right play is, then you can actually close the loop, right? So now the AI gives you ideas about what you could be doing, and then it helps you do those things and executes them using the orchestration layer. It really transforms the way

    changes happen. And a lot of why people, quote-unquote, hire us as software vendors is to drive change, right? I mean, nobody buys software to do the same things in the same way they could with pen and paper, right? And the ability to iterate very, very quickly on those changes is, I think, something people don't even realize is possible. It's probably going to take a few years until people actually execute on this, but I think it's as transformative as the idea of AI agents,

    but again, maybe not as sexy to the press, though probably as transformative from a business perspective.

    Nataraj (52:21.12) In some sense, vibe coding is actually needed within enterprise platforms, rather than this individual, make-a-static-website thing, which is very impressive but also not that useful. We need more stuff on the enterprise layer where you can vibe code, which actually becomes much more powerful because we spend more of our time at work, not making hobby blogs,

    in some sense.

    Eilon Reshef (52:51.489) 100%. And then you're integrating with the data, you're integrating into a workflow. It all comes together, because otherwise people are still going to be copying and pasting stuff around. You're not going to get efficiencies if you keep copying and pasting stuff; it's just not going to happen.

    Nataraj (53:03.714) I think that's one of the reasons why the adoption of AI tooling is actually lower on the enterprise side: most of the tooling is being built on the B2C, hobby, prosumer side, and the equivalent has not yet been created in the enterprise ecosystem. I think we still need that shift to happen before people actually get more productive.

    Eilon Reshef (53:24.929) 100%. I think prosumer is where it shines right now, which is phenomenal. I'm sure you use it; we use it; everybody uses it. But I think AI has been a little bit slow in

    people getting their hands around how do I connect it to my data sources, how do I connect my data flows to my existing applications, and so on. But obviously it's coming. Look, I'm super excited about this whole idea of AI-centric or AI-first applications. Because to me, the infrastructure layer, I don't want to say software, but the APIs, you know, OpenAI and Anthropic and all of these companies, I mean, Google, all of these guys, are like a drop-in. Obviously the LLM kind of infrastructure is,

    I don't want to say solved, but it's being handled by so many companies, and there's a lot of white space in the layer on top of it. I think for Gong it's a big opportunity, but for many, many other companies there's still a huge opportunity too.

    Nataraj (54:14.744) I think that's a good note to end our conversation on. Thanks, Eilon, for coming on the show and sharing all the interesting things you guys are doing at Gong.

    Eilon Reshef (54:23.852) Thank you for hosting me. It was a good conversation.

  • Andrew Feldman on Cerebras’ Wafer-Scale AI & Taking on NVIDIA

    In a world captivated by AI, the race for faster, more efficient compute has become the new frontier. At the heart of this race is Andrew Feldman, co-founder and CEO of Cerebras, a company challenging the giants of the semiconductor industry with a radical new approach. While NVIDIA and others scale out with clusters of GPUs, Cerebras scales up, building the world’s largest single AI chip—the Wafer-Scale Engine. This groundbreaking architecture allows Cerebras to deliver orders of magnitude more speed for AI inference and training, tackling problems that were previously intractable.

    In this conversation, Andrew joins Nataraj to discuss the journey of building a deep tech company from the ground up. He shares the initial thesis behind starting a chip company in 2015, the immense engineering challenges they overcame, the shift towards sovereign AI and open-source models, and how Cerebras is redefining performance benchmarks for the entire industry.

    → Enjoy this conversation with Andrew Feldman, on Spotify or YouTube.



    Nataraj: You started Cerebras in 2015. There are a lot of neo-clouds coming up now, but back then, trying to create a chip company with existing players like Nvidia, Intel, and AMD was not an obvious thing to do. What was the thesis there back then?

    Andrew: I think there are two parts. As a chip and system guy, that’s all we know. So it was obvious to us. We weren’t going to build a web app; that’s not who we are. I think entrepreneurship, like many things in life, pays dividends if you stay true to who you are. The founding team and the people we know are infrastructure builders. That’s what we love and what we’ve done our whole careers. For me and the founding team, it’s building chips and systems and the software that runs them, such that other people’s ideas can run on our machines and take flight. This is my fifth startup, and all the previous startups were building systems. All my co-founders were with me at the last startup I founded. So it was obvious to us that we were going to build a chip in a system.

I think what wasn’t obvious was AI. In 2015, NVIDIA was a $20 billion company, not a multi-trillion dollar company. The world looked very different. We had a meeting with Sam Altman and Ilya Sutskever, and the things they were saying sounded crazy. “Oh, we’re going to have to worry about safety, and AI agents are gonna take over.” You’re looking at them saying this is crazy talk. But what Ilya said was true; it happened. What we saw was a new type of compute, and we thought AI would usher in a new compute workload in the same way that cell phones did, or switches and routers in the late 90s. When a new workload occurs, a new computer architecture emerges and new great companies are born.

When there’s an existing dominant design, as there’s been in x86, there’s been some change in market share between Intel and AMD, but there have been no meaningful entrants in two and a half, three decades. Whereas at this inflection point, at the rise of a new workload, there is tremendous opportunity. When the cell phone workload emerged, who was better positioned to take advantage of that than AMD and Intel? Both failed completely, zero share. Then ARM emerged, Apple emerged as a major player, and Samsung, and they’d never been in the chip-making business before. So when you see a dislocation, that’s what we predicted. Now, we clearly didn’t think it would be this big, or we would have raised at a higher valuation. We had no idea that 10 years later, people would be spending $400 billion on CapEx.

    In 2016, AI was finding a cat in a picture and making sure it was not a chair. That’s where AI was. But we saw a trajectory that would be big. We didn’t see that it would be this big, but we came to believe that we could build a new type of computer, beginning with the chip and the processor and the system, that would be really good at this workload—not a little bit better or a little bit cheaper, but orders of magnitude faster. And that’s what we did.

    Nataraj: What does an iteration cycle look like when you’re trying to achieve that? If you’re building a software product, it’s pretty straightforward. But how do you do that for a hardware product?

    Andrew: We chose to build the largest chip in the history of the computer industry. A typical chip is the size of a postage stamp, your thumbnail. This is 56 times larger than the largest chip that had ever been built before. We set out to do fundamental design, creativity, engineering, innovation, and invention. We spent about three years and, when all was said and done, about half a billion dollars to make the first one. Nobody in history had made one this size. By being bigger, we could keep more data on-chip. We could move it less often and less far. We could use less power because moving data is really expensive in power and time. So we could be much, much faster.

    The dividend of doing this was huge, but it took us years and a great deal of internal fortitude because it wasn’t right the first time or the second time or the eighth time. In fact, we had about a 15-month period where we were spending eight million a month, and we couldn’t make one. You’re going to a board meeting every six or eight weeks saying, “Nope, still can’t make it.” And then in July or August of 2019, we made one. The founders just stood in a tiny little lab and we watched a computer run, which is about as exciting as watching paint dry. It’s just a big metal box with some lights flashing. We looked at each other and we were stunned. We’d solved a problem that nobody in the computer industry had ever solved before. A few months later we had our first customer, and we have been on a tear since then.

    Nataraj: Who was your first customer?

    Andrew: Our first government customer was Argonne National Labs, one of the Department of Energy labs in the US. Our first commercial customer was GlaxoSmithKline, the large pharmaceutical company.

    Nataraj: You built the largest AI chip, and it has four trillion transistors. What does that mean when you compare it with a normal chip? Can you pack more SRAM on top of it? How much bandwidth do you get?

    Andrew: That’s exactly right. Memory has two types: there’s slow memory that can hold a lot, called DRAM—HBM is a flavor of DRAM—and that’s what GPUs use. They’re called graphics processing units for a reason; they were designed for graphics. Graphics was a problem where you’d move data once, do a lot of work on it, and then bring the results back into memory. There’s a different type of memory called SRAM. Historically, SRAM was extremely fast but with relatively low capacity. By going to a big chip, we could stuff it to the gills with SRAM, overcoming its capacity limitation by using a lot of it. The result is we have both capacity and speed. That’s why we’re 15, 20, 25, 30 times faster—and in some problems thousands of times faster—than B200 GPUs.

    Nataraj: So if you have more memory, when you’re doing pre-training, you don’t have to break down the model as often as you would on a GPU. Is that the advantage?

    Andrew: That’s right. Let’s look at inference because it’s a neater example. In generative inference, to generate a single word or a token, you have to move all the weights from memory to compute to do a giant matrix multiply. For a 70 billion parameter model, which is not very big, you’re going to move about 100 full-length movies’ worth of data to generate one word. If your memory is off-chip, you’ve got a thin little pipe to the GPU. You’ve got this slow memory with a thin pipe, and you’ve got to move a hundred movies’ worth of data across it to generate one word. That pipe is what we measure in memory bandwidth. By putting the SRAM right next to the compute core on the same silicon, we move more than 2,600 times more data more quickly. As a result, the inference results come out faster. It’s just that simple. Memory bandwidth is a known Achilles’ heel of GPU architectures, and it was one of the things we saw in our design that we could do vastly better.
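Andrew’s “100 movies” figure can be sanity-checked with rough arithmetic. The sketch below assumes dense 16-bit weights and roughly 1.4 GB per compressed full-length movie; both numbers are illustrative assumptions, not figures from the episode:

```python
# Back-of-the-envelope check of the data moved per generated token,
# assuming a dense 70B-parameter model with 16-bit (2-byte) weights.
params = 70e9
bytes_per_weight = 2                           # fp16/bf16 assumption
bytes_per_token = params * bytes_per_weight    # all weights read once per token

movie_gb = 1.4                                 # rough size of one compressed movie
movies_per_token = bytes_per_token / (movie_gb * 1e9)

print(f"{bytes_per_token / 1e9:.0f} GB per token ~= {movies_per_token:.0f} movies")
# With these assumptions: 140 GB per token ~= 100 movies
```

Under these assumptions the arithmetic lands almost exactly on Andrew’s figure, which is why off-chip memory bandwidth, not raw compute, becomes the bottleneck for generative inference.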

    Nataraj: Right now it seems it’s all about compute and pre-training, but everyone is now scaling inference. Two or three years down the line, do you think we’ll spend more on inference than training as an ecosystem?

    Andrew: Inference and training are different. Until about early 2024, AI was mostly a novelty. It was cool, but it wasn’t doing real work. During that time, everybody focused on training because training is how we make AI, but inference is how we use AI. When AI was a novelty, nobody was using it. What’s happened since mid-2024 is the use of AI has exploded, and that’s the inference explosion people talk about. Not only have more people been using it, but they use it more often and to do more complicated things. Each of those increases the compute needed, which is a product of three rapidly growing dimensions. That’s exponential growth, and that’s why inference has just exploded.
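Andrew’s “product of three rapidly growing dimensions” compounds multiplicatively, which a toy calculation makes concrete. The growth factors below are invented for illustration, not numbers from the episode:

```python
# Toy illustration of inference demand as a product of three growing factors.
# A modest 3x growth in each dimension yields 27x total compute demand.
users_growth = 3        # more people using AI
frequency_growth = 3    # each person using it more often
complexity_growth = 3   # each task needing more tokens/compute

total_growth = users_growth * frequency_growth * complexity_growth
print(total_growth)  # 27
```

Because the factors multiply rather than add, even moderate growth in each dimension produces the explosive aggregate curve Andrew describes.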

    Nataraj: What is it like for a new chip company to compete in this space with AMD and Nvidia?

    Andrew: Hard. Look, Jensen Huang and Lisa Su—who bought my last company—are two of the three great CEOs over the last 10 or 15 years. If you throw in Hock Tan at Broadcom, those three leaders have outperformed just about everybody else in the world. They’re dazzling. But their size also creates opportunity for us. They can’t move as quickly as we can. They can’t take the type of engineering risks we take. They can’t hire the caliber of people we can who don’t want hierarchy and structure. So there’s tremendous opportunity for the bold entrepreneur. You both have to take the giant in the field seriously and know that they can be beaten.

    Nataraj: I think what you said about sticking to your strengths resonates. But as a strategy, is it better to take on a very big, high-stakes, hard problem than something that sounds a little easier?

    Andrew: It depends on your passion and where you are in your career. Chip projects are enormously expensive and historically have not been a good place for young, first-time CEOs. There are a lot of returns to experience in the chip business. Other parts of the entrepreneurial ecosystem have been extraordinarily good to young CEOs, particularly where they and their friends look like their customers. There, they have unique expertise that experience can’t replicate. When they’re coding for their friends or building tools they want to use, that entrepreneur has an advantage. Obviously, the entire wave of social networking companies were like that, and many AI companies are like that now.

    But in the chip business, you have to design the logic, have relationships with a fab, use EDA tools that cost millions a year, and do back-end and physical design. There are very few great hardware teams—maybe eight or 10 in the world—and we have one of them. They’ve been with me for 20-plus years, which made it easier for us to raise money for a big idea.

    Nataraj: What do you think about the narrative that compute or inference will become a commodity?

    Andrew: It’s an irony because Nvidia’s gross margins are 73%. Everybody says it could be a commodity, yet they have the highest gross margins of any hardware company in history. They look like a software company. As you know less about something, the details and complexity go away. From the moon, the Earth is just a blue and white orb. Get up close, and there are religious, political, and technology battles. People who don’t really know a domain often say it will be commoditized. AI compute is not looking like it’s going to be commoditized. Andrew Ng said the other day he’s never met anybody in AI who feels they have enough compute. That’s not a market driving toward a commodity.

    Nataraj: Can you talk about your product strategy? You have the physical chip, data centers, and your own cloud, plus on-premise deployments.

    Andrew: That’s pretty close. We don’t sell the chip; we sell a computer—the chip in a system. It’s parallel to, say, the NVL-72. We sell a whole solution that comes in a rack, fully delivered with everything you need. We will deploy that on your premise, or you can buy cycles on it from our cloud or our customer’s cloud. You can buy it through Amazon Marketplace, Microsoft Marketplace, from OpenRouter, Hugging Face, Vercel, or lots of other places. We have both cloud and on-premise offerings. You can bolt on to us via an industry-standard, OpenAI-like API. Finally, for large on-premise customers, we offer forward-deployed engineering, where our engineers collaborate with yours to design models, clean data, or work on a data pipeline to accelerate solution delivery.

    Nataraj: You decided not to sell the chips directly. Why not, considering Nvidia’s high margins?

    Andrew: I’m a believer in the system business. It’s very hard to get paid for software when you’re selling chips. Historically, there are very few examples of a successful entry strategy when you sell the chip on a PCI card. Then you’re dependent on Dell or Supermicro for your I/O and power. At large volumes, everybody is buying either DGXs, NVL-72s, or Cerebras boxes. AMD was so far behind in building systems that they had to buy ZT for billions because they didn’t have the expertise. In this market, building and delivering systems is how consumers, and even cloud providers, wish to consume.

    Nataraj: You mentioned being available on marketplaces. When customers go to the Azure marketplace, are they hitting your data center?

    Andrew: It depends. When you use Condor Galaxy from G42, you’re generally hitting their equipment. When you go to Microsoft Marketplace or AWS Marketplace, you are hitting our equipment in our data centers. We’ve made it easy for you to purchase, but their tokens have been directed to us.

    Nataraj: One of the biggest challenges for a new chip company is CUDA. How do you handle that?

    Andrew: When you do inference, you don’t use CUDA. The rise of inference weakens CUDA. That’s a really important observation. Developers who want to bring a cool chat application into their app don’t need to know any CUDA; they just need an API to bolt on to a chatbot. While CUDA is a moat in the training business—and we’ve developed compilers that take in PyTorch to get around that—in the inference business, CUDA is irrelevant.

    Nataraj: If someone is already running inference on an NVIDIA GPU cluster, how easy is it to switch over?

    Andrew: Ten keystrokes. If you’re using OpenAI on Azure, your API says something like `get_OpenAI_something`. You just change it from OpenAI to Cerebras and pick your model. That’s it. It’s literally 10 keystrokes.
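The switch Andrew describes works because the API surface is OpenAI-compatible: only the endpoint and model name change. A stdlib-only sketch of the diff, where the URL and model name are assumptions for illustration rather than details from the episode:

```python
# Illustrative sketch of the "10 keystrokes" switch: an OpenAI-style request
# configuration, with only the endpoint and model changed. The base_url and
# model names below are assumed for illustration.
before = {
    "base_url": "https://api.openai.com/v1",
    "model": "gpt-4o",
}

after = dict(
    before,
    base_url="https://api.cerebras.ai/v1",  # assumed Cerebras-hosted endpoint
    model="llama-3.3-70b",                  # assumed model served by Cerebras
)

# Everything else in the request (messages, temperature, etc.) stays the same,
# because the API is OpenAI-compatible.
changed = {k for k in before if before[k] != after[k]}
print(sorted(changed))  # ['base_url', 'model']
```

In practice you would pass the new `base_url` and `model` to your existing OpenAI-compatible client and leave the rest of the integration untouched.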

    Nataraj: You have a big strategic partnership with G42. Can you explain what G42 is?

    Andrew: You should think about it as a sovereign institution and a national champion for the UAE. They are the AI national champion, and the country’s leadership has decided to make AI a priority. They are building data centers, investing in AI technologies globally, and building large partnerships. We’ve been working together for several years, and it’s been extraordinary. We have built out for them some of the largest AI data centers in the world, located in the US. We recently received licenses to deliver equipment in the UAE. It’s a huge partnership. Together, we’ve trained models in Arabic, done genomic research, and we serve customers throughout the world.

    Nataraj: I heard one of your engineers mention that you discovered how much faster inference was while solving for training. Can you talk about that?

    Andrew: Between 2000 and 2024, there wasn’t an inference business out there. Until ChatGPT, nobody was doing large-scale inference in production, so all of us were doing training. We saw that our architecture had enormous advantages not just for training, but for inference. I was probably too slow in recognizing that and adding resources to build out our inference program. I wish I had done that six or eight months earlier, but right now that business is on an absolute tear. There is no substitute for working with customers and building things for learning and for product strategy. It’s very hard to do it in a conference room.

    Nataraj: You filed an S-1 to go public and then decided not to. What happened there?

    Andrew: We didn’t decide not to go public. We decided we needed to revise the material. The data in our S-1 had gotten stale. We took it down and told everybody we would put it back up after we cleaned it up and updated it. It was from the summer of 2024, and it no longer was a good picture of our business. The numbers were too small. Imagine how much changes in 15 months in the AI business. We’re doing more business and have more customers. We took it down, and we’ll put it back up once we’ve cleaned and improved it.

    Nataraj: Why can’t Nvidia build a wafer-scale chip like you did?

    Andrew: First, anybody can try anything. Second, you have to ask, why didn’t they? We have a huge number of patents, and many of the most obvious approaches are foreclosed. But companies work around patents all the time. The truth is, it would take them five or seven years and two or three billion dollars, and we would still be 10 years ahead. For five years we’ve been delivering, and for 10 years we’ve been building and inventing. We knew exactly why the B200 would be late. It was for a problem we’d solved in 2018. Seven years earlier, we solved the problem that caused them to be 18 months late: the coefficient of thermal expansion mismatch between the chips and the interposer.

    Nataraj: Do you have a thesis on AGI?

    Andrew: There’s a huge amount of productivity gain in business to be done that doesn’t need anything close to AGI. For those who’ve ever worked in a large company, the amount of time spent cutting and pasting data from Salesforce to Workday is not superhuman work. Auditing is subhuman intelligence. Machines could do a better job. I can list dozens of functions that would make us vastly more efficient without needing gold-medal math capabilities. Many of the most frustrating things in day-to-day life are friction, not a need for superhuman intelligence. So there is tremendous opportunity long before we get to AGI. My experience in technology is that the last 10% takes 50% of the time. Look at self-driving. We’ve been 90% of the way there for a decade. We’re sort of there in a few cities, but not generally. The last 5% is really hard.

    Nataraj: What are some underrated things in AI right now?

    Andrew: I think underrated things are doing simple things. Everyone’s talking about doing hard things, but we should focus on the simple things that cause friction and frustration—payroll, tracking headcount. HR tools are horrible. AI can do a way better job. Also, on the other end of the spectrum, I think inference at the edge in tiny little sensors is very interesting. The sensor by the brake of your car, on the barrel of a gun for the military, or on a machine at a manufacturing plant. This isn’t big AI, this is tiny AI that does a little bit of inference to be sure that the data sent back is useful, not garbage. In the sub-milliwatt and milliwatt category of sensors with a little bit of AI, I think that’s something people aren’t talking about that will be very interesting.

    Nataraj: Where should young founders look for ideas?

    Andrew: In my experience, people are best at the things they really enjoy. I would begin in domains that you really enjoy. If you’re passionate and you write code to solve these problems in your free time, it’s not work. You’re just pursuing your passions. I would stay in that area. And take seriously what your friends believe they need. Some of the best ideas came from young entrepreneurs seeing holes in the tools and apps available to them. How come I can’t tell if she’s got a boyfriend? Well, there’s Facebook. I’ve been married 30 years; that’s not a question I ask. I could never come up with that idea. But at 19 or 20 in college, it’s really important. Exploring around the things you know well is the best advice.

    Nataraj: What is the most contrarian belief that you have these days?

    Andrew: I think we’ll have peace in the Middle East sooner than people think. The returns to being moderate have been demonstrated by the UAE. In 2005, the GDP of Dubai was the same as Gaza. In 2010, nobody had heard of Dubai. They chose a moderate path and have built an extraordinary nation. You’re seeing that movement in Qatar and Saudi Arabia. In those regions, people are too busy to hate right now; they’re busy working and building cool things. I got an email from a Hindu manager reminding me, a Jewish guy, to wish a Muslim team member Happy Eid. That happens in Silicon Valley because we’re all working together to build stuff. We don’t care. The only question is, can they build something cool? If we trade, do business together, and build things together, then there’s nothing to hate.


    This deep dive with Andrew Feldman reveals the immense challenges and contrarian thinking required to build a category-defining hardware company. His insights on tackling monumental engineering problems, navigating the competitive AI landscape, and the future of compute offer a masterclass for any founder in deep tech.

    → If you enjoyed this conversation with Andrew Feldman, listen to the full episode here on Spotify or YouTube.

    → Subscribe to our Newsletter and never miss an update.

  • Joseph Krause on Radical AI & the Future of Materials Discovery

    AI is rapidly transforming every industry, and deep tech is no exception. In this episode, we sit down with Joseph Krause, co-founder and CEO of Radical AI, a company poised to revolutionize how we discover and develop new materials. Radical AI is building a self-driving lab where AI autonomously designs, tests, and discovers materials, accelerating R&D at an unprecedented scale. This groundbreaking approach has attracted a historic $55M seed round from major investors like NVIDIA and Raytheon. Joseph, a US Army National Guard veteran with a PhD in materials science, shares his unique journey from military service and academia to deep tech investing and finally, entrepreneurship. We dive into Radical AI’s “materials flywheel” concept, how they are increasing the speed of discovery by 370x, and why New York City is the perfect place to build a world-changing company.

    → Enjoy this conversation with Joseph Krause, on Spotify, Apple, or YouTube.
    → Subscribe to our newsletter and never miss an update.

    Nataraj: You have a diverse background from materials science, working in the Army National Guard, and a little bit of venture capital at AlleyCorp, which I’m assuming is also an investor in Radical AI. How did these different experiences converge and lead to starting Radical AI?

    Joseph Krause: Great question. I always love the Steve Jobs quote about how you can never connect the dots looking forward, only looking backward. For myself, I very much feel this is true today. Going back to when I was in graduate school, I was pursuing a materials science PhD at Rice University. I was working on a lot of what we call functional materials, or materials that can be used in real applications, and I was frustrated with my lack of ability to push things into industry. That actually isn’t poor performance from academia; it’s just not what academia is truly focused on. Academia’s job is to drive our fundamental understanding of science. For me, I wanted to take that fundamental understanding, work on applied applications and materials, and really put them into products in the world.

    At a similar time as I was going through my PhD, I was separately serving in the US Army National Guard. It’s a part-time military service where you can get called up into active duty if needed. I’ve always been a big fan of the military and wanted to serve, so it was a very important thing for me. I had this opportunity come up from the Army Research Lab, the US Army’s corporate research laboratory, where they were doing materials research but thinking about problems for the Army and how these things can impact the future warfighter. I got really interested in doing science that I care about the impact of. I’m separately serving in the National Guard and I care about the warfighter having the best technology. So I took a journeyman fellowship at Army Research Lab, working on neuromorphic computing.

    Even here, it’s still fundamental because it’s at the core, cutting edge of research. I still felt this lack of ability to execute on translation and pushing things out of research into the market. So I took a leave of absence from my PhD, joined AlleyCorp, which is a VC based here in New York City, and I started investing in material science, semiconductors, 3D printing, and informatics to understand how to commercialize materials. While at AlleyCorp, me and one of my other co-founders, Jorge Colindres, came up with this inception for Radical AI. We found our third co-founder, Gerbrand Ceder, and we rolled out and started the company from there. Looking back, it’s a very linear trajectory, but at the time, I had no idea what the next step was.

    Nataraj: Looking back, what were the traditional ways of doing things that were preventing this commercialization of new materials or discoveries?

    Joseph Krause: I think two things. One is focus. The primary purpose of academic research is not to commercialize applications. It can be an outcome, which is why there are tech transfer offices, but it’s not the core driver. The driver is discovering new fundamental phenomena.

    Number two is that the commercialization of science typically ends up in a product. If you think about semiconductor transistor technology, the materials research behind that goes into a chip which is then put in advanced packaging so we can use it in a smartphone or a computer. If you work on transistor technology, you’re still a couple of steps removed from the product that will actually utilize those materials. That disconnect is hard to bridge. Because you don’t have this focus and you’re not building the end product, you can’t really solve commercialized problems. The ones that do, like nuclear fusion, take that scientific invention and start using it in a real product, like a reactor. Bridging that gap, in my opinion, really requires private enterprise. That’s why you’ve seen companies from fusion to space with SpaceX and Blue Origin building private companies on really advanced scientific capabilities.

    Nataraj: When you have a good founding team and some funds, what’s the next step? If it’s an Uber for something, I would go and create a new app. What was the product that Radical AI created first?

    Joseph Krause: The most important thing to think about is technology, and then from that technology, what is the product you develop? We create new materials, and we want to commercialize those materials by actually selling them at scale. The area that we are working in is very much in aerospace, defense, and energy, where we use these structural metals to build components. Our product, the output of our technology, is novel materials that enable new applications.

    The technology piece is how we do that. I love using SpaceX as an example. They focus on bringing things to orbit and eventually getting to Mars. The way they do that is with advanced rocket capabilities that are reusable and more cost-effective. That technology is the means of how they get things into space. If our end goal is to develop incredible materials that enable new applications, the way we do that is our materials flywheel. This is where our AI modeling engine and fully robotic self-driving lab come in. They allow us to do materials discovery 370 times faster than a human scientist, bring in different data sets like patents or scientific literature to make new hypotheses, and build a high-throughput capability where we test tens of thousands of materials per year, not just a few. When you bring all those together, you have this materials flywheel that allows you to very quickly discover novel materials and then go into applications that will directly use them for new advancements.

    Nataraj: Who are the typical customers looking for new materials? And secondly, you mentioned a 370x speed advantage. Where in the flywheel does that come from?

    Joseph Krause: Great questions. First, who buys materials? My favorite question to answer, because it’s every single company on earth. It doesn’t matter if you are in aerospace, automotive, manufacturing, defense, climate, energy, semiconductors, or even athletic apparel. Every single industry in the world is a direct result of novel material advancement. But all of them feel this problem of very long timelines—typically 10 to 20 years—and incredible cost, north of a hundred million for a single material system. So you arguably have one of the largest markets in the world, we just cannot execute on it fast enough.

    On the second piece, digital research is challenging in materials, and we cannot only do digital-based research. So much of the know-how in making a material at scale is not just how to make it, but how to make it in large quantities with the properties you need—what we call the processing of a material. Most of the IP of the biggest material companies is trade secrets around processing. So if you never make it in a lab, see its properties for an application, and understand the conditions to scale it—temperature, pressure, oxidation, concentration—then you truly can’t capture the value. We do a lot on the digital side with machine learning and generative technology for inverse design, but at the end of the day, we still need to make that material in a lab. This gap is the hardest thing to understand and where all the value sits in the materials industry today.

    Nataraj: How do you pick which direction to pursue? Are you only going in directions where you have pre-commitments from customers, or do you have your own ideas of what new material to find?

    Joseph Krause: Great question, because focus is very important for an early-stage startup. It’s actually both. The classes of materials we work in are directly driven by customers. We do not want to randomly work on materials with no identified customer problem. That’s the first thing. The second reason it’s both is that it’s up to us to come up with the new material to solve those problems. A perfect example is our work in the hypersonic space. Hypersonic missiles have a problem where the alloys they use today cannot keep up with the composites inside. The problem is how to drive higher heat and better mechanical performance at high temperatures. That problem is from the end customer, but the solution we’ve come up with is a high entropy alloy. We are designing novel high entropy alloys for this application. We take the problem from an end customer, and then we think about what material system can solve that problem in a way that creates a huge market opportunity.

    There’s a third option, which is something like a room-temperature superconductor. The industries that would benefit are endless. There’s not really a market for that today, with customers saying, “I want to solve this in the next six months.” But if you had one, every company would be interested. We call those enabling technologies. However, focus is key. Radical AI needs to show the world that we have the best scientific discovery flywheel and can drive real commercial value. Once we do that, then we can focus on more enabling things like a room-temperature superconductor.

    Nataraj: A couple of years back, there was a viral paper about reproducing a superconductor at room temperature. Are you talking about a similar thing?

    Joseph Krause: Yes, LK-99 specifically. And that’s a perfect example of where experimental validation of a material is very important. This paper came out claiming room-temperature superconducting ability. People ran simulations to try to confirm it, and some thought they had. But when it came to reproducibility and experimentation, we could not replicate that material. It’s a perfect example of where experimental work is equally as important as simulation work.

    Nataraj: What type of people are you hiring? And what type of AI are you using? Are you building your own models?

    Joseph Krause: The people we hire span five technology buckets: machine learning and AI, software engineering, automation/robotics, mechanical engineering, and materials science. Building an interdisciplinary team is imperative. On the AI front, we use a multitude of different AI technologies in a multimodal approach. We have an agentic system that sits at the top layer using LLM technology. We don’t build LLMs from scratch; we fine-tune models from the big AI labs. Those agents have access to different tools we’ve built, like graph neural nets and generative-based models using diffusion for inverse design. We also use older AI like computer vision in our lab to track experiments and Bayesian optimization in our design of experiments loop. It’s a mix of technology across the board, from cutting-edge LLMs to well-understood architectures.
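The design-of-experiments loop Joseph mentions can be caricatured as a propose-measure-update cycle. The stdlib-only toy below uses random search rather than true Bayesian optimization, and the objective function, temperature range, and optimum are all invented stand-ins for a real lab measurement:

```python
import random

# Toy sketch of a closed-loop "self-driving lab" cycle: propose experiments,
# run them, keep every data point, and track the best result found so far.
random.seed(0)

def measure(temperature):
    """Pretend lab measurement: the material property peaks at 600 (arbitrary units)."""
    return -(temperature - 600) ** 2

best_t, best_y = None, float("-inf")
candidates = [random.uniform(300, 900) for _ in range(50)]  # proposed experiments
for t in candidates:
    y = measure(t)        # run the 'experiment'
    if y > best_y:        # every point is recorded; the best guides the next round
        best_t, best_y = t, y

print(round(best_t))  # lands near the true optimum of 600
```

A real system would replace random proposals with a surrogate model (Bayesian optimization) that uses all the accumulated data, including failed experiments, to pick the most informative next run.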

    Nataraj: DeepMind has AlphaFold for predicting protein structures. Is there a similar opportunity in materials for predicting crystal structures?

    Joseph Krause: Absolutely, that’s a very real thing and it’s being deeply explored. We do that internally with our own models. Big Tech also has models: Microsoft has MatterGen, Google has GNoME, and Meta has models from their Open Catalyst Project. The field is growing rapidly. There’s been some pushback that some predictions are not novel or not valid—meaning we can’t actually make them in the lab. These constraints of novelty and validity are really important. A big problem with these models is a lack of experimental data; they are trained almost exclusively on computational data. You need the ground truth from the experimental lab. The problem is that 90% of the work a scientist does doesn’t work, and we don’t capture that negative data. It lives in a scientist’s head or lab notebook. That’s why we build self-driving labs. We capture every single data point and feed that experimental data back into our AI engine to build the most proprietary dataset in the world.

    Nataraj: Why pick New York?

    Joseph Krause: New York is one of the capitals of the world. We love it for two reasons. One, the talent density is very high and continues to grow. There are not many elite performers who would not want to move to New York City. Two, if you want to work on advanced scientific discovery and deep technology, there are not many places to do that in New York. We might be one of the only places where you can do AI and science. If you want to be in New York and work on really hard problems, Radical is one of the few places you can go. Those two reasons are linked, and that’s why New York is super important to us.

    Nataraj: What is your best AI use case for you personally or for managing your company?

    Joseph Krause: For me personally, it’s research and knowledge gain. I might talk with a customer who has a very specific materials problem in their parts, for example, in the automotive industry. I’m a materials scientist, not an automotive expert. I will use AI to take what would be a three or four-month learning process and get up to speed very quickly on the history of materials used in that application, the research areas pursued, and the current state of the art. I can use AI to get knowledgeable enough in an hour. It’s less for longer, extended research settings and more for moments when I need quick, actionable information that I can learn today and use this afternoon.

    Nataraj: And are you using ChatGPT, Deep Research, or something else?

    Joseph Krause: To be honest, I use all of them because I’m still in the discovery phase of seeing who’s the best. I’ve used a lot of Grok from xAI; for science, they seem very good and pull references well. I use ChatGPT a lot for idea generation. I will also use Gemini in direct comparison. Sometimes I’ll have both open in a tab, paste the same prompt in both, and see what different information I can pull. I don’t have a single winner yet.

    Nataraj: Every time I see Apple’s new demo, they talk about redesigning every material. Where does that material research happen?

    Joseph Krause: Apple is an incredible materials science company. A majority of their advancements come from new materials. The battery life, the colors, the aluminum and titanium they use—it’s all a materials science problem, down to the semiconductor technology. They do research internally; they have a very good materials research team. They also work with outside parties. Corning with Gorilla Glass is the one most people know. That was a material discovered 20 years before it had an application, until Steve Jobs came knocking and needed a scratch-resistant glass for a touchscreen. Apple is very good at indexing information quickly and pushing material advancements.

    Joseph Krause, co-founder and CEO of Radical AI, joins the Startup Project to discuss how his company is revolutionizing materials R&D. He shares how integrating AI, robotics, and engineering into a “materials flywheel” accelerates discovery by 370x, attracting a historic $55M seed round from investors like NVIDIA and Raytheon. Joseph details the journey from his PhD and military service to building a world-changing deep tech company in New York City.

    Joseph’s vision for Radical AI highlights the immense potential of integrating AI and robotics into fundamental scientific research. By creating a closed-loop, autonomous system for materials discovery, his team is not just building a company—they are building a platform to solve some of the world’s most critical challenges across aerospace, defense, and energy.

    → If you enjoyed this conversation with Joseph Krause, listen to the full episode here on Spotify, Apple, or YouTube.
    → Subscribe to our newsletter and never miss an update.

  • Statsig Founder Vijaye Raji on Building a Data-Driven Platform

    Introduction

    After a decade at Microsoft and another at Facebook, where he served as VP and Head of Entertainment, Vijaye Raji took the leap from big tech executive to startup founder. In 2021, he launched Statsig, an all-in-one product development platform designed to empower teams with experimentation, feature management, and product analytics. Built on the principles he learned scaling products for billions of users, Statsig helps companies like OpenAI, Notion, and Whatnot make data-informed decisions and accelerate growth.

    In this conversation with Nataraj, Vijaye shares his journey, the tactical lessons learned in hiring and scaling, and the cultural shifts required when transitioning from a corporate giant to a lean startup. He dives deep into how modern product teams are leveraging rapid iteration and experimentation, and offers his perspective on what the future of product development looks like in an AI-first world.

    → Enjoy this conversation with Vijaye Raji on Spotify, Apple, or YouTube.

    → Subscribe to our newsletter and never miss an update.


    Lightly Edited Transcript

    Nataraj: As I was researching this conversation, I found out that when you were considering leaving Facebook, Mark Zuckerberg tried to convince you to stay at the company. What was that moment like?

    Vijaye Raji: This was not the first time. I tried to leave the company a couple of times before then, and every single time it was a conversation that convinced me to stay. The argument was always the same: there’s a lot to be done here, a startup is a small company, the impact you will have is not that big. For all those good reasons, I stayed at Facebook. When it had been 10 years, I knew something new had to be done for my own personal sake. I felt like I needed the startup in my life, so I left.

    Nataraj: You’ve been in big tech companies for almost two decades at that point. What was the personal motivation? There’s always a personal calculus. I’m doing all this work for a company, can I own more equity? What was your thinking at that point?

    Vijaye Raji: I started at Microsoft and spent about 10 years there as a software engineer. To be completely transparent, I loved Microsoft. I enjoyed it and learned a lot. For everything that I know about software engineering, Microsoft was the best place to learn it. This was back in the early 2000s, where you learn how to build software and predict something that’s going to happen two years down the line. It’s like a science, and there’s so much to learn from so many good people, so I had a great time learning all of that stuff. At some point, I had thought about building something different. This is probably something that is very common nowadays, where you’re in a holding pattern for your green card. You can’t really leave without resetting your green card process, so you don’t really explore other options when you’re in that situation. I had to be like that for a little while. Once I got my green card, the first thing I did was look around, and luckily for me, Facebook was starting up an office in Seattle. That was my first jump from what I thought was a really good learning experience for a whole decade. I went into Facebook at that time, and Facebook was a startup. It was a late-stage startup, not quite ready for IPO, so I thought I was joining a very small company. Leaving behind a company that was 100,000 people to join a company that was only 1,000 or 1,200 people at that time was incredibly different and a good learning experience. I thought I was learning a lot at Microsoft, and then I went to Facebook. It was a completely new world. That’s how I went from one big company to what I thought was a startup, and then eventually, you know the story of Facebook. It grew so fast, and by the time I’d been there for years, it had grown to 65,000 people or something. That was a lot of good learning because when you’re in a company that is growing that fast, you learn a lot and you get exposed to a lot.

    Nataraj: By the time you left, you were leading entertainment at Facebook and also leading the Facebook Seattle office.

    Vijaye Raji: Yeah, one of the things that I generally do is every couple of years or so, I try something completely new. Even at Microsoft, I started with Microsoft TV, which was a set-top box, and then moved on to developer divisions doing Visual Studio, building compiler services. After that, I was working on SQL Server, building databases, and then Windows operating systems. Even within Microsoft, I did various little things. Then at Facebook, I started out as an engineer and worked on Messenger and some ads products, and then I worked on Marketplace, and then gaming and entertainment. Each one of them is pretty different. They don’t have much correlation or continuation, and that’s how I’ve always operated in my career. When I left, I was the head of entertainment, which included everything from videos, music, and movies, and also the head of Seattle, which when I joined was about a couple dozen people. When I left, we had about 6,500 people spread across 19 different buildings.

    Nataraj: What were some of the interesting problems that you were working on as head of entertainment, and what was the scale of those problems?

    Vijaye Raji: As Head of Entertainment, if you think about Facebook’s apps, there’s a social aspect to it—your friends and your community—and then there’s an entertainment aspect, which is you just want to spend time and be entertained. The kinds of stuff you do for entertainment could be watching videos, listening to music, watching music videos, playing games, or watching other people play games. You watch short clips from TV shows and so on. Another huge area is podcasts. Anything that is not pertaining to your social circle belongs to this entertainment category, and that was my purview. The problems we were trying to solve were about how to make the time people spend high quality. What do they gain out of it, and how do they get high-quality entertainment? That includes everything from acquiring the right kind of content, understanding what people want, and then personalizing the content to them. It also includes removing content that is not great for the platform, anything violating policy. So you invest quite heavily in the integrity of the platform as well. On the engineering side, scale is a very important problem. When you’re delivering video at 4K, high quality, high bit rate to networks that may not be reliable, you have interesting engineering problems that you have to go solve. Those are all super exciting.

    Nataraj: Were you primarily focused on the technology of getting the entertainment on the different Facebook platforms or also part of dealing with the business side of it, like licensing and acquiring content?

    Vijaye Raji: It was part of that too. When you have a product that is observing what people watch, you know what people want. You then want to go and buy more of that content. We had a media procurement team, and you could go to them and say, this is the kind of content that people consume on Facebook, so let’s go get more of those. That plays into the decision of where the company would go invest.

    Nataraj: So you were doing some exciting stuff at Facebook at scale and then you decided it’s time for you to leave and start your own company. Did you evaluate a different set of ideas, or was the idea for Statsig brewing in your mind while you were at Facebook?

    Vijaye Raji: It’s a little bit of both. The first part of the journey was deciding to go start a company. The second part was, what do I go build? Deciding to start a company had been brewing for a long time. It was one of those things that I would regret if I didn’t do it. As for what to go build, because of my varied experience doing everything from gaming to ads to marketplaces to videos, I had lots of ideas. When you’re evaluating an idea, you want to take into account what the market size could be, what the propensity of a buyer is to pay a dollar for you, and what you are good at. Sometimes you’re going against a lot of competitors, so what are we really good at? And what could I bring that could be an advantage? Those are all the factors that go into it. If you think about it, your passion is driven by your heart, but this logical analysis is driven by your mind. If you’re entirely driven by passion, you may build something that may not be sellable. Those were the kinds of considerations that went into deciding to go build a developer platform that includes everything from decision-making and empowering everyone to make the right decisions using data.

    Nataraj: So, once you decided on this particular experimentation developer platform, how did you go about getting those first couple of customers?

    Vijaye Raji: It’s a good journey and a good lesson for everyone building startups. Usually, when you have a founder with an immense amount of faith and conviction that this is what I’m going to build, you are very mission-driven. While you’re building, you’re talking to a lot of people. This is the part where I made all kinds of mistakes. You go to someone you know who is willing to spend 30 minutes with you and say, ‘I’m going to build this developer platform, it’s going to be pretty awesome, it’s going to have all kinds of features.’ What are they going to say? Chances are, they’re going to say, ‘That sounds like a great idea, you should go do it.’ You talk to enough people, and you build this echo chamber where you are now even more convinced that everybody needs this platform. Then you go build it in a vacuum. We did this for about six months. At the end of six months, we went to the people I talked to before and asked if they were willing to start using this product. And you know what? You go talk to them and they say, ‘Let me think about it.’ ‘Let me think about it’ means they’re not really that interested. It’s much harder to have them integrate it into their existing product, and much harder to have them pay a single penny. You learn that lesson. This is one of those things where I was talking to one of my co-founders at that time, and they said, ‘You’ve got to go read this book, The Mom Test.’ I went and read it and realized all the mistakes I was making when talking to customers. The point is, first, you need to understand what problems people are facing and if you have a solution for that problem. To even get to that stage, you need to know who your ideal customer profile is. Then you talk to them and make sure the product you’re selling actually solves the problem. Not only that, you have to be the industry best for somebody to even care about your product and then open up their wallet. Those are the kind of hard lessons that I learned over the course of the next few months.

    Nataraj: What was the value proposition of Statsig at that point, and why was it different from what already existed in the market?

    Vijaye Raji: The value proposition has not changed since the day we started. It has always been the same: the more you know about your product, the better decisions you’re going to make. What we’re doing is empowering product builders, whether you’re an engineer, data scientist, or product manager. The idea is to observe how people use your product, what they care about, and where they spend more time. All of those are important for you to know how your product is doing. Number two is what features are not working as intended. And number three is using those two insights to know what to go build next. That’s literally what we sell as the value from day one. The differentiation from existing products is that previous products were all point solutions. For feature flagging, you need a separate product. For analytics, a separate product. For experimentation, a separate product. We’re bringing them all together. The benefits are that it consolidates all data into one place, so you don’t have to fragment your data. Number two, because you’re not fragmenting data, it all becomes the source of truth. And number three, it opens up some really interesting scenarios. If you combine flagging with analytics, you can get impact analysis for every single feature. That’s something you can’t do if you have two different products.

    Nataraj: Can you explain flagging for those who might not be aware of feature flags?

    Vijaye Raji: Feature flags are ways for you to control where to launch, how to launch, and to whom. You can decouple code from features so that when you want to launch new code, you can send it to your app store and get it reviewed. Once it’s live, you can turn on features completely decoupled from code releases. It’s extremely powerful. Number one, it’s powerful to know when to launch your product. Number two, when something goes wrong in real-time, you can turn it off. There are lots of other reasons, like doing rollouts at scale. You don’t turn on a feature for 100% of users everywhere; you can do 2%, 4%, 8% just to know that all the metrics you care about are still sound.
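    To make the rollout mechanics concrete, here is a minimal sketch of deterministic percentage bucketing, the standard technique behind the 2%/4%/8% ramp Vijaye describes. The function name and hashing scheme are illustrative, not Statsig’s actual implementation:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically decide whether a user sees a feature.

    Hashing user_id together with the feature name gives every feature
    an independent, stable bucketing: the same user always gets the
    same answer, and ramping percent from 2 -> 4 -> 8 only adds users.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000   # 0..9999, i.e. 0.01% granularity
    return bucket < percent * 100

# Ramping: everyone enabled at 2% stays enabled at 8%.
users = [str(u) for u in range(1_000)]
at_2 = {u for u in users if in_rollout(u, "new_checkout", 2)}
at_8 = {u for u in users if in_rollout(u, "new_checkout", 8)}
assert at_2 <= at_8
```

    Because the decision is a pure function of the user and feature IDs, the check needs no coordination between servers, and killing a feature in real time is just setting its percentage back to zero in config.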

    Nataraj: Are there any specific trends across big and small companies on how they’re approaching experimentation or product validation?

    Vijaye Raji: The trend that we are betting on is that more product development is going to be data-driven. That’s the reason why we’re here, and what we’re doing in the industry is accelerating that trend. The education we do for prospects and the industry is basically catalyzing and accelerating this. Product development used to be this siloed thing where a product manager comes up with an idea, engineers code it, testers test it, you ship it, and then you wait for two years for another release cycle. Now it’s a very iterative process, and people ship weekly, daily, and sometimes even hourly. To get to that level of speed, you need controls in place. To allow people to make distributed decisions, you need the ability to know how each line of code you wrote is performing. These tools are getting more adoption day by day, and people moving from the traditional way of development to this modern way all need it. Our bet is that AI is going to accelerate that movement because you’re going to have lots of rich software built from blocks that you’ve just assembled. You need to know if your hypothesis or your original idea actually turned out to be the product. You need these observability tools built into your product to be able to know. Generally, this trend is moving towards data-driven development.

    Nataraj: What is the right time for a company to adopt Statsig? What is the ideal customer profile for you?

    Vijaye Raji: You should start on day one. Every feature that you’re building… I remember the early days of us building software. The first thing we launched was our website, and I’m refreshing that page all the time. Whenever somebody visited the website, I was looking at them, seeing the session replay. I was literally spending all my time figuring out how people were using the product. That is an important step in the journey of your company. So start on day one. Integrate with feature flags, product analytics, session replay, and all of that stuff that will give you insights into how users are using your product. Eventually, you’ll get to the place where you want to run experiments. You don’t have to do experiments on day one. When you get there, you have the same tool with all the same historic data now capable of running experiments. There’s no early time; you just start right away.

    Nataraj: I’ve used different experimentation tools, and one of the negative side effects I see is this idea that we’ll just A/B test everything. It can lead to a little bit of incrementalism and experimentation fatigue. Do you have a take on when to use experimentation versus when to use your gut and your product sense?

    Vijaye Raji: You remember the famously misattributed quote from Henry Ford that said if I had asked people what they needed, they would have said faster horses, not a car. Experimentation is not a replacement for product intuition. You’re not going to get rid of product intuition. To make those leaps from a faster horse to a car, you cannot experiment your way there. You need people with lots of good intuition and drive to get to that kind of leap. But then once you had your first version of the car, from the Model T to where we are now, there have been thousands, if not millions, of improvements made. Those are all things you can experiment with to find better versions of what you currently have. My philosophy is that product intuition and experimentation go hand in hand. Some of these things, you have product intuition, you have conviction, you go do it. But when you’re about to launch it, just make sure that there are no side effects—things that you have not thought about. Products have gotten so complex nowadays, I don’t think anybody out there can meaningfully understand ChatGPT in its entirety. When you’re in that kind of situation, it’s only going to get harder for any one person to fully grok your product. In those cases, observability, instrumentation, and analytics are extremely valuable. Then you have experiments. They don’t have to be just testing two different variants. It could literally be, ‘I believe in this feature. I’m going to launch this feature.’ That is a hypothesis. Validate it. Put it out there and measure how much it’s actually improved in all of those dimensions that you thought about.
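    The ‘launch it as a hypothesis, then validate it’ step Vijaye describes usually bottoms out in comparing two rates between a control and a launched group. A minimal sketch with a standard two-proportion z-test, using made-up numbers (generic statistics, not Statsig’s internal methodology):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # rate under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical launch: 10,000 users per arm, 5.0% vs 5.8% conversion.
z = two_proportion_z(500, 10_000, 580, 10_000)
print(round(z, 2))  # 2.5 -> above 1.96, significant at the 5% level
```

    In practice a platform runs this kind of check across every metric you track, which is how the ‘side effects I have not thought about’ get caught at launch time.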

    Nataraj: Let’s talk a little bit about growth. You started in 2021. Can you give a sense of the scale of the company right now?

    Vijaye Raji: We have about a couple thousand users, most of them come self-serve. They pick up our product, we have all kinds of open-source SDKs. We have a few hundred enterprise customers that are using our product at scale. And we have massive scale in terms of data. If you think about the few hundred enterprise customers, they all have big data. We process about four petabytes of data every day, which is mind-boggling. Last September, we announced we were processing about a trillion events every single day. Now we’re processing about 1.6 trillion events every single day. To put that in perspective, that’s about 18 million events every second. That’s what our system is processing. That’s been our growth in terms of customers, scale, and infrastructure.
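    The two throughput figures quoted here are consistent with each other; converting the daily event count to a per-second rate:

```python
# 1.6 trillion events per day, as quoted, expressed per second.
events_per_day = 1.6e12
seconds_per_day = 24 * 60 * 60                # 86,400
events_per_second = events_per_day / seconds_per_day
print(f"{events_per_second / 1e6:.1f} million events/sec")  # 18.5
```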

    Nataraj: How are you positioning Statsig? Is it primarily a developer tool, and you’re using that to drive enterprise growth?

    Vijaye Raji: We position ourselves as a product development platform. It caters to engineers, product managers, marketers, and data scientists. There are parts of the product for each individual. If someone comes to us to solve an experimentation problem, it’s usually a data scientist team. But once we’re in that company, our product can grow organically. We don’t charge for seats. The engineering team will adopt Statsig for feature flagging, and the product team will adopt it for analytics and visualization. This organic growth happens within a company, and this is how we have grown even within our existing customer base.

    Nataraj: Where do you spend most of your marketing efforts for the highest ROI?

    Vijaye Raji: There are different outcomes for our marketing efforts. Some are direct response, where we feed people into our website for self-serve sign-ups or talking to our sales team. We track that, and it’s a very seasonal thing. Then there are aspects of marketing that are more brand-related. We want to be out there and build brand awareness. We invest in things like that. One of the fun things we did last year was a billboard in San Francisco. That was really cool because we got a lot of brand awareness from that. We also partner with people like you and some podcasts that we work with who have great reach and brand awareness.

    Nataraj: In terms of culture, you mentioned you’re a completely in-person company. Why does that matter?

    Vijaye Raji: Our product teams are all in-person. Our sales team is spread out in the US, with some in the Bay Area, some in New York, and a couple of people in London. But everyone else—engineering, product, data science, design, marketing—we’re all in-person in Bellevue. It started out with eight of us on day one. There were so many decisions we had to make, all this whiteboarding. It would be very hard to do all of that over Zoom. We naturally gravitated towards doing this in person, and I saw how fast we were able to move by not having to document all the decisions. The clock speed was extremely high, and I was reluctant to ever lose that. It’s a trade-off. We’ve had so many really good people we had to say no to because they were not willing to do this in person. That’s a painful trade-off. But four years in, we’re still in person. We’re about 100 people showing up five days a week, and it’s a self-selection mechanism. There are a lot of positives. We’ve built a really good community, and I like it and want to keep going as long as I can.

    Nataraj: You were in Seattle in 2021 when it was hard to hire talent. How were you figuring out how to hire great people?

    Vijaye Raji: That’s a very good question. You want to hire great people and retain great people. After managing large orgs, the realization I came to is that it’s not the intensity that matters, but what proportion of the time you spend doing things you don’t enjoy. If you’re doing intense work but you love what you’re doing, you have autonomy, and no overhead or friction, people love that. They come in excited, leave exhausted, but they are gratified. As long as you can provide that environment, the intensity can be high, and you don’t have to worry about burnout. To me, it’s always been about how I can remove friction, overhead, and process. These are creative people; I want them to be in their element. Can I provide the best working environment for them? In a startup, if you’re doing a 10-to-5 job, it’s not going to work. People that come into Statsig are already self-selecting into our culture. We’re not doing anything crazy like six days a week, but we are a hardworking group of people, and I like to keep the talent density extremely high because it affects the people that are already here.

    Nataraj: Let’s talk a little bit about AI. How are AI companies using Statsig?

    Vijaye Raji: Absolutely. If you’re a consumer of LLMs, we have an SDK that you can use to consume these APIs, where we will automatically track your latency, your token length, and how your product is doing. We tie it all back to the combination of the prompt, the model, and the input parameters you used, quantifying the lift or regression. We also have prompt experiments in the product. There will be a lot of people building different kinds of prompts and wanting to validate how each one is performing. We have a very specialized experimentation product just for prompt experiments. The rest of it is just a very powerful platform you can use for anything. If you’re OpenAI running experiments on ChatGPT, that’s going through Statsig. Or if you’re Notion, a consumer of AI and LLMs, you pass it through Statsig. Statsig powers you to determine which models work, which combination of all the parameters yields better results. Then there’s how Statsig uses AI to make our customers’ experience better. There’s a lot we’re doing there that I’ll be announcing in the next few months.

    Nataraj: In terms of Statsig, do you have a favorite failure or a deal that fell through that changed things?

    Vijaye Raji: A lot. But one thing I want to call out is the humbling experience when you realize you will never be the first one to come up with the best ideas. Part of it is learning to acknowledge that’s a good idea, give them credit, and then quickly follow on. If it is a great idea and you believe it, without any ego, just go and build it. If you can build it better than anybody else, then you continue to live on for a couple more days. We famously didn’t take data warehouses seriously in the first year or so. We built everything on the cloud without really taking into consideration warehouses like Snowflake or Databricks. Then we started seeing customers come in with things like, ‘Hey, I have Snowflake. Could you operate on that?’ And we would be like, ‘Well, you can ingest data from there, but I can’t operate on top of it.’ You start to believe in your current products. Then you realize they start to leave, saying Statsig is not the right product for us. After a couple of those humbling moments, you realize your position is not right. So we spent the next three to four months building a warehouse-native product, and then we came back to the industry and started selling our product. That was a very good failure, realization, and then action from the realization.

    Nataraj: What are you consuming right now? It can be anything—books, movies, or TV series.

    Vijaye Raji: I’m a big Audible guy, so I listen to books on my way to work and back. Right now, it’s Brian Greene’s ‘The Elegant Universe.’ I think this is the second time I’m reading it. I just wanted to listen to it all over again, and every time I feel like I catch on to something new from that book.

    Nataraj: What do you know about founding a startup that you wish you knew when you started?

    Vijaye Raji: Thousands of things. I wish I had spent more time with my sales and marketing friends back at Facebook. They’re all good friends, and I’m still in touch with all of them. We used to sit in meetings every week, but I never once thought to drill down deeper. How is your team structured, how are they incentivized, what kind of commissions do they get, how do you think about the different types of marketing? I wish I had learned all of that stuff so I could have saved a lot of failures in the early days.

    Nataraj: For you, I have a special question: what is a big company perk that you miss?

    Vijaye Raji: The recruiting team.

    Nataraj: What is it that you don’t miss?

    Vijaye Raji: A lot of things. In a big company, you’re sitting in review after review after review. Those are not just product reviews; you’re looking at privacy reviews and security reviews, things that are important but end up being overhead. At startups, you can move extremely fast by bypassing a lot of that, or even if you have to take care of them, they are much smaller deals.


    Conclusion

    Vijaye Raji’s journey from scaling products at tech giants to building Statsig from the ground up offers a masterclass in modern product development. His insights underscore the power of combining deep product intuition with rigorous, data-driven validation to build products that customers love. For any founder or product leader, this conversation is a valuable guide to navigating the complexities of hiring, scaling, and maintaining velocity.

    → If you enjoyed this conversation with Vijaye Raji, listen to the full episode here on Spotify, Apple, or YouTube.

    → Subscribe to our newsletter and never miss an update.

  • Molham Aref on Building RelationalAI: An AI Coprocessor for Snowflake

    In this episode of The Startup Project, host Nataraj sits down with Molham Aref, CEO of RelationalAI. With over three decades of experience in enterprise AI and machine learning, Molham shares his journey from pioneering neural networks in the ’90s to founding his latest venture. RelationalAI is tackling a fundamental challenge for modern enterprises: the sheer complexity of building intelligent applications. By creating an AI coprocessor for data clouds like Snowflake, RelationalAI unifies disparate analytics stacks—from predictive and prescriptive analytics to rule-based reasoning and graph analytics—into a single, cohesive platform. Molham discusses the evolution of the modern data stack, the practical applications of GenAI in the enterprise, and offers hard-won advice on founder-led sales for B2B startups. This conversation is a masterclass in building a deep-tech company that solves real-world business problems.

    → Enjoy this conversation with Molham Aref, on Spotify, Apple, or YouTube.
    → Subscribe to our newsletter and never miss an update.

    Nataraj: My guest today is Molham, the CEO of Relational AI. He was the CEO of Logicbox and PredictX before this. Relational AI recently raised a Series B of $75 million from Tiger Global, Madrona, and Menlo Ventures. They’re widely known for solving use cases like rule-based reasoning and predictive analysis for large enterprise customers. So this episode will try to focus on what the modern data stack looks like, what applications can be built, and a lot more interesting things around those topics. With that, Molham, welcome to the show.

    Molham Aref: Thanks, Nataraj. Pleasure to be here. Looking forward to the discussion.

    Nataraj: I think a good place to start would be to talk a little bit about your career and all the things that you’ve done before this. How did you end up with RelationalAI?

    Molham Aref: Great. I have been doing machine learning and AI-type stuff in the enterprise under various labels since the early 90s, so it’s over 30 years now. I started out working on computer vision problems at AT&T as a young engineer coming out of Georgia Tech and worked there for a couple of years. Then I joined a company that was commercializing neural networks, a company called HNC Software, and worked on some of the early neural network systems, particularly in the area of credit card fraud detection. I focused on retail, supply chain, and demand prediction. I was very fortunate to have a wonderful experience there learning about the technology and all the things you have to do when you put together what today we would call an intelligent application. We had a very nice IPO in 1995. We bought a small company called Retek that was working specifically in the retail industry and learned a lot from that experience. We grew Retek quite substantially and spun it out in another IPO in 1999. You start thinking, this is easy. Anyone can do this.

    So in the early 2000s, I started getting involved in startups myself. One was also trying to apply computer vision technology to a brick-and-mortar environment, a company called Brickstream. Unfortunately, Brickstream was a good idea too early. I learned a little bit about how too early is indistinguishable from wrong in the startup game.

    Nataraj: Was it similar to what Amazon Go stores was like? What were you trying to do?

    Molham Aref: Not nearly that sophisticated, but yes, the idea is that you can put stereo cameras in the ceiling of a retail environment—a retail bank, a retail store—and start collecting information about what consumers are doing, where they’re dwelling, what products they look at, within certain error tolerances, and then connecting the whole customer journey because you would get handed off from camera to camera as you walk through the store. Anonymously, we didn’t know who people were; we were just looking at the top of their heads. You build a picture of what the brick-and-mortar experience is like. At the time, this was before deep learning, and everything was handcrafted computer vision algorithms. It worked, but it was expensive. A lot of the challenge was what do you do with that data? So we weren’t just solving a problem around computer vision; we had to justify the data. It was a good experience, but not a good commercial outcome.

    Then I helped start a company in the wireless network space where we built network simulation optimization systems for AT&T, Cingular, and América Móvil, a bunch of wireless operators. We helped them migrate from the 2G systems they were on to 2.5G and 3G systems and helped them manage infrastructure and spectrum and a whole bunch of other stuff that you couldn’t deploy without the benefit of very sophisticated intelligence.

    Nataraj: How did you go from machine learning and vision to wireless networks? How did that idea come up?

    Molham Aref: At the core, a lot of what we do in machine learning and AI is about modeling. We have deployed a handful of modeling techniques: simulation, machine learning, GenAI more recently, rule-based reasoning, mathematical optimization, things like integer programming, and graph analytics. The AI toolbox has half a dozen tools that you deploy in a variety of contexts, and it’s perfectly normal for the domain to vary. Whether you’re modeling a wireless network or a retail supply chain, there are entities that are involved. In a retail supply chain, you might have raw material in the form of fabric. In the wireless industry, you have raw material in the form of spectrum. In the retail industry, you might change that fabric through manufacturing to make it into a t-shirt. In the wireless industry, you take that raw spectrum and, using Ericsson and Nokia equipment, you convert it into a wireless minute or a wireless kilobyte. Then you manage supply and demand accordingly. So in both contexts, you have to predict how many customers are going to want this wireless minute or this t-shirt. You try not to make too much of your product because if you make too much, it’s perishable. It loses value.

    I’m not a wireless expert, but you learn enough about the core concepts in that domain. That company did reasonably well and was acquired by Ericsson. After that, I went back into retail and helped build one of the first companies that built retail solutions on the cloud. We went all in on the cloud in 2008-2009, when that wasn’t an obvious choice, and leveraged the cloud to do a new class of machine learning. My whole career was spent working at companies focused on building one or two intelligent applications, and in every situation, it was a mess. You had to combine different technology stacks: the operational stack, with the BI stack, with the planning stack, the prescriptive analytics stack, with the predictive analytics stack. You end up spending a lot of time and energy just gluing it all together. That’s fundamentally the reason building intelligent applications is hard. I thought it would be cool to build a generic technology on a popular platform to make it so that you don’t need so many components and so much duct tape.

    Nataraj: It would help for the listeners if you can talk a little bit more about what predictive analytics, prescriptive analytics, or rule-based reasoning mean.

    Molham Aref: Broadly speaking, you have descriptive analytics. It answers the question of what happened in my business. Business intelligence is a form of that. You’re looking backward and saying, what were the sales of flat-screen televisions in Boston last year? You can aggregate the data by region, by different time granularity, by different types of products. If you just have descriptive analytics, it’s up to the human to look at that and then project forward as to what you’ll sell in January in Boston or Philadelphia. There’s a ton of data to look at, and you can improve that with a variety of modeling techniques, everything from time series forecasting to today’s graph neural networks to help you understand what drives demand. If you can now predict what the demand is going to be in January or February next year under various promotional and pricing scenarios, you can now leverage prescriptive analytic technology. Descriptive, predictive. Prescriptive is, I know the relationship between, say, price and demand. What should I set the price to in order to maximize revenue, profit, or market share? The technology you use for each of these tasks is different. GenAI, of course, is a very powerful new technique that we have in our toolbox. But at its core, it’s predictive because it’s trying to predict the next word in a sentence.
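    Molham’s descriptive → predictive → prescriptive progression can be made concrete with a toy example. This is an illustrative sketch only (the data, the linear demand model, and all numbers are made up for the example, not taken from the episode): descriptive analytics is the raw history, predictive analytics fits a demand model to it, and prescriptive analytics uses that model to choose a price.

    ```python
    # Toy illustration of the three analytics layers on a made-up
    # linear price-demand relationship (all numbers are invented).

    # Descriptive: what happened — last year's observed (price, units_sold) pairs.
    history = [(10.0, 800), (12.0, 700), (14.0, 600)]

    # Predictive: fit demand = a - b * price from the history
    # (two points determine the line exactly here).
    (p1, d1), (p2, d2) = history[0], history[-1]
    b = (d1 - d2) / (p2 - p1)   # units of demand lost per dollar of price increase
    a = d1 + b * p1             # intercept: modeled demand at price zero

    def predict_demand(price):
        return a - b * price

    # Prescriptive: choose the price that maximizes revenue = price * demand.
    # For linear demand this is maximized at price = a / (2 * b).
    best_price = a / (2 * b)
    print(best_price, predict_demand(best_price))
    ```

    The point of the sketch is Molham’s observation that each layer needs different machinery: the history is just storage and aggregation, the fit is a modeling problem, and the final step is an optimization problem.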

    Nataraj: You figured out it’s very hard to deploy these solutions and you started RelationalAI in 2017, before GenAI. What were the primary use cases and types of customers you were targeting at that point, and how did it evolve?

    Molham Aref: My area of expertise and our team’s area of expertise is in the enterprise, deploying all these different techniques, including rule-based reasoning, which is symbolic reasoning. The idea was to take all of these techniques because for any hard problem in the enterprise, you can deploy all of them to solve it. We help build applications today that have elements of GenAI, graph neural networks, integer programming, graph analytics, and rule-based reasoning. I strongly believe that the combination is what wins. My view is AI and machine learning, in particular, drive us towards data-centricity because the datasets involved are big. The old architectures that use the database in a dumb way and then pull data out to put in a Java program stop working when you have to pull a terabyte of data out. We wanted to move all the semantics, all the business logic, all of the model of your business as close to the data as possible. We built a system that we take to market as an extension to data clouds like Snowflake. We call it a co-processor. We plug in inside Snowflake and build a relational knowledge graph that facilitates queries that do graph analytics, rule-based reasoning, predictive analytics, and prescriptive analytics. It’s all in one place: your data, your SQL, your Python, and all of these capabilities. We see a 10 to 20x reduction in complexity and code size.
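    The “knowledge graph as a set of views on the underlying tables” idea is what removes the data-synchronization problem Molham mentions. Here is a minimal, hypothetical sketch of that idea in plain Python (not RelationalAI’s actual API or Snowflake code): the graph edge relation is computed from the base table on every read instead of being stored as a second copy.

    ```python
    # Toy "views, not copies" illustration. The base table stands in for a
    # Snowflake table; the edge relation is derived on demand, never stored.

    employees = [
        {"id": 1, "name": "Ana",  "manager_id": None},
        {"id": 2, "name": "Bo",   "manager_id": 1},
        {"id": 3, "name": "Cruz", "manager_id": 1},
    ]

    def reports_to():
        # A graph-edge "view": recomputed from the base table each time,
        # so it can never drift out of sync with the data.
        return [(e["id"], e["manager_id"]) for e in employees if e["manager_id"]]

    print(reports_to())   # [(2, 1), (3, 1)]

    # Update the base table; the view reflects it with no sync step.
    employees.append({"id": 4, "name": "Dee", "manager_id": 2})
    print(reports_to())   # [(2, 1), (3, 1), (4, 2)]
    ```

    A materialized copy of the edge list would have needed an explicit refresh after the append; a view does not, which is the “same data, same governance, no synchronization” property described above.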

    Nataraj: What forced you to build on top of, or as you call it, a co-processor on Snowflake? There are other platforms like Databricks. What pushed the edge in Snowflake’s direction?

    Molham Aref: This is a very important decision. In the 90s, I was at a company that picked Oracle and Unix as a platform. We were competing against companies that picked Informix or some other relational database. If you don’t get the platform decision right, you can jeopardize your go-to-market motion. From my perspective, we talked to a lot of enterprises, and what we see in practice in the Fortune 500 and the Global 2000 is for SQL and data management, Snowflake is by far the leader. We see them first. We see BigQuery a distant second. Databricks, until recently, didn’t have a SQL offering. They’re everywhere with Spark, but we still don’t see them that much for SQL. For us, it was a really obvious choice to build on Snowflake because they’ve got the traction. Now, there’s a lot of competition, and we’ll see how it all evolves, but my bet is still on Snowflake.

    Nataraj: Can you tell us a little bit about your journey of finding your first five customers and what it took to get them?

    Molham Aref: It gets progressively easier as you work in the B2B space more. When I was earlier in my career, I didn’t source the customer at all. At HNC, we were selling neural networks. You go to a bank and say, ‘Buy my neural networks.’ The bank goes, ‘What’s a neural network and why would I buy it?’ At some point, they realized that wasn’t effective. It was better to go to a bank and tell them we’re solving a problem they have in their language. If you say, ‘You’re losing $500 million a year in credit card fraud, and if you use our system, you’ll only be losing $200 million,’ any banker’s going to understand that. Then it just becomes a matter of proving you can create those savings. I learned the importance of learning the language of the industry I’m selling into. The folks that are most effective in sales are not the slick talkers; it’s the people who can bring content to a conversation so the prospect doesn’t feel they’re wasting their time. Fortunately, after being at it for a while, you get a multiplicative effect. We had some early customers at Relational AI that used to be customers of mine 15 or 20 years ago at a different startup. Because of the good work we did for them then, there was a level of trust.

    Nataraj: Which field did you pick initially and what was the value that you were offering them?

    Molham Aref: Starting from graduating college, I liked computer vision and stumbled into an internship at AT&T. Then when I went to HNC, it was because I was interested in neural networks, not particularly in industries. The group I was attached to was selling into retail, manufacturing, and CPG. So you start learning about forecasting problems, supply chain, and merchandising. You learn the language of the industry that way. You have to do the hard work of seeing it from the eyes of your customer.

    Nataraj: Right now, what type of customers are you mainly catering to? Is there a sweet spot?

    Molham Aref: RelationalAI is more of a horizontal infrastructure play. We are a co-processor to Snowflake. Instead of going to the line of business executive, we’re targeting the Snowflake customer. There’s usually a CTO, CIO, or someone senior who understands infrastructure and data management. To that person, we seem like magic. They spent the last two or three years moving all their data into Snowflake. The last thing they want to do is take that data and pull it back out from Snowflake to put it in a point solution for graphs, rules, predictive, or prescriptive. We come along and say, keep it all there. We plug into that environment. We run inside the security perimeter, same governance. You don’t have to worry about data synchronization because our knowledge graph is just a set of views on the underlying Snowflake tables. We’re relational, which is the same paradigm as Snowflake. So you get something that feels like magic.

    Nataraj: When you’re building on top of Snowflake, how do you think about competing or getting cannibalized by Snowflake itself?

    Molham Aref: It’s not a new phenomenon. It used to be hardware was the platform. Then operating systems came along. Then Oracle came along. There’s always this tension between the platform and the thing running on it. If the thing running on the platform starts to generate a lot of value, the platform provider can try to make it a feature. You see this all the time. You have to be really good and have a substantial enough solution where it’s either very difficult or very expensive to copy. Look at vector databases. Very hot for about six months, but now it’s a feature in everything. With us, our technology has deep moats. We have a new class of join algorithms, new classes of query optimization, new relational ways of expressing certain things. It creates deep enough moats where I think everyone will have an easier time working with us than trying to compete with us, at least the platform providers.

    Nataraj: Can you talk a little bit more about this concept of the modern data stack and where RelationalAI fits into it?

    Molham Aref: The modern data stack is a term that came into existence about 10 years ago. It was about the unbundling of data management. There are two ways to make money in our industry: bundling and unbundling. The modern data stack was basically a term used to describe an unbundling of data management so that you could pick different things. You can pick your cloud vendor, your data management platform, your semantic technology, your BI technology. They weren’t coupled together. From that, you had certain things emerge, like Snowflake, DBT, and Looker. It continues now with Open Table Formats and Open Catalogs. I think the next big fight is going to be around semantics and business logic.

    Nataraj: What do you mean by business logic and semantics?

    Molham Aref: It’s like DBT makes it possible to express semantics in SQL in a way that you can then pick whatever SQL technology you want to run it on. With business logic, you’re kind of tied into certain stacks. A lot of the business logic people write is in procedural programming languages that are not open. If you can capture the semantics of your business logic in a generic, declarative way, then you can map that to whatever is popular that day. A lot of energy is spent managing accidental concerns, not fundamental concerns. If you had your semantics defined in a way where they were not platform dependent, then whatever replaces cloud computing, you would just target that. You separate the business logic from the underlying tech.

    Nataraj: As someone who saw machine learning and AI evolve, how do you see the current GenAI hype cycle? What are you excited about?

    Molham Aref: GenAI is super exciting. For the first time, we have models that can be trained in general and then have general applicability. Up until GenAI, you built models for specific problems. Now you have models that just learn about the world. I think we are a little bit past the peak of the hype cycle. In the enterprise, what people are finding out is having a model trained about the world doesn’t mean that it knows about your business. What I see happening now is a bit more sobriety and the realization that to have GenAI impact, I need to be able to describe my business to the GenAI model. It doesn’t come with an understanding of my business a priori. We’re starting to see a lot of appreciation for combining that kind of technology with more traditional AI technology like ontologies and symbolic definitions of an enterprise.

    Nataraj: Are you leveraging GenAI for your own company? If so, in what ways?

    Molham Aref: We don’t build models; we’re not an Anthropic or an OpenAI. Our core competency is how you combine a language model with a knowledge graph to answer questions that can’t be answered otherwise. We’ve been doing work with some of our customers to show how much more powerful GenAI can be if it’s combined with a knowledge graph. Internally, all our developers have the option of using coding copilots. We are exploring some new companies that will make all our internal information searchable via a natural language interface. But we’re still a relatively small company.

    Nataraj: You emphasized how sales is perceived in B2B. Can you talk more about your framework for approaching B2B sales?

    Molham Aref: I think it’s a mistake for the founders of the company not to take direct responsibility for sales. You have to go out there and do the really hard work of customer engagement and embarrassing yourself to see what really works, what really resonates, and where the pain is. Trying to hire a salesperson to do that for you early on is a huge mistake. Once you’ve figured out what works, now you have the challenge of simplifying it and establishing proof points so it becomes easier for someone who is not as close to the problem or technology to come in and sell it. But even then, you want that person to be able to have a content-rich conversation with a buyer. People are worried about their jobs, their careers, their companies. They want to spend time with people who can really help them.

    Nataraj: Where do you see RelationalAI going next?

    Molham Aref: We just launched our public preview last June. It’s been amazing, all the inbound customer interest from the Snowflake ecosystem. We have a GA announcement coming out next week. There’s just so much alignment between us, our customers, and our partner Snowflake, that I think we will spend a lot of energy in the next two or three years building a very sizable business there. Beyond that, we will see. I do think intelligent applications represent a great opportunity because they’re still hard to build. If the world starts to appreciate the value of this data-centric, knowledge-graph-based way of building applications, I think we will enjoy serving the market as it figures this out.

    Nataraj: What do you consume that forces you to think better?

    Molham Aref: I really enjoy reading about history and listening to various historians characterize history at many different scales. There’s a lot to learn from history. It does repeat itself. There’s so much to learn in terms of human beings, our behavior, how we organize, how we get excited about pursuing certain ideas, and how ideas emerge and die. I think there are analogs of that in the enterprise because our job is to motivate a group of people around a mission to go do something great.

    Nataraj: Who are the mentors who helped you in your career?

    Molham Aref: Many people have been kind and generous. I’ll call out two people. One is Cam Lanier. He was just an amazing guy who passed away earlier this year. He was a great role model of someone who became very successful in business because if he shook hands with you on something, you could take that to the bank. He understood how integrity and trust really drive profit. I’m forever indebted to him for his mentorship. Another person is Bob Muglia. I met Bob after moving to Silicon Valley. He and I connected very much on what we do at RelationalAI. He’s studied the history of the relational movement and how it became dominant. Bob is just an amazing product person and an amazing human being.

    Nataraj: What do you know about being a founder or CEO that you wish you knew when you were starting?

    Molham Aref: It’s hard. It’s very difficult. This will probably be the last time I do this. I’ve been very fortunate to be part of some very successful ventures. I couldn’t not do this because I’m on this mission to make this kind of software development possible, and I work with some amazing people. But this stuff ages you. It’s really difficult, and you have to be ready for a lot of difficult times. Also, working well with people. A lot of the challenges you have are with people dynamics, creating an environment where you can have a diversity of expertise and skills and have people work together and appreciate each other. That’s super challenging.

    Nataraj: Well, that’s a good note to end the podcast on. Thanks, Molham, for coming on the show and sharing your amazing insights.

    Molham Aref: Thanks. Thanks for having me.

    This deep dive with Molham Aref highlights the shift towards data-centric architectures and the immense opportunity in simplifying complex enterprise workflows. His insights provide a clear roadmap for leveraging modern data platforms to build truly intelligent applications.

    → If you enjoyed this conversation with Molham Aref, listen to the full episode here on Spotify, Apple, or YouTube.
    → Subscribe to our newsletter and never miss an update.

  • Read.ai’s Growth to $50M: Founder David Shim on Building an AI Co-Pilot

    David Shim is no stranger to the startup world. A repeat founder, former Foursquare CEO, and active investor, he has a deep understanding of what it takes to build and scale a successful tech company. His latest venture, Read.ai, is on a mission to become the ultimate AI co-pilot for every professional, everywhere. Starting as an AI meeting summarizer, Read.ai has rapidly evolved, leveraging unique sentiment and engagement analysis to deliver smarter, more insightful meeting outcomes. The company’s product-led growth has been explosive, attracting over 25,000 new users daily without a dollar spent on media and recently securing a $50 million Series B funding round.

    In this conversation, David shares the origin story of Read.ai, detailing how a moment of distraction in a Zoom call sparked the idea. He explains their unique technological approach, which combines video, audio, and metadata to create a richer understanding of meetings than traditional transcription. David also dives into his philosophy on building for a horizontal market, the future of AI agents, and his journey as a founder and investor.

    → Enjoy this conversation with David Shim, on Spotify or Apple.

    → Subscribe to our newsletter and never miss an update.

    Nataraj: You’ve worked and created companies before and after COVID, and you mentioned a lot of your team is remote. Do you have a take on whether remote or hybrid work is better? What is your current sense of what is working best for you when running a company?

    David Shim: I’d say hybrid is the future; it’s what works. Where I would say hybrid doesn’t work as well is if you’re really early in your career. It is harder to build those relationships and get that level of mentorship on a fully remote basis. It’s not to say that it’s impossible, but it is a lot more difficult. When you’re in an office, you have that serendipity of meeting people and building connections. When you’re fully remote, especially early in your career, you don’t know who to ask beyond your manager and your immediate team.

    That said, once you’re more senior, I think it becomes easier to be fully remote. You know what to do, who to talk with, and you’re not afraid to break down walls. I think Read is the same way. We’ve got a third of our team fully remote, and then people come into the office Tuesdays, Wednesdays, and Thursdays. We let people come in on Mondays and Fridays, but they don’t have to. What’s really happening is people actually like that level of interaction, so they’re coming in without being required.

    Nataraj: Let’s get right into Read.ai. Great name, great domain name. Talk to me a little bit about how the company started. What was the original idea?

    David Shim: The original idea started when I was in a meeting. After I’d left Foursquare as CEO, I had a lot of time on my hands. It was still peak COVID, so no one was meeting in person. I was doing a lot of calls, either giving advice or considering investments. What I started to realize was within two or three minutes of a call, you know if you should be there or not. I’d think, ‘I should not have been invited to this meeting. Why am I here?’ But you can’t just leave. So, like most people, I’d surf the web or answer emails.

    One time, I noticed a color on my screen that matched someone else’s screen. I looked closer and saw a reflection in their glasses. It was the same image I could see on ESPN.com. That triggered an idea: can you use not just audio, but video to understand sentiment and engagement? Can I determine if someone is engaged on a call? It wasn’t about being ‘big brother,’ but about identifying wasted time. In a large company, you can invite 12 people to a meeting, and they’ll all accept, potentially wasting 12 hours if they didn’t need to be there. So, the idea started to form around using this data to optimize productivity.

    Nataraj: So were you analyzing video and text at that point, or just text?

    David Shim: Video and text. Transcription companies already existed, as well as platforms like Zoom and Microsoft that had transcription built-in. I didn’t want to build something that everybody else already had and just make it a little bit better. I wanted something that was a step-function change. So we said, let’s take the existing transcripts but apply sentiment and engagement to them. Think about it: David said this, but Nataraj responded this way. That narration piece is missing. Our AI can go in and say, ‘This is how the person reacted to the words.’ Now, an LLM not only has the quotes that were said but how individual people reacted. It could say, ‘The CEO was really skeptical based on his facial expressions and became disinterested 15 minutes into the call.’ You can’t pick that up from quotes, but you can from visual cues.
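    The “narration layer” David describes can be pictured as a preprocessing step: each quote is paired with a visually inferred reaction before anything reaches a language model. The sketch below is hypothetical (the data shapes, field names, and scores are invented for illustration, not Read.ai’s pipeline); it only shows how annotated text differs from a raw transcript.

    ```python
    # Sketch: annotate each quote with an inferred listener reaction so a
    # summarizer sees how people responded, not just what was said.

    transcript = [
        {"speaker": "David",   "text": "We should ship the beta next week."},
        {"speaker": "Nataraj", "text": "What about the open bugs?"},
    ]
    # Per-utterance sentiment/engagement, as if inferred from video (made up).
    reactions = [
        {"listener": "Nataraj", "sentiment": "skeptical", "engagement": 0.4},
        {"listener": "David",   "sentiment": "neutral",   "engagement": 0.9},
    ]

    def narrated(transcript, reactions):
        lines = []
        for turn, r in zip(transcript, reactions):
            lines.append(f'{turn["speaker"]}: "{turn["text"]}" '
                         f'[{r["listener"]} looked {r["sentiment"]}, '
                         f'engagement {r["engagement"]:.0%}]')
        return "\n".join(lines)

    # This narrated text, rather than the bare transcript, is the kind of
    # input that would be handed to an LLM with a summarization instruction.
    print(narrated(transcript, reactions))
    ```

    Feeding the annotated version changes what ends up at the top of a summary: a skeptical reaction to a shipping date is signal that a plain transcript simply does not contain.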

    Nataraj: What’s the main value that the customers got from Read.ai at that point?

    David Shim: In 2022, we launched the product with real-time analytics, showing scores for sentiment and engagement. People found it interesting and valuable. But what was missing was stickiness. People would say, ‘You’re telling me this call is going really bad, but what do I do?’ You’re not giving me advice. That’s when the larger language models came out at scale in late 2022. We tested them and wondered if our unique narration layer, when applied to the text of the conversation, would create a materially different summary. The answer was yes. Comparing a summary with our narration layer to one without, it was totally different. You want to put what everyone is paying attention to at the top of the summary, and getting that reaction layer really changed the quality of a meeting summary.

    We started 2023 as the number 20 meeting note-taker in the world. Now we’re number two, and we’re within shooting distance of number one. To go from number 20 to number two in less than 18 months highlights that there’s a difference in our approach, methodology, and the quality of the product.

    Nataraj: And how did you acquire your customers in these 18 months? Was it inbound, outbound? Did you target a certain segment?

    David Shim: Many VCs say to pick a specific niche and go vertical. My take was that this is such a big market. If this is a seminal moment where everyone’s going to require an AI assistant, it means everyone from an engineer at Google to a teacher to an auto mechanic will need it. So we went horizontal versus vertical. That has helped a lot from a product-led growth motion. We’re adding over 25,000 to 30,000 net new users every single day without spending a dollar on media. It’s pure word of mouth. If you see the product, people will use it, talk about it, and share it.

    Nataraj: Is that because if you’re on a meeting with someone, the other people see it being used and then they buy it? The product inherently has that viral aspect, right?

    David Shim: 100%. Meetings are natively multiplayer. The problem now is, ‘How do I get access to those reports?’ We’ve made it really simple for the owner to share it just by typing in an email address, pushing it to Jira, Confluence, or Notion. We’re not trying to be a platform where everyone has to consume the data. This is where ‘co-pilot everywhere’ comes into play. We want to push it wherever you work. You can see the data on a Confluence page or a Salesforce opportunity that has better notes than the seller ever created. At the bottom, it says, ‘Generated by Read,’ and you wonder, ‘What is this thing?’ That bottoms-up motion has driven a lot of our growth.

    Nataraj: I can almost see this becoming an ever-present co-pilot in a work setting that will change productivity for knowledge workers. Is that the vision where you’re going?

    David Shim: That’s exactly what we’re thinking from a ‘co-pilot everywhere’ perspective. When you think about the current state, it’s about pushing data to different platforms. But you also need to pull data down. For example, Read doesn’t treat your first meeting on a topic and your tenth meeting as silos; it aggregates them to give you a status report on how a topic is progressing. Three months ago, we introduced readouts that include emails and Slack messages. Now it’s truly a co-pilot everywhere, not just for your meeting. The adoption becomes incredible because you don’t have to log into Gmail, Salesforce, and Zoom separately. It’s just right there.

    Nataraj: You’re still thinking breadth-first, or are you now targeting what a Fortune 500 company wants versus an SMB?

    David Shim: We’re still horizontal, but we’re picking specific verticals based on customer demand, like sales, engineering, product management, and recruiting. That’s why we did integrations with Notion, Jira, Confluence, and Slack. Another way to look at ‘co-pilot everywhere’ is agents. Everyone’s talking about AI agents, but in reality, you want your Jira to talk with your Notion, to talk with your Microsoft, to talk with your Google. That’s the push and pull of data between integrations. I think that is going to be the next big space.

    Nataraj: What about the foundation models you’re using? Are you beholden to them on pricing?

    David Shim: We are not. Last month, 90% of our processing was our own proprietary models. We use large language models for the last mile—taking the interesting topics and reactions we found and putting them into a readable sentence or paragraph. But we’re building our own models that cluster groups of data together, identify the subject matter, and then we go to the LLMs and say, ‘Summarize this for us.’ 90% of our processing cost is our own internal models. We have five issued patents now with more pending. We’re not just a wrapper layer with good prompts; that’s not a defensible moat.
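    The division of labor David describes, in-house models doing the clustering and topic identification with an LLM only phrasing the result, can be sketched roughly as below. Everything here is a stand-in (the keyword-based “clustering” and the prompt format are invented for the example; Read.ai’s internal models are proprietary and not shown):

    ```python
    # Rough sketch: heavy lifting in-house, LLM only for the "last mile".

    def cluster_utterances(utterances):
        # Stand-in for a proprietary clustering model: a crude keyword
        # grouping so the example stays self-contained.
        clusters = {}
        for u in utterances:
            key = "pricing" if "price" in u.lower() else "roadmap"
            clusters.setdefault(key, []).append(u)
        return clusters

    def build_last_mile_prompt(topic, utterances):
        joined = " / ".join(utterances)
        return f"Summarize the meeting topic '{topic}' in one sentence: {joined}"

    utterances = [
        "Can we hold the price at $20?",
        "The roadmap slips a month.",
        "Price sensitivity looks high in EU.",
    ]
    for topic, group in cluster_utterances(utterances).items():
        # In production this prompt would go to an external LLM; the point
        # is that only this final phrasing step leaves the in-house pipeline.
        print(build_last_mile_prompt(topic, group))
    ```

    Structuring it this way is what keeps the bulk of the processing cost on internal models: the external LLM sees short, pre-digested inputs rather than raw meeting data.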

    Nataraj: What do you think about the whole trend of agents? Are we seeing any real use cases?

    David Shim: I think it’s early. It’s the same way with voice. Voice is interesting, but if you look at Alexa or Siri, they had massive early scale and then kind of dropped off. It’s an important play, but it’s a feature, not the whole product. With agents, it’s the same thing. It’s not that simple to say, ‘Go search for flights and find me the best one.’ You need to know what to ask for. What dates? Are you using miles? An agent in theory could do that, but you still need to upload the training data. I think the agents working in the background are more likely to succeed, where someone has trained data on how to handle specific use cases. But for the consumer, they’re not going to know what an agent is, just like most consumers don’t know what an API is.

    Nataraj: What is the vision for Read.ai for the next couple of years?

    David Shim: In the next one to two years, it’s diving further into ‘co-pilot everywhere.’ We’re adding more native integrations with both push and pull capabilities. Where we want to get to is optimization. I’ve got your emails, messages, and meetings. If you’re a seller, as you have each call, I can go into Salesforce and update the probability of a deal from 25% to 50%, then 75%. We can push a draft to the seller saying, ‘We think this opportunity should go to 75%,’ and include the quote from the meeting that justifies it. Now, what was the most hated function for a seller—updating Salesforce—becomes an automated process where they just swipe right or left. That’s the level of optimization people will ask for.
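    The Salesforce flow David sketches (propose a stage bump, attach the justifying quote, let the seller swipe to accept or reject) could look roughly like this. The stage ladder, field names, and bump rule are hypothetical, not Read.ai's implementation:

    ```python
    from dataclasses import dataclass

    @dataclass
    class DraftUpdate:
        """A suggested CRM change the seller approves or dismisses."""
        opportunity: str
        old_probability: int
        new_probability: int
        quote: str  # meeting evidence justifying the change

    STAGES = [25, 50, 75, 90]

    def propose_update(opportunity, current_prob, signal_quote):
        """Hypothetical rule: a strong buying signal bumps the deal
        one stage (25 -> 50 -> 75). A real system would score the call."""
        idx = min(STAGES.index(current_prob) + 1, len(STAGES) - 1)
        return DraftUpdate(opportunity, current_prob, STAGES[idx], signal_quote)

    def apply_if_approved(draft, approved):
        """Swipe right (True) applies the change; left keeps the old value."""
        return draft.new_probability if approved else draft.old_probability

    draft = propose_update("Acme renewal", 25, "We'd like a contract by Q3.")
    print(draft.new_probability)  # → 50
    ```

    The human stays in the loop: the agent drafts, the seller decides.
    
    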

    Nataraj: I want to slightly shift gears and talk about your investing. What’s your general thesis?

    David Shim: On the venture side, my thesis is if you believe in me enough to invest in my company, I should have the same belief to invest in your VC fund. If you’re a portfolio company, they’ll often give you access for a lower amount. I think every founder should take advantage of that. When I do angel investing, it’s one of two things. One, anyone I know that I’ve worked with before who asks me to invest, I’m more likely to say yes. It’s about giving back that same level of trust. The second is for more interesting opportunities that come up on my radar where I feel they have something novel that can deliver outsized returns.

    Nataraj: What do you know about being a founder that you wish you knew when you were starting?

    David Shim: At my first company, Placed, I was a solo founder. That is very expensive on your time, stress level, and relationships. You have nobody else to go to. I would say, don’t force it. If you can find co-founders that you trust and work with really well, do it. With Read.ai, my co-founders Elliot and Rob have been incredible. It distributes the work, stress, and knowledge. When you have three really smart people coming back together with different ideas, you can ideate better. So for any founders out there, if the opportunity exists, go with a co-founder versus solo.

    From an investor standpoint, outside of your own startup, don’t over-index on anything. Whatever is hot will stay hot for a little bit, but it will almost always drop off. Be careful about over-indexing. A lot of times, just put it in an index fund. The S&P 500 is up 25%—that’s better than most VC IRR on a yearly basis, and it’s liquid.

    This conversation offers a masterclass in building a modern AI company, highlighting the importance of a unique technological moat, a powerful product-led growth engine, and a clear vision for the future. David’s journey provides valuable lessons for any founder navigating the AI landscape.

    → If you enjoyed this conversation with David Shim, listen to the full episode here on Spotify or Apple.

    → Subscribe to our Newsletter and never miss an update.

  • Glean AI Founder Arvind Jain on the Future of Enterprise AI Agents

    Arvind Jain, CEO of Glean AI and co-founder of the multi-billion dollar company Rubrik, is a veteran of Silicon Valley’s most demanding engineering environments. After a decade as a distinguished engineer at Google, he experienced firsthand the productivity ceiling that fast-growing companies hit when internal knowledge becomes fragmented and inaccessible. This pain point led him to create Glean AI, initially conceived as a “Google for your workplace.” In this conversation with Nataraj, Arvind discusses Glean’s evolution from an advanced enterprise search tool into a sophisticated conversational AI assistant and agent platform. He dives into the technical challenges of building reliable AI for business, how companies are deploying AI agents across sales, legal, and engineering, and his vision for a future where proactive AI companions are embedded into our daily workflows. He also shares valuable lessons on company building and fostering an AI-first culture.

    👉 Subscribe to the podcast: startupproject.substack.com


    Nataraj: My wife’s company actually uses Glean, so I was playing around to prepare for this conversation. But for most people, if their company is not using it, they might not be aware of what Glean is and how it works. Can you give a pitch of what Glean does today and how it is helping enterprises?

    Arvind Jain: Most simply, think of Glean as ChatGPT, but inside your company. It’s a conversational AI assistant. Employees can go to Glean and ask any questions that they have, and Glean will answer those questions for them using all of their internal company context, data, and information, as well as all of the world’s knowledge.

    The only difference between ChatGPT and Glean is that while ChatGPT is great and knows everything about the world’s knowledge, it doesn’t know anything internally about your company—who the different people are, what the different projects are, who’s working on what. That context is not available in ChatGPT, and that’s the additional power that Glean has. That’s the core of what we do. We started out as a search company. Before these AI models got so good, we didn’t have the ability to take people’s questions and just produce the right answers back for them using all of that internal and external knowledge. In the past, I would call ourselves more like Google for your workplace, where you would ask questions, and we’ll surface the right information. But as the AI got better, we got the ability to actually go and read that knowledge and instead of pointing you to 10 different links for relevant content, we could just give you the answer right away. That’s the evolution of how we went from being a Google for your workplace to being a ChatGPT for your workplace. We’re also an AI agent platform. The same underlying platform that powers our ChatGPT-like experience is also available to our customers to build all kinds of AI agents across their different functions and departments and ensure that they’re delivering AI in a safe and secure way to their employees.

    Nataraj: You started in 2019 as an AI search company. Now, it feels very natural to build a ChatGPT-like product for enterprise because the value is instantaneous. But why did you pick the problem of solving enterprise AI search back then? It was not the hot thing or an obvious problem. What was your initial thesis?

    Arvind Jain: For me, it was obvious because I was suffering from that pain. Before Glean, I was one of the founders of Rubrik. We had great success and grew very fast; in four years, we had more than 1,500 people. As we grew, we ran into a productivity problem. There was one year where we had doubled our engineering team and tripled our sales force, but our metrics—how much code we were writing, how fast we were releasing software—were flatlining. We just couldn’t produce more, no matter how many people we had.

    One key reason was that the company grew so fast, and there was so much knowledge and information fragmented across many different systems. Our employees were complaining that they couldn’t find the information needed to do their jobs. They also didn’t know who to ask for help because there was no concept of who was working on what. When we saw this as the number one problem, I decided to solve it. My first instinct as a search engineer was to just go and buy a search product that could connect to all of our hundred different systems. That revealed to us that there was nothing to buy. There was no product on the market that would connect to all our SaaS applications and give people one place where they could simply ask their questions and get the right information. That was the origin. I felt nobody had tried to solve the search problem inside businesses, even though Google solved it in the consumer world. That got me excited. At that time, we were not thinking about building a ChatGPT-like experience; nobody knew how fast AI would evolve.

    Nataraj: I think pre-ChatGPT, almost no one called AI “AI”; it was called ML or some other technical term. I remember watching Google’s Pixel phone launches in 2020-2021, and they were doing a lot of work creating AI-first products very early on. But for some reason, the tragedy is that Google is seen as not doing enough with AI. That was a gap between narrative and experience.

    Arvind Jain: In 2021, we launched our company to the public and we called ourselves the Work AI Assistant. We didn’t call ourselves a search product because we could do more than search. We could answer questions and be proactive. But it was a big problem from a marketing perspective because nobody understood what an assistant was. Nobody had really seen ChatGPT. It was a big failure, and we rebranded ourselves as a search company. Then, of course, with ChatGPT launching, people realized how capable AI is and that it can really be a companion, which is when we came back to our original vision.

    Nataraj: One CEO I spoke with mentioned that when you pick a really hard problem to work on, a couple of things become easier. It’s easier to convince investors because the returns will be very high if you’re successful, and you can attract people who want to solve hard problems. What’s your take on picking a problem when starting a company?

    Arvind Jain: I agree with that assessment. It’s not that you’re just trying to pick something super hard to solve as the main criterion. The main criterion still has to be that you add value and build a useful product. I’m always attracted to working on problems that are very universal, where we can bring a product to everybody. I like it both because of the impact you’re going to make and because building a startup is a difficult journey. You have to have something that makes you go through that, and for me, that something is impact—solving a problem that builds a product useful to a very large number of people.

    Second, when you think about solving problems, you have to think about your strengths. If you are a technologist, it’s a gift if the problem you’re trying to solve is a difficult one because you’ll be able to build that technology with the best team, and you won’t get commoditized quickly. With Search, I knew how hard and difficult the problem is. That was definitely an exciting part of why I started Glean—I knew that if we solved the problem, it would be a super useful product and a technology that others wouldn’t be able to replicate quickly.

    Nataraj: One thing I often see with tools like ChatGPT or Glean AI in the enterprise context is that when you’re working on certain types of data, it’s not enough to be 90% accurate. If I’m reporting revenue numbers to my leadership, I want it to be 99.9% accurate. Can you talk a little bit about the techniques you are using to reduce hallucination?

    Arvind Jain: AI is progressing quite quickly. There’s a lot of work that the platforms we use, like OpenAI, Anthropic, and Google, are doing. The models today are significantly different from the models we had last year in terms of their ability to reason, think, and review their own work, giving you more confident, higher accuracy answers. There’s a general improvement at the model layer, which is reducing hallucinations significantly.

    Then, coming into the enterprise, none of these models know anything about your company. When you solve for specific business tasks, the typical workflow is that you have a model that is thinking and retrieving information from your different enterprise systems. It uses that as the source of truth to perform its work. It becomes very important for your AI product to ensure that for any given task, you are picking the right, up-to-date, high-quality information written by subject matter experts. Otherwise, you end up with garbage in, garbage out. That is what most people are struggling with right now. They build AI applications, dabble in retrieving information, and then complain to their customers that their data is bad. That’s not the right answer because AI should be smart enough to understand what information is old versus new. As a human, you have judgment. You look for recent information. If you can’t find it, you talk to an expert. AI has to work the same way, and that is what Glean does. We connect with all the different systems, understand knowledge at a deep level, identify what is high quality and fresh, and ensure that models are being provided the right input so they can produce the right output. Our entire company is focused on that.
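    The "fresh and authoritative beats stale" ranking Arvind describes can be shown in miniature: decay a document's relevance by age and boost content from subject-matter experts before it is handed to the model. The scoring formula, half-life, and field names are illustrative assumptions, not Glean's actual ranker:

    ```python
    from datetime import date

    def retrieval_score(doc, today, half_life_days=180):
        """Hypothetical ranking: halve a document's weight every
        half_life_days and boost expert-authored content, so the model
        is fed current, authoritative context instead of stale docs."""
        age_days = (today - doc["updated"]).days
        freshness = 0.5 ** (age_days / half_life_days)  # exponential decay
        authority = 1.5 if doc["author_is_expert"] else 1.0
        return doc["relevance"] * freshness * authority

    docs = [
        {"title": "2021 pricing FAQ", "relevance": 0.9,
         "updated": date(2021, 3, 1), "author_is_expert": False},
        {"title": "Current pricing policy", "relevance": 0.8,
         "updated": date(2025, 5, 1), "author_is_expert": True},
    ]
    today = date(2025, 6, 1)
    best = max(docs, key=lambda d: retrieval_score(d, today))
    print(best["title"])  # → Current pricing policy
    ```

    Note that the slightly less "relevant" document wins because it is recent and expert-written; that is the garbage-in, garbage-out problem being handled at retrieval time rather than blamed on the customer's data.
    
    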

    Nataraj: You mentioned an AI agent platform. What are the typical use cases for which enterprises are creating agents?

    Arvind Jain: I’ll pick some key ones across a few departments. For sales teams, much of their time is spent on prospecting and lead generation. You can build a really good AI agent that does that faster and with higher quality than a human in many cases. People have built an agent on Glean where a salesperson says, “I would like to prospect these five accounts today,” and Glean will do a good amount of research, identify the right contacts, and generate personalized outreach messages. Our salespeople then review the work of AI with a thumbs up or thumbs down, and the messages get sent out. They can now prospect at a rate five times greater than before. Similarly, after a customer call, an agent can generate the meeting follow-up with action items and supporting materials, a task that used to take hours.

    For customer service, the job is to answer customer questions and help with support tickets. AI is pretty good at that. People have built agents to auto-resolve tickets. For engineering teams, AI can be a really good code reviewer. The Glean AI code review agent is quite popular; it’s the first one to review any code an engineer uploads and can handle basics like following style guides. The use cases are exploding. Last year it was all about engineering and customer support, but now it’s all departments. Legal teams are using a redlining agent that automatically creates the first version of redlines on third-party papers like MSAs or NDAs. It’s a huge time and cost saver. The democratization is happening now.

    Nataraj: It feels like a better way to describe agents is as ‘workflow agents,’ similar to Zapier but with an intelligence layer. This can only work if you’re integrated well with different apps, and today every company uses hundreds of SaaS tools. Can you talk about that challenge?

    Arvind Jain: You’re spot on. Agents have to work on your enterprise data, use model intelligence to mimic human work, and take actions in your enterprise systems. There’s a strong dependence on your ability to both read information and take actions. The good news for Glean is that we’ve been working on that for the last six and a half years. We have hundreds of these integrations and thousands of actions we can support, which becomes the raw material for building these agents.

    It’s interesting how hard it is to get that to work because enterprise systems are very bespoke. One major challenge is security and governance. You can’t have an agent platform where agents just read any data from any system. You have to follow the governance architecture and rules inside the company, like permissions and access control. You have to not only build these integrations but also work upwards from that to handle agent security and ensure you deliver the right data to these agents, not stale or out-of-date information.
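    The governance constraint Arvind raises (agents must inherit the source system's permissions, never read around them) reduces to filtering every document against an access-control list before it reaches an agent. A minimal sketch, with a hypothetical ACL shape:

    ```python
    def permitted_docs(docs, user, acl):
        """Enforce source-system permissions before retrieval results
        reach an agent: an agent acting for a user sees only what the
        underlying system would show that user."""
        allowed = []
        for doc in docs:
            readers = acl.get(doc["id"], set())  # default: visible to no one
            if user in readers or "everyone" in readers:
                allowed.append(doc)
        return allowed

    acl = {"hr-comp-plan": {"hr-team"},
           "eng-onboarding": {"everyone"}}
    docs = [{"id": "hr-comp-plan", "text": "..."},
            {"id": "eng-onboarding", "text": "..."}]
    visible = permitted_docs(docs, "new-engineer", acl)
    print([d["id"] for d in visible])  # → ['eng-onboarding']
    ```

    Defaulting unlisted documents to "visible to no one" is the fail-closed choice an enterprise deployment would want.
    
    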

    Nataraj: We’ve seen a few form factors: the chat bar, then RAG on the engineering side, and now everyone is talking about agents. What is the next form factor or use case you see coming up?

    Arvind Jain: One big shift from the initial ChatGPT-like experience, which is very conversational and reactive, is that agents are becoming more proactive. You can build an agent that runs every day or when a certain trigger condition is met. The next big thing I see is AI becoming even more proactive and embedded in your day-to-day life. You won’t think of AI as a tool you go to; it will just come to you when it detects you need help.
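    The shift from reactive chat to proactive agents comes down to trigger conditions: each agent declares a predicate over events, and the platform fires whichever agents match. A sketch with invented event fields and agent names:

    ```python
    def due_agents(agents, event):
        """Return the agents whose trigger condition matches this event,
        instead of waiting for a user to ask for help."""
        return [a["name"] for a in agents if a["trigger"](event)]

    # Hypothetical agents: one fires on an unprepared commute, one daily.
    agents = [
        {"name": "meeting-briefer",
         "trigger": lambda e: e["type"] == "commute_started"
                              and e.get("unprepared_meetings", 0) > 0},
        {"name": "daily-digest",
         "trigger": lambda e: e["type"] == "day_started"},
    ]
    event = {"type": "commute_started", "unprepared_meetings": 2}
    print(due_agents(agents, event))  # → ['meeting-briefer']
    ```

    Arvind's commute example maps directly onto this: the commute event plus unread meeting prep is the trigger; the briefing is the agent's action.
    
    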

    Our vision for the future of work is that every person will have an incredible personal companion. A companion that knows everything about you and your work life: your role, your company, your OKRs, your career ambitions, your weekly tasks, your daily schedule. It’s walking with you, listening to every word you say and hear. With all that deep knowledge, it’s ready to help proactively. For example, imagine I’m commuting to work. My companion detects I’m unprepared for my meetings. It knows the commute is 38 minutes, so it can offer to brief me as I drive, summarizing the documents I need to read so I feel prepared for my day. That’s where we are headed. AI is going to become a lot more proactive.

    Nataraj: Does that mean Glean is going into cross-platform and cross-application to make us more productive? I can imagine a floating bubble on my mobile where I can just hit a button and narrate a task.

    Arvind Jain: Absolutely. We already have these different interfaces. Glean works on your devices—we have an iOS app and an Android app—and it gets embedded in other applications. If you’re building the world’s best assistant or companion for everybody at work, you have to travel with them. From a form factor perspective, you’re going to see more interesting devices, whether it’s a smartwatch or a smart pen. Our goal would be to make sure we’re running on them.

    Nataraj: I want to shift gears and talk about the business. You mentioned a marketing failure pre-ChatGPT, then a rebrand. Now that you’re a fast-growing company, with AI increasing productivity, does that mean you’re hiring less? If you had X salespeople at Rubrik, are you hiring fewer now for the same level of growth?

    Arvind Jain: First, a company is a group of people building something together. I firmly believe the scale of your business is proportional to the number of people you have. I don’t personally believe I can have a five-person company and generate a billion dollars. The productivity per employee is going to grow at a relatively linear pace. It’s just that to survive as a company, you have to do 10 times more work than you did before with the same number of people, because everyone is benefiting from AI.

    You have to be able to build products and experiences we couldn’t dream of before. You shouldn’t be thinking, “Can I have fewer people?” You have to think, “How do I achieve more with the number of people I can absorb?” You don’t have a choice. If you deliver the same kind of products as pre-AI, you won’t survive. We are growing very fast and investing in our people. We fundamentally believe the larger we are, the more we’ll be able to do. But at the same time, I’m a minimalist. I always try to ensure we are enabling every employee with the right tools and that they are fully capitalizing on AI to deliver way more than expected in the pre-AI world.

    Nataraj: What does it mean to be more AI-first? Do you do more AI education or align incentives?

    Arvind Jain: We started by just talking about the importance of AI in town halls. I don’t think we saw the results because people were too busy. Then we tried setting goals like “get 20% more productive,” which was a complete failure. Our third iteration was to just do one thing with AI. We don’t care about the ROI; just show that you’re trying to learn and get one meaningful thing done. That’s the top-down approach. From a bottom-up perspective, we allow people to bring in the right AI tools and we celebrate wins. We created a program called “Glean on Glean.” Every new hire, for their first month, ignores their hired role and instead plays with AI tools to build one workflow or agent. It’s been very effective, especially for new grads who don’t know the traditional way of working and are more well-versed with AI.

    Nataraj: What are one or two metrics you consistently watch that tell you whether you’re going in the right direction?

    Arvind Jain: For us, number one is customer satisfaction. We look at user engagement—how often our users use the product on a daily basis. That’s the most important metric. Number two, on the product side, we look at the type of things people are trying to do with it and if that set is expanding. For example, are more people becoming creators on Glean and building different sets of agents? From the business side, we look at standard metrics like retention rate and tracking our pipeline for demand. But as a CEO, probably the most important thing to watch is how our organization is feeling internally. What are the signs from the team? Are we ensuring we have mission alignment? Are people committed and motivated? Are we creating the right environment for them to grow and succeed? Those are the top-of-mind things for me.


    This conversation with Arvind Jain offers a clear look into how enterprise AI is moving beyond simple chat interfaces to create tangible value through sophisticated workflow agents. His insights provide a roadmap for how businesses can leverage AI to solve core productivity challenges.

    → If you enjoyed this conversation with Arvind Jain, listen to the full episode here on Spotify, Apple, or YouTube.
    → Subscribe to our Newsletter and never miss an update.

  • Decagon’s Ashwin Sreenivas: Building a $1.5B AI Support Giant

    At just under 30, co-founders Jesse Zhang and Ashwin Sreenivas have built Decagon into one of the fastest-growing AI companies, achieving a $1.5B valuation in just over a year out of stealth. Backed by industry giants like Accel and Andreessen Horowitz, Decagon is redefining enterprise-grade customer support with its advanced AI agents, earning a spot on Forbes’ prestigious AI 50 list for 2025. In this episode of the Startup Project, host Nataraj sits down with Ashwin to explore the secrets behind their explosive growth. They discuss how Decagon moved beyond the rigid, decision-tree-based chatbots of the past by creating AI agents that follow complex instructions, how they found product-market fit by tackling intricate enterprise workflows, and the company’s long-term vision to build AI concierges that transform customer interaction.

    👉 Subscribe to the podcast: startupproject.substack.com

    Nataraj: So let’s get right into it. What is Decagon AI? What does the product do, and talk a little bit about the technology behind Decagon.

    Ashwin Sreenivas: You can think of Decagon as an AI customer support agent. For our customers, Decagon talks directly to their customers and has great conversations with them over chat, phone calls, SMS, and email. Our goal is to build these AI concierges for these customers. This idea of AI for customer support isn’t necessarily new; you’ve had chatbots for 10 years now, probably. But the thing that’s really different this time is if you look at the chatbots from as late as three or four years ago, it wasn’t a great experience. The reason it wasn’t a great experience is because you had these decision trees that everybody had to build, and it was a pain to build them and a pain to maintain. From a customer perspective, if you have a question or a problem that is one degree off from the decision tree that was built out, it was completely useless. That’s when you have people saying, “agent, agent, agent.” The thing that’s changed, and a lot of the core of what we’ve built, is a way to train these AI agents like humans are trained. Humans have standard operating procedures that they follow, and our AI agents have agent operating procedures that they follow. We’re able to essentially build these AI agents that can have much more fluid, natural conversations like a human agent would.

    Nataraj: Talking a little bit about the products, you mentioned chat, phone calls, emails. Do you have products for everything? If a company is coming to adopt Decagon, are they first starting with chat and then expanding to everything else? How does the customer journey look?

    Ashwin Sreenivas: This is actually very driven by our customers. For a lot of the more tech-native brands, think like a Notion or a Figma, you would never think about picking up the phone and calling them. You’d want to chat or email. Whereas some of our other customers like Hertz, you don’t really email Hertz. If you need a car, you’re going to call them up on the phone. So a lot of our deployment model is guided by our customers and how their customers want to reach out to them. Typically, most customers start with the method by which most of their customers reach out, and then they expand to all the other ones. It’s very common to start with chat and then expand to email and voice, or start with voice and expand to chat and email.

    Nataraj: I want to double-click on the point you mentioned about the decision tree model. I think around 2015, during the Alexa peak, everyone was building chatbots. I remember the app ecosystem where you had to build apps on Alexa or Microsoft’s Cortana. Conversational bots were the hype for two or three years, but they quickly stagnated when we realized all we were doing was replicating the “press one for this, press two for this” system on a chat interface. You define a decision tree, and anything outside of that is basically an if-else chain that ends with a catch-all driving you to a human. There are obviously a lot of players in customer support with existing tools. Do they have a specific edge on creating something like what Decagon is doing because of their existing data?

    Ashwin Sreenivas: No, I actually think, interestingly enough, because these customer service bots went through a few generations of tech, the tech is different enough that you don’t get too much of an advantage starting with the old tech. In fact, you start with a lot of tech debt that you then have to undo. Let’s say 10 years ago, you had to start with explicit decision trees where you program every single line. Then about five years ago, you had the Alexas of the world. It was a little bit of an improvement, but essentially all it did was allow a user to express what they want. They could say, “I want to return my order,” and the models were good at detecting intent—classifying a natural language inquiry into one of 50 things it knew how to do. But beyond that, everything became decision trees. The thing is now with these new models, because you have so much flexibility and the ability for them to follow more complex instructions and multi-step workflows, you can actually rebuild this from the ground up. It’s not just classifying an intent and then following a decision tree; we want the whole thing to be much more interactive for a better user experience. We had to rebuild it to ask, how does a human being learn? You have standard operating procedures. You say, “Hey, if a customer asks to return their order, first check this database to see what tier of customer they are. If they’re a platinum customer, you have a more generous return policy. If they’re not, you have a stricter one. You need to check the fraud database.” You go through many of these steps and then work with the customer. The core of what we’ve done is build out AI agents that can follow instructions very well, like a human does.

    Nataraj: This whole concept of AOPs (Agent Operating Procedures) that you guys introduced is very fascinating. You mentioned SOPs, which humans read, and then you have AOPs, which is sort of a protocol for the agent. Who is converting the SOP into an AOP? How easy is it to create this agent? Are you giving a generic agent that adapts to a customer’s SOP, or do I as a customer have to build the agent?

    Ashwin Sreenivas: The core Decagon product is one agent that is very good at following instructions and AOPs. We built this for time to value. If you have to train an agent from scratch for every single customer, it’s going to take a lot of time for that customer to get onboarded. And two, it’s very difficult for that customer to iterate on their experiences. If you build one agent, like a human, that’s very good at following instructions, they can come to that customer and say, “Here are the instructions I need to follow,” and you can be up and running immediately. In terms of how these AOPs are created, most customers tend to have some set of SOPs already, and AOPs are actually extremely close to these. The only thing you need to change is you need to instruct it on how to use a company’s internal systems. It’s 99% English, and then there are a few keywords to tell it, “At this point, you need to call this API endpoint to load the user’s details,” or “At this point, you need to issue a refund using the Stripe endpoint.” That’s the primary difference from SOPs.
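    Ashwin's "99% English plus a few keywords" description of an AOP might look roughly like the sketch below: plain prose that a general agent follows, with markers naming the only internal endpoints it may invoke. The `CALL` marker, endpoint names, and parsing are assumptions for illustration, not Decagon's actual syntax:

    ```python
    import re

    AOP = """\
    When a customer asks to return an order:
    1. CALL get_customer_tier to look up their tier.
    2. If the tier is platinum, apply the generous return policy.
    3. Otherwise, CALL check_fraud before approving anything.
    """

    ACTIONS = {  # hypothetical bindings from keywords to internal systems
        "get_customer_tier": lambda: "platinum",
        "check_fraud": lambda: "clear",
    }

    def extract_actions(aop_text):
        """The AOP is mostly English; only the CALL markers name the
        internal endpoints the agent is allowed to invoke."""
        return re.findall(r"CALL (\w+)", aop_text)

    print(extract_actions(AOP))  # → ['get_customer_tier', 'check_fraud']
    ```

    Because the procedure is ordinary prose, converting an existing SOP into an AOP is mostly a matter of annotating where the human would have touched an internal system.
    
    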

    Nataraj: If you talk about the technology stack, are you using external models, or are you training your own models? What is the difference between a base model and what you’re delivering to a customer?

    Ashwin Sreenivas: We spend a lot of time thinking about models. We do use some external models, but we also train a lot of models in-house. The reason is, if you’re using external models, most of what you can do is through prompt tuning, and we found that models are only so steerable with just prompt tuning. We’ve spent a lot of time in-house taking open-source models and fine-tuning them, using RL on top of them, and using all of these techniques to steer them. To get these models to follow instructions well, you have to decompose the task. A customer comes in with a question, and I have all of these AOPs I could select from. The first decision is, is any of these AOPs relevant? If a user is continuing the conversation, are they on the same topic or should I switch to another AOP? At every step, there are a hundred micro-decisions to make. A lot of what we do is break down these micro-decisions and have models that are very, very good at each one.
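    Two of the micro-decisions Ashwin describes (is any AOP relevant to this message, and does a follow-up stay on topic or warrant a switch) can be sketched with keyword matching standing in for trained, fine-tuned models. Every name and rule here is illustrative:

    ```python
    def select_aop(message, aops):
        """First micro-decision: which procedure, if any, applies?
        Keyword overlap stands in for a trained classifier."""
        best, best_hits = None, 0
        for name, keywords in aops.items():
            hits = sum(k in message.lower() for k in keywords)
            if hits > best_hits:
                best, best_hits = name, hits
        return best  # None means "no relevant AOP"

    def should_switch(current_aop, message, aops):
        """Next micro-decision: does a follow-up leave the current topic?"""
        proposed = select_aop(message, aops)
        return proposed is not None and proposed != current_aop

    aops = {"returns": ["return", "refund"],
            "shipping": ["ship", "delivery", "tracking"]}
    first = select_aop("I want to return my order", aops)
    print(first, should_switch(first, "Also, where is my delivery?", aops))
    ```

    In production each of these checks would be its own specialized model; decomposing the turn this way is what lets each small model be very good at one narrow decision.
    
    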

    Nataraj: The industry narrative has been that only companies with very large capital can train models. Are you seeing that cost drop? When you mentioned you’re training open-source models, is that becoming more accessible?

    Ashwin Sreenivas: We’re not pre-training our models from scratch. We take open-source models and then do things on top of those. The thing that has changed dramatically is that the quality of the open-source models has gotten so good that this is now viable to do pretty quickly.

    Nataraj: Which models are better for your use case?

    Ashwin Sreenivas: We use a whole mix of models for different things because we found that different base models perform differently for different tasks. The Google Gemma models are great at very specific things. The Llama models are great at very specific things. The Qwen models are great at very specific things. Even for one customer service message that comes in, it’s not one message going to one model. It’s one message going to a whole sequence of models, each of which is good at doing different things to finally generate the final response.
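    The "one message through a whole sequence of models" idea can be sketched as a staged pipeline, where each stage reads the accumulated state and adds its own decision. The stage names and stand-in lambdas are illustrative, not Decagon's models:

    ```python
    def run_pipeline(message, stages):
        """Route one inbound message through a sequence of small
        specialized models, each contributing one decision to the state."""
        state = {"message": message}
        for name, model in stages:
            state[name] = model(state)
        return state

    # Hypothetical stand-ins for the fine-tuned model at each step.
    stages = [
        ("intent", lambda s: "refund" if "refund" in s["message"] else "other"),
        ("policy", lambda s: "check_tier" if s["intent"] == "refund" else "none"),
        ("reply",  lambda s: f"Intent={s['intent']}, next step={s['policy']}"),
    ]
    result = run_pipeline("I need a refund for order 123", stages)
    print(result["reply"])  # → Intent=refund, next step=check_tier
    ```

    Swapping a different base model into a single stage (a Gemma, Llama, or Qwen fine-tune, say) leaves the rest of the pipeline untouched, which is one practical reason to mix models per task.
    
    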

    Nataraj: It’s often debated that as bigger models like GPT-5 or Gemini improve, they will gain the specialized capabilities that smaller, fine-tuned models have. What is the reality you’re seeing?

    Ashwin Sreenivas: I would push back against that argument for two reasons. Number one, while the bigger models will all have the capabilities, the level of performance will change. If you have a well-defined task, you can have a model that’s 100 times smaller achieve a higher degree of performance if you just fine-tune it on that one task. I don’t want it to code in Python and write poems; I just want it to get really good at this one thing. When measured on that one task, it will probably outperform models a hundred times its size. Number two, which is equally important, is latency. A giant model might take five seconds to generate a response. A really small model cuts that time by a factor of 10. Over text, five seconds might not matter, but on a phone call, if it’s silent for five seconds, that’s a really bad experience. For that, you have to go toward the smaller models.

    Nataraj: Can you talk about why you and your co-founder picked customer service as a segment when you decided to start a company?

    Ashwin Sreenivas: When we started this company, it was around the time when GPT-3.5 Turbo and GPT-4 were out. We were looking at the capabilities and thought, wow, this is getting just about good enough that it can start doing things that people do. As we looked at the enterprise, we asked, where is there a lot of demand for repetitive, text-based tasks? Number one was customer support teams, and number two was operations teams. As we talked to operations leaders, the number one demand was in customer support. They told us, “Look, we’re growing so quickly, our customer support volume is scaling really quickly, which means we need to hire a lot more people, and we can’t afford to do that. We are desperate.” Initially, it looked like a very crowded space, but as we talked to customers, we found it was crowded for smaller companies with simple tasks, where 90% of their volume was, “I want to return my order.” But for more complex enterprises, there wasn’t anything built that could really follow their intricate support flows. That was the wedge we took—to build exclusively for companies with very complex workflows. The other thing that was interesting was our long-term thinking. If you build an agent that can instruction-follow very well, you enable businesses to eventually grow this from customer support into a customer concierge.

    What I mean by that is, let’s say you want to fly from San Francisco to New York. You go to your favorite airline’s website, type in your search, and it gives you 30 different flights to pick from. That’s a lot of annoying steps. A much better experience would be to text your airline and say, “I want to go to New York next weekend.” An AI agent on the other side knows who you are, your preferences, and your budget. It looks through everything and says, “Hey, here are two options, which one do you like?” This AI agent also knows where you like to sit and says, “By the way, I have a free upgrade available for you. Is that okay?” You say yes, and it says, “Booked.” The big difference is this is a much more seamless experience. Most websites today shift the burden of work onto the user. Now, it shifts to a world where you express your intent to an AI agent that then does the work for you. That was a really interesting shift for us. Building these customer support agents is the first step to building these broader customer concierges.

    Nataraj: How did you acquire your first five customers? What did that journey look like?

    Ashwin Sreenivas: Early customer acquisition is always very manual. There’s no silver bullet. It’s just a lot of finding everyone in your networks, getting introductions, and doing cold emailing and cold LinkedIn messaging. It’s brute force work. But the other thing for us is we never did free design pilots; we charged for our software from day one. This doesn’t mean we charged them on day one of the contract. We’d typically say, “There’ll be a four-week pilot, and we agree upfront: if you like it at the end of four weeks, this is what it’s going to cost.” We never had an open-ended, long-term period where we did things for free because, in the early days, the number one thing you’re trying to validate is, am I building something that people will pay money for? If it’s truly valuable, you should be able to tell your potential customer, “Hey, if I accomplish A, B, and C, will you pay me this much in four weeks?” If it’s a painful enough problem, they should say yes. This helped us weed through bad business models and bad initial ideas quickly.

    Nataraj: What business impact and success metrics do your customers look at when using Decagon?

    Ashwin Sreenivas: Customers think about value in two ways primarily. One is what percentage of conversations we are able to handle ourselves successfully—meaning the user is satisfied and we have actually solved their problem. If we can solve a greater percentage of those, fewer support tickets ultimately make their way to human agents, who can then focus their time on more complicated problems. The second benefit, which was a little counterintuitive, was that a lot of these companies expanded the amount of support they offered. It’s not that companies want to minimize support; they want to give as much as they can economically. If it cost me $10 for one customer interaction and all of a sudden that becomes 80 cents, I’m not just going to save all that money. I’m going to reinvest some of that in providing more support. We’ve noticed that their end customers actually want that increased level of support. So now, instead of phone lines being open only from 9 a.m. to 5 p.m., it becomes 24 hours a day. Instead of offering support only to paid members, we offer support to everybody. There’s this latent demand for increased support, and by making it much cheaper, businesses can now offer more. At the end of the day, this leads to higher retention and better customer happiness.

    Nataraj: You also have support for voice agents, which is particularly interesting. What has the traction been like? Do customers realize they’re talking to an AI?

    Ashwin Sreenivas: In general, all our voice agents say, “Hi, I’m a virtual agent here to help you” or something like that. But the other interesting thing is most customers calling about a problem don’t want to talk to a human; they want their problem solved. They don’t care how, they just want it solved. For us, making it sound more human is not about giving the impression they’re talking to a human; it’s to make the interaction feel more seamless. You want responses to be fast. At the end of the day, the primary goal is, how can we solve the customer’s problem? Even if the customer is very aware they’re talking to an AI agent, but that agent solves their problem in 10 seconds, that’s a good experience. Versus talking to a human who takes 45 minutes, which is a bad experience. We have several customers now where the NPS for the voice agents is as good or higher than human agents because if the AI agent can solve their problem, it solves it immediately. And if it can’t, it hands it over to a human immediately. Either way, you end up having a reasonably good experience.

    Nataraj: Has there been a drop in hiring in support departments? Are agents replacing humans or augmenting them?

    Ashwin Sreenivas: It really depends on the business. If AI agents can handle a bigger chunk of customer inquiries, you can do a couple of things. One, you can handle more incoming support volume. You put it on every page, you give support to every member, you do it 24 hours a day. Your top-line support volume will go up, but your customers have a better experience, and you can keep the number of human agents the same. Other people might say, “I’m going to keep the amount of customer support I do the same. There are fewer tickets going to human agents, so now I can have those agents do other higher-value things,” like go through the high-priority queue more quickly or move to a different part of the operations team.

    Nataraj: Can you talk about the UX of the product? People have different definitions of agents. What kind of agent are we talking about here?

    Ashwin Sreenivas: Interacting with Decagon is exactly like interacting with a human being. From the end user’s perspective, it’s as though they were talking to a human over a chat screen or on the phone. Behind the scenes, the way Decagon works is that each business has a set of AOPs that these AI agents have access to. The AOPs allow the agents to do different things—refund an order, upgrade a subscription, change billing dates. The Decagon agent is just saying, “Okay, this question has come in. Do I need to work through an AOP with the customer to solve this problem?” And it executes the AOPs behind the scenes.
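[Editor's note: the execution side of AOPs — concrete actions like refunding an order or changing a billing date that the agent runs behind the scenes — might look like the sketch below. The procedure names and functions are illustrative assumptions, not Decagon's actual tool interface.]

```python
# AOPs modeled as named, executable tools the agent can invoke.

AOPS = {
    "refund an order": lambda order_id: f"Order {order_id} refunded.",
    "upgrade a subscription": lambda plan: f"Upgraded to {plan}.",
    "change billing date": lambda day: f"Billing date moved to day {day}.",
}

def run_aop(name: str, arg) -> str:
    """The agent picks an AOP by name and executes it for the customer."""
    if name not in AOPS:
        return "No matching procedure; escalating to a human agent."
    return AOPS[name](arg)

print(run_aop("refund an order", "A123"))   # Order A123 refunded.
print(run_aop("cancel my flight", None))    # falls through to escalation
```

From the end user's side none of this is visible — they just chat or talk, while the agent decides which procedure to run and executes it.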

    Nataraj: Before your product, a support manager would look at their team’s activities. How does that management look now on your customer’s side?

    Ashwin Sreenivas: There’s been an interesting shift. Rather than training new human agents, I’ve trained this AI agent once, and now my job becomes, how can I improve this agent very quickly? We ended up building a number of things in the product to support this. If the AI agent had one million conversations this month, no human can read through all of that. We had to build a lot of product to answer, what went well? What went poorly? What feedback should I take to the rest of the business? What should I now teach the agent so that instead of handling 80% of conversations, it can handle 85%? The primary workflow of the support manager has changed from supervising to being more of an investigator and agent improver, asking, “What didn’t go well and how can I improve that?”

    Nataraj: Are the learnings from one mature customer flowing back into the overall agent that you’re building for all companies?

    Ashwin Sreenivas: We don’t take learnings from one customer and apply them to another because most of our customers are enterprises, and we have very strong data and model training guarantees. But the learning we can take is what kinds of things people need these agents to do. For instance, we learned early on that sometimes an asynchronous task needs to happen. Decagon didn’t have support for that, so we realized that use case was important and extended the agent to be able to do tasks like that. It’s those kinds of learnings on how agents are constructed that we can take cross-customer. But for a lot of these customers, the way they do customer service is a big part of their secret sauce, so we have very strong guarantees on data isolation.

    Nataraj: How are you acquiring customers right now?

    Ashwin Sreenivas: We have customers through three big channels. Number one is referrals from existing customers. Support teams will often say, “Hey, we bought this thing, it’s helping our support team,” and they’ll tell their friends at other companies. Number two is general inbound that we get because people have heard of Decagon. And three, we also have a sales team now that reaches out to people and goes to conferences.

    Nataraj: Both you and your co-founder had companies before. How did the operating dynamics of the company change from your last company to now? Did access to AI tools increase the pace?

    Ashwin Sreenivas: A lot of things changed. For both of our first companies, we were both first-time founders figuring things out. I think the biggest thing that changed was how driven by customer needs we were. We didn’t overthink the exact right two-year strategy or how we were going to build moats over three years. We said, the only thing we’re going to worry about now is, how do we build something that someone will pay us real money for in four weeks? That was the only problem. That simplifies things, and we learned that all the other things you can figure out over time. For instance, with competitive moats, when we sold a deal in the early days, we would ask, “Why did you buy us?” They would tell us, “This competitor didn’t have this feature we needed.” And we were like, great, so we should do more of that because clearly this is valuable.

    Nataraj: It’s almost like you just listen to the market rather than putting your own thesis on it.

    Ashwin Sreenivas: Yeah. I think there was a very old Marc Andreessen essay about this: good markets will pull products out of teams. The market has a need, and the market will pull the product out of you.

    Nataraj: What’s your favorite AI product that you use personally?

    Ashwin Sreenivas: I use a number of things. For coding co-pilots, Cursor and Supermaven are great. For background coding agents, Devin is great. I like Granola for meeting notes. I used to hate taking meeting notes, and now I just have to jot down things every now and then. I think that captures most of what I do because either I’m writing code or talking to people, and that has become 99% of my life outside of spending time with my wife.

    Nataraj: Awesome. I think that’s a good note to end the conversation. Thanks, Ashwin, for coming on the show.

    Ashwin Sreenivas: Yeah, great being here. Thanks for having me.

    This conversation with Ashwin Sreenivas provides a masterclass in building a category-defining AI company, highlighting the power of focusing on genuine customer pain points and the massive potential for AI to create more seamless, personalized business interactions. His insights reveal a clear roadmap for how AI is moving from simple automation to becoming a core driver of customer experience.

    If you enjoyed this conversation with Ashwin Sreenivas, listen to the full episode here on Spotify, Apple or YouTube.
    Subscribe to our newsletter: startupproject.substack.com