Category: Podcast Episode Transcript

Full transcripts of the Startup Project podcasts.

  • Web AI Founder David Stout on Offline AI & the Edge Computing Revolution

    As the artificial intelligence landscape becomes increasingly dominated by massive, cloud-based models, some innovators are looking in the opposite direction. David Stout, founder of Web AI, is a leading voice in the movement to bring advanced AI directly onto everyday devices. In a recent conversation, Stout details his journey from a farm in Michigan to pioneering the infrastructure for offline AI, a technology that prioritizes privacy, efficiency, and user ownership. Valued at over $700 million, Web AI is challenging the status quo with its vision of a decentralized “web of models”—millions of specialized AI systems working together across phones, laptops, and other hardware. This approach not only keeps sensitive data secure but also unlocks real-time AI capabilities in environments where cloud connectivity is impossible or impractical, from aircraft maintenance to personal health monitoring. Stout’s perspective offers a compelling look at a more distributed, accessible, and secure future for artificial intelligence.

    → Enjoy this conversation with David Stout, on Spotify or Apple.
    → Subscribe to our newsletter and never miss an update.

    Nataraj: To set the context, could you give us a brief overview of your journey in the field of AI? When did you start working in AI or machine learning, and what was your journey like before founding Web AI?

David Stout: My background, as you mentioned, is that I grew up on a farm. When I was studying, AI was very much vaporware; machine learning was the actual field of study. NLP was progressing, but it was very early, even with regard to convolutional neural nets. I think this is important because my research started in a yet-to-be-defined space that was incredibly esoteric. There was no LLM to help you research; there were no AI tools. This was very much first-principles design.

    We were looking at ways to bring convolutional networks like Darknet and YOLO to low-energy devices. At the time, these object detection or computer vision models were some of the most sophisticated and heaviest in terms of compute. They showed the most promise, in my opinion, of being truly disruptive. Having visual intelligence in spaces was going to be incredibly powerful. My research started there, and I was able to bring some of the best computer vision, object detection, and masking models to devices like iPhones and their Bionic chips.

    Nataraj: And this was through your research at Stanford or at Ghost AI?

David Stout: This was through Ghost AI at the time, right around when I dropped out of school and started pursuing this full-time. We were bringing Darknet models to an iPhone. This got the attention of a lot of outside investors and technologists because it was the first of its kind. There were no tools like TensorFlow Lite or PyTorch Mobile bringing AI frameworks to devices. We wrote the whole thing from scratch, talking directly to shaders and primitives using the MPS framework on these devices. What we found, as in any moonshot, is that you discover other things along the way. In bringing these models to devices, we discovered incredible compression and architecture techniques. This ultimately led to WebFrame today, which is our own in-house AI library and framework. Those early days mattered because they shaped what we ended up building. We had this desire to run models at the edge because, in computer vision specifically, if you didn't have real-time AI processing, it was a null use case. Computer vision in the cloud is not super interesting. That's where we started to really understand the value of AI at the edge.
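    [Editor's note: one common family of techniques for shrinking models to fit on-device, alluded to above, is post-training quantization, which stores weights in fewer bits. The sketch below is a generic illustration assuming simple symmetric 8-bit quantization; it is not WebFrame's actual method.]

    ```python
    # Illustrative only: symmetric per-tensor int8 post-training quantization,
    # one simple way to cut a model's weight storage by 4x for edge devices.
    import numpy as np

    def quantize_int8(weights: np.ndarray):
        """Map float32 weights onto int8 with a single per-tensor scale."""
        scale = float(np.abs(weights).max()) / 127.0
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        """Recover approximate float32 weights from the int8 tensor."""
        return q.astype(np.float32) * scale

    rng = np.random.default_rng(0)
    w = rng.standard_normal((256, 256)).astype(np.float32)

    q, scale = quantize_int8(w)
    w_hat = dequantize(q, scale)

    # Storage shrinks 4x; reconstruction error is bounded by scale / 2.
    print(f"storage: {w.nbytes} -> {q.nbytes} bytes")
    print(f"max abs error: {np.abs(w - w_hat).max():.4f}")
    ```

    Real systems typically go further (per-channel scales, quantization-aware training, architecture changes), but the storage/accuracy trade-off above is the core idea.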

    Nataraj: What applications that we see today in the wild are a result of these efforts?

    David Stout: The research of getting models to devices is continuing to play out; it’s not done by any means. A lot of these examples are still referencing a cloud model. Not a lot is happening on the device still. But yes, you are seeing basic object detection. A good example would be Photos on an iPhone. That’s running on-device, and you’re able to search and query basic object states or titles or names and index things. There are also modes on the iPhone in the magnifier that let you detect objects you’re looking at, and the audio kit will turn on. If you have a vision impairment, the object detector in real time will talk to you and tell you what’s in front of you. I think those are examples of some of that early work in the industry, but there’s still been a tremendous amount of focus on cloud AI. We’re seeing a lot more now in the private sector with people we’re working with where it’s multimodal, which we think is the ultimate paradigm.

    Nataraj: You were working on compressing models onto devices, and in 2019 you started Web AI. What was the thesis for Web AI, and how has it changed over time?

    David Stout: I started the company on three pillars. I’m a simple thinker when it comes to business and strategy; I wanted to know the utility value. I thought this cloud arbitrage is not going to work. This idea of big data and cloud compute is going to flip the whole cost structure upside down and is not super promising for AI in regards to individual ownership. It felt like we were going to copy the internet era and reproduce all the mistakes we made there. The thesis for Web AI, when we founded it in January of 2020, was: if we could bring AI to devices and run it privately in a way that a user or enterprise owns it, would people pay for that? Would people want that? If you could serve AI on a device and bring world-class intelligence and put it in someone’s pocket, is that valuable? The simple answer is yes, it’s worth pursuing.

    The second question is what kind of use cases could you unlock that the alternative would be unable to do? A simple way to look at this is you have companies with IP-centric data that they can’t share with a foundational model. You have companies with regulated data they can’t share. And you have use cases that require real-time, no-latency decision-making that can’t go up to the cloud. These problems require an AI solution that lives in the environment, that they can directly engage with, and that’s state-of-the-art. That’s really the problem we were solving.

    Nataraj: General audiences often think in terms of large models, especially post-ChatGPT. But before that era, it was all specialized models. When you identified these factors, what was your initial approach to productizing this? How did you focus, because the field is so wide?

    David Stout: It is very wide. Actually, our strategy was the inverse. We said we need to be as horizontal as possible. We need to own the tooling, the methodologies, the frameworks, the communications. We don’t need to own the model. We want to be the pickax and shovel of an industry rather than be the best medical model company, and that’s all we do. The reason for that, and I think it’s played out quite like we thought it would, is that so many VCs told me, ‘You guys have great technology, you should just focus on one industry.’ We disagreed for the fundamental reason that we’re seeing now: if you’re not horizontal as a tech stack, you’ll get steamrolled by these incredibly smart, powerful foundational model companies. If you’re building an app focused on coding, I still think you’re at great risk of just getting steamrolled. I just don’t see how those companies have long-term staying power when the model that they rely so heavily on is not theirs. We decided to focus on the tools that made the models great, the way to retool these models so they could run anywhere, the connective tissue that lets the model talk to another model, to a device, to a person. And we will enable our customers to interact with their data with these models and make them better. That’s our staying power. We support everything from vision models to multimodal models across the ecosystem, with the idea that the platform is designed to be horizontal and not a point solution.

    Nataraj: What type of customers use your product today? Can you give a couple of use cases to crystallize where Web AI plays a role?

David Stout: We work in industries where there's highly contextual data that is not on the internet. It's not on Reddit, whether it's working on an airplane engine or with individual personal health data. It's data that does not exist on the web and that needs to be navigated, trained on, and personalized for each of these users to drive real results. We work with the Oura Ring, if you're familiar with them. Additionally, we are working with major airline companies and aircraft manufacturers to improve maintenance as well as assembly. And outside of that, we are working with the public sector on all sorts of use cases that require AI to work anywhere, not just in a data center stateside. The connective tissue across all of our customers is that they have data that no one else has. They operate in privacy-mission-critical environments where data cannot go somewhere else; it needs to be highly accurate, highly performant, and it needs to operate at the edge.

    Nataraj: I haven’t seen a lightweight personal model that exists on my machine yet. Is that model not possible, or why haven’t we seen that kind of experiment from any company?

David Stout: I think we haven't seen it because the models that are easy to ship to devices are bad. People have become accustomed to a certain level of performance and intelligence. Web AI is actually releasing exactly what you're describing this October: our first-ever B2C solution. Download it, run it on your machine, run it on your phone. Why it's taken us time is that we had to make some architecture changes so we'd have a great model that's performant, that's not disappointing, and that runs and lives on your phone. That's a hard problem to solve. It's always been easier to just have the cloud do it. I think a lot of companies are hitting the easy button on this one and just using the cloud. From a functional perspective, it absolutely works; it's just astronomically expensive and inefficient. The AI companies that are popular today are really focused on trying to solve the super-intelligence problem rather than solving the actual unit economics, monetization, and privacy problems. These tools will be valuable for users because now they have something private that they own. It lives on their device, it's personalized, and it's ultimately safe.

    Nataraj: There’s so much spending going on in data centers, rationalized by the argument that this will lead to something that looks like AGI. What are your thoughts on the trajectory of these foundational model companies and AGI?

David Stout: If we're fairly pragmatic about what we've seen, there's this common consensus that it will keep scaling, models will get better, and we're going to steamroll everyone with the best model. My problem with that is the empirical evidence we have right now doesn't say that. Pre-training, in all senses, is pretty much proven to be flattening. GPT-5 is an MoE, a model router. It's a lot of post-model work. Most of the gains we're seeing are post-training. For the last several iterations of these models, the majority of the advancement is happening post-training, which would indicate that we are hitting a plateau on this idea of training continuing to scale. I think we're tremendously overbuilding. We have an energy problem, a water problem. It's so early. I'm not a big believer in the long tail of the transformer architecture. To build all these data centers when we don't even know the architecture… it's questionable. For me, what makes the most sense is this idea that civilization is the only example of super-intelligence. You have groups of people with different contexts, talents, and abilities that build incredible things. We don't have any example of singular super-intelligence. What I would say is much more likely is that we see super-intelligence come out of millions and billions of contextual models living across the world as a compute dust that's everywhere. That statement is far less risky than the one we're talking about in parallel, which is, 'I'm going to figure out a way to train this one model, it's going to solve everything, and it's going to be AGI.' The civilization approach is not only theoretically sound; nature and science have demonstrated it to be true.

    Nataraj: I was watching your talk where you had this very interesting line: ‘Prompts don’t pay bills.’ Can you elaborate on that?

    David Stout: These companies have created bad habits. Prompting is horrible for their business model. They need to be proactive; they want to get prompting out of their business. Every question costs them money. It’s not the same model as internet companies, where a user coming to your website is a dollar sign. With OpenAI, when you log in and ask a question, you’re cutting into their profits. That’s a challenging business to be in. The philosophy of ‘prompts don’t pay the bills’ is about how we create AI interactions that are precognitive, working on behalf of the user so the user doesn’t have to ask another question. This supports the distributed model architecture as well. When you create an AI application on a foundational model, you use a system prompt to tell the model how to behave. Fundamentally, you’re telling the model to be smaller. You’re saying, ‘Be a doctor, answer this way, don’t talk about race cars.’ What Web AI would say is you just want a doctor model. And the doctor model is going to be far better than a system prompt model pretending to be a doctor. That’s how you get to super-intelligence: you have millions of models that are category-leading. They aren’t prompted to behave a certain way; they just *are* a certain way. This is why the internet beat mainframes.

Nataraj: What do you think about what xAI is doing? I feel like they are result-maxing for the leaderboards, but I don't see xAI being used much in real applications.

    David Stout: I think everyone trains towards leaderboards. You’ve seen the party games where people wear a sign on their head and have to guess who they are. AI is doing the same thing with a benchmark. When you train around a benchmark, you eventually realize what the benchmark is. That’s all that’s happening. A really interesting example we saw personally: we trained on open-source F-18 data for the Navy and ran a retrieval task against it. We got about 85-90% accuracy on a really complex maintenance manual. We did the same exercise with GPT-5, and it was 15% less accurate than our Web AI system. What was interesting is on the open QA benchmark, OpenAI was only seven points lower than us. So on the leaderboard, it seemed like we were far closer in performance, but in practical application, the delta is always a little bit bigger. I think the leaderboard is a little irrelevant to what’s actually happening.

    Nataraj: We are almost at the end of our conversation. Where can our audience find you and learn more about Web AI?

    David Stout: I’m on Twitter, @DavidStout. We’ve got a lot of new announcements coming out. We just released two new, really significant papers. We’ll be sharing more in our fall release, with several new products that will be available for users the day of the announcement. You can get more information on our website and on social media. I’m really thankful for the opportunity to come on and talk and learn from you.

    David Stout’s insights offer a compelling vision for a future where AI is not a monolithic entity in the cloud, but a distributed, personalized, and private tool running on our own devices. This conversation highlights the practical and philosophical shift towards an accessible and secure AI ecosystem.

    → If you enjoyed this conversation with David Stout, listen to the full episode here on Spotify or Apple.
→ Subscribe to our newsletter and never miss an update.

  • Fixing Broken Meetings & Managing Calendars with AI | Matt Martin

    In an era where back-to-back meetings and fragmented schedules are the norm, how can teams reclaim focus time and achieve deep work? Matt Martin, co-founder and CEO of Clockwise, is tackling this problem head-on with an AI-powered calendar assistant designed to create smarter schedules. In this conversation with Nataraj, Matt delves into the complexities of modern work, from the “maker versus manager” schedule conflict to the surge in meetings post-pandemic. He offers his perspective on the evolving SaaS landscape, the real-world impact of AI agents, and why many new tools feel half-baked. Matt also provides a look inside Clockwise, explaining how they leverage AI to not only optimize individual calendars but to orchestrate entire organizational workflows, ultimately giving teams back their most valuable asset: time. This discussion is essential for anyone interested in the future of work, productivity, and the practical application of AI.

    → Enjoy this conversation with Matt Martin, on Spotify or Apple.

    → Subscribe to our newsletter and never miss an update.

    Nataraj: To get started, can you describe to the audience what Clockwise is and how your customers use it?

    Matt Martin: At its core, Clockwise is a very advanced scheduling brain. We connect to your calendar, whether that’s Google Calendar or Outlook, and you can use it as an individual. We start to analyze your calendar when you connect it, understanding the cadence of your meetings, when you tend to work, your working hours, and when you like to take breaks. We ask you a few questions to get to know you a little bit better. Based on that information, we start giving you suggestions on how to optimize your schedule for more time for high-impact work. Where Clockwise really hits its groove is when you start to use it among a larger group of people. Clockwise can look at the interconnection between you, other attendees, their preferences, and how to optimize calendars holistically. We do this at scale for some of the best companies in the world, like Netflix, Uber, and Atlassian, where we help optimize schedules for almost the whole company or complete engineering departments to give more time for high-impact work, meet with the right people, and have a sane work life.

    Nataraj: You are living in this world of calendars and meetings. This reminds me of an instance a couple of years back when Shopify CEO Tobi sent out a memo saying you can cancel any meeting you want and we want to reduce the number of meetings happening in our organization. What is your general take on the frequency or number of meetings happening in a company? What trends are you seeing in how companies are optimizing their meetings?

    Matt Martin: In a lot of ways, Clockwise goes all the way back to a famous article by Paul Graham called “Maker’s Schedule, Manager’s Schedule.” The reflection in his article was that, often inside software engineering organizations, the two modes of operation conflict. The managers control the schedules because they’re setting the cadence of meetings—syncs, standups, one-on-ones, team meetings—and they get a lot of their productivity done in meetings. Whereas for makers, people like software engineers and designers, they need large chunks of time to go heads-down on a project and get in flow to be able to tackle things. The first thing I would observe is that different people have different demands on their schedule, so there’s not really a one-size-fits-all here. I love Tobi’s memo because I think it’s always a good idea to clean out the cruft on a regular cadence and reset the baseline because things build up over time. But I would also caution that meetings aren’t inherently bad; it’s just another way of collaborating with peers and making sure you can get your work done. The question is, what are you trying to accomplish and who is the audience for it?

    There are some almost gravitational forces when it comes to meetings. One is that we’ve seen in our data that the larger the company gets, the higher percentage of time people tend to spend in meetings. As you have more people in your orbit, the cost of collaboration and coordination goes up. Another thing that happened is when COVID hit, the quantity of meetings spiked way up because as people went remote and hybrid, they were trying to figure out how to replace a lot of the content of an in-office environment with meetings. That subdued a little bit, but it never came all the way back down. There’s an overhang from companies going remote, and even today, you do see some split between in-office companies and remote companies in terms of volume of meetings.

    Nataraj: Is there any interesting trend? One of the things that happened after COVID, for me at least, is an increase in non-scheduled meetings. You just have a question and you all get on a call spontaneously, sort of replicating the hallway chat remotely. Do you have any statistics on a spike in those and how they’re doing right now?

    Matt Martin: That tends to be one of the sources of the split between remote and in-office because when you’re in the office, those conversations still happen, they just don’t get recorded formally on the calendar. If you’re remote, you do have to reach out. There are informal ways to do that, like a quick Slack huddle, or you could move some of that to asynchronous conversation, which is a good pattern. But one of the phenomena is just that there’s a shift in the medium. Instead of bumping into someone in the hallway or going over to their desk, you have to find a Zoom meeting or schedule something on Google Meet. The frequency goes up. The amount of time spent in synchronous conversation, however, doesn’t actually vary as much with remote or in-office because it’s just a different type of synchronous conversation. It depends a lot on the culture. At a place like Apple, where it’s not uncommon for software engineers to have their own dedicated private offices, that sort of synchronous conversation in the office is much lower than a place that’s a wide-open office environment.

Nataraj: Clockwise started before ChatGPT and all the LLM mania. It feels to me that there's now a rethinking in organizations about what type of tools to adopt. A typical thousand-person organization might have 100 to 200 SaaS products. We're seeing a shift in how many products companies adopt, and there's also an accelerated pace of launching new features; Zoom, for example, keeps shipping products beyond its core. Do you see this in how sales are going for your product, or when you're talking to other founders or customers? Is this a real change, or is it more narrative than reality?

    Matt Martin: It’s interesting that you bring up Zoom in the context of AI tooling and acceleration of feature adoption because I think there’s a more significant undercurrent that’s not related to AI, which is the correction a couple of years ago from a zero-interest-rate environment to an environment where money isn’t free. That had a significant impact on SaaS buying, renewal, and adoption cycles, especially among more mature organizations. We saw a huge wave of consolidation, removal, and re-evaluation of tools that we hadn’t seen in the lifecycle of our business before. I think Zoom’s proliferation of product development is downstream of that consolidation effort, not AI. They saw that if you’re just video conferencing software, it’s easier to rip you out. Everybody pays for Microsoft 365 or Google for basic email and calendar, and both come with video conferencing. So Zoom is trying to replace that office suite. It remains to be seen if they can be successful, but I think that’s the more significant trend.

    When it comes to AI tools and adoption, that has been a bit of a resurgence and a correction in that downturn in buying. There’s definitely been top-down appetite to find ways to add to the productivity and capacity of the organization with those tools. I will say, however, the trying and retention is way different. I’m quite proud of Clockwise’s retention; people use it and they like it. But as I’ve talked to IT leaders and CISOs, there’s a lot of experimentation, but there’s a lot of churn. A lot of these AI tools look interesting at the outset, but it’s hard to measure what they’re contributing to the bottom line. It’s an interesting mindset where you have this massive constriction in what people are willing to spend for software, but then a real increase in experimentation. Some of that conservatism in terms of what they’re actually buying is still there.

    Nataraj: I ask because there’s also this hype around what an AI agent can do. Every new AI agent platform offers things like optimizing your calendar or increasing productivity. The problem I see is the form factor isn’t fitting the promise. When you get into things like revenue management, where a CIO wants to see the number, it’s not yet easily correlated, especially in these agentic, chat-based form factors. Could you talk a little bit about that disconnect between what AI agents are promising and why that disconnect is there?

    Matt Martin: A lot of this is the basics of software selling that have been around for a while. Ultimately, the buyer needs to see the case for the return on investment. The reason there’s so much hype around AI is that people have seen the impact it can have in various facets of their job, so they’re clamoring to find other areas for that application. But to your point, if there’s revenue acceleration that the CRO isn’t actually seeing, they’re not going to buy the software, whether it’s an agent or a piece of SaaS. In many of these areas, the efficiency gains are notoriously difficult to measure. Clockwise undeniably helps people be more productive, but our ROI measurement problem has always been there. We’re productivity software. We can tell you about all the dedicated time we put back in schedules, which to some extent is a measurable hard ROI, but some buyers look at that and ask, “Okay, you made their schedule more flexible, but did they actually get more done?”

    There are interesting new pricing models being experimented with. You see places like Sierra doing outcome-based pricing; for each ticket they take off a customer service person’s desk, that’s what you’re paying for. That’s much closer to hard ROI because you’re offsetting real employee time and salary in a concrete way. I think it’s difficult to find those measurables often, though. It’s difficult to find that hard translation of outcome and to have accountability all the way back. People are experimenting, and it’ll be interesting to see where it lands, but a lot of these problems echo through software sales since the 70s.

    Nataraj: How are you leveraging AI in terms of creating new features and products? Can you give examples of how you’re using AI within Clockwise as a product?

    Matt Martin: I’ll answer in two ways. First is operationally, how we are developing product. The second is how Clockwise as technology actually uses AI. On the first point, we have a truly AI-native product development cycle where people are utilizing tools at every stage to accelerate results. One of the clearest points of leverage for me is the collapsing of product research and prototyping. I have designers who are literally spinning up their own interactive prototypes, whether in Figma, v0, or Lovable, and putting them in front of somebody. Previously, that was quite costly. Now you can do it quickly without worrying about bugs. That accelerates development cycles. With all the tooling we have, you can spin up a lot of paths and experiment with the best one because you can get there faster. It still requires a lot of human review, or you’ll create a really hairy code base, but you can really accelerate your experimentation cycles.

On the Clockwise front, what are we actually doing with AI? There are multiple levels. One is we have a product in the field right now that allows AI-based scheduling. You can chat with Clockwise and say, "Hey, I want to schedule a time with Nikita, Aaron, and Joe next week." We have our own fine-tuned model, trained specifically to pay attention to time and time-based requests. It can parse the user's intent and hand it to our back-end systems to conduct the scheduling. We're also about to launch our own MCP server that connects our scheduling engine to frontier models or whatever MCP client you might be using. It's been fascinating to see, especially with MCP, the combinatorial power of having different tools that can be called from a pretty intelligent base model.
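    [Editor's note: for readers curious what exposing a scheduler as an LLM-callable tool looks like, here is a minimal, hypothetical sketch. The tool name, schema, and handler are illustrative only; they are not Clockwise's actual MCP server or API.]

    ```python
    # Hypothetical MCP-style tool definition for a scheduling action.
    # An LLM client reads the schema, emits a tool call with structured
    # arguments, and the server routes it to the scheduling engine.
    schedule_tool = {
        "name": "schedule_meeting",
        "description": "Find and propose a time that works for all attendees.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "attendees": {"type": "array", "items": {"type": "string"}},
                "duration_minutes": {"type": "integer"},
                "window": {"type": "string", "description": "e.g. 'next week'"},
            },
            "required": ["attendees", "duration_minutes"],
        },
    }

    def handle_tool_call(name: str, args: dict) -> dict:
        # A real server would invoke the scheduling engine here; this stub
        # just echoes a confirmation for illustration.
        if name != "schedule_meeting":
            raise ValueError(f"unknown tool: {name}")
        return {
            "status": "proposed",
            "attendees": args["attendees"],
            "duration_minutes": args["duration_minutes"],
        }

    result = handle_tool_call(
        "schedule_meeting",
        {"attendees": ["Nikita", "Aaron", "Joe"], "duration_minutes": 30},
    )
    print(result)
    ```

    The "combinatorial power" Matt describes comes from the client model mixing tools like this freely: the same base model can chain a calendar lookup, a scheduling proposal, and a notification without any of those tools knowing about each other.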

    Nataraj: You mentioned becoming babysitters for half-baked tools in a LinkedIn post. What trend are you seeing? Why are a lot of these tools looking half-baked?

    Matt Martin: I’m so energized by what’s happening in the industry right now because I love experimentation. When you have an explosion of new technology, it’s exciting. But with that explosion comes chaos. People are trying out new things and trying to connect them. When you look at LLMs, the ability to call into other tools is an obvious need. Anthropic developed MCP, and it’s an interesting and elegant first attempt, but it’s cumbersome. It is not for my mom. The more tools you add, the slower the LLM gets, the more complicated it gets. It’s clearly not a pattern that will extend into infinity, but it is a jumpstart on experimentation. So I think we’re in this early phase where getting a workflow completed in an AI-based way is often more cumbersome than just using a pre-existing piece of software. Some skeptics look at that and say this is all BS, just a more complicated way to do things we already know how to do. But the ability of the base model to intelligently reason and navigate these workflows is transformational. We just haven’t gotten there with the interface, with how we put those workflows together, or with the accessibility and usability for the average user.

    Nataraj: My take has always been that it’s an evolution. First, we saw the base models, and then a bunch of engineers built V0 versions of everything. Now you really need product thinkers who understand the market and use cases to build the next generation of products. We are still early in terms of the apps leveraging AI. There’s an opportunity to rethink fundamental apps. Can you rewrite Outlook with AI being first? Notion rethought how a note-taking tool should be for the internet. Can you rethink even Notion with AI in place instead of added on top? There’s a lot more experimentation to come.

    Matt Martin: I agree with that. We’re definitely in the phase where there’s a lot of bolt-on. There’s a lot of looking at current products and asking, now that I have this additional technology, what can I do on top of this product to augment it? The note-taking example is interesting. Notion has added on its own AI product. It’s one of the more interesting ones I’ve seen, but the frequency with which I use Notion’s AI features versus Notion as just a note-taking tool is maybe 100 to 1. In the future, note-taking probably looks more like something that is an omniscient collection of information that you can query and talk to about surfacing the right information at the right time. Most technology is additive. When we got smartphones, we didn’t get rid of laptops. There’s going to be an evolution where a completely new category and feel of software emerges from AI. Right now, outside of the frontier models like ChatGPT and Claude, I haven’t seen that many things that genuinely feel very new instead of augmentative.

    Nataraj: I think we’re almost at the end of our time. What are the best ways for our audience to discover you and the work you are doing?

    Matt Martin: The first place to go is clockwise.ai or getclockwise.com. You can start with Clockwise today; it takes about 30 seconds to get up and running. It’s amazing. You’ll get time back in your day, and it’s free to start. If you want to get into contact with me personally, I’m always happy to connect. LinkedIn is actually where I post the most. You can find me, Matt Martin, at Clockwise. On basically any social media, I’m /voxmatt, V-O-X-M-A-T-T. You can find me on Mastodon, which I tend to post to a little bit. I’m a little bit on Threads, a little bit on Bluesky, a little bit on X. The fracturing social ecosystem has not done well for me in terms of one channel, but LinkedIn’s probably the most consistent.

    Nataraj: This was a very fun conversation. Excited to see what Clockwise does next. Thanks for coming on the show.

    Matt Martin: Thank you very much. This was a lot of fun.

    Matt Martin’s insights reveal a clear vision for a future where AI doesn’t just assist but actively manages our schedules to enhance productivity and well-being. This conversation is a must-listen for anyone looking to reclaim their time and understand the practical applications of AI in the modern workplace.

    → If you enjoyed this conversation with Matt Martin, listen to the full episode here on Spotify or Apple.

    → Subscribe to our Newsletter and never miss an update.

  • Apollo GraphQL CEO on APIs as Graphs, Not Endpoints | Matt DeBergalis

    Introduction

    In the world of modern software development, managing the flow of data between services and applications is one of the biggest challenges. Matt DeBergalis, co-founder and CEO of Apollo GraphQL, has been at the forefront of solving this problem. His journey began with the Meteor framework, which revealed a critical need for a more principled way to handle data fetching. This led to the adoption of GraphQL, a query language that treats APIs not as a collection of disparate endpoints, but as a unified, connected graph.

    In this discussion, Matt joins Nataraj to explore the evolution of Apollo GraphQL from an open-source project into an enterprise-grade platform. He breaks down the unique value of GraphQL for developers, the strategic decisions behind building a commercial product around it, and the complex trade-offs in today’s full-stack architecture. He also offers a compelling look at how AI is amplifying the need for robust API strategies, making technologies like GraphQL more relevant than ever.

    → Enjoy this conversation with Matt DeBergalis, on Spotify, Apple, or YouTube.
    → Subscribe to our newsletter and never miss an update.


    Conversation Transcript

    Nataraj: You’re now CEO of Apollo GraphQL. Can you give us a two-minute history of your journey until now?

    Matt DeBergalis: We started the company with Meteor, which was a JavaScript development framework from the era of the first true browser-based apps. When you build software that way, you need a principled story for how you move data from the cloud into the application.

    Matt DeBergalis: GraphQL and Apollo are at the heart of that story. While building Meteor, we found that the piece of the stack that brokers the flow of data from underlying databases, APIs, and all the systems that feed your software up to the app is where there’s a ton of complexity. It also accounts for a huge fraction of the handwritten code that makes building good software take so long. GraphQL is a wonderful, declarative language, so you can build infrastructure around it. We see this happening all over the stack—Kubernetes and React are examples. Apollo is that for your APIs. It’s about replacing all the single-purpose, bespoke code you might write with a piece of infrastructure and a principled programming model.

    Matt DeBergalis: The name GraphQL hints at what makes it wonderful: we treat your systems, data, and services not as individual endpoints you call imperatively, but as a connected graph of objects. That completely changes the development experience. It makes it possible to express complex combinations of data in a simple, deterministic way. There’s a query planner in there, so you can do all kinds of transformations and other things necessary to build software in a repeatable, understandable way. We’ve found that this dramatically helps companies, especially larger ones with lots of APIs, accelerate how fast they can build good software.
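    The graph-of-objects idea is easiest to see in a query. Here is a rough sketch; the schema and field names are invented for illustration, not taken from any real Apollo schema:

    ```graphql
    # One declarative query walks the graph: user -> orders -> product -> reviews.
    # With imperative REST endpoints, this would be several round trips plus
    # hand-written code to join and filter the payloads.
    query OrderHistory($userId: ID!) {
      user(id: $userId) {
        name
        orders(last: 5) {
          placedAt
          product {
            title
            reviews(first: 3) {
              rating
            }
          }
        }
      }
    }
    ```

    The query planner DeBergalis mentions decides how each field resolves against the underlying services, which is what makes the combination deterministic and repeatable.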

    Nataraj: GraphQL was initially open source while you were working on Meteor. At what point did you realize there was enough opportunity to make this a new company?

    Matt DeBergalis: A couple of things were happening. First, the original version of Meteor was based on MongoDB, so we received many requests to support other databases, data sources, and REST APIs. The Meteor development experience was almost like black magic; you would write a MongoDB query directly in your client code. Meteor would run that same query on the Mongo server in the cloud and synchronize the results across the wire. This infrastructure efficiently kept them in sync in real time. The consequence was that Meteor apps were real-time. You’d write a query, connect it to a component on your screen, and as the database changed, the screen would automatically update. That was amazing, especially in those days, but it had to be in Mongo. So we needed a more general query language.

    Matt DeBergalis: Just as we needed that, Facebook announced GraphQL. This brings me to the other part of the answer, and why I think GraphQL has flourished where similar ideas haven’t. In my view, GraphQL is the first API technology that really asks about the needs of the clients—the consumers of data—instead of the providers. REST APIs, or older technologies like ONC RPC or SOAP, are all defined by the server. The consumer gets what they get—the payload, the format, the protocol. This puts a huge burden on the application developer because it’s rare that what comes back from the API is exactly what you need. You might need to filter it, transform it, or join it with another API.

    Matt DeBergalis: GraphQL has an incredibly good developer experience for the consumer. You write a strongly-typed query, which means great tooling support in your editor. Now there’s great tooling support in agentic editors because it’s semantic and self-documenting. A lot of what makes a technology win is the feeling a developer gets when they try it—how easy it is to use and how quickly you can get to something good. GraphQL had that delightful characteristic. It came from the same team at Facebook that created React, so it had a lot of energy as web development moved into the modern era. Those two things made it an easy choice for us.

    Nataraj: It’s interesting that so many developer technologies came out of Meta that they didn’t monetize. Unlike Amazon or Microsoft, they seem to define themselves strictly as a social media company. Why hasn’t Meta become a fourth cloud provider? They have the technology, developer experience, and money to do it.

    Matt DeBergalis: Here’s one quick point on that. A lot of the energy around React, for example, really came out of recruiting. Many companies do this. Open source was a tool for driving an engineering brand, especially in an era when it became very difficult to hire. There was a war for application development talent. So, one reason to open source something like React, even without a direct business case—it’s hard to monetize React, Vercel is probably the best example and it’s a tenuous connection—is that if it helps you recruit, you can justify a lot.

    Nataraj: That’s a very important point. It likely explains Microsoft’s strategy shift to becoming one of the biggest open-source contributors. So, we have these open-source products that find developer love, and then a company forms around them, like Databricks with Apache Spark. What was the journey like for GraphQL, taking a great open-source product and turning it into a business with a product worth paying for?

    Matt DeBergalis: One surprising thing is that open source on its own often doesn’t get a ton of adoption, especially when it’s designed for a larger company’s needs. Take Databricks and Spark. One way to look at it is that they built the company because people weren’t adopting Spark. Why not? Because it was hard. It’s a complicated piece of machinery. The company that needs that problem solved needs more than just Spark. The best vehicle for solving those kinds of problems is a business because it allows you to create a whole product, a solution. The enterprise sales process is really about helping the customer navigate the decision. The monetary cost is one thing, but the much bigger cost is the complex, multi-stakeholder architectural decision.

    Matt DeBergalis: With GraphQL, we asked a simple question: How do you get something like this adopted? I can give you a compelling technical reason why having a graph and writing a query is better than writing a bunch of procedural code for every new application experience. But in practice, how does that get adopted? If you pull that thread, interesting things emerge. Who owns APIs in an enterprise? Who makes architectural decisions? How do you balance the executive who owns the roadmap with engineering needs? The VP of engineering job is maybe the hardest job today. You’re under enormous pressure to ship quickly. If you ship slower than your competitor, it could be the end of your company’s viability.

    Matt DeBergalis: At the same time, you can’t mortgage the future. If you race to ship a product but create a big security vulnerability, you’ll get fired. If you build a product and then discover Amazon shipped Alexa and you need a voice app, or OpenAI shipped GPT and you need an agent, you’re in trouble if you’ve painted yourself into an architectural corner. You’re caught between a rock and a hard place. The consequence is you’re going to want help—more than just a raw piece of technology. You’ll want a plan, end-to-end integration, and all the ‘ilities’: observability, security, auditability. That, for infrastructure at least, is the heart of how you go from an exciting open-source project to something that makes business sense and can be adopted at scale.

    Nataraj: When a business adopts an open-source technology, they’re not just adopting a product; they’re adopting a certain level of risk. That’s why you have legal agreements about things like data privacy issues.

    Matt DeBergalis: That’s right. And the biggest risk by far is picking the wrong technology. Think about the cost of getting that wrong. If you adopt a database and five years later it has no users, the open-source project is on life support, and there’s no vendor to help you, you’re in real trouble. You’re facing a migration.

    Nataraj: The problem with Meta open-sourcing Llama is that if I’m building on top of a model, I need someone to host it and guarantee 99.99% reliability. There are all these dynamics between open and closed source.

    Matt DeBergalis: You see this across the stack. Maybe 10 or 20 years ago, a developer would start by asking what’s open versus closed to avoid vendor lock-in, especially after experiences with companies like Oracle. Now, it’s a little different. There’s so much to buy across the stack that you don’t have time for a deep analysis of everything. The biggest risk is getting it wrong. With AI moving so fast, you see a preference for what’s prevalent. You’re probably going to be in good shape if you go with the market leader. That means you can hire people who know the technology, and there’s a good chance it will mature quickly enough to meet your future needs. It’s a virtuous cycle. That’s the pitch I make for GraphQL: you have an API strategy to decide on, and you should start from the premise that picking the one developers like, with a vibrant user community, is a safe choice.

    Nataraj: I have a view on the evolution of full-stack development and wanted your thoughts. Are we making it more or less complicated? I feel like we’re making things more complicated to stand up a scalable web application.

    Matt DeBergalis: I grew up writing software on a Commodore 64. It’s definitely gotten more complicated, and it’s all across the stack. Microservices make sense for scaling engineering efforts, but they drive the need for things to manage those processes, like Kubernetes. You need a way to orchestrate API calls across them, which is our GraphQL story. Does Apollo add complexity? There’s an argument that it does. On the other hand, each of these layers, when done right, adds value and lets you go faster. A good architecture should have the property where the more you build, the more valuable the whole thing becomes. Some technologies feel like the opposite; you keep putting energy in, but it slows you down.

    Nataraj: It feels like there’s a lot of distance before you see that geometric growth. The early investment is high, and you’re always thinking two or three years down the line. You can’t start something fast because you’re planning for the future, like choosing the right database.

    Matt DeBergalis: It’s gnarly because many technology decisions boil down to: do I want a quick result today, creating debt for tomorrow, or do I want to set myself up for a bright future? It’s a hard call. Our job is to try to square that circle. Kubernetes was really complicated for a long time, but it has gotten easier to use. Now, we see all the benefits. That’s been a big priority for us at Apollo. The knock on GraphQL is there’s a lot of upfront setup work to build a schema—the catalog of all your APIs. Once you’ve done that, it’s wonderful, but getting there is a pain. Much of our roadmap over the last year has been about making it really easy and fast to get to that point of value. In 2025, most people will choose to solve today’s problem and worry about tomorrow later.

    Nataraj: For a small company with just one or two APIs, at what point does adopting GraphQL make sense? What type of customers do you have today?

    Matt DeBergalis: It makes sense from the client’s point of view. When you’re using GraphQL and React, you just write a query in your component, and that’s it. From the API side, with one or two REST APIs, it’s not a big deal. You can easily change them. But for a company with 10,000 REST APIs, it becomes very difficult to change them because you have no way of knowing what fields are being used. The thing that’s really interesting now is agents. Everybody wants to have some kind of agentic experience on top of their APIs, and GraphQL is a fantastic fit. GraphQL is an orchestration language. It’s about transforming, changing protocols, filtering. We’re excited about agents because they put those needs front and center.

    Nataraj: Can you talk about the size and scale of Apollo today as a business?

    Matt DeBergalis: GraphQL is used in about half of the enterprise world. We see this because we provide most of the standard open-source libraries. Commercially, we’ve focused on larger companies because graphs have a network effect; they are most valuable when they’re large. For example, we have a lot of retailers. Think about an online store. On a product page, you want to show the customer, ‘Arrives on Friday.’ To do that, you need to make a ton of API calls under the hood—inventory APIs, shipping partner APIs, loyalty APIs. It’s the kind of thing that seems trivial from a user experience perspective but explains why it often takes months to ship. Larger, established companies in retail or media, like The New York Times, find GraphQL incredibly valuable. Also, companies that have grown through M&A and have heterogeneous systems need to bring products together into a single user experience. That’s where GraphQL is historically adopted most. But again, agents are changing that, creating excitement around GraphQL at companies of all sizes.
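    The ‘Arrives on Friday’ product page can be pictured as a single query against a federated graph. The fields below are hypothetical stand-ins for the inventory, shipping, and loyalty APIs he describes, not a real retailer’s schema:

    ```graphql
    query ProductPage($sku: ID!, $customerId: ID!) {
      product(sku: $sku) {
        name
        price
        inventory { inStock }                # backed by the inventory API
        estimatedDelivery(for: $customerId)  # backed by a shipping-partner API
      }
      customer(id: $customerId) {
        loyaltyTier                          # backed by the loyalty API
      }
    }
    ```

    The client asks for exactly the fields the page needs, and the query planner fans the request out to each underlying service and assembles the result, which is why something that seems trivial on screen stops requiring months of bespoke orchestration code.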

    Nataraj: How do you acquire customers and market to new developers to ensure you remain the go-to product?

    Matt DeBergalis: For most open-source companies, including us, open source is an important part of the funnel. Development teams make the technical decisions. It starts with a developer who reaches for something they’ve heard of that makes sense and that they can try quickly. Open source is a great vehicle for that. It starts with a React developer reaching for Apollo Client or an MCP developer wanting to define an agent declaratively. You can build from there, but that open-source entry point and the content around it are by far the most important things.

    Nataraj: You recently transitioned from CTO to CEO. How has your focus changed?

    Matt DeBergalis: It changes a lot and changes nothing, depending on how you look at it. I’ve always felt our customers are where everything comes from. The first version of Apollo Client was really bad, but it had one killer feature: we took pull requests. People would adopt it, and that turned into wonderful partnerships, like with The New York Times. It was a formative technical partnership that became a customer partnership. For me, the heart of it has always been partnership. I don’t see that differently from the CEO seat. Startups that do well all say users and customers come first. I always spent a lot of time with our sales team and on marketing. That hasn’t changed. And I still do product demos every Friday. I don’t ever want to lose touch with that. If we don’t have a great product that developers love, the rest doesn’t matter.

    Nataraj: What does the sales motion for Apollo look like? Who are you typically talking to?

    Matt DeBergalis: It varies. You can look at Apollo from two different perspectives. One is the team trying to ship a piece of software. For them, Apollo helps them ship faster and with less risk. That’s who you want to sell to because that’s where the value is. They own roadmaps. The other lens is seeing Apollo as a platform. You have platform engineering teams that own developer productivity or operational excellence outcomes. Ideally, we initially bring Apollo to a team with a specific use case. If you’re selling to someone for whom the value isn’t immediate, it’s much harder. You start there, but you think about eventual expansion. When you think about expansion, a technology like this will naturally find a place in a platform organization’s portfolio, so you want to meet those people early on.

    Nataraj: Let’s talk about AI. What are your thoughts on the current hype cycle, and what is the opportunity for Apollo in AI?

    Matt DeBergalis: I’ve never seen anything like it. It’s so disruptive. The immediate thing for us is that AI will drive a lot more API consumption. Every company is scrambling to build some kind of agentic user experience. The line between an agent and an app will get blurry. For example, a bank has one mobile app for a wide range of customers, from retirees to college students. With agents, maybe that one app can serve both by learning what I’m into and adapting the user interface. That’s a really different kind of app, and every one of those efforts will drive a whole bunch of net new API calls. That’s good for Apollo and GraphQL.

    Matt DeBergalis: Also, the nature of those API calls is different. You can’t trust a non-deterministic model, so the API layer needs more access control and policy enforcement. GraphQL is a nice fit here because it’s semantic. Token management becomes really important. The best way to keep a model on track is to not feed it tokens it doesn’t need, which also saves money. That sounds like GraphQL—only the fields I want. So, we’re seeing a lot of demand for GraphQL because of agents. The way we build software is also changing. Every part of our company is changing. If we’re going to be an essential part of the AI-first stack, we have to be an AI-first company.

    Nataraj: Has AI changed the business metrics for your company? Has it helped save costs or optimized capital expenses?

    Matt DeBergalis: AI definitely accelerates some things. Sometimes you use that to get more done, and sometimes to do it more cost-effectively. We’ve seen both. It’s also not magic; we’re still figuring out what it can and can’t do. Personally, AI helps me do a lot of things faster and better. I use it a lot for research, and I feel more informed than I was a year ago. I don’t have a research team at my disposal. I don’t think of it as changing how we hire as much as having more relevant information at my fingertips, which helps us make better, faster decisions.

    Nataraj: It almost feels like AI is overestimated in the short term and underestimated in the long term.

    Matt DeBergalis: So much is like that. I think that has to be true.


    Conclusion

    Matt DeBergalis provides a masterclass in identifying a core developer need and building a powerful platform to solve it. His insights into GraphQL’s client-first approach and its growing importance in an AI-driven world offer a clear vision for the future of API architecture.

    → If you enjoyed this conversation with Matt DeBergalis, listen to the full episode here on Spotify, Apple, or YouTube.
    → Subscribe to our Newsletter and never miss an update.

  • Box CTO on Enterprise AI: Unstructured Data & AI-First Strategy

    How are large enterprises navigating the seismic shift to artificial intelligence? For many, the journey begins with managing the 90% of their data that is unstructured—documents, images, videos, and contracts. In this conversation, Nataraj sits down with Ben Kus, Chief Technology Officer at Box, to explore the real-world challenges and opportunities of becoming an AI-first company. Ben shares critical insights from Box’s own transformation, detailing how they leverage generative AI to unlock value from an exabyte of customer data. They discuss the evolution from specialized machine learning models to powerful general-purpose AI, the practicalities of managing AI costs, and the essential steps to ensure data security and customer trust. This discussion moves beyond the hype to provide a clear-eyed view of enterprise AI adoption, from initial use cases like RAG and data extraction to the future of complex, agentic systems that can perform deep research and automate sophisticated workflows.

    → Enjoy this conversation with Ben Kus, on Spotify, Apple, or YouTube.

    → Subscribe to our newsletter and never miss an update.

    Nataraj: I was really excited because I work in unstructured data as well, and I realize how important it is. But let’s set a little bit of context for the audience. In the storage industry, ‘unstructured data’ is a common phrase. It would be good to establish what unstructured data is and why Box is at the center of all things AI.

    Ben Kus: It’s interesting. Oftentimes, if you say the word data to anyone, especially computer scientists or people who come from programming backgrounds, they naturally think of structured data. We want to become more data-oriented; we need to use data. And it’s partially because there’s been a massive data revolution over the last 10 or 20 years. It used to be that my data was in a MySQL database somewhere. Then more tools became available, and you started hearing terms like data lake and data warehouse, more advanced analytics tools. You see companies like Databricks and Snowflake that have become these very powerful platforms for structured data. That’s just naturally what you think of.

    Then there’s the world of unstructured data, which I would define as data that’s not in a database and doesn’t have a schema to it—things like emails, messages, and webpages. In our world at Box, it’s the world of what we call content or files: the stuff that goes into documents, PowerPoints, markdown files, videos, or images. All of this is unstructured data. Interestingly, for almost every company you talk to in a business-to-business, enterprise context, 90% or more of their data is actually unstructured. At Box, we have 120,000 enterprise customers and over an exabyte of data, and this is what we’ve always lived by. You need to collaborate on it; you need to sync it to get it to different places.

    But then generative AI comes around, and generative AI is born on unstructured data. So naturally, for every company I’ve ever talked to, if you ask why they’re interested in generative AI, one of the top three things they’ll say is, ‘Well, I’ve got all this internal stuff in my company that is unstructured data, and I don’t think I’m taking advantage of it enough.’ It takes a million different forms, which is partly why it’s been hard to automate or build specialized applications for these types of data. But there’s this huge untapped potential in unstructured data. So for Box, with all of these new models coming out from all these great providers, it’s a gift to companies and to people who think the way we do: how can you get more out of your unstructured data? Now AI can basically understand unstructured data. For the first time, you have an automated ability for computers to understand, watch, read, and look at these things, and then not only generate new content for you but also help you with the content you already have, which in many companies is massive: petabytes, hundreds of billions of pieces of content that in some cases are the most critical stuff they have.

    Nataraj: Unstructured data includes Box, Amazon S3 files, Azure has Blob, and any given enterprise has multiple places where they’re storing data. In terms of your strategy for building products, how much are you thinking about extending the Box ecosystem into all these surface areas versus building tools or products within the ecosystem? Talk a little bit about your strategic approach.

    Ben Kus: If you go back to the analogy of where people store their structured data, it’s in many places for many different reasons. Similarly, there’s the very generically large term of unstructured data; you would store it and use it in many different ways. But for Box, one of the things we’re typically known for is making it very easy to use, extend, secure, and be compliant for all of your data. For that, we typically need to manage it. We have a million ways to sync data between repositories. We recently announced a big partnership with Snowflake where the structured data, the metadata about a file in Box, automatically syncs into Snowflake tables. That kind of thing is definitely part of what we think about.

    But in general for Box, it’s key that we offer so much AI, in many cases for free, on top of the data you have, even though it’s quite expensive, because we want people to bring their data and get all the benefits of security, collaboration, and AI. But we don’t believe we’re going to be the only people in this AI and agentic ecosystem, which is why we partner with basically everyone. We believe there will be these major enterprise platforms that every company will be looking at. Our job is to give them the best option for unstructured data and then integrate with everybody else, so that you can have our AI agents working with other companies’ agents in addition to custom AI agents that you build yourself. Because we’re unstructured data and a lot of people need to use it, we integrate with other platforms: non-AI integrations in addition to AI integrations that let other companies call into our AI capabilities to ask about data, do deep research, do data extraction, and so on.

    Nataraj: Was there a moment within the company where you guys realized that this is a big shift? Box has been around for almost 20 years, starting in 2005. Was there an internal moment where you said, ‘Okay, this is really big for us?’

    Ben Kus: Sure. If you look back five or six years for the term ML and unstructured data, you’ll find we had a lot of big announcements around how Box uses ML to structure your data. So taking unstructured data and structuring it is a big thing we’ve done for many years. We’ve always been trying to be on the bleeding edge of what’s available. But there was this challenge. Imagine a company with forms people are filling out, or documents, contracts, leases, research proposals, images—anything a company does day-to-day. If you were to have AI or ML help you, it would be training a model. You’d get a data science team together or buy a company. We would see that getting an ML model to handle contracts and structure them was too complicated. You’d need a model not just for contracts or leases, but for commercial leases in the UK in the last three years. You’d have a model for that, and it didn’t really work that well. You’d have to train and customize it a lot.

    That was the nature of how it used to be. When Generative AI came out, we were watching the early days of GPT-2 style models, and it was okay. But somewhere around the time ChatGPT came out, with GPT-3.5 style models, you suddenly saw this amazing moment where a general-purpose model could actually start to outperform the specialized models. It could do things you never even would have bothered to try, like, ‘What is the risk assessment of this contract?’ or ‘Can you describe whether you think this image is production-ready for a catalog?’ You couldn’t even imagine the feature set you would give a traditional ML model. But Generative AI could kind of do it. As it got better, GPT-4 was this big, ‘Oh wow,’ moment where some of the challenges of the older models were being fixed. GPT-3.5 was the moment where we said, ‘Let’s just go back and retrofit everything about Box to be able to apply AI models on top of it,’ so you could do things like chatting through documents and extracting data. It was amazing how fast you could get things working and get them working better than you ever had before, even after spending a ton of engineering resources on trying to get something working. An hour and a half of using one of the new models actually gave you better performance. That was a big aha moment. And then of course you realize you’ve got 90% of the problem, and the last 10% is going to take all your time going forward. But since then, all of our efforts have been around preparing Box to be an AI-first platform. We often talk internally, ‘What if we were building Box tomorrow?’ It clearly would be an AI-first experience. So why don’t we do that? That’s just part of our mentality.

    Nataraj: What are some of the earliest use cases that you launched at Box, and how has the enterprise customer adoption been? In enterprises, we often see the cycle of adoption is a little bit slower.

    Ben Kus: Some of the first features we launched were around the idea that if you’re looking at a document, you need to have an AI next to you to help you chat with it. I’ve got a long document, a long contract, this proposal—help me understand it. It’s almost like an advanced find. That was a simple feature, but it was this new paradigm. And then we added the concept of RAG, not just for a single document but across documents. We implemented chunking, vector databases, and the ability to find the answer to your question, not just a matching document as in search. I’ve got 100,000 documents here in my portal of product documentation. As a salesperson, I need to find the answer to this question. I ask it, and the AI will ‘read’ through all of it using RAG and provide the answer.
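    The RAG pattern Kus describes can be sketched in a few lines. This is a minimal illustration of the retrieval half only: real systems use learned embeddings and a vector database, whereas plain word overlap stands in for similarity here, and the retrieved chunks would then be passed to a model to generate the answer.

    ```python
    # Minimal RAG-retrieval sketch: chunk documents, score chunks against a
    # question, and return the best ones as context for a model. Word overlap
    # stands in for real embedding similarity.

    def chunk(text: str, size: int = 40) -> list[str]:
        """Split a document into fixed-size word windows."""
        words = text.split()
        return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

    def score(question: str, passage: str) -> float:
        """Crude relevance score: fraction of question words found in the passage."""
        q = set(question.lower().split())
        p = set(passage.lower().split())
        return len(q & p) / len(q) if q else 0.0

    def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
        """Return the k chunks most relevant to the question, across all docs."""
        chunks = [c for d in docs for c in chunk(d)]
        return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:k]

    docs = [
        "The Model X widget supports offline sync and end to end encryption.",
        "Return policy: items may be returned within 30 days of purchase.",
    ]
    context = retrieve("does the widget support offline sync", docs, k=1)
    print(context[0])  # the chunk about offline sync, not the return policy
    ```

    The point of the pattern is exactly what Kus describes: the system returns the passages that answer the question, not just a list of matching documents as in classic search.
    
    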

    For enterprises, they were scared, and some of them still are, about AI because it’s so different. Data security is critical. No matter the benefit of AI, if you’re going to leak data, no one’s going to use it. In many cases, for bigger organizations, the first AI they’re actually using on their production data is Box, partially because it’s very hard for them to trust AI companies. You need to trust the model, the person calling the model, and the person who has your data. Since Box is that whole stack for them, they were able to say, ‘I trust that your AI principles and approach will be secure.’ Then they’re able to start with some of the simple capabilities. One of the more exciting ones is data extraction, where you have contracts, project proposals, press releases. There’s an implicit structure to them. You want to see the fields, like who signed it, what time, what are the clauses. Then you can search and filter on that data. Enterprises look at that and say these are very practical benefits. They get through their AI governance committees, security screenings, and ensure nobody trains on their data. That’s the scariest thing to them. We have to go in and meet with the teams, explain every step, show them the architecture diagrams, and the audit reviews so they know their data is safe. That’s typically their number one concern.
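    The data-extraction use case follows a simple pattern: declare the fields you want, ask a model to return them as JSON, then validate before indexing. The sketch below is illustrative, not Box’s API; the field names and the stubbed model reply are invented, and a real pipeline would get the reply from an LLM call.

    ```python
    # Sketch of the contract-extraction pattern: define expected fields, parse
    # the model's JSON reply, and check every field and type before trusting it.
    import json

    FIELDS = {"signed_by": str, "signed_date": str, "term_months": int}

    def validate_extraction(raw: str) -> dict:
        """Parse a model's JSON reply and verify every expected field and type."""
        data = json.loads(raw)
        for name, typ in FIELDS.items():
            if name not in data:
                raise ValueError(f"missing field: {name}")
            if not isinstance(data[name], typ):
                raise ValueError(f"bad type for {name}: expected {typ.__name__}")
        return data

    # Stubbed model reply; a real pipeline would receive this from an LLM.
    reply = '{"signed_by": "Acme Corp", "signed_date": "2024-03-01", "term_months": 36}'
    record = validate_extraction(reply)
    print(record["signed_by"], record["term_months"])
    ```

    Once validated, the extracted fields become structured metadata, which is what enables the searching and filtering Kus mentions.
    
    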

    Nataraj: I want to talk a little bit about the cost of leveraging AI. It has dramatically gone down. Are you seeing improvement in your margins by creating AI products? How is it directly impacting your profitability?

    Ben Kus: This is a particularly hard problem. We’re a public company. We publish our gross margin, our expenses. It’s not practical for us to do something that would double our expenses. Nobody has $100 million lying around to apply to whatever cool ideas. At the same time, it’s very clear that if you’re too worried or stingy about your AI bills, you will lose to somebody who is just trying harder. There’s been a really nice byproduct of all the innovation in chips, models, and efficiency—they’re much cheaper than they used to be. Sam Altman said a few years ago that models would get dramatically cheaper, but you’re also going to find you’ll use them more and more, which will slightly offset that. That’s exactly what we found. We are doing way more tokens than we did previously, by orders of magnitude. However, we’re now utilizing the cheaper models, and they’re just offsetting.

    When you get to agentic capabilities, like deep research on your data, that’s way different than RAG. RAG might use 20,000 tokens. But for deep research, you might go through many documents, 10,000 tokens at a time, maybe 50,000, 100,000, and then reprocess that. You might spend hundreds of thousands of tokens or more. That’s a massive exponential growth in your AI spend. But you get a great result. Deep research on your own data is revolutionary. The way we approach it is to give AI for free as much as possible because that’s what an AI-first platform would do. Sometimes, for very high-scale use on our platform, you can pay. But whenever possible, we’re going to eat the costs ourselves and handle that risk because that’s what you want out of your best products. Nobody wants to sit there and worry when they’re clicking on things that it’s going to cost them. So we try to protect ourselves with some resource-based pricing but also just say AI is part of the product. That’s our philosophy.
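The cost gap Kus describes can be put in rough numbers. The per-token price below is a made-up blended rate, and the step counts simply echo the figures from the conversation:

```python
PRICE_PER_MTOK = 0.50  # hypothetical blended price, dollars per million tokens

def cost(tokens):
    """Dollar cost of a request at the assumed blended token price."""
    return tokens / 1_000_000 * PRICE_PER_MTOK

rag_cost = cost(20_000)        # one-shot RAG answer, ~20k tokens
deep_cost = cost(15 * 50_000)  # ~15 agent steps at ~50k tokens each
print(f"RAG ${rag_cost:.4f} vs deep research ${deep_cost:.4f}")
```

Even at these invented rates, the agentic path costs tens of times more per query, which is why resource-based pricing shows up only at very high scale.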

    Nataraj: What do you think about pricing based on usage versus pricing based on outcomes? I’m assuming you’re following the regular per-seat, per-subscription model.

    Ben Kus: Yep. We’ve been through every single possible flavor of this. I hope business schools are doing case studies on how everybody had to rethink technology pricing. At the end of the day, pricing a product isn’t just about the supply side cost; it’s about what people are willing to pay and how they’re willing to pay for it. When we originally launched our AI, we had seen some people who launched AI were charging too much and people weren’t ready for that. Then there was this massive trend of $20 a month for enterprise-style tools, and the adoption was terrible because nobody quite knew what to do with it. So we decided to offer it as free as part of our product, but we put a limit on it. If you did too much, it would stop.

    But then enterprises would actually not turn it on because they were worried they would hit those limits and then everybody would be mad at them. The limit became an adoption barrier. So we got a lot of feedback from our customers and turned that off. There was no limit. Now, there’s the idea of abuse we could address. You can’t just buy a seat to Box and use the API to power another system. But for normal usage, we handle that risk. It’s incredibly expensive if you look at public cloud rates for transferring and storing data. We’re used to infrastructure expenses. So we decided we’re going to eat the cost of it as a way to deliver better services to our customers. That is our continuous philosophy.

    Nataraj: Storage is a horizontal use case, but AI is also being used to build vertical-specific products, like Cursor for developers or Harvey for legal assistants. Have you evaluated creating specific products on top of Box for different verticals?

    Ben Kus: This is a very fundamental question for any company: am I going to focus on a specific vertical and a problem, or am I going to focus generically across the board? At Box, one of our product principles is to focus on the horizontal IT use cases. Much of our value proposition is across the whole environment. Everybody in the company wants the security features, the compliance features, the sharing features. This is why we talk about it as content or files—everybody needs files. Some companies specialize and talk about contracts and clauses, or digital assets and marketing materials. This is a big question for any startup: go deep or go broad. If you go deep, you can make more targeted products. But your total market size is diminished. For us at Box, no one industry makes up more than 10% of our overall business. We have a giant market, but the more you specialize, the more you’re probably not going to solve a problem for somebody else.

    The interesting part about AI is that it pulls you in two different directions at once. Some people will start to use AI to very specifically solve problems, like in life sciences or financial services. But at the same time, in some cases, a generic AI can actually solve what a historical specialized company used to do. In which case, people might go back to a generic solution so they don’t have a million point solutions. You always have to analyze how deep to go in an industry versus how much you can provide horizontally. AI reshuffles it.

    Nataraj: You guys are one of the first companies to adopt being an AI-first company. What does that mean and how does it change how you operate?

    Ben Kus: When we use the word ‘AI-first,’ we think about building a feature knowing the full abilities of AI. Search is an interesting example. The historic way you would build search is completely different from how you would build it in a world with an AI or agentic experience. Not just from a technology perspective with better vector embeddings, but also from the technique. People act differently when they go to a search box than when they are talking to an agent. Many people use ChatGPT or Gemini for internet searches, and what you type into Google versus your chat experience is different.

    That’s an interesting moment for Box. If you think AI-first, you don’t just put an AI thing inside a search box. You rethink the search experience from the beginning. We announced our agentic search, or deep search, where you ask an AI, and it will not just go through a complicated search system, but it will look at the results and figure out whether those results match what you’re looking for. It goes well beyond RAG and into using intelligent agents to loop and figure out if they have the best answer or if they need to try again. Thinking that way, not just ‘I have a model, I want to use it,’ but ‘What can AI do for you?’, especially if you think agentically, becomes a different product process, a different engineering process, a different strategy process. You start to invest heavily in your AI platform layers and common AI interactions in your products, like an agentic experience or AX. If you’re going to be an AI-first company, you need to examine the fact that maybe AI will change the way you’ve done something traditional.
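The agentic search loop Kus outlines (search, judge the results, retry with a better query) can be sketched as a small control loop. Everything here is a stand-in: the corpus, the keyword "search", the judge, and the synonym-based refiner would all be model or index calls in a real system.

```python
def agentic_search(question, search, judge, refine, max_rounds=3):
    """Loop until the judge says the results answer the question.
    Unlike one-shot RAG, the agent decides when it is done."""
    query = question
    results = []
    for _ in range(max_rounds):
        results = search(query)
        if judge(question, results):              # do these results match?
            return results
        query = refine(question, query, results)  # try again, differently
    return results  # best effort after max_rounds

# Stand-in corpus and tools.
corpus = {
    "quarterly invoice": "Invoice totals for Q3 are in the finance folder.",
    "travel policy": "Employees book travel through the portal.",
}

def search(query):
    return [v for k, v in corpus.items() if all(w in k.split() for w in query.split())]

def judge(question, results):
    return len(results) > 0  # a real judge would be an LLM grading relevance

def refine(question, query, results):
    synonyms = {"bills": "invoice"}
    return " ".join(synonyms.get(w, w) for w in query.split())

print(agentic_search("quarterly bills", search, judge, refine))
# ['Invoice totals for Q3 are in the finance folder.']
```

The first round finds nothing, the refiner rewrites "bills" to "invoice", and the second round succeeds; that decide-and-retry step is what separates this from a plain search box.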

    Nataraj: We went through RAGs, we went through copilots, and now we are seeing agents. How are you thinking about agents within Box? What is your definition of an agent?

    Ben Kus: My definition of an AI agent, technically speaking—and Harrison from LangChain has a fun definition—is that an agent is something that decides when it’s done. Normally, you run code and it completes. But an AI agent needs to figure out when it’s done. That’s a good technical definition. I have a slightly more detailed engineering answer: an agent has an objective, instructions, an AI model (a brain), and tools it can decide to utilize with context to operate. I’m a fan of agents that can call on other agents, like a multi-agent system.

    When I’m thinking about agents, I’m thinking about multiple agents cooperating. To me, the power of agents going forward is this idea that you can think about them as little state diagrams of intelligence that can loop and do more sophisticated things. This is a very different thought process for most engineers. You asked for an example. One is deep research. To do deep research in Box, you have to search, look at the results, get the files, make an outline, create the prose, and then critique it. That’s like 15 steps for these agents. We call that a deep research agent, but it has a multi-agent workflow to process that. I don’t know if you could have done deep research very well previously because there are too many paths to handle. It’s the kind of thing that works really well for an intelligent system like an agent to orchestrate.
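Kus's engineering definition (an objective, instructions, a model as the brain, tools, and the agent deciding when it is done) maps naturally onto a loop like the following sketch, where a stub function plays the model:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    objective: str
    instructions: str
    model: Callable  # the "brain": given context, picks the next action
    tools: dict = field(default_factory=dict)

    def run(self, max_steps=10):
        history = []
        for _ in range(max_steps):
            action, arg = self.model(self.objective, self.instructions, history)
            if action == "done":  # the agent, not the caller, decides it is finished
                return arg
            history.append(self.tools[action](arg))  # invoke a tool, keep context
        return history  # give up after max_steps

# Stub "model": search once, then declare the task done.
def brain(objective, instructions, history):
    if not history:
        return ("search", objective)
    return ("done", history[-1])

agent = Agent(
    objective="find the signed contract",
    instructions="be brief",
    model=brain,
    tools={"search": lambda q: f"results for {q!r}"},
)
print(agent.run())  # results for 'find the signed contract'
```

A multi-agent system then falls out naturally: one agent's tools can simply be other agents' `run` methods.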

    Nataraj: Do you see any form factor for agents? In an enterprise product sense, how does that form factor play out?

    Ben Kus: There’s the AI models concept, which is more of a developer concept. Then there’s the idea of an AI assistant, where you have something there to help you in context, but it’s typically one-shot. The term ‘agentic experience’ (AX) is very interesting in this form factor discussion. OpenAI, Anthropic, and Gemini do a great job of building valuable capabilities into their agentic experiences. You go to ChatGPT or your favorite tool, ask a question, and it just figures out, ‘I’m going to search the internet, I’m going to do deep research for you.’ This idea that you go in and ask a system to do something, and it can recognize your context, is critical. Context engineering is a critical aspect of agentic stuff going forward. This might be the new form factor.

    At Box, when you’re on our main screen, what you want to do is very different than if you have a file open or if you’re looking at all your contracts or invoices. The hard engineering and product problem is to make agents that figure out what you might want at that point. We think about building an agent that handles a certain flow but first figures out what the user wants, and then does a search or queries the system or brings data together. That context engineering is critical. I believe context engineering is one of the more interesting areas developing, and it will be something that everybody will want to hire for soon.

    Nataraj: Let’s touch upon productivity. How much productivity improvement are you seeing within your company? And there’s a group of people panicking that AI is going to destroy jobs, starting with developers.

    Ben Kus: For productivity for our customers, we see people start to use AI a little bit skittishly, and then they use it more and more over time. Especially in enterprises, adoption starts slow, but then they start to add it in big chunks, and you see an acceleration of usage over time.

    Internally, we have seen benefits from using assisted tools for our developers, like GitHub Copilot and Cursor. As the models and integrations have gotten better, they are helping us overall. We don’t think of it as, ‘We can save money and have fewer developers.’ Instead, we’re like, ‘If 25% of our code is written by AI, that’s 25% more we can do to deliver value to customers.’ We’re not constrained by a fixed amount of output we want from our developers; we want more. If tools help people become more productive, that’s wonderful.

    Economically speaking, I’m not a believer in the lump of labor fallacy—that there’s only a fixed amount of things people want to do. I think it’s the opposite. If things get better and cheaper, you want more of it. We want more videos, content, marketing, and internal content because new avenues are now possible. Now, there’s an important aspect: if change happens too quickly, it can be very disruptive. I’m very sensitive to the plight of people in the middle of a disruption. But I see this as a tool to help companies do more. You need good people using AI to help them, as opposed to cutting whole areas.

    Nataraj: Some CTOs have the opinion that they no longer need a lot of junior developers. I always thought this is actually much better for junior developers because if it was taking them three or four years to become senior, it will now take them one year. What’s your take?

    Ben Kus: What you said is true. When you add a junior developer, you often expect a relatively small level of output compared to more senior people. But now, a person who’s really good at using the latest tools is actually quite productive, and that’s a big value. At Box, we have the most developers we’ve ever had, and we’re not only hiring senior people; we’re hiring across the whole spectrum. We just expect people to be able to use tools. Anecdotally, I see that people coming out of school now have always known AI-assisted coding, and they’re good at it compared to somebody who’s been around for a long time and might be resisting it. Also, in areas like context engineering, which is a slightly different form of coding, some of our most successful context engineers are relatively junior in terms of how long they’ve been out of school but really excel at that kind of thing.

    Nataraj: An audience member asks: can you share a little bit about document parsing, how you’re extracting data from those documents, and what models or technologies you’re using behind the scenes?

    Ben Kus: In this world of handling unstructured data, there’s a set of things you always need to do. You have all these different file types. The first thing is to get it to a usable format. Markdown is a great format. Sometimes you have scanned documents or different formats. There’s a big conversion as a first step. Many people talk about PDFs because of all the weird things that go into them. A PDF is not a good format for AI to figure out; it needs to be converted. So step one is to convert it to text with some limited style support like markdown. Then you typically go through and chunk it. You want to make a vector out of the most important section of data. You want it to contain a whole thought. You wouldn’t do it per sentence, but if you did it for giant pages, you’d end up with too many confused topics. So you want a vector to indicate what that area is about. Paragraphs work well at a high level, but then you need more advanced chunking strategies. Then you stick that into a vector database or put the text into your traditional search database.
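The chunking step Kus walks through (each vector should carry a whole thought, so bare fragments get merged with their neighbors) can be sketched as a naive paragraph merger. This assumes conversion to markdown-style text has already happened; production systems use far more careful strategies.

```python
def chunk(markdown_text, min_len=40):
    """Paragraph-level chunking: split on blank lines, then merge
    fragments (like bare headings) that are too short to carry a
    whole thought into the following paragraph."""
    paras = [p.strip() for p in markdown_text.split("\n\n") if p.strip()]
    chunks, buf = [], ""
    for p in paras:
        buf = f"{buf}\n\n{p}".strip() if buf else p
        if len(buf) >= min_len:  # big enough to stand on its own
            chunks.append(buf)
            buf = ""
    if buf:  # flush a trailing short fragment
        chunks.append(buf)
    return chunks

doc = ("# Contract\n\nThis clause describes payment terms in detail.\n\n"
       "Termination requires thirty days of written notice.")
for c in chunk(doc):
    print("---\n" + c)
```

Each resulting chunk would then be embedded and stored in the vector database, with the raw text also going to the traditional search index.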

    Nataraj: Are you building your own proprietary tools for this, or are you using things like LangChain with Pinecone or other vector DBs?

    Ben Kus: My philosophy and the philosophy of Box is that we love all the tools that everybody makes. If people are building the best tool out there—the best vector database, the best document chunker, the best agentic framework—we want to use it. I gave a speech recently at the LangChain conference about the benefits of something like LangGraph. When we started, we had built our own because this stuff wasn’t available at the time. But we are more than happy to go back and retrofit to some of the other systems. I’m very impressed at how good vector databases have gotten in the last few years. Why would we bother to rebuild the things that people are doing such a great job building, especially in the open-source community, or tools that we can buy? We’re big fans; we will replace stuff that we just built because something better is available. With AI, you kind of have to reevaluate every six months.

    Nataraj: What about the models you’re using? In an enterprise, you want to adopt the latest and greatest, but you also want to be secure.

    Ben Kus: We made a decision a long time ago not to build models, and I’m super happy we did that. Also, we are going to support all of the best models that are trustworthy. For us right now, we support OpenAI-based models, Anthropic’s Claude models, Llama-based models, and Gemini. We consider those to be some of the best models out there. Not only do we support them, but we support them in trusted environments. This is critical for many enterprises. For example, AWS Bedrock is a very trustworthy environment to run the Claude or Llama models. IBM will support Llama models for you. These are trustworthy names from an enterprise perspective.

    We utilize these trusted providers and trusted models, and then we pick which model works best for a given task. Gemini is great for data extraction. GPT-style models are great for chatting. They’re all pretty close these days, the leading models. But we let our customers switch as they want. If somebody says, ‘I really think this data extraction is best for Claude,’ we let them do it. We support all of the models, and one of our goals is to support them as they come out. This is very expensive and painful internally because how you properly prompt and context-engineer for Claude is different from Gemini, which is different from OpenAI models. But for enterprises, they often have preferences, and our job as an open platform is to handle those.
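The task-based routing Kus describes, with a default model per task and per-customer overrides, amounts to a lookup table like the one below. The task names and model labels here are invented for illustration, not Box's actual routing.

```python
# Hypothetical default routing table: which model family handles which task.
DEFAULT_ROUTES = {
    "extraction": "gemini",
    "chat": "gpt",
    "summarize": "claude",
}

def pick_model(task, customer_overrides=None):
    """Route a task to a model, letting a customer's preference win."""
    overrides = customer_overrides or {}
    return overrides.get(task, DEFAULT_ROUTES.get(task, "gpt"))

print(pick_model("extraction"))                            # gemini
print(pick_model("extraction", {"extraction": "claude"}))  # claude
```

The expensive part, as Kus notes, is not the lookup but maintaining separate prompt and context-engineering variants per model family behind each route.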

    Nataraj: One final question. If you were building something now, are there any ideas that you would go and attack?

    Ben Kus: It’s a very good question. There are a lot of startups out there doing really interesting things. One interesting idea is to look at areas where an old-school traditional software approach could be disrupted, but maybe it’s so old that people don’t really think it’s cool or interesting anymore. Finding something that is very valuable but not as in the news might be a good approach. Anything we’re talking about all the time will probably have so much competition that you might be behind.

    But I will highlight one thing. If you see something like Cursor—nobody talked about Cursor a couple of years ago. They were up against Microsoft Copilot, one of the biggest companies in the world. An interesting thing is that with Cursor, you start to realize that even though people are using AI to solve a problem, there might be a better way. If you can make a really good product, even despite the VC advice that you’ll never make it in a ‘kill zone,’ you might have a chance. Often, that’s very good advice, but if you really believe you can do it better, it’s a dangerous path, but there are demonstrations of people who built a really good product. I believe those still have a chance in these crazy AI times to become large companies because they just solved the problem really well.

    Nataraj: Because Cursor literally forked VS Code. They bet that the UI could be better for just that product, and that’s the main differentiation.

    Ben Kus: There are a lot of dynamics that go into any existing product. Sometimes a fresh look at it, even a problem that seems solved, can be helpful.

    Nataraj: This was a great conversation, Ben. Thanks for coming on the show.

    Ben Kus: Excellent. Well, thanks for having me on. It was a fun chat.

    This conversation with Ben Kus highlights the practical, strategic thinking required for enterprises to successfully adopt AI. By focusing on security, embracing a multi-model approach, and rethinking core product experiences, companies can unlock the immense potential of their unstructured data.

    → If you enjoyed this conversation with Ben Kus, listen to the full episode here on Spotify, Apple, or YouTube.

    → Subscribe to our newsletter and never miss an update.

  • AngelList CTO Gautham Buchi on AI, Crypto, and the Future of Startups

    In the rapidly evolving landscape of venture capital, technology serves as the primary catalyst for innovation. Few understand this better than Gautham Buchi, the Chief Technology Officer at AngelList. With a rich background that includes senior roles at Coinbase and founding a Y Combinator-backed startup, Gautham brings a unique perspective on leveraging cutting-edge tools to solve complex financial problems. In this conversation with host Nataraj, Gautham dives deep into the operational core of AngelList, a platform dedicated to building the infrastructure for the startup economy.

    He shares how AngelList is harnessing Generative AI to automate fund formation, provide deep, actionable insights for investors, and accelerate capital deployment. The discussion also explores the integration of crypto primitives, such as stablecoins and tokenization, to create new pathways for liquidity in private markets, a critical component for fueling the next wave of innovation. This episode is a masterclass in how modern technology is reshaping the world of startup investing.

    → Enjoy this conversation with Gautham Buchi, on Spotify or Apple.

    → Subscribe to our newsletter and never miss an update.

    Nataraj: To set the context of the conversation, can you give a quick introduction of who you are and what your journey was before joining AngelList as a CTO?

    Gautham Buchi: My journey has largely revolved around key levers that can personally change someone’s life, which is largely education and access to financial tools. It started at Coursera, where we tried to democratize access to good education and then moved on to my own company, furthering the journey. Then to Coinbase, which democratized access to better financial tools using crypto as a methodology. Now I’m continuing on the path to democratize access to capital. Access to capital is probably the single best innovation hack we could do to create more startups. AngelList is in the business of creating more startups, creating more tools for the founders and builders. And I’m really excited to continue that journey there.

    Nataraj: Talk a little bit about for those people who are not aware of AngelList. What are the different products on AngelList and what are the core business drivers among those products? You have rolling funds, venture funds, syndicates. Talk a little bit about that.

    Gautham Buchi: A good mental model that I use is if you think of a triangle where one corner is the founders, the other corner is the GPs, like the Sequoias of the world, and the third corner is LPs, people who want to invest in early-stage venture or venture more broadly. AngelList is smack in the middle of the triangle. Our sole purpose is to make sure that the sides of this triangle are getting stronger and stronger because these three are the pillars of the innovation economy. The first thing you have to believe, to believe in the mission of AngelList, is that startups are good for the world. The creation of more startups is the way we innovate and is the way we accelerate innovation. Now post that, we need to identify how do we really strengthen each of these pillars: the founders, the fund managers, and people who want to invest in early-stage venture.

    If you’re a founder, and maybe this is surprising to a lot of people, Robinhood’s first check was on AngelList. Many companies through our product have been able to come in and say, ‘Okay, I’m looking to access good capital, not just dumb money, but good capital on the platform.’ I can go to AngelList and start a company. We are creating an ecosystem for founders to take the mental gymnastics around starting a company and really focus on building the product. We will build the rails for you to get the capital that you need.

    Moving on to the other corner of the triangle, which is you are a fund manager. Let’s say you have a unique hypothesis, a unique insight into where you think you can be investing to accelerate this innovation. You need two pieces: access to good founder opportunities and access to good capital that is looking to be invested. That is our core fund admin product, the core GP product. This is probably the one that AngelList is well-known for today. You get a lot of tools so you don’t spend time doing the gymnastics of how to raise and deploy capital, but really focus on what you can do to add maximum value to the founders.

    The third corner of the triangle is the people looking to invest in early-stage venture more broadly. This is probably the one that most people have historically known AngelList for. Wherever you are in the world, if you want to invest, get access, and believe in the startup economy, you can write anywhere from a $1,000 check to a $100,000 check. You want to be an angel, you go to AngelList. This is the thing that Naval envisioned: how do I really democratize access to early-stage venture across the world? So we provide a number of tools for people who are looking to get their toes wet in the world of angel investing. To sum it up, the way I would think about AngelList as a business is to really think about the triangle between founders, GPs who are looking to run funds, and then angels. The speed with which we can spin that triangle is essentially the pace of innovation.

    Nataraj: You joined AngelList this year or last year, and you’ve worked in different companies. How is working at AngelList different from working at Coinbase?

    Gautham Buchi: Very different. Right off the bat, crypto in 2017-2018 was very different than crypto right now. To give a specific anecdote, if you and I met in 2017 and you told me that by 2025 we would have a Bitcoin ETF or we would have stablecoins, most people in crypto would have laughed. The pace of innovation is so constant, so relentless, and quite frankly, very uplifting. But there is always this overhang of regulation on top of you. There were many times, especially over the last four years, where being at Coinbase felt very much like you’re fighting a big institution, a regulatory battle. That is not something that we face at AngelList. You don’t spend time thinking about regulation in the way that you would in the crypto world. You’re really thinking about how do I accelerate capital deployment? How do I bring more efficiency to how capital is being deployed? Which is a very different problem space.

    Second, the pace of innovation in crypto is insane. We had a joke at Coinbase that one year in crypto is like 10 years. There’s a popular meme where after five years in crypto, somebody has a white beard and gray hair. It’s very true; I can personally attest to it. Then there’s the eternal optimism. The crypto crowd is probably one of the most optimistic crowds I’ve ever worked with. It’s different in capital products. While innovation does happen, it’s not at the same pace as in crypto. So in the way you think about product, you’re thinking more from a reliability lens, you’re thinking longer-term, which is very different. As for the companies themselves, AngelList is much smaller and much more early-stage. We are about 150 people. Coinbase, when I joined, was probably a few hundred, but it’s now a much bigger company. So that definitely has its own pros and cons.

    Nataraj: There’s one through-line I see between Coinbase and AngelList: both were involved in major regulatory changes. Naval and the team were involved in the JOBS Act earlier to change and make AngelList and crowdfunding happen. Now we are seeing that happen in real-time with some of the crypto legislative changes. I want to pivot towards what I wanted to talk about most in this conversation, which is about AI. Post-ChatGPT, we saw you could do a lot more with this current technology. In my career, this feels like a game-changing moment. I wanted to quickly get your thoughts on what you think of Generative AI and this current AI hype cycle.

    Gautham Buchi: Let me dial back the clock a little bit. I don’t know if your audience is familiar with Coursera; it was an education platform that started in 2011. Our first major success was a machine learning course by Andrew Ng. A lot of people, especially in deep learning, probably got their start with Andrew. At Coursera, we were incredibly excited about it, not just from a pure technology perspective, but also the audience and the learning; these were the most subscribed courses on the platform. The thing that was different back then was it was still largely a research problem. It was harder to think about what an actual go-to-market version was.

    Even when I was starting my own company in 2016-2017, there was a running joke within YC that all you had to do was attach .ai to your domain and you automatically would raise a bunch of money. So there was definitely hype cycle number two happening in 2017. What’s different about this particular iteration is one, it moved from being a research problem to an engineering problem. You could take the model in a box, assume somebody has already done all the heavy lifting, and now you’re just trying to figure out what other things in the ecosystem you need to connect to make sense out of this. That’s been incredible to see.

    Second is the utility of it. The utility back in the first cycle of my experience, 2012-2013 machine learning, 2016-17 AI, you really had to squint your eyes. There was always a human in the loop. The utility was not obvious. You had to bet that one day this thing would actually be at a point where you will see real feedback loops. But we are in a world where you can parse a PDF instantly in a couple of seconds, or you could do voice translation. So now you bring these two ideas together: it’s an engineering problem, and the utility is instant. That means you have a very fast feedback loop. You and I can spend the next 20 minutes and literally build something, put it out in the world, and see how people are interacting with it. And that is very powerful.

    Nataraj: What do you think of the use cases that are most exciting for you as a CTO, and how are you at AngelList adopting AI in different ways?

    Gautham Buchi: There are two interesting questions there: what is very exciting to me, and what is very exciting to the business. To me, it is so interesting to see the blurring of the roles. Even three years ago, if you wanted to build an MVP, you’d ask, ‘Who’s going to be the designer? Who’s writing the PRD? Who’s going to be building this project?’ That’s a lot of overhead. Today, our chief legal officer builds an end-to-end product himself. That includes the design, the spec, the PRD; he releases it, he’s tracking analytics. Our designer is building end-to-end products. An intern is building end-to-end. We really went from a role in a box to a product in a box. We have this full spectrum of skills that are very much available to you. The conversations become so much sharper on a day-to-day basis. This idea that you have to go through multiple iterations to even define what you’re doing will become so outdated, and the roles become so blurry. It is increasingly becoming hard to define the role of a product person versus an engineer.

    Second is your ability to deploy and get the boilerplate out of the way has been huge. The hard problem in most companies is working with legacy code, not greenfield code. The moment you are able to put things in place that can abstract the legacy away from you or even better, intelligently retool the legacy for you, you’re taking a ton of work out of the way. We are able to now see folks join and start deploying the same day. It used to be aspirational, but now it’s almost an expectation because of all the tooling available.

    The third thing goes back to the base-level expectation. I have this view that it will be increasingly hard to see a good role for yourself if you don’t become very quickly AI-native, meaning being able to understand which tools create maximum leverage. It almost feels like, ‘Am I late?’ You’re not late, but now is a good time to start. I can clearly see the difference between teams that have adopted AI and the teams that are still lagging behind. The difference is so clear, so obvious, that we now have a default expectation that everybody’s trying out these tools.

    Nataraj: What are the blockers for teams that aren’t AI-native?

    Gautham Buchi: I don’t think it is a philosophical stance. It is more of an inertia and momentum thing. You could also be skeptical. For what it’s worth, I was skeptical at one point as well. If you are an engineer today and you have not tried one of the IDEs like Cursor, CodeWhisperer, or Copilot, you are already behind. So inertia could be a big component. Second, there are some good reasons not to do it, depending on which team you’re in. For example, if you’re in security or a very critical path, you want to spend that extra time and attention. At Coinbase, we had a lot of concern internally around what we might accidentally expose because a lot of these are also primitives that are being built right now.

    Nataraj: I always call AI right now ‘draft AI’ because it gets you the draft pretty fast. But if I’m reporting business numbers to my leadership, I want to depend on myself to review each line, even if I use AI to write it. You still need that 5% manual intervention, but that 95% is a really big time saver. Can you talk about some examples of how you’re using AI in your own products at AngelList?

    Gautham Buchi: Let’s talk about our customer type. On a typical fund deployment, there are a lot of workflows you go through that are sequential, whether it’s legal, boilerplate, or dependent on internal movements. One of the metrics we track religiously is how long it takes for you to deploy your fund, raise your fund, or get set up for the fund. We are increasingly using a lot of AI and automation to do that. One thing we do is doc parsing. In a fund formation, there are tens, if not hundreds, of docs. We can parse the docs, provide the information that is very relevant to you, and automate your deployment. This is integral to how we simplify fund formation.

    The second thing is operational. Once you have your fund deployed, the thing that AngelList is known for is the venture associates and the quality of service. We want to enable our internal teams to very quickly get access to data at their fingertips. A couple of years ago, pulling up a specific legal term for a GP would be half a day’s task. Now we have built internal tooling where our customer support and venture associate teams can, in most cases, auto-resolve issues. We can pull information, make sense of it, and spit out exactly what the customer is looking for. This feeds into the cycle of closing feedback loops and becoming more efficient.

The third bucket is that AngelList is sitting on a gold mine of data. One of the hardest resources to access is early-stage venture data. There are hundreds, if not thousands, of companies on the platform raising money. There is a tremendous opportunity here where we can drive deep insights into what’s happening with your portfolio. We can tell you exactly where you’re invested, opportunities you might be missing, and how your fund is performing compared to the rest of the funds on the platform. We are now able to start doing some of that using AI.

    Nataraj: Talk a little bit about your crypto integration as well. I know AngelList was one of the first adopters of Circle a couple of years back. Is tokenizing shares on a blockchain a path that AngelList is looking towards?

    Gautham Buchi: This is one of the best opportunities for AngelList. One thing we have done very concretely today is we enable USDC funding. If you’re a startup that is raising money with USDC, AngelList allows you to do that at no fee, and we have seen pretty significant adoption. The second opportunity is distributions. For a lot of crypto companies, distributions happen through crypto tokens. Us being able to support that means if you’re a crypto company that has an exit, your investors are able to get and keep those tokens on the AngelList platform.

Moving on to stablecoins, I think it’s one of the most exciting areas right now because they offer near-instant settlement. This drastically simplifies cross-border wires, which are a massive pain. This is something we are seriously thinking about: how do we make capital deployment more efficient? We are exploring how we can make stablecoins a first-class citizen on the platform, potentially enabling digital wallets for all customers and LPs.

    The second bucket is tokenization. What has changed over the last seven or eight years is companies are increasingly choosing to stay private. Stripe, OpenAI, Anthropic are examples. This means your capital is locked for much longer than historically seen. While you’re happy the valuation is going bonkers, at the end of the day, this is on paper; it’s not liquidity yet. And liquidity is really important because it fuels the next generation of startups. One of the things we are seriously thinking about is how do we create liquidity for the GPs and investors on the platform. On the technology side, tokenization is a reality. We are seriously thinking about how we can bring the regulatory framework, tokenization framework, and KYC/AML together to create liquidity for the funds on the platform and create good incentives for founders to participate in it.

    Nataraj: As the lines are blurring, what skills should product managers invest in building?

Gautham Buchi: Your ability to go from an idea to seeing it in the world has dramatically changed. The most powerful thing product managers have today that they didn’t have before is an ability to take their product idea, put it out in the world, gather actual data, and then come back to the table. They can say, ‘Here’s an MVP that I was able to build for myself. I’m not stuck in multiple rounds of prioritization. Here are 10 people I have shown this to, and here’s the information I received.’ That is so powerful and empowering. The classic role of a product manager as an information router is quickly disappearing. If you are purely serving the purpose of routing information and doing prioritization, you’re in trouble. We have moved to a world where product managers are empowered to very quickly generate these prototypes and take them to market. That’s what I would invest in right now.

Nataraj: Thanks, Gautham, for coming on the show and sharing your insights and time.

    Gautham Buchi: Likewise, thank you for having me and nice to meet you all.

    Gautham Buchi provided a clear look into how AngelList is pioneering the future of venture capital by integrating AI and crypto. This conversation highlights the tangible benefits of these technologies in making startup investing more efficient, accessible, and liquid for founders, GPs, and LPs alike.

    → If you enjoyed this conversation with Gautham Buchi, listen to the full episode here on Spotify or Apple.

→ Subscribe to our newsletter and never miss an update.

  • Jared Siegal on Navigating AI’s Impact in Digital Advertising

The world of digital advertising is a complex, rapidly evolving ecosystem that powers much of the free content we consume online. At the heart of this system is programmatic advertising, a technology that automates the buying and selling of ad space in real time. In this conversation, Nataraj sits down with Jared Siegal, the founder and CEO of Aditude, a company at the forefront of empowering publishers to maximize their revenue in this competitive landscape. Jared shares his unexpected entry into the ad tech industry, demystifies concepts like header bidding and ad exchanges, and explains how his company’s SaaS model is disrupting the status quo. They also explore the seismic shifts caused by AI in search, the ongoing debate around Google’s market dominance, and what the future holds for content creators and publishers trying to navigate this intricate digital world.

    → Enjoy this conversation with Jared Siegal, on Spotify or Apple.
    → Subscribe to our newsletter and never miss an update.

    Nataraj: So I think a good place to start would be how did you get into this ad business?

    Jared Siegal: Totally by accident. I don’t think anyone grows up saying, “I’m going to serve ads on the internet” or even really understands that this part of the economy exists. I went to school for econometrics, which is applying economic theory to math problems and vice versa. I couldn’t even tell you how I got into that, but as I was getting ready to graduate, I really wanted to work in the car industry. I reached out to every graduate from my university who worked at Ford, Chevy, and all the major brands here in the US. Couldn’t get a job.

    I went to the head of our school’s entrepreneurship program and said, “Hey, I got a cool idea for a class I wanna teach here.” I pitched it to him, and he said, “This is a great idea, but I really think you should meet this guy, a former graduate from our school. He runs a company called Answers.com.” I met him, and frankly, I had no idea what they did. I didn’t understand it. He offered me a job and I said sure. And that’s how I got into online advertising. I had a choice of working on the revenue side of the business or the cost side. I was always taught growing up to always be a revenue driver, so I chose the revenue side. That forced me into Ad Ops, and very quickly, within a few months, I fell in love with this industry.

    Nataraj: So what were you doing? Was it trying to grow revenue or grow traffic?

    Jared Siegal: It was twofold. One was actually trying to grow revenue, and one was trying to grow traffic, which obviously indirectly and directly grows your revenue. On the revenue side, this was right when DFP, now GAM, was created. So I was literally learning how to integrate DFP on a website, figuring out how to get away from this concept of a waterfall auction into something a bit more programmatic and real-time, and creating a bunch of different layouts and page types to understand which ad units, sizes, and arrangements make us the most money.

    Nataraj: Can you explain DFP and the waterfall concept?

    Jared Siegal: Yeah, DFP, which is now called GAM, is Google’s ad server. It’s used by almost every website on the planet to host the final auction of that ad on your website. Before that, people were hard-coding ads on their websites and hoping they made money. The creation of the ad server meant that you as a publisher could host an auction, get a bunch of people to compete for that ad, and choose the highest winner. Waterfall is this idea of, let me call Google, if Google doesn’t fill, let me call partner B, if partner B doesn’t fill, let me call partner C. Where we are today in programmatic is, let’s get Google, partner A, B, C, D, E, F, G, all to compete in real-time. They all bid at the same time, and whoever wins, wins. It’s a little bit faster, a little bit more efficient, and it’s far more accurate in terms of valuing your audience.
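The waterfall-versus-programmatic distinction Jared describes can be sketched in a few lines of Python. This is purely an illustration of the mechanics: all partner names, bid values, and the floor price here are hypothetical, and it is not how any real ad server is implemented.

```python
# Illustrative sketch only: hypothetical partners, bids, and floor price,
# not real exchange behavior or any vendor's actual code.

bids = {"Google": 1.10, "PartnerB": 1.45, "PartnerC": 0.90}

def waterfall(bids, order, floor=1.00):
    """Call partners one at a time; the first bid that clears the floor fills the ad."""
    for partner in order:
        if bids.get(partner, 0.0) >= floor:
            return partner, bids[partner]
    return None, 0.0

def parallel_auction(bids):
    """Header bidding: collect every bid at once and let the highest one win."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

# The waterfall stops at the first partner that fills, even though a later
# partner would have paid more; the parallel auction finds the true top bid.
print(waterfall(bids, order=["Google", "PartnerB", "PartnerC"]))  # ('Google', 1.1)
print(parallel_auction(bids))                                     # ('PartnerB', 1.45)
```

With the same three bids, the sequential waterfall fills at $1.10 because the first partner in line clears the floor, while the real-time auction surfaces the $1.45 bid, which is exactly why Jared calls it faster and more accurate at valuing the audience.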

Nataraj: So let’s explain the lay of the land today. For example, I go to a site like verge.com and I see display ads. What are all the players involved when I’m seeing that ad? Who’s the publisher, who’s the bidder, who’s the exchange, and what is Google’s role versus Aditude’s role?

Jared Siegal: Okay, cool. So you go to that website; the website is the publisher. They’re the one that is publishing the content, and you’re on that site because you like their content. When that ad gets served, 99% of the time that publisher probably uses Google Ad Manager as their ad server. It’s what eventually makes the final decision of who had the highest bid from all of these exchanges. Google is also an exchange, but there are hundreds of exchanges that work with publishers directly and tens of thousands behind the scenes. All of these exchanges need what’s called a wrapper to host this auction and pass all the different bids and ad creatives into Google Ad Manager so it can make a decision. And that’s what Aditude does. There’s a handful of companies that do that part of the business. You have the ad server, you have the advertisers, and you have the company that is connecting the advertisers to the publishers. Aditude is that connection.

    Nataraj: So you collect the different bids for the ad spot. Where are you collecting them from?

    Jared Siegal: It’s all happening in the browser in real-time. Publishers basically load our code in the head of their page. On page load, boom, we instantly start pinging all these different advertisers they have relationships with to find the highest bids and send them along. It’s happening in milliseconds. For that one ad to be served to you on that one website, there were probably millions of different agencies, brands, and companies that got pinged in a matter of milliseconds to say, “Do you have something for me?”

Nataraj: Why do customers choose Aditude versus just using Google? Because it feels like Google is a competitor here.

Jared Siegal: To some extent. You could go directly with one exchange, and they’ll probably be able to serve most of your ads. But what happens when you only have one exchange is it’s no longer really an auction. They can pay whatever they want for that ad because they don’t have to beat out anyone else. Where a company like Aditude comes into play is we say, don’t let Google or Facebook dictate the value and price of that ad. Have a bunch of people compete and let the highest one win. In an auction, you want as many bidders as possible. You don’t want one person bidding because then they’ll just bid a dollar and they’ve won.

Nataraj: So it’s better to use Aditude to create a neutral playing field.

Jared Siegal: Yeah, you need some piece of technology to do that because Google Ad Manager and most ad servers don’t natively integrate all of the other exchanges. They’re limited to their own exchange. So if you want a bunch of exchanges to compete, you need this third-party tech to layer on your page. How we separate ourselves is our business model and the fact that we are agnostic. Everyone else in our space takes a percentage cut of the publisher’s business. We don’t have ulterior incentives to let one exchange win more because they pay us a higher rev share. It’s irrelevant to us who wins. We just want the publisher to make as much money as possible. We built a pretty big name for ourselves with the first SaaS pricing model in this space.

    Nataraj: Let’s talk about Google’s role. Do you have a view on the whole trial of Google as a monopoly?

Jared Siegal: Let me preface this by saying we’re a really good partner of Google’s, and Google’s a great partner of Aditude. But there’s a reason why companies like mine exist, and that is because Google has historically had the last look at every auction. If they’re the ad server being used, they see all the other bids that come in, and after they see all that, they can say, “Hey, do I want to bid one penny more and steal that impression?” That starts getting into this idea of, is it really a fair auction? Companies like mine have been coming up with creative ways to make it fairer, whether it’s through setting price floors or creating our own ad server. With all of the recent news about monopolization, if you’re in our space, you’re kind of sitting back saying, “Yeah, obviously this has been going on for 20 years. Everyone knows this.”

    Nataraj: How did you start as a consulting firm and transition to a full-fledged product company?

    Jared Siegal: I started this company by accident. I quit my job and just wanted to do something on my own, so I started consulting for a bunch of publishers I had become friendly with, charging them by the hour. I did that for about 12 to 14 months and got the business up to close to a million-dollar run rate. Back then, auctions were second-price, meaning the winner pays one penny higher than the second-highest bid. I made a career for a year of trying to figure out the gap between the first and second bids and setting minimum prices to capture more revenue. Then Google said everything’s moving to a first-price auction, and my whole business model was gone. At that time, a lot of my clients were using the same header bidding company and having a lot of issues. They were paying me an hourly rate to communicate those issues to this third-party company. I realized, why am I helping someone else grow their business? I should build this piece of tech myself, do a better job, and sell it to my existing clients. I gave it away for free for six or seven months to grow the tech, and eventually, I converted all my clients. At that point, I got an offer to buy the company from an ad exchange. I was blown away. I sat down with my wife and some friends, and they all said, “Don’t sell, grow the business.” So I called up my best friend, who’s now our CTO, and said, “Quit your job. Come over here. Let’s build something.” And the rest is history.
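The second-price dynamic Jared built his early consulting practice around can be sketched as follows. The numbers and the price-increment convention are hypothetical, and this illustrates the general auction mechanics, not Aditude's actual floor-setting logic.

```python
# Illustrative sketch with hypothetical numbers, not any real floor-pricing
# system. In a second-price auction the winner pays just above the
# second-highest bid, so a floor set inside the gap between bid #1 and
# bid #2 lifts what the publisher actually collects.

def second_price_clearing(bids, floor=0.0, increment=0.01):
    """Return what the winner pays, or None if no bid clears the floor."""
    top, second = sorted(bids, reverse=True)[:2]
    if top < floor:
        return None  # the impression goes unfilled
    return round(max(second + increment, floor), 2)

gap_bids = [2.50, 1.20]  # a large gap between the first and second bids
print(second_price_clearing(gap_bids))             # 1.21, the gap is left on the table
print(second_price_clearing(gap_bids, floor=2.00)) # 2.0, the floor recaptures most of it
```

This is also why Google's move to first-price auctions wiped out that consulting model overnight: in a first-price auction the winner simply pays their own bid, so there is no first-to-second-bid gap left to recapture with floors.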

    Nataraj: Post-ChatGPT and Google’s AI search results, how is that affecting publishers?

    Jared Siegal: For sure. The fact that Google rolled out AI in its search results radically changed SEO. If you’re a website where the majority of your content is easily answerable in one sentence or a yes/no manner, AI is going to crush your business because the answer appears in the search results and the user never clicks through to your website. If you’re a site that has opinionated, long-form content, or things that are not a simple question-answer relationship but more like thought pieces, you’re probably much safer, at least for now. AI inside of search results has made the internet worse. I think most publishers would agree. Every piece of tech developed in our industry has always been to help the biggest players—advertisers and search engines—not publishers. AI has a huge impact on traffic for a lot of publishers.

    Nataraj: Internet traffic seems to be shrinking or consolidating, but Google and Facebook are still increasing ad revenue. How is that possible?

    Jared Siegal: To some extent, it’s pricing control, but also an important piece of information is that any search engine probably makes more money from the ads served in their search results than the revenue share they get on ads they help serve on publisher websites. If you search on Bing and click on one of the paid search results, they probably made a dollar. If you click a link to a publisher’s website, they might make a few pennies. There’s a huge asymmetry and a conflict of interest here. It behooves them to not send you traffic and to keep you within the search results page. They make more money that way.

    Nataraj: What’s a common misconception about running a company that you’ve found not to be true?

Jared Siegal: There’s this concept that was hot a few months ago about founder-led versus employee-led businesses, and many people were anti-founder-led. I am very involved in the day-to-day of Aditude, from cutting checks to talking with publishers to running A/B tests to negotiating deals. I love it and I do it all. I think a successful entrepreneur and leader is someone that actually understands all aspects of the business. People say, “Just hire smarter people and have them handle all that.” 100%, have them handle it, but you better understand what they do better than they do. If you want to run a successful company, you need to understand every penny that comes in and every penny that goes out. We’re very much a founder-led business, and I think it’s what has allowed us to scale up as quickly as we did.

    Jared’s insights reveal the intricate balance of power in the ad tech industry and the critical need for solutions that champion publishers. As AI continues to reshape content discovery, the strategies discussed in this conversation offer a valuable roadmap for navigating the future of digital monetization.

    → If you enjoyed this conversation with Jared Siegal, listen to the full episode here on Spotify or Apple.
    → Subscribe to our newsletter and never miss an update.

  • Ambarish Mitra on Grey Parrot: AI for a $1.6 Trillion Waste Crisis

    The global waste crisis is a staggering $1.6 trillion problem, with mountains of discarded materials ending up in landfills and oceans. But what if we could see this “waste” not as trash, but as a valuable resource? This is the mission of Ambarish Mitra, co-founder and CEO of Grey Parrot. After a successful journey in augmented reality with his previous company, Blippar, Ambarish pivoted to tackle a more tangible and pressing global issue. Grey Parrot uses sophisticated AI and computer vision to analyze and sort waste streams in real-time, bringing unprecedented intelligence to the recycling industry. In this conversation, Ambarish discusses the technological challenges of deploying AI in harsh industrial environments, the importance of building a cost-effective hardware and software solution, and how data is key to unlocking a truly circular economy where materials are recovered and reused, not discarded. It’s a fascinating look at the intersection of deep tech and environmental sustainability.

    → Enjoy this conversation with Ambarish Mitra, on Spotify, Apple, or YouTube.
    → Subscribe to our newsletter and never miss an update.

    Nataraj: What is Grey Parrot, and how did the idea start?

    Ambarish Mitra: Grey Parrot is a waste intelligence platform that uses computer vision-based AI blended with material sciences to recognize large-scale waste flows. When people throw away rubbish, it ends up in material recovery facilities where it’s processed and sorted for recycling, landfill, or incineration. Grey Parrot uses analyzer boxes to recognize 100% of the waste flowing through these plants, helping to sort it more efficiently. It’s a large and complex problem because humans generate garbage at such a massive scale that it can’t be solved with just human or mechanical interaction alone. It requires a large amount of vision-based processing and was almost waiting for the AI era to kick in to address it. We saw a large, unaddressed opportunity. Plus, waste is a global crisis that impacts lives and the planet, so we decided to address this issue head-on.

    Nataraj: Was the initial idea to do what you’re doing today, or was it different?

Ambarish Mitra: It was different. My co-founder and our initial team came from my previous company, Blippar, where our mission was to build the world’s first visual search engine. We built a large-scale vision model, but we realized our revenue model led to recognizing brands that often ended up in the bin. This got us thinking. Everyone has mapped the consumption world—Amazon, DoorDash, Instagram all know what you’re about to purchase. But once that $23 trillion of annual consumption ends up in the bin, there is almost no digitization. I call it the shadow economy. One reason waste remains waste is that no one is doing enough digitally to value and recover it. That’s why so much value is lost. So the idea came: why don’t we use our vision expertise to do something more impactful and circular? We call it waste, but we see it as paper, aluminum, and different types of plastic. We think of ourselves as a material asset recovery company rather than a waste company.

    Nataraj: What is the actual product that you’re selling to companies in the recycling ecosystem?

    Ambarish Mitra: Let me give you a brief intro to how waste works. Waste is thrown in bins, collected by trucks, and taken to Material Recovery Facilities (MRFs). It’s tipped out, piled onto conveyor belts, and goes through layers of mechanical processes. There are large leakages in that process, and a majority of that leakage ends up in landfill. Our goal is to reduce that leakage. We built hardware we call the analyzer. The job of the Grey Parrot analyzer is to analyze 100% of the waste flow in real time. These are rivers of waste on belts two meters wide, moving at three meters a second, processing up to 1,500 tons of waste per day.

    When the camera recognizes 100% of the waste flow, it helps plant owners understand the unit economics of their business—what material comes through and what its financial value is. Secondly, it provides waste analytics to show if the plant is efficient or inefficient because every percentage difference is a revenue opportunity. The last thing is quality control—the purity of the materials. The more single-stream a material becomes, the more a buyer will pay for it. Finally, we’re integrating a brain into these mechanical machines, much like Waymo makes existing cars into self-driving cars. We are making these plants semi-automated by applying intelligence to existing mechanics, sending signals from one gate to another to ensure everything is sorted as purely as possible. The plant owner sees a dashboard where all this data is available, showing if the plant is working optimally.

    Nataraj: What are the architectural and structural issues specific to this industry that you had to navigate? It sounds like you’re shipping hardware and software into environments that are not known for being tech-savvy.

    Ambarish Mitra: That’s a great question. This is not a category where you can grow at any cost. It’s a cost-prohibitive industry where every cent matters. Unlike growth-oriented industries like e-commerce or advertising, you can’t have a variable cost architecture where revenue compensates for growth costs. Here, we have to recover more waste and create value from it. The tonnages are massive. So, we had to build an architecture where a lot happens locally on the machine. Our deep learning models sit locally so our costs don’t go up as we process hundreds of millions of images. We process images at the scale of social networks, but we’re processing trash, not people.

    It also needs to be near real-time, because the system has to react within 30 milliseconds to trigger a robotic arm, an optical sorter, or stop the plant for hazardous materials. The system cannot rely solely on internet connectivity. We came up with an architecture that requires the internet periodically, but a lot of the processing is on the edge. A huge amount of the vision processing actually happens on the camera itself to normalize images, because lighting conditions in every plant are different. We built one platform that works in every plant. It was an interesting challenge to consider everything from image capture to model building to ensure it works with 99% efficiency, 24/7.

    Nataraj: Can you talk a little bit about customer acquisition? How did you approach your first five to 10 customers and how do you scale now?

    Ambarish Mitra: As an outsider, we had to learn the hard way. We came from a background of large-scale, vision-based compute, but we didn’t understand waste. So, in the first days, we did something smart: we built the first version of the product *with* the waste industry. We asked waste management companies what problems they were trying to solve, like counting for audit trails or quality control. We learned from them and released our first version by talking to seven or eight customers, giving them the intelligence for free for the first two years while we built our larger model.

We also didn’t build it in just one geography. We spread out across Europe, America, and South Korea to get diversity of data. Commercially, we started with a direct sales model, hiring people from the industry. Then we learned there’s a whole middle tier of specialized salespeople who are plant builders. They were already aggregating multiple technologies to build a plant, so it made sense to partner with them. In the last two years, we partnered with Bollegraaf, the world’s biggest plant builder, and Van Dyk Recycling Solutions in the US, America’s largest. We disintermediated our direct sales model through these strategic partnerships, which made us more cost-efficient and allowed us to scale effectively.

    Nataraj: Which countries are doing the best when it comes to waste management?

Ambarish Mitra: Japan and Korea are very good. Germany is very good. The society is very conscious, and it’s designed to collect waste in many forms, not just from bins. Germany has a deposit return scheme where people can return bottles for vouchers, for example. I would say there are four components to solving this. One is the manufacturer, who can take more responsibility through standardization, like how USB cables were standardized. Then you have the government’s role, which can enforce regulations. Then you have the waste management side, which can optimize and digitize with AI. And the last quadrant, which has a lot of power but often doesn’t use it, is the consumer making choices that are more circular in nature. Today, consumers are making some choices, governments are doing something, and a few brands are doing a few things in fragments, but a perfect storm hasn’t happened yet.

    This conversation with Ambarish Mitra offers a compelling look at how advanced AI can be applied to solve one of the world’s most fundamental environmental problems. Grey Parrot’s innovative approach not only enhances the efficiency of recycling but also provides the critical data needed to build a sustainable, circular economy for future generations.

    → If you enjoyed this conversation with Ambarish Mitra, listen to the full episode here on Spotify, Apple, or YouTube.
    → Subscribe to our newsletter and never miss an update.

  • How Chronosphere’s Founder Solved Uber’s Observability Crisis

    The Challenge of Modern Observability

    In the rapidly evolving world of cloud-native technology, observability has become a cornerstone for maintaining reliable and performant systems. Yet, as companies shifted to containerized environments like Kubernetes, traditional monitoring tools struggled to keep up with the scale and complexity. Martin Mao, co-founder and CEO of Chronosphere, experienced this problem firsthand while leading the observability team at Uber. He witnessed the explosion of data and costs associated with monitoring microservices at a massive scale. This challenge became the crucible for a new idea. Martin joins us to share the story of how he and his co-founder turned their internal solution at Uber into Chronosphere, a leading observability platform. He delves into the nuances of building for a containerized world, the strategy behind competing with cloud giants, and the future of observability in the age of AI.

    → Enjoy this conversation with Martin Mao, on Spotify, Apple, or YouTube.

    → Subscribe to our newsletter and never miss an update.


    The Genesis of Chronosphere at Uber

    Nataraj: How did Chronosphere start? When did you decide you had to stop working at other companies and start your own?

    Martin Mao: The story goes back to when my co-founder and I worked at Uber, where we led the observability team. We faced many of the challenges internally at Uber that we’re now solving for our customers at Chronosphere. We ended up creating a bunch of new technologies in that solution and open-sourcing many of them. That showed us that the observability problems we were solving for Uber were also being seen by the rest of the market as they started to containerize their environments. Ultimately, that led us to decide we should create a company to bring the benefits of this technology to the broader market.

    Nataraj: What was the specific problem you faced at Uber that wasn’t being solved by available tools at the time?

    Martin Mao: If you think about observability, it’s about gaining visibility and insights into your infrastructure, applications, network, and business. The concept isn’t new; we’ve had observability software, previously called APM or infrastructure monitoring software, for a long time. What happens when you start to containerize and modernize your environments is twofold. First, you’re breaking up larger monolithic applications into smaller microservices. You have more tiny pieces running on containers, which are running on VMs. There are just more things to monitor, which generally produces a lot more observability data. The first problem you’ll find is either there’s too much data for your backend, or it costs you too much.

    Second, the types of problems you’re trying to solve on monolithic apps running on a VM are different from the causes of problems in a distributed, containerized environment. A lot of APM software focused on how software interacted with hardware and the operating system. In a containerized world, you often don’t have access to that level, and a cause of your issue is more likely a downstream dependency, a deployment, or a feature flag change. The causes of problems have changed, so you need a tool optimized for these new types of issues. Those were the two big problems we saw at Uber: too much data, too much cost, and it wasn’t the ideal tool for these new environments. When we looked at the market at the time, there was nothing we could buy, so we were forced to build our own solutions.

    Nataraj: What services were available at that point? There’s a lot more competition in the observability space now.

    Martin Mao: There was still a lot of competition back then, but different types of companies. Tools like AppDynamics and New Relic were very popular. Even Datadog was a Series C company when we were looking at this problem space. There were many solutions, but none were targeting containerized environments. In 2014, when we were solving this at Uber, the majority of the market had not containerized. It was pre-Kubernetes becoming the de facto platform. Most folks were running on VMs, and an APM-style piece of software was probably the right solution.

    Nataraj: You mentioned open source. Was this the M3 database that you open-sourced?

    Martin Mao: Yes, it was multiple solutions. One was M3, the backend, which was a time-series database great for storing metric-based data. Jaeger, for distributed tracing, was created by the same team and is a CNCF project today. We also open-sourced various clients and other pieces.

    Acquiring the First Five Customers

    Nataraj: So you saw a gap in the market and decided to start the company. What were those initial days like? Talk to me about getting your first five customers.

    Martin Mao: We saw the gap in the market later, around 2018-2019, especially after KubeCon in Seattle when all the major cloud providers announced they were going all-in on Kubernetes. It was only then that we realized there was a real gap in the broader market. In the beginning, it was quite difficult. Just like every other startup, nobody knew who we were. There was no brand recognition. For the first one or two customers, there was a bit of trust because we had worked with people at those companies when we were at Uber. They knew us as the observability team at Uber and had used the technology before, which gave us some credibility. Honestly, the rest was just typical outbound efforts. I was on LinkedIn every day sending 500 messages to various VPs and CEOs, saying, ‘Hey, this is us, this is the problem we’re trying to solve. Can I get you on a call?’ A lot of outbound emails and messages to get those opportunities.

    Nataraj: Observability is mission-critical, used to find and fix live issues. It must be hard to convince a company to adopt a new mission-critical technical product. Were your initial customers transitioning to Kubernetes and saw it as a good time to test a new solution?

    Martin Mao: Initially, it was a lot of companies that had already transitioned. These were tech-forward companies running mostly containerized environments at scale in 2019-2020. Being mission-critical probably didn’t help us as a startup. You’re trying to convince a company to replace a mission-critical piece of software they’re likely purchasing from a big public vendor with a well-known brand name. As a one or two-year-old startup, the benefit of switching had to be so large that it would outweigh the risk. For us, early on, the benefit was on the scale and performance of the backend, but also on cost efficiency. It was so much more cost-efficient than other solutions. We’re not talking 20% more cost-efficient; we’re talking four to five times more cost-efficient. The gap had to be very large.

    The Chronosphere Platform: Differentiating on Cost and Capability

    Nataraj: Can you give a high-level overview of the products Chronosphere offers today and talk a bit about the business model?

    Martin Mao: We offer two products. One is our observability platform, which can ingest and store logs, metrics, traces, and events from your infrastructure and applications. We then provide analytics capabilities on top to help you debug issues. Compared to others, it differentiates in two main ways. The first is cost efficiency. We realized there’s a lot of waste in observability; you store and pay for a lot of data you may not need. Most observability companies charge you more the more data you produce, so they aren’t motivated to help you reduce it. As a disruptor, we had to do something different. We created features that show the customer what is and isn’t useful, giving them tools to optimize the data so they only pay for what’s useful. This not only reduces costs but guarantees that every dollar is well spent.

    The second differentiator is that you need a different tool optimized for modern environments, where the probable cause of an issue is a downstream dependency, a new rollout, or a feature flag change. Our platform looks for those changes and correlates them with issues. Our customers have found they reduce their time to detect and resolve problems by around 65%.

    Separately, we have a solution called an observability telemetry pipeline. You can install this in your environment in front of an existing tool like Splunk or Elastic. It can route and transform the data it collects to those backends, but it can also reduce and optimize data volumes. For instance, you can route subsets of data to cold storage like S3 to reduce costs. You don’t have to use it with our observability platform, but it provides a similar benefit without a full migration.
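The routing idea Martin describes can be sketched in a few lines. This is a hedged toy model, not Chronosphere's actual pipeline API; the class name, the `level` field, and the hot/cold rule are all illustrative assumptions:

```python
# Toy model of an observability telemetry pipeline: route each record either
# to an expensive, queryable "hot" backend (think Splunk/Elastic) or to cheap
# "cold" storage (think S3) by rule. All names here are illustrative.

from dataclasses import dataclass, field

@dataclass
class Pipeline:
    hot: list = field(default_factory=list)   # expensive, queryable backend
    cold: list = field(default_factory=list)  # cheap object storage

    def ingest(self, record: dict) -> None:
        # Keep errors and warnings hot for debugging; archive everything else.
        if record.get("level") in {"error", "warn"}:
            self.hot.append(record)
        else:
            self.cold.append(record)

p = Pipeline()
for rec in [{"level": "info", "msg": "ok"},
            {"level": "error", "msg": "timeout"},
            {"level": "debug", "msg": "cache hit"}]:
    p.ingest(rec)

print(len(p.hot), len(p.cold))  # 1 record stays hot, 2 go to cold storage
```

The point of sitting in front of an existing backend is exactly this: the rule runs before the expensive tool ever bills for the data.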

    Nataraj: So customers using competitors’ observability products think about cost predictability?

    Martin Mao: In the last two to three years, as the economy has changed, they care about it a lot. It’s not just the absolute dollar amount. Our customers ask what fraction of their revenue or operating expense is spent on observability. The predictability and knowing the relative percentage of cost matters. If your business grows 2X, but your observability costs grow 3X, that’s a bad efficiency model. Being able to see and control that is key. We provide tools that show them where their spend is going and how data is being used, giving them the ability to make decisions and stay within their budget.

    Competing in a Crowded Ecosystem

    Nataraj: All the big three clouds—AWS, Azure, Google—have their own observability products like CloudWatch and Azure Monitor. How do you compete with them, especially with bundled pricing advantages?

    Martin Mao: I look at this in a few ways. First, what’s unique about observability is that it’s meant to tell you if your infrastructure is up or down. If your observability service runs on the same infrastructure you’re monitoring, there’s a problem. For example, AWS’s observability services depend on S3 and Kinesis. When S3 goes down in a region, your infrastructure is likely impacted, but the thing meant to tell you that is also down. It’s in that moment you need observability the most. There’s a huge advantage in decoupling your observability from the infrastructure it monitors. Our architecture is purposely single-tenanted, allowing us to ensure we are not on the same public cloud infrastructure as our customers.

    Another angle is that cloud providers are really good at providing building blocks—the underlying infrastructure—but historically less great at building end-to-end SaaS products. Their observability services are decent for storage, but they lack advanced capabilities for data efficiency, root cause analysis, or anomaly detection. If you look at the leaders in the observability market—Chronosphere, Splunk, Datadog—none are cloud providers. To compete, you need to differentiate on the product side, not just on underlying storage and unit economics, because you’ll likely lose that game against the cloud providers.

    Product Philosophy: Building for the Bleeding Edge

    Nataraj: What’s your philosophy on deciding what to build next?

    Martin Mao: We listen a lot to our customers. Tech-forward companies are generally containerizing first and doing it at scale, so we get to work with companies at the bleeding edge of their technology stack. They are constantly pushing us on what’s next and inform a lot of our innovation. Targeting early adopters gives you significant input on product innovation, versus targeting the laggards or the majority. We’re lucky that we target innovators and tech-forward companies who provide us with a lot of input.

    Nataraj: Who are some of these tech-forward customers today?

    Martin Mao: When we first started, it was large, digital-native companies like DoorDash, Robinhood, and Affirm—companies that grew up in the 2010s in the public cloud. They were the first to containerize and were pushing technology. Today, we see more of the majority of the market containerizing. Big enterprises like JP Morgan Chase, American Airlines, and Visa are containerizing at a large scale, often because they have a hybrid and multi-cloud strategy. If you have two or three different pieces of infrastructure, you need a common layer like Kubernetes to avoid implementing your infrastructure three times. Now, we see a lot more demand from those companies. And of course, the latest are the AI companies. Everyone starting an AI company today is running on modern, containerized infrastructure from day one, which is our sweet spot.

    Observability in the Age of AI

    Nataraj: You mentioned AI. How does observability change for AI companies, especially for LLM-based applications?

    Martin Mao: We noticed that even with LLM technologies, you still have application logic and CPU-based workloads. But it added new use cases, like monitoring GPUs for inferencing. At the infrastructure level, monitoring a GPU cluster isn’t too different from a CPU cluster. As you go up the stack, we found that the basic observability data types—metrics, distributed traces, and logs—still map very well for debugging what’s happening in an LLM application. Because the data types map nicely, the features and tools we’ve built work quite well for these new apps. So far, we haven’t had to create a new solution; it’s just been more data and more use cases.
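Martin's claim that the classic data types still fit LLM applications can be sketched with a toy record for one inference call. Every field name below is an assumption for illustration, not a real telemetry schema or model API:

```python
# Toy telemetry for one LLM inference call, emitting the three classic data
# types Martin mentions: a trace span, a metric, and a log. Field names and
# the fake model call are illustrative only.
import time

def call_llm(prompt: str) -> dict:
    start = time.monotonic()
    response = f"echo: {prompt}"  # stand-in for a real model call
    latency_s = time.monotonic() - start

    span = {"name": "llm.inference", "duration_s": latency_s}      # trace
    metric = {"name": "llm.tokens", "value": len(prompt.split())}  # metric
    log = {"level": "info", "msg": "inference ok", "model": "toy"} # log
    return {"response": response, "telemetry": [span, metric, log]}

out = call_llm("why is the pod crash-looping")
print([t["name"] for t in out["telemetry"][:2]])
```

Nothing here is LLM-specific in shape, which is the argument: existing metrics, tracing, and logging tooling ingests it unchanged.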

    Nataraj: How are you thinking about leveraging AI for your own product?

    Martin Mao: We’ve been playing around with it a lot. Initially, like everyone else, we put an LLM trained on our docs to create a chatbot. But we found that a lot of our data is numerical or unstructured in a way that’s not typical for LLMs. When we try to apply a foundational model to the raw observability data, it’s not very effective because it wasn’t trained on it, and this data is unique to each company. However, for years, we’ve been building knowledge graphs and structuring this data to power our analytics engine. When you feed these structured knowledge graphs into the models, they become much more effective. We were lucky to have already been doing the hard work of data scrubbing and normalization for our product, and now it’s beneficial for AI models. Still, I’m not sure a chat interface is the right starting point for observability. When you get paged, a visual interface with graphs feels more natural than a chat box asking, ‘Tell me what’s wrong’.

    Founder Reflections

    Nataraj: We’re almost at the end of our conversation. What do you know about starting a company that you wish you knew earlier?

    Martin Mao: Early in my career, I assumed that to be a CEO, you needed an MBA and executive experience. I found that not to be true. I don’t have an MBA or experience as a big executive. I was an engineering manager at Uber before this. There’s probably less of a barrier for someone to become a founder and CEO than one might think from the outside.

    Nataraj: What are you consuming right now that’s influencing your thinking? It can be books, audio, or video.

    Martin Mao: A lot of conference talks, especially on AI-related topics where things are evolving so fast. By the time a book comes out, it might be outdated. So, things like podcasts and conference talks are better for accessing what’s happening live. Historically, even a research paper takes a while to be released, and a book takes even longer.

    Nataraj: Martin, thanks for coming on the show and looking forward to what Chronosphere does in the future.

    Martin Mao: Thank you. Thanks for having me. I enjoyed the conversation, and hopefully, we can do this again sometime.


    Conclusion

    Martin Mao’s journey with Chronosphere offers a compelling look into solving complex technical challenges born from real-world, large-scale operations. His insights on product differentiation, customer acquisition in a mission-critical space, and the evolving landscape of AI-driven observability provide valuable lessons for founders and engineers.

    → If you enjoyed this conversation with Martin Mao, listen to the full episode here on Spotify, Apple, or YouTube.

    → Subscribe to our newsletter and never miss an update.

  • The Startup PR Playbook: Emilie Gerber on Media Strategy for Tech

    In the fast-paced world of tech startups, building a great product is only half the battle. Getting noticed by the right people—investors, customers, and top talent—requires a strategic approach to communication. This is where public relations comes in, but for many founders, PR remains a mysterious and often misunderstood discipline. To shed light on the subject, we sat down with Emilie Gerber, the founder and principal of SixEastern, a PR firm dedicated to helping startups and tech companies navigate the media landscape.

    With a background that includes corporate communications at Uber and product communications at Box, Emilie brings a wealth of experience to the table. In this conversation, she demystifies the world of startup PR, drawing a clear line between earned media and paid marketing. She offers a practical framework for when early-stage companies should consider hiring a PR agency, how to set realistic expectations for coverage, and the art of crafting a pitch that resonates with today’s journalists and content creators.

    → Enjoy this conversation with Emilie Gerber, on Spotify, or Apple.

    → Subscribe to our newsletter and never miss an update.

    Nataraj: A lot of my audience is tech-heavy—people working in tech who are trying to start companies, founders, operators, and they’re usually unaware of the PR industry. A good place to start is if you can set a context about what a PR company or person does in general, and then we can narrow it down to tech specifically.

    Emilie Gerber: The biggest misconception I see when chatting with founders, especially first-time founders that haven’t done PR before, is conflating marketing and public relations. Marketing involves a lot of paid methods: paid advertising, sponsorships, that sort of thing. There’s also owned content, stuff that you post on your blog, doing webinars, and the social channels that you post to. PR is really neither of those things, though there’s obviously always going to be a little bit of overlap.

    PR is anything that’s earned media. So earned is when you are able to get that speaking slot or get that interview with a reporter or get on a podcast without necessarily needing to sponsor or pay. You’re getting it because of your credibility. The value in that is that because you’re not paying, there’s supposed to be this sort of objectivity to it where you earned the spot because of your credibility or the business you’re building or what you have to share with the reporter. It’s held in a different regard than other kinds of marketing, and it’s an important part of the puzzle. But for startups, because they’re usually small and new, there’s not going to be the same sort of interest necessarily in the business as the companies that are further along.

    The other big misconception is that you launched your company, now let’s go get that big TechCrunch feature or that big Wall Street Journal feature. Most of those publications have maybe one or two relevant reporters to your business and they’re in charge of covering your entire space. So that’s not always necessarily what you can get right off the bat. There are other things that we can go into that you can get, but that’s usually what I find from the first conversation.

    Nataraj: At what point in a startup’s stage is it worth having an internal or an external PR engagement?

    Emilie Gerber: For a lot of seed-stage companies, it does not make sense to have a PR agency on retainer. There are exceptions to that rule. We’re working with a seed-stage company right now that is doing some really wild stuff. They have an AI tool being used for a class at Harvard Business School and every student’s taking that course. To me, that’s a big enough story where it doesn’t matter how much funding they have; reporters are going to be interested regardless. But if you’re building a more infrastructure AI tool or software, chances are unless there’s something that’s really, really unique—and the bar for unique is super high—you don’t need to have an agency on retainer yet. What you can do is potentially still make a one-off announcement announcing that the business exists and that you’ve raised funding, especially if you have a relatively large seed round or some great investors. You just have to be more realistic with what you’re going to get for that piece.

    Generally speaking, when we work with a company that’s early, we’re trying a lot of different things. We’re being really creative with the outlets we go after and we will get something, but you shouldn’t bring on a PR agency if you’re expecting a really top-tier piece of coverage in The Wall Street Journal, because that’s not realistic. But in a project capacity, seed-stage companies can do something, but I wouldn’t have someone on retainer. I think by the time you’re Series A, there’s more that can be done and it can make sense. There are some really great consultants out there too; you don’t necessarily need to bring on a full-fledged agency. We’re kind of in the middle where we act a little bit more like consultants, but we are an agency. But by then, you’re still not going to be getting the huge stories, but there’s going to be podcasts to go on, awards and lists you can submit to, and speaking opportunities at conferences. So there’s going to be stuff that you can be doing and find value out of the engagement. But really, the longer you wait, the more you can end up doing, and you’re going to get higher ROI from the engagement. So even then, some companies wait till they’re closer to Series B, I would say.

    Nataraj: How do you cater expectations? Because every startup will see your previous success story and come to you saying, ‘I also want a TechCrunch or Wall Street Journal coverage when I raise my seed round.’ How do you gauge or set those expectations?

    Emilie Gerber: I try to really dig into the details with them of their story versus what they’re comparing themselves to. Maybe they are the same caliber and we can go pitch something similar to something else we landed for another client. Even when we are able to do that, it often just comes down to reporter bandwidth. So I explain that. Sometimes you could have the coolest story in the world, but if it’s happening at the wrong time or you just have bad luck with pitching it—part of it’s luck—then you might not get the same win. The first thing I try to do is emphasize how much of it is not in our control.

    Another thing to emphasize is that reporters are not paid by us; their only job is to report on the news and to tell stories they think their audience will find interesting. They don’t owe us anything. They don’t owe the startups that they cover anything. And then if they’re comparing themselves to a unicorn story that’s not similar to what we’re telling for them, I try to go into the details: ‘Well, this company shared that they just reached $100 million in ARR,’ or ‘This company has celebrity investors. What are we bringing to the table that’s similar?’

    It’s a balance because you also don’t want to shoot down a founder who is super excited about what they’re building. So it’s a balance of showing them that we’re equally excited and that we’re going to try to get them the best possible outcome, but it’s just a tough world out there with media.

    Nataraj: For podcasts specifically, do you advise founders to craft their message? Do you help with that? Because not every great founder is a great storyteller.

    Emilie Gerber: It’s a fine line. I think a lot of the larger agencies spend so much effort crafting messages that the execution piece gets lost and they’re not even focused on pitching. I think it’s easy for founders to get too in their head if they’re going off of talking points. Those can be more valuable for traditional media interviews where you really do want to land the headline and one or two specific quotes. For podcasts, I’m a fan of going at it a little more casually.

    If we can get the questions in advance, which some podcasts do share, that can be useful. We’ll say, ‘Hey, look these over, see if there’s any that you think are alarming or you want to discuss.’ But because it’s not really a product pitch most of the time—it’s talking about their journey and their story—I prefer they don’t spend too much time on specific talking points because they usually end up sounding really canned.

    One thing that can be really great for prepping for podcasts is having a couple of stories or anecdotes in your back pocket that you always just use. Those can be useful to think of in advance; otherwise, they might not occur to you on the spot.

    Nataraj: I always tell founders to start a document to note down their thoughts or the highlights they want to make. You can use it as a starting doc for future interviews. People see successful thought leaders and think it’s coming off the hip on a podcast. It’s not. They have running notes of ideas and sometimes a team of people bringing in interesting statistics.

    Emilie Gerber: That’s why I like the stories. And a good point you raised that I forgot is having in your back pocket the stats that you can share, whether it’s customer names you’re able to disclose, the latest stats on the business, or any market or industry stuff. Those are not going to be top of mind for you unless you have them prepped in advance. And if you’re at a startup, you do want to make sure you’re being consistent with what you’re sharing and you’re not just riffing with company metrics. That’s another area where it can be really useful to have something written down.

    Nataraj: There’s also this trend of founders going direct and not engaging with a PR filter. Every founder wants to be a persona on Twitter. Is that where the PR industry is going?

    Emilie Gerber: It’s funny you brought that up. I’m actually doing a survey of startup founders, and so far, I think 96% said that it’s important to build up your founder’s social profiles, which is way higher than I expected. So the general sentiment is yes, you should be doing this. Personally, maybe this is a contrarian view, but I don’t think it’s realistic or scalable for that to be the case for everyone. Not every founder is going to have it come naturally to them. For some, it’s going to take a lot of time, especially if they’re not willing to just outsource their social presence.

    I don’t know that it’s going to be possible for every founder to build up a huge social following where it’s actually worth the time investment. I just don’t know if it’s always realistic. Within our community right now, it’s definitely the hot new comms approach. I do think there’s tons of value in it, especially for the right founder. But for others, I just think it would be distracting them from the business and other marketing they can do. The work that we’re doing, the more traditional approach, is that if a client goes on your podcast, there’s a built-in audience. You’re able to tell the same story but without having to do the work of building the audience.

    Nataraj: People say traditional media is dead, but we’ve been talking about TechCrunch, Wall Street Journal, and CNBC. Why does it still matter for startups to be on traditional media?

    Emilie Gerber: It definitely is smaller. One of the biggest benefits is the trust that you get from being in a traditional outlet. There’s just a certain brand cachet that comes along with having your startup in a publication that people know and respect. I think it helps with trust with customers and with potential candidates. It’s a validation piece that companies still look for.

    But I should also flag that beyond traditional media and podcasts, there’s this whole world of new media. Alex Konrad from Forbes just launched Upstarts. Eric Newcomer has Newcomer. Some of those are more open to startup stories and conversations. I think those are kind of blurring the lines. I really value those as well. There’s this third bucket that I think is very helpful right now too.

    Nataraj: A lot of PR firms I see usually have a marketing wing. How do you think about that PR plus marketing service offering?

    Emilie Gerber: It’s interesting because I’ve gotten asked about this a lot with how much media is changing. We basically had a waitlist for the past six months. We can’t take on new clients. We’ve been so busy that I haven’t felt the pressure to explore that yet. I’m sure it’ll happen eventually because media is going to continue to change, but it’s almost like, don’t mess with a good thing. For us, we’re busy with our current client base and we can’t take on new work, so adding new services doesn’t sound appealing to me right now.

    Nataraj: What do you know about PR now that you wish you knew before starting your career?

    Emilie Gerber: It has changed so much. A lot of publications overall have moved away from doing funding stories, period. Even TechCrunch and Axios, which covered them a lot. I think I would have maybe changed our model sooner to not be as focused on those. This is a lesson that I’m currently learning as we speak, but I think that the playbook is changing there and I don’t know what the new playbook is. But it’s one that I think I should have given more thought to maybe earlier.

    Nataraj: You were at Uber during a period of interesting PR challenges. Are there any crisis mode situations you were involved in that you can talk about?

    Emilie Gerber: I joined right when a lot of that stuff had started. My role at Uber was focused on comms for Uber for Business and their business development team, so any company partnerships. I wasn’t on the corporate comms team where we were focused on the actual crisis. If anything, it was a lesson for me to try to figure out how to pitch and land positive stories amidst a world where all this negative stuff was happening. I got some really great hits during that time, and I think it was about being very creative with who we worked with, doing the due diligence on them, and then pitching stories in a very specific way. It was a unique challenge trying to get them positive press during that time.

    Nataraj: What type of positive press did you get?

    Emilie Gerber: I launched Uber Health, which was HIPAA-compliant patient transportation. We went after health tech reporters, who could not care less about the ride-share side of the business, and got tons of product features on that. We put customers forward, we put a spokesperson forward that was the GM of that part of the business so it wasn’t anyone involved in anything else going on. We got some really straightforward hits that way. Some of these folks are just excited to get a unique opportunity to chat with Uber about how they’re thinking about healthcare, so they want to write a story that’s really focused on that.

    Nataraj: Which niche or sector of startups is ignored by the PR industry right now?

    Emilie Gerber: With all the focus on AI, a lot of those reporters that used to cover enterprise software more broadly are not anymore. If you’re not doing AI, there are not the right reporters out there for you right now. Those are the companies I struggle with the most in getting the right folks interested because everything is so all-consuming in AI right now. If your company doesn’t have that angle, you’re kind of left out to dry. I would say enterprise software, non-AI, is the answer.

    Nataraj: Emilie, thanks for joining the show. It was very insightful.

    Emilie Gerber: Awesome, thank you so much. It was a great conversation.

    This conversation with Emilie Gerber provides a clear and actionable playbook for any founder looking to leverage the power of public relations. Her insights cut through the noise, offering a realistic perspective on what it takes to build a strong narrative and earn valuable media attention in the competitive tech industry.

    → If you enjoyed this conversation with Emilie Gerber, listen to the full episode here on Spotify, or Apple.

    → Subscribe to our newsletter and never miss an update.

  • Todd Bracher: Designing for Longevity at the Intersection of Science

    In a world saturated with fleeting trends and disposable products, what does it take to design something truly meaningful and lasting? We explore this question with Todd Bracher, an award-winning industrial designer and the founder of BetterLab. With a portfolio that includes partnerships with iconic brands like Herman Miller and Issey Miyake, Todd has been honored twice as the International Designer of the Year. In this conversation, he delves into the powerful intersection of design, science, and technology, revealing how this synergy drives innovation. Todd shares his philosophy on human-centered design, the critical importance of sustainability, and his journey building a successful design firm. He also gives us a look inside BetterLab, where his team is creating game-changing products, from UVC light sanitizers to glasses that can reverse childhood myopia. This is a deep dive into the mind of a designer who is shaping a more responsible and thoughtful future.

    → Enjoy this conversation with Todd Bracher, on Spotify and Apple.

    → Subscribe to our newsletter and never miss an update.

    Nataraj: We haven’t had many industrial designers on the podcast. We usually talk about growing companies and designing technology products, so I think it would be interesting to get a more design-centric perspective on bringing products to market. To start, could you give a quick background about your entry into design and your career so far?

    Todd Bracher: I’m not surprised that designers aren’t usually spoken with regarding business or startups, because designers often aren’t part of that process, strangely enough. That’s a source of my frustration. What brought me into design was applying to art school in the 1990s. I applied to Pratt Institute in New York, and to get in, you had to do a visual exam. The topic was to design a breathing device for a hypothetical future where we couldn’t survive in the open because of pollution. As I was drawing it, I started thinking through the design process: does it work? If you’re wearing it all the time, it has to look good, be comfortable, and work for men and women at work or at parties. When I submitted the drawing, they asked what it was because it wasn’t illustration; I had created a solution. They said, ‘Well, that’s called industrial design, but that’s not what you’re applying for.’ That’s the moment I switched to industrial design.

    Nataraj: Were you always good at drawing? What made you gravitate towards design?

    Todd Bracher: Drawing has always been a part of my life. It’s the lowest barrier to entry for seeing your ideas. When my brother and I were kids, we used to build little plastic model planes. He always said he wanted to be a pilot, and I was always in love with the form of the plane—how it’s very purpose-built, but beautiful. We had two different points of view on the same subject. Interestingly enough, he became a pilot, and I became a designer. It shows two ways to look at the same thing very differently and have very different experiences.

    Nataraj: To crystallize the idea of industrial design, can you talk about a couple of examples of projects you’ve worked on and brought to market?

    Todd Bracher: By definition, industrial design means really understanding how to manufacture at scale. You see a lot of design objects, but that doesn’t mean they’re industrially designed. Someone might make five chairs in their garage, and that’s design for sure, maybe a version of art or craft, but industrial design is about things that are repeatable and manufacturable at scale. My expertise is in understanding manufacturing, materials, processes, and the whole orchestration around supply chain and engineering. It’s really A to Z. I see myself as the representative of the market or the end user, and at the same time, the representative of the business manufacturing it. I’m the translator between the two. The products I work on can range from furniture to beauty products—I do fragrance bottles for Issey Miyake—to glasses or even a water dispensing machine. There’s a whole host of things, which is what’s cool about industrial design.

    Nataraj: I want to shift to your perspective on technology products. What are some tech products you admire that have a strong design element, constructed in a way that you as a designer appreciate? And please, no Apple products—that’s the go-to answer for all designers.

    Todd Bracher: And rightfully so, to be honest. Apple is incredible. What’s most interesting to me is when I see design in the world that leverages a certain aspect of science. I recall seeing things like color blindness correction. One example is a project we worked on with a gentleman who had invented a device that distributes a specific spectrum of UVC light. He developed it for NASA and the space station. I was part of the team that helped deploy it into architecture. What’s so incredible is that we weren’t just making a lamp. This UVC light is a germicidal light that deactivates pathogens—bacterial, viral—on surfaces or in the air, while being safe for humans in the environment. This gentleman figured out the science, engineered the light engine, and created a device we can afford. The designer’s job is to package it and deliver it to the market. These types of solutions are fascinating to me.

    Nataraj: In the world of industrial design, what trends are you noticing? What’s in, what’s out, and what might an average person not know about?

    Todd Bracher: The trends I see in design tend to be unfortunate in my opinion. They’re not going in the direction I would like, as they’re often very cosmetic. However, one trend that’s quite important is sustainability. You will see designers using less material and reaching for materials that are recyclable or come from recycled sources, like ocean-bound plastic. Various companies are collecting this material from waterways and reprocessing it for designers. This is a really wonderful trend. So on one hand, we have this incredibly responsible trend happening that most people don’t see. On the other hand, we still have the old trend of making consumable products, which has been disappointing. I think we’re in a transition point as an industry.

    Nataraj: What’s disappointing about the consumable products?

    Todd Bracher: I think they’re made a bit irresponsibly, without considering circularity or sustainability. A colleague and I once looked at a 30-story apartment building in New York City and wondered how many hammers were inside. If there are 100 apartments, there are probably 90 hammers. Why would there be even 50? Shouldn’t there just be two hammers in the building that people can share? This communal mentality could solve some of these problems. Instead, everyone is consuming things they don’t really need. It’s funny that as someone who creates products, I’m sort of anti-consumerism in that way.

    Nataraj: What’s your take on IKEA? It’s mass-market, attainable, and brings designs that might otherwise be inaccessible to a wider audience, similar to how Zara operates in fashion.

    Todd Bracher: It’s funny because they copied one of my lamps, and they did a terrible job at it. It’s not a well-executed version. However, I had a friendly argument with a friend about the drug industry—you can get a prescription for $80 a pill or the generic for $1. I think having a generic option is fantastic. I see IKEA in a similar light. I welcome that they copied my design. If someone enjoys it and can’t afford or access the original, that’s fine. I don’t know enough about their sustainability practices given their huge volume, and I imagine there’s a lot of waste because their products are so accessible that people tend to throw them away quickly. But as a business, I think they make pretty good design very accessible, and that’s a good thing. Design shouldn’t be expensive.

    Nataraj: What are some brands, in furniture or fashion, that you admire as a designer?

    Todd Bracher: One brand in particular is a brand called Vitsoe. They make a shelving system designed by Dieter Rams around the 1950s. He’s often considered the founding father of Apple’s design DNA. It’s a very simple extruded aluminum rail you screw on the wall with a simple folded metal shelf. What I love is that these products look incredible nearly 70 years later. They function perfectly and last forever. They’re beautiful. That’s what I strive for in my work—creating something that stands the test of time in the truest sense.

    Nataraj: Is that a big aspect of well-designed products—longevity? And does that contribute to their cost?

    Todd Bracher: Yes, at least that’s how I like to live my life. I have a few things I really need and like, and they last forever. I don’t have to replace them every few years, which feels irresponsible. I go to these huge furniture fairs in Milan, and it’s an enormous amount of new stuff coming out every year. The question of where it all goes at the end of its life is a big one, and our industry doesn’t handle that very well.

    Nataraj: You run BetterLab. Tell me about the business of running a design firm and the types of products you’re building.

    Todd Bracher: I have two businesses. One is Bracher, my design consultancy, which is inbound—I work with clients. The other is BetterLab, which is my outbound venture platform. I started BetterLab because after serving clients for two decades, I wanted to do what I actually want to do. With client work, I don’t own it and don’t get to make 100% of the decisions. With BetterLab, it’s different. We have three ways of engaging. First, we do a diagnosis. Like going to a doctor, we first understand what a company needs rather than just taking a design brief. We provide a recommendation for treatment. The next phase is opportunity discovery, where we figure out what we’re trying to solve and whether it aligns with business goals and market needs. The final phase is execution—the design portion—and then the rollout and marketing support.

    Nataraj: What are some of the products that came out of BetterLab?

    Todd Bracher: I’m quite in love with science, physics, and optics. I helped build a lighting business for 3M, and it was a realization that design and science fit beautifully together. BetterLab spun from this thinking. I had a beer with a scientist friend and asked him about his fears for the world. He mentioned myopia. Myopia is when the human eye doesn’t fully develop through childhood. He was one of the guys credited with inventing the commercialized LED, and he explained that modern LEDs are value-engineered to only emit the visible spectrum of light, ignoring the rest that the human eye thrives on. Now, kids spend more time indoors with LED lighting and screens, so they aren’t exposed to the full spectrum of light. The World Health Organization has identified myopia as the largest threat to eye health in the last hundred years. So, we developed a pair of glasses. In the frame, we attach a glow-in-the-dark material. When the child steps outside or the glasses are near a light source, they passively charge—no electronics. This material delivers the healthy spectrum of light to the eye. It also actually reverses myopia, unlike traditional treatments.

    Nataraj: I think you’re also working on another sustainability project using light. Can you tell me about that?

    Todd Bracher: Yes, back to the UVC light. Around 2019, I was helping put UVC light in architecture to mitigate the spread of COVID by sterilizing environments. But I realized a vaccine was coming, the technology was expensive, and people didn’t understand it. Meanwhile, I saw my young kids constantly using gel hand sanitizer and I wondered about the chemicals they were putting on their hands every day. On one hand, I had this chemical problem, and on the other, a technology that uses light to stop pathogens. I thought, what if we merge the two? So we developed Lightwash, a hand device using UVC technology. You put your hands under it, and within three to four seconds, they are sterilized. Light gets into all the crevices of the hands where liquid sanitizer doesn’t. Later, I learned that gel sanitizers are responsible for 2% of the global carbon footprint due to transport, storage, and maintenance. Our solution displaces that completely, which makes me incredibly happy.

    Nataraj: You also advised startups at Antler, a pre-seed firm. What was that experience like?

    Todd Bracher: My role there was interesting because they don’t make physical products, which is my expertise. I was a design advisor, asking questions from a design lens that they might not have considered. My role was to represent the end users. For financial or legal software, for instance, I’d ask, ‘Have you considered this? Does this experience feel trustworthy when you’re dealing with legal documents?’ I brought the soft side to their hard business, focusing on what really resonates with people.

    Nataraj: Are there any day-to-day products you use because their design and utility are so good?

    Todd Bracher: The first one that comes to mind is Leica cameras. They make what’s called the Leica M. The design has been roughly unchanged since it was first introduced, maybe in the 1930s. It’s an all-manual camera—no autofocus, no video. What it does is provide a real connection with capturing an image. It’s like the difference between driving a 1960s air-cooled Porsche and a modern Honda Accord. The Accord is great, but it doesn’t have the spirit, the feel of the machine and the connection to the road. The Leica is like that. It’s an inferior camera in some ways, but the experience is so superior that it makes you deliver your best work.

    Nataraj: What’s your take on modern design aesthetics, like the trend where many luxury brands have adopted very similar, minimalist iconography?

    Todd Bracher: I think one or two brands spearheaded it with success, and others followed quickly. I welcome it. I think design is late in this country. Apple helped unlock some of that, but the rest of the world, like Japan and Scandinavia, is light years ahead of the U.S. in areas like furniture design. I think globalization is helping improve design here. While it can get a little sanitized or washed out, I think it’s for the better. When you create simpler things, you have nowhere to hide. You’re delivering things that are more honest, which fits the contemporary culture we need, rather than hiding behind flashy noise.

    Nataraj: What’s your take on digital design? Is the tech world doing it well?

    Todd Bracher: I think it’s gotten better. I do fault Apple for some of their earlier choices, like the digital leather notebook with stitches. In my opinion, you should embrace the technology and its material rather than creating an image of a yesteryear material. But I do think digital design today has gotten quite good, even a bit experimental, which I welcome. I’m seeing more personality. The new codebases allow for more adventurous things. Designs are becoming less static, more engaging and interactive in a beneficial way. You can customize and adapt things much more, and I’m happy for that.

    Nataraj: Anytime a designer talks, Japan is always mentioned. What is it about Japan that is so interesting in terms of design?

    Todd Bracher: That’s a very big conversation. I have my own take. My partner is Japanese, so we have a deep appreciation for this. There’s a really deep connection to the experience of something and being truly present in what you’re doing. To me, that’s the anchor of what makes their design so good. In the Western world, we’re more interested in the cosmetics—is it the right shininess? In Japan, I feel they ask, ‘Are we really meeting the soul of what this thing needs to do?’ Take a traditional tea ceremony: the materials, the smells, the lighting—everything is considered for very specific reasons. It’s a true attention to the deepest meaning of what you’re doing.

    Nataraj: We’re almost at the end. What are you consuming right now, be it a book, podcast, or show, that you’re inspired by?

    Todd Bracher: I’ve been watching Lex Fridman’s podcast since he started. I enjoy his long-form interviews, usually on subjects I know nothing about, like a recent one with a former Russian spy. He also covers machine learning and other topics. He keeps it very neutral and is just there to share information. I’m also that weird guy who loves watching old MIT physics lectures on YouTube. I’m not a physicist, but after years of watching them, I feel like I’ve been trained. It’s fascinating how much you can learn, and it’s my way of switching my brain off.

    Nataraj: Who are your mentors?

    Todd Bracher: I don’t necessarily have a mentor, but one personality that keeps cropping up, strangely, is Charles Darwin. His thesis on the finches on the Galapagos—how different species had different shaped beaks based on what they were eating—really helped formulate my philosophy for design, which is designing in context. I’m making the solution most appropriate for its situation. I’m not imposing my opinion. The finch’s beak doesn’t have a random shape; it’s designed for function, but it’s still beautiful and logical. It’s absolutely designed for purpose. So I would say Darwin is my mentor.

    Nataraj: What do you know now about being an industrial designer that you wished you knew when you were starting out?

    Todd Bracher: The business side of design. For some reason, designers are often inserted at the end of a process to ‘make it look better.’ When I get things like this, I often ask, ‘Why are we making it? Did you talk to your market?’ You quickly find holes in the system. As a young designer, I wish I knew that we should be inserted at the beginning of the process to help identify the full context. That way, when the design arrives, we can deal with it relative to that context and not in isolation.

    Nataraj: Todd, thanks for coming on the show. This has been a fascinating conversation, and I’m looking forward to seeing what BetterLab creates next.

    Todd Bracher: Thank you, I really appreciate this. Thanks so much.

    Todd Bracher’s insights offer a powerful reminder that great design goes beyond aesthetics; it solves real-world problems with intentionality and responsibility. His work at the crossroads of science and design highlights a future where products are not only beautiful and functional but also sustainable and deeply human-centered.

    → If you enjoyed this conversation with Todd Bracher, listen to the full episode here on Spotify and Apple.

    → Subscribe to our Newsletter and never miss an update.