
Episode 34 · 38 mins · 12/18/2025

#34 - Rethinking Observability: eBPF, Bring Your Own Cloud, and the Future of the Monitoring Market with Shahar Azulay

Host: Mirko Novakovic
Guest: Shahar Azulay

About this Episode

Groundcover CEO Shahar Azulay joins Dash0’s Mirko Novakovic for a candid conversation on why modern observability needs a fundamental reset. They dive into the real-world challenges of eBPF-based instrumentation, migration friction from legacy vendors and bold go-to-market strategies. They also debate Groundcover’s “Bring Your Own Cloud” model and how it prompts a reassessment of cost, control and business model incentives in observability.

Transcription

[00:00:00] Chapter 1: Welcome and Guest Introduction

Mirko Novakovic: Hello everybody. My name is Mirko Novakovic. I am co-founder and CEO of Dash0, and welcome to Code RED: Code because we are talking about code, and RED stands for Request, Errors and Duration, the core metrics of observability. On this podcast, you will hear from leaders around our industry about what they are building, what's next in observability, and what you can do today to avoid your next outage. Hi everyone! Today my guest is Shahar Azulay. Shahar is the CEO of groundcover, a cloud native observability company built on eBPF and an in-cloud architecture. He previously led ML engineering at Apple and was a leader in Israel's national cyber division. I'm really excited to have you here. Shahar, welcome to Code RED.

Shahar Azulay: Thank you for having me. Great to be here.

[00:00:50] Chapter 2: Code RED Moments and Early Agent Lessons

Mirko Novakovic: Absolutely. And I'll start with my first question: what was your biggest Code RED moment in your career?

Shahar Azulay: I mean, I had more than one, but I think the one that relates to groundcover is part of what we do. We deploy an agent, right, an eBPF sensor. At one of our early POCs with an enterprise, we deployed to production and crashed production. It was actually not the eBPF sensor; it was the fact that it was a very crowded Kubernetes environment, you know, where you stick in one more pin and everything goes completely off the rails. So when we deployed the agent, it was enough to move stuff around, and everybody got scared. It was an exciting moment of building an agent. Yeah, that was a few years ago.

Mirko Novakovic: Absolutely. I crashed multiple production environments with agents in my career. At Instana we had this auto-discovery: we did auto-injection of agents into Java virtual machines and similar runtimes, not with eBPF, although we also used eBPF for some things. And you have so many different environments and so many different versions that it's scary. You always run into something you have never tested, and it's crazy. But how do you do it? That's a good question, right? How do you monitor your agent? Do you have some way to monitor the agent and get feedback right away? Back at Instana we implemented a lot of things to automatically get logs and so on from customer environments, so that we could troubleshoot things very quickly.

Shahar Azulay: Yeah. I mean, eventually, when you run something on the infrastructure, you have to monitor it in some way, right? So we have a lot of anonymous metrics and stuff like that, where we measure the behavior of the sensor and report it from the BYOC backend into our control plane, just to figure out if things make sense. It's still running in the customer's infrastructure, so there's always that last mile of not knowing exactly what's going on, keeping the privacy of the environment. But we do try to figure out at least if there are weird spikes or weird behaviors of a specific version, something we can backtrack to.

[00:03:08] Chapter 3: Scope of groundcover and Profiling Perspective

Mirko Novakovic: I mean, let's start talking about your solution, right? You are a full observability platform as far as I can see: you do logs, metrics, traces, everything. Profiling, I don't know, do you do profiling?

Shahar Azulay: Not yet, although it's very native to eBPF. We do plan to in the future, but to be honest, we're not feeling too much pressure from the market right now on profiling. From my experience, maybe 2% of engineers can read flame graphs. But yeah, it's very native to the sensor as another approach.

[00:03:35] Chapter 4: eBPF Fundamentals and Observability Use

Mirko Novakovic: Yeah. And eBPF is kind of the cutting edge on the agent side, right? Can you give us a little bit of background on eBPF: why do you think it's a good technology, and how are you using it?

Shahar Azulay: Yeah. I mean, when we started about four years ago, it was cutting edge, a bit unknown. When we started, VCs didn't know about it, DevOps didn't know exactly what it was, because it had just transitioned from being incubated to a point where it matured into actual features. But basically, you can think about it as a sandbox in the actual Linux kernel that gives you superpowers in the kernel without being a kernel developer. In the past, we used to write kernel modules and crash the kernel from time to time, and a lot of security companies were built on top of that, right? Replacing the kernel with a different version that bakes in a kernel module was very dangerous, and the cycles were very slow. eBPF is basically the ability to write code or logic into the actual kernel in a very safe, sandboxed way, where you can do things that create business value with a very high turnover: just deploying them and getting them safely out there. So security is one aspect; all the sensors we see out there, like Palo Alto's, are based on eBPF these days. And observability is clearly another frontier, which we've chased after with eBPF: the ability to get traces by seeing calls going through the network stack of the kernel instead of taking them from the application layer through the code stack. We can see them using kernel resources and figure out how it all fits together. So tracing with eBPF is the ability to build an APM on top of a different sensor, rather than the SDK we're used to thinking about.
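The kernel-side tracing Shahar describes can be sketched in miniature. The snippet below is a toy Python model, not groundcover's implementation: it assumes hypothetical event tuples `(timestamp_ns, pid, fd, direction, byte_count)` of the kind an eBPF sensor might emit from read/write syscall hooks, and folds one request/response exchange into a RED-style span (duration plus bytes in and out) without ever touching the application code.

```python
from dataclasses import dataclass

@dataclass
class Span:
    pid: int
    fd: int
    start_ns: int
    end_ns: int
    bytes_in: int
    bytes_out: int

    @property
    def duration_ms(self) -> float:
        return (self.end_ns - self.start_ns) / 1e6

def events_to_span(events):
    """Fold one request/response exchange on a socket into a span.

    events: hypothetical syscall-level tuples
            (timestamp_ns, pid, fd, direction, byte_count).
    """
    start = min(e[0] for e in events)
    end = max(e[0] for e in events)
    bytes_in = sum(e[4] for e in events if e[3] == "read")
    bytes_out = sum(e[4] for e in events if e[3] == "write")
    pid, fd = events[0][1], events[0][2]
    return Span(pid, fd, start, end, bytes_in, bytes_out)

events = [
    (1_000_000, 42, 7, "read", 512),    # request arrives via a read() syscall
    (4_000_000, 42, 7, "write", 2048),  # response leaves via a write() syscall
]
span = events_to_span(events)
print(span.duration_ms)  # 3.0
```

The application under observation contributes nothing here except its syscalls, which is the essence of the "different sensor" Shahar contrasts with an SDK.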

[00:05:22] Chapter 5: Kernel vs. App Instrumentation and Handling Encryption

Mirko Novakovic: Yeah, exactly. So normally the approach would be that you have an SDK for a language, let's say Java, like what I just explained. You instrument at the bytecode level inside the JVM, so you have to have an agent running inside the JVM. With eBPF you are basically running outside the JVM, on the kernel level, and you intercept the calls as system calls. Then you have to do the magic to understand that this is actually, I don't know, an HTTP call in Java. And if you're using OpenTelemetry, you have to translate that into a span and do the correlation at that level, right?

Shahar Azulay: Yeah. So basically there are a few parts of the secret sauce, as you say. One is that I'm seeing anonymous read and write syscalls at the kernel level, which I then have to decipher into layer-seven application traffic: figuring out this is HTTP, this is MySQL, and then building the protocol stack from the sticks and stones I have as a kernel developer. So that's part of it: detecting the protocol, building it up and creating the actual understanding of it. The second is dealing with stuff that is out of that context, like encryption, for example. In some cases, standing in the kernel and just using kernel resources to read the read/write syscalls is not enough, because the API traffic is SSL-encrypted, right? So eBPF allows us to hook different parts of the stack, not just at the kernel level but also in user space. We can, for example, capture an API call before it's been encrypted by the application stack. There's a lot of IP in how to do that: how to do it efficiently, how to transfer data between user space and kernel space efficiently. But eventually you're sitting there agnostic even to the application stack, right? You don't care which stack the developer is using or who wrote the code, and you can basically see what the code is doing from an API perspective, interacting with the outside world.
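To make the "deciphering read/write syscalls into layer seven" step concrete, here is an illustrative sketch in Python. The heuristics are toy assumptions, not groundcover's in-kernel logic (real sensors handle many protocols, partial reads, and framing edge cases): HTTP is spotted by its method/version prefix, and MySQL by its packet header, a 3-byte little-endian payload length followed by a one-byte sequence id.

```python
def detect_protocol(payload: bytes) -> str:
    """Classify a raw socket buffer by its leading bytes (toy heuristics)."""
    HTTP_PREFIXES = (b"GET ", b"POST ", b"PUT ", b"DELETE ", b"HEAD ", b"HTTP/")
    if payload.startswith(HTTP_PREFIXES):
        return "http"
    # A MySQL packet starts with a 3-byte little-endian payload length,
    # then a 1-byte sequence id, then the payload itself.
    if len(payload) >= 5 and int.from_bytes(payload[:3], "little") == len(payload) - 4:
        return "mysql"
    return "unknown"

print(detect_protocol(b"GET /health HTTP/1.1\r\n"))  # http

# Build a minimal, well-formed MySQL-style packet: length 3, sequence 0, body "abc".
mysql_packet = (3).to_bytes(3, "little") + b"\x00" + b"abc"
print(detect_protocol(mysql_packet))  # mysql
```

A real sensor runs this classification over kernel-captured buffers and then reassembles requests and responses into spans; the snippet only shows the first inference step.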

[00:07:31] Chapter 6: Language Agnosticism and Heterogeneous Stacks

Mirko Novakovic: And it's especially interesting for languages that are not interpreted, like Go, C++ or Rust, because in those cases you can't use an auto-instrumentation agent; the languages essentially don't support it, so the developer has to bake it in. But in your case it doesn't matter, right? You could trace a Rust application the same way you trace a Java application.

Shahar Azulay: Exactly. It's even easier, right? Because these are hard-compiled languages, you don't have to assume any context that is being built up in the JVM or whatever; it's compiled in a straightforward way. So unless the binary is stripped or something more sophisticated, eBPF can easily track down to the code line that is compiled and running, so you can actually do sophisticated stuff even in user space while the code is compiled. So yeah, Go, Rust, all these very modern applications running on cloud native environments are a great fit for eBPF, where all the instrumentation, as you say, starts to not work as expected. At least not like in the old days, where you had just a Java agent and it was enough. Today, with a heterogeneous environment where I have Go, Scala and 50 other languages, instrumentation becomes hard, and eBPF is agnostic to all that. That's a great advantage, right, in a large-scale system.

[00:08:52] Chapter 7: Why eBPF Adoption Lags in Observability

Mirko Novakovic: Absolutely. So why do you think it has not become the default yet in observability? I don't know what your statistics are, but probably less than 1% of observability is done with eBPF at the moment, whereas in security it's probably, I don't know, 80%. So why is that?

Shahar Azulay: I think one thing is that, like any new technology, it takes time to adopt, and eBPF, frankly, is pretty new. On the SDK side, I mean, OpenTelemetry definitely changed the way we standardize things, and maybe even hardened what an SDK is and the requirements on an SDK. But to be honest, these SDKs have been with us for a while, right, from the early Dynatrace, Datadog and New Relic ages. eBPF has been around for only 4 or 5 years in its current state. It's been around for 30 years, but not in its current state. So it's hard to create that much value that quickly. The second thing is that it's also not opinionated, which in some cases is immensely valuable, right? You can come into a platform, and we call it the superpower of the kernel, because it doesn't care what you do. Whether you've instrumented or not, whether it's your code or a module you don't want to touch, it basically sees everything. But on the contrary, because it's not opinionated, if you want to measure the runtime of a specific function which is not transacting with the outside world through inter-microservice communication, eBPF won't necessarily see that without you specifically figuring out what to do.

Shahar Azulay: An SDK can be pointed at exactly that, right? So an SDK is always easier for developers to think about, because they can intentionally touch their code at the specific points they care about. But the disadvantage is what happens where I didn't, or where I haven't thought about it, or where it's not my code: I'm running sidecars or other things that I cannot instrument. So I do see them as complementary, eBPF and an SDK. But I think it's also mostly just a very, very new technology, right? It takes time to adopt. On the flip side, we do see vendors like Datadog already rolling out their eBPF capabilities, but then business value and business models hold that back. You don't want to cannibalize your current APM. You don't want to create a new method where there's a new firehose of data that you're not necessarily controlling, unlike an SDK. So there are also business-model inhibitors around that, not just technology.

[00:11:40] Chapter 8: Practical eBPF Use Cases and Growing Adoption

Mirko Novakovic: Yeah, I think for some things you kind of have to use it, right? For network monitoring, most of the APM solutions are using it. We use it for process crash analytics, for example. One of the issues if you're running inside a JVM with an injected agent is that if the JVM crashes, you have no visibility anymore, right? The process is gone, including your agent. So we used eBPF to monitor those cases, where our agent essentially crashed with the JVM. Yeah, I like eBPF. I was always wondering why it's not more adopted, but I think it's growing, right? You have companies like Adidas; I think Better Stack is using eBPF these days; you are going all in on it. There are more and more vendors out there using eBPF. And in OpenTelemetry it is becoming more and more standard, right? There are eBPF agents from the community now that can be used. So I think the adoption is getting better.

Shahar Azulay: The adoption is getting better. I think the awareness that it can coexist with OpenTelemetry is also getting better, and so is the understanding of how you can make the two contribute their value to the same goal. I think that over time people understand there is no one solution; they both have different advantages, and the real solution is to figure out how to use them in tandem in a way that is beneficial to your specific use case.

[00:13:09] Chapter 9: Shift to Bring Your Own Cloud (BYOC) Positioning

Mirko Novakovic: Yeah. But looking from the outside, when I looked at groundcover, I don't know, a year ago, it was a lot about eBPF. Recently, though, you have become very vocal on LinkedIn. I love that, right? I love the LinkedIn play; you do a great job. I see you a lot these days, and you are also pretty aggressive, going against Datadog, which is funny to watch, I have to say. And I like your stories. But these messages are not so much about eBPF anymore. They are more around your Bring Your Own Cloud concept: a different deployment and management model, where you are still managing it like SaaS, but the stack is not running in your cloud. Only the control plane is running in your cloud; the actual instance and the data are running inside the customer's cloud, right? And you are promoting that a lot, if I see it correctly, also from a pricing point of view. Can you elaborate a little on how this works and why you see it as a benefit?

Shahar Azulay: Yeah. I mean, the funny story is that when we started, eBPF was for us what had changed in the market, right? It was so new, so exciting, so fundamentally different, that we went all in on building an agent, a sensor. And to be honest, four years later, I definitely think it's one of the best data sources for tracing and APM, and we see the value in the fast onboarding it provides. But when we started, it was also clear to us that we didn't want to go down the same rabbit hole of ingestion-based pricing, because we thought Datadog knows what it's doing, right? It knows how to mark up exactly the things it needs to, and we're not going to be better at that. But it's still super expensive, and those prices eventually hold customers back from getting value. Most of their customers don't activate APM, most of their customers aren't activating all the features, and they're kind of not enjoying the platform due to pricing. We wanted to change that. So we never had a SaaS model. We always deployed our data plane in the customer's environment. We just didn't know what to call that back then.

Shahar Azulay: Then we called it "in cloud", which was not the right name; the community later settled on its own term, but we didn't know that back then. So I think the shift you're describing, which I think is accurate, is that we're starting to understand at this point, from the groundcover perspective, that Bring Your Own Cloud is maybe the main dish of our differentiation. eBPF is super important; it's one of the sensors we use, and as you mentioned before, it sits alongside OpenTelemetry and cloud integrations. There are many sensors that are going to pour in data, but the Bring Your Own Cloud base is maybe what differentiates us the most. We basically deploy our entire backend, our entire data plane, in the customer's environment, rather than storing the data on our cloud side like a normal SaaS vendor, and we manage it through an automated control plane. So it's kind of the best of both worlds. It's an on-prem data plane from one aspect, and I'm sure you'll agree that I don't have to convince a customer that the same metrics self-hosted in Prometheus would cost less than Datadog, right? I don't have to convince people that it's going to cost far less than storing it in Datadog.

[00:16:25] Chapter 10: Economics, Alignment, and Pricing Incentives

Shahar Azulay: I don't need to convince them that they don't want to manage it, right? No one wants to manage metrics, logs and traces at scale. So I think Bring Your Own Cloud cracks that part: I can now manage this data plane with Kubernetes, with cloud native primitives, on your cloud side. You're paying the bill for the infrastructure, but I know how to give you that SLA and a fully managed experience, so you can enjoy it like you would enjoy a SaaS solution. Fully managed, guaranteed SLA, guaranteed uptime, all the things you want from a vendor, right? And it allows us to do the final trick in the show, which for us is the most important one: changing the pricing model. We're not charging by data ingestion, and therefore we believe we can bring more value to the customer, rather than creating that trade-off of: oh, by the way, for any additional feature, you're going to have to pay more. We can put ourselves in more alignment with them. That's how we see it.

Mirko Novakovic: Yeah, it's interesting. I mean, one thing is that you see there's kind of a movement in the market, right? Grafana announced a bring your own cloud model. Honeycomb announced a bring your own cloud model. There's Tsuga here in Europe, former Datadog guys who are building a bring your own cloud model. So there is kind of a trend. Even Datadog now partly has some bring your own cloud stuff, right? With QuickWit.

Shahar Azulay: QuickWit, which is kind of a similar thing.

Mirko Novakovic: Yeah, exactly. So you see that. I'm still, I mean, I saw this post from you where you basically did the math that you can be ten times cheaper than Datadog with bring your own cloud. And the thing is, from a pure financial point of view, the infrastructure cost is the same, right? At the end of the day, the customer will pay the same amount of money for the cloud infrastructure as you would in a SaaS environment. So there's no difference in the cost of the infrastructure. The difference is that you have to create a gross margin. If you shift the infrastructure cost to the customer, you have a different gross margin, because you don't carry the cost of the infrastructure anymore. But at the end of the day, from a pure cost perspective, it is the same, right? You're not changing the cost; you're changing the gross margin.
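Mirko's point about cost versus gross margin can be put into a small arithmetic sketch. All numbers, including the markup multiple and the flat license fee, are invented for illustration; this models neither vendor's real pricing, only where the money flows in each model.

```python
def customer_spend(data_cost, model, markup=3.0, license_fee=0.0):
    """Where the money flows under SaaS vs. BYOC (illustrative only).

    data_cost: raw monthly infrastructure cost of storing/querying the data.
    markup: multiple a SaaS vendor applies to infra cost to build gross margin.
    license_fee: flat software/SLA fee a BYOC vendor charges instead of
    ingestion-based pricing.
    """
    if model == "saas":
        vendor_bill = data_cost * markup  # infra cost plus vendor margin, one bill
        infra_bill = 0.0                  # infra is hidden inside the vendor bill
        margin = vendor_bill - data_cost
    else:  # "byoc"
        vendor_bill = license_fee         # vendor charges for software and SLA only
        infra_bill = data_cost            # customer pays their cloud provider directly
        margin = vendor_bill              # vendor carries no infra cost
    return {"total": vendor_bill + infra_bill, "vendor_margin": margin}

saas = customer_spend(10_000.0, "saas", markup=3.0)
byoc = customer_spend(10_000.0, "byoc", license_fee=8_000.0)
print(saas["total"], byoc["total"])  # 30000.0 18000.0
```

The raw infrastructure cost (10,000 in both branches) is identical either way, which is exactly Mirko's point: the models differ in who pays it and in how the vendor's margin is constructed, not in the underlying cost.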

Shahar Azulay: But I think that's the key factor, right? Because once you take the infrastructure to your side, you have to mark it up. And when you mark it up, two things happen. One is that, as you say, you have to create a healthy gross margin, because eventually you need to exist as a startup; you have to create a margin that makes sense. So you're going to mark it up. But the second thing that happens is that you've just built your business model on top of the customer's data, so you're also growing with the customer's data. As a vendor, and again, not all vendors are equal, you then have the wrong incentive: a garbage-data-in, price-out dynamic with the customer, marking up more data and more overages. And when that happens, you also have to mark up the unexpected. What happens if suddenly a hundred people from the organization walk in and start refreshing dashboards? Again, I have to protect myself, so maybe I mark up the margin a bit more for a rainy day, right? That's kind of how the Datadog pricing model evolved. They're not evil. They're trying to create a healthy margin and mark up what is basically driving their business. And it's your data.

Shahar Azulay: And once you detach from that, the good things start to happen. First of all, as you say, we can create a healthy gross margin at a lower cost. But second, we're also aligned with the customer. I have zero incentive to make the in-cloud, the BYOC deployment, inefficient, right? I have zero incentive not to put the best tools out there to reduce data. Datadog is charging you, for example, for data pipelines running on-prem. I mean, that just shows you where that model can sidetrack, right? Why charge me for data pipelines running on my end? People are using stuff like Cribl and other solutions to reduce volume before it reaches the vendor, because they know they're going to have to pay for ingestion even though they're not using the data. That's exactly the misalignment that changes with bring your own cloud, because we have zero incentive not to be on your side. You don't want to send the data? Great. We want you not to send the data. We're going to reduce the total cost of ownership, and you're going to think we're great, right? So I think that dynamic is not just margin; it's also behavior that changes. AEs, SEs, everybody in the vendor's sales cycle has zero incentive to really help me out, because I'm damaging their quota if I'm sending less data.

[00:21:03] Chapter 11: Market Size, R&D Scale, and Pricing Strategy

Mirko Novakovic: It's true. But on the other hand, I'm thinking: if your technique is cheaper, let's assume that by putting a different gross margin on a different model, you also make the market ten times smaller. Datadog is around $4 billion in ARR. So if you're ten times cheaper and you got all those customers, it's $400 million for you. And Datadog spends $1.2 billion on R&D. We have to be honest: gross margin is one thing, but they also earn that margin because they have a really good R&D team with $1.2 billion behind it, and then sales and marketing on top. So if you charge a tenth, you're also only able to spend a tenth of the rest: 10% of the R&D, 10% of the marketing, 10% of the sales. That's the thing I'm turning over in my head: okay, I get it.
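Mirko's back-of-envelope above, that pricing at a tenth shrinks every downstream budget by the same factor, looks like this in code. The spend-share percentages are assumptions for illustration, not Datadog's actual allocation; only the $4B ARR and $1.2B R&D figures come from the conversation.

```python
def scaled_budgets(incumbent_arr, price_factor, rnd_share=0.30, smkt_share=0.35):
    """If you charge price_factor of the incumbent's price and win the same
    customers, your ARR and every downstream budget scale by price_factor.
    The spend shares are illustrative assumptions.
    """
    arr = incumbent_arr * price_factor
    return {"arr": arr, "rnd": arr * rnd_share, "sales_marketing": arr * smkt_share}

# A $4B-ARR market captured at one tenth of the incumbent's price:
b = scaled_budgets(4_000_000_000, 0.1)
print(b)  # roughly $400M ARR, leaving ~$120M for R&D at a 30% share
```

The sketch just makes the proportionality explicit: a tenth of the revenue mechanically funds a tenth of the incumbent's R&D and go-to-market spend at the same spend shares.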

Shahar Azulay: That's exactly where the power of having the gross margin on your side comes into play, right? Because basically I can choose the way I price. And that's exactly where groundcover isn't trying to be a saint. We're not trying to charge you one tenth of Datadog; most of our customers reduce their cost by 60% to a little more compared to Datadog. But the pricing point we use is almost arbitrary, right? We chose to price by host because we believe the market gets it and we believe it's fair, but we're also tying ourselves to the growth of your infrastructure, like any observability vendor wants to do. So we can create that high retention, that high growth rate, and also choose a price point which, as you say, is relevant to us and to the TAM that we want to cover. But I could have chosen other things, right? I have the power to mark up whatever I want. And I think that as long as we choose something the customer thinks is fair, we can coexist with customers in a good way: save them money, not be a saint, because eventually observability is a pricey market and a mission-critical tool, but create a trade-off which makes more sense and is, to be honest, a bit more predictable. Because, as you know from many years of experience, customers are not just concerned about how much their data costs today; they're very concerned about how much their data is going to cost tomorrow. And I think we can eliminate some of that pain even without saving costs. Even just providing more predictability is worth money to a lot of customers.

[00:23:33] Chapter 12: Network Egress, Pipelines, and Vendor Behaviors

Mirko Novakovic: I get that. And there are some other cases, right? At large scale, the egress and networking costs you pay for getting all the data over are just massive. We are in a customer situation right now where the egress and networking costs alone are $2 million a year. So it's massive. That's why we, for example, have a bring your own cloud model for pipelines, so we can run the pipeline and do tail sampling and so on on the customer side, so that you don't have to send everything, especially if you have high sampling rates. If you sample out 90% of the data, you would normally send 100% of the data over the wire, pay the egress and all the networking costs, and then reduce it by 90%. You basically paid for garbage, right? So that's where I think you have to do a bring your own cloud model, because the pricing model of the cloud vendors does not fit the use case.
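The tail-sampling math Mirko walks through is easy to sketch. The egress price below ($0.09/GB) is an assumed, roughly typical public-cloud internet rate, not any specific provider's figure; the point is only that sampling after the wire pays egress on data that is then thrown away.

```python
def egress_cost(gb_per_month, keep_fraction, sample_in_cluster, price_per_gb=0.09):
    """Monthly egress bill for telemetry under two tail-sampling placements.

    keep_fraction: share of data the sampler retains (0.1 keeps 10%).
    sample_in_cluster: if True, sampling happens before data crosses the
    wire (a BYOC-style pipeline); if False, everything is shipped first
    and dropped at the vendor side.
    price_per_gb: assumed egress rate, roughly typical for public clouds.
    """
    shipped_gb = gb_per_month * keep_fraction if sample_in_cluster else gb_per_month
    return shipped_gb * price_per_gb

# 100 TB/month of raw trace data, tail sampling keeps 10%:
ship_then_sample = egress_cost(100_000, 0.1, sample_in_cluster=False)
sample_then_ship = egress_cost(100_000, 0.1, sample_in_cluster=True)
print(ship_then_sample, sample_then_ship)  # roughly $9,000 vs $900 a month
```

With a 90% drop rate, nine tenths of the ship-then-sample bill is paid for data that never reaches a dashboard, which is the "paid for garbage" Mirko describes.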

Shahar Azulay: It's crazy, you know, cross-AZ communication and stuff like that; people pay a lot of money for stuff they don't even see. But again, once you run part of the Dash0 pipeline inside the customer environment, you have the choice, right? You could say: I'm pricing by that, I'm pricing by data ingestion even there. Or you can choose something else. Datadog, for example, because of their, I wouldn't call it addiction, but it's kind of an addiction to marking up ingestion costs, are actually charging you even for bring-your-own-cloud pieces that are running on my infrastructure. And that's exactly where it's not just technology, right? It's somewhat a behavior and market-comprehension decision.

Mirko Novakovic: Yeah. For us it's actually free: if you run it in your own cloud, you pay the infrastructure price as a customer, but the actual component, the pipelines, is for free.

Shahar Azulay: Because that's a decision you made, which I think makes sense, right? But that's exactly the heart of it: you can suddenly make a decision about what you're marking up or what you're charging for.

[00:25:34] Chapter 13: Standout Marketing in a Crowded Market

Mirko Novakovic: Absolutely. But I also want to talk about something not so technical: I really like your marketing approach. I know we are both in a very crowded space, right? This market is pretty crowded, so you kind of have to stand out. At last year's KubeCon, we started with our racing suits, right? Red. And then at KubeCon in Atlanta, I saw you for the first time in your beach outfits: your whole booth, you're all wearing these nice beach outfits. Tell me a little bit more.

Shahar Azulay: Do you love the red jackets?

Mirko Novakovic: And Yeah. Yeah.

Shahar Azulay: Yeah.

Mirko Novakovic: So tell me a little bit about it. I saw your post about it, and I think it's working, right? People recognize you, and you have to be recognizable; I definitely recognize you guys. It was a nice booth, really recognizable. And you also play with the other vendors, right? You throw balls at them and make videos. So tell me a little bit about your marketing.

Shahar Azulay: My silly posts. I think part of it is, as you say, getting noticed, you know, the yellow jackets and standing out. And I think it's important, because this market is so dark purple, dark blue, everything is, you know... but that's one thing. The second thing we did this year, which I think is important, is to not try to overexplain what we do. We've been talking for 20 minutes so far, give or take, about BYOC and eBPF, and we have tons of things to say about it; we have a lot of information to share. But expecting people to understand it at first hand is really, really difficult, and you want people to wonder what you're about, right? What you're trying to change. A lot of what we're saying is about modernizing things; you see my posts mentioning legacy players, right? I'm not trying to insult anybody, but just saying that the approach to SaaS is moving, and with it things are changing. That's the vibe we're trying to create. And on top of that, things like the beach club are basically saying: we're positioned against Datadog, we're positioned against other people out there in the market, we're changing something compared to them. That is easy for people to understand, and then they walk up and say: by the way, what do you do? Nothing was written on our booth, for example; it said nothing but Beach Club, which was an attempt to get people interested in what we do, rather than hitting them with the elevator pitch as they walk through the aisle. It changed the dynamic, and I think it's healthier and more interesting and more fun. Throwing beach balls at people was part of it.

[00:28:18] Chapter 14: Datadog Migration Automation

Mirko Novakovic: I like it. And maybe you can tell me a little more about how it's going; I've only seen it from the outside. You offer migration tools, right? That's one of the things I also know from our own discussions: somebody wants to switch from Datadog, they hate the pricing model and the way they do business, and they want to come to Dash0, but then they say, oh my God, I have 1,000 dashboards and 500 alerts. How do I migrate all that to your tool? And that's sometimes a stopper, right? Because it costs you a lot of money. So I saw that you built a migration tool that helps you transfer those dashboards and alerts to groundcover. Tell me a little bit about that.

Shahar Azulay: We talked about this when we met at the conference. I think there are two prohibitive factors in this market which are very, very strong. One is the renewal date. You walk into a customer, they're on Datadog, and if you meet them nine months before renewal, it's not the right time, right? So you need to find that window in time where you can catch people and make them think about your product. The second thing is migration, because it creates a kind of uncertainty which is, to be honest, scary. You walk into a person that has never heard about you. I mean, with all due respect to groundcover and Dash0, we're still young players, and people know Datadog and know New Relic, and you're trying to convince them that you can replace their mission-critical tools at high scale. So part of it is saying: don't overthink the part that you don't know, which for people is migrating dashboards and monitors. Some people might think, oh, it's two weeks.

Shahar Azulay: That's great, I can make it happen. And some people might say, it's a four-month operation, I don't want to go into that right now and slip into that renewal again and renew with Datadog. So we wanted to make it easy and bring that frictionless approach that groundcover has with the eBPF sensor to the actual POC or migration with us. It's automated now, just for Datadog for the moment, and we're going to release more vendors in the upcoming months. You can put in your API key and start transferring dashboards and monitors in the first hour of the trial, so you can actually see your data with groundcover, which we think is as important as anything else we do. We believe it can move a lot of people to the other side of the fence, rather than sitting and wondering if it's a good time to invest time in migrating to a tool, which I think a lot of them are intimidated by.
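Neither vendor's migration internals are public, but the core of what Shahar describes is a translation problem: mapping one vendor's dashboard JSON onto another schema, query by query. A minimal sketch of that shape, where every field name on both sides is an assumption for illustration only:

```python
# Hypothetical sketch of dashboard migration: map a Datadog-style
# dashboard dict onto a generic target schema. All field names here
# are illustrative assumptions, not the real tool's format.

def translate_dashboard(dd_dashboard: dict) -> dict:
    """Translate a Datadog-style dashboard into a generic panel list."""
    panels = []
    for widget in dd_dashboard.get("widgets", []):
        definition = widget.get("definition", {})
        panels.append({
            "title": definition.get("title", "untitled"),
            "type": definition.get("type", "timeseries"),
            # Queries are usually the hard part: metric names, tag
            # syntax, and functions differ between vendors.
            "queries": [r["q"] for r in definition.get("requests", [])
                        if "q" in r],
        })
    return {
        "name": dd_dashboard.get("title", "imported dashboard"),
        "panels": panels,
    }

example = {
    "title": "Checkout service",
    "widgets": [
        {"definition": {
            "title": "p95 latency",
            "type": "timeseries",
            "requests": [{"q": "avg:trace.http.request{service:checkout}"}],
        }},
    ],
}

result = translate_dashboard(example)
print(result["name"])          # Checkout service
print(len(result["panels"]))   # 1
```

In practice the query translation step dominates the effort, which is presumably why automating it removes most of the migration fear Shahar mentions.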

[00:30:37] Chapter 15: AI Agents, RCA, and Control of the Data

Mirko Novakovic: Absolutely, it's awesome. I think it's a great idea; I might have to copy it. But we think a little bit differently about it, which also brings me to my last topic. I always close these conversations by spending the last five minutes on AI. We just released our Agent0 platform a few months ago, and it's working pretty well. We have six agents, and that's the way we think. We already have a dashboarding agent that creates dashboards for you: you can tell it what you want to see, or show it a problem, and it will create a dashboard. And our thinking is that our agent will also be able to log in or connect to your Datadog API and then use that AI technology to create those dashboards and alerts and so on. So at the moment I'm really thinking about how AI is changing the whole observability world. If you look at the whole AI SRE agent category, which connects to observability tools to provide value, in a sense it could be that that becomes the new user interface and we become only a database, right?

Shahar Azulay: You see companies emerging that do that. Yeah, for sure.

Mirko Novakovic: Exactly. So that's kind of a threat, right? In my point of view, we have to see how this evolves. I'm thinking about it a lot at the moment.

Shahar Azulay: And I know you're on the same side on this as well. We get asked about that a lot by VCs, right, about how we will fit into the AI world. The AI question pops up a lot. My personal take is that these companies, Resolve and others, are basically showing us what is possible, but they will never win the race. It's kind of like Google compared to other players when it comes to Gemini and other solutions. As the observability companies, we basically have the resources to solve the problem. We own the data. There's no agent on top of an API that will eventually be able to do enough multi-step queries on thousands of logs, refine the queries, get traces, correlate them, do another query, and still be cost-effective. What Dash0 will implement, or what groundcover will implement, will always be better than the external AI takes. But I think it's important to see them succeed, because it does show us, you and I, as founders currently focused on making log management highly performant or whatever, that this is a critical feature our customers expect. They expect root cause analysis and a guided agent, like Agent0, that will help them figure out what's going on.

Shahar Azulay: But I think that eventually the game will be owned by the observability vendors that own the data, and my job is to make sure that I have the best data in hand, enriched with eBPF, with OpenTelemetry, with whatever sensor I can get my hands on. So when the time comes that a person wants to query the data, I have it in hand, already pre-processed, efficient, with APIs that are ready for AI, and no other vendor will be able to query groundcover at a rate where it makes sense. And at some point you will probably see Datadog and other players also restrict or rate-limit these APIs to keep other vendors from using their data, because I don't see this as solvable from an external source. That's my take.
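The multi-step loop Shahar describes — query logs, refine by correlation IDs, pull the matching traces, look for a common culprit — can be sketched in miniature. This is a toy illustration over in-memory data, not any vendor's API; all structures and field names are assumptions:

```python
# Toy sketch of an RCA agent's multi-step investigation: query error
# logs, correlate them to traces via trace IDs, and find the span that
# shows up most often. Purely illustrative in-memory data.

LOGS = [
    {"level": "error", "msg": "timeout calling payments", "trace_id": "t1"},
    {"level": "info",  "msg": "request ok",               "trace_id": "t2"},
    {"level": "error", "msg": "timeout calling payments", "trace_id": "t3"},
]
TRACES = {
    "t1": {"root": "checkout", "slowest_span": "payments.charge"},
    "t3": {"root": "checkout", "slowest_span": "payments.charge"},
}

def investigate(logs, traces):
    # Step 1: coarse query - collect the error logs.
    errors = [log for log in logs if log["level"] == "error"]
    # Step 2: refine - correlate via the trace IDs they carry.
    trace_ids = {log["trace_id"] for log in errors}
    # Step 3: fetch the matching traces and look for a common culprit.
    spans = [traces[t]["slowest_span"] for t in trace_ids if t in traces]
    culprit = max(set(spans), key=spans.count) if spans else None
    return {"error_count": len(errors), "culprit": culprit}

print(investigate(LOGS, TRACES))
# {'error_count': 2, 'culprit': 'payments.charge'}
```

The cost argument is visible even in the toy: each step is another round trip over the data, so a vendor that runs this loop next to pre-processed storage pays far less per investigation than an external agent doing the same steps through a metered API.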

[00:33:55] Chapter 16: AI’s Impact on Value and Pricing Models

Mirko Novakovic: I agree. It's a little bit similar if you look at tools like BigPanda. They do similar things, right? They worked on top of events, on logs, etc. I think you can get good results; those tools are pretty smart at building graphs and deriving the service dependencies and so on from the data, using LLMs and other things. So I do think they are quite good. But I also agree that you need the data, you need context, to really do a great job. My head is more at: okay, how is this changing the whole market? I was discussing with my team: what if you make telemetry data free? At the end of the day, if the data is only the source for your AI and the value you create, why wouldn't you say, okay, give me all your data, it doesn't matter, and I charge you for the value that we generate for you? And then it's a totally different...

Shahar Azulay: Story. Along the way you have, you know, Observe and players like that trying to do per-query-based pricing and things like that. I think it might make sense, though eventually it's almost always bypassed back to ingestion-based pricing in some way, right? Vendors have to correlate it to protect themselves. But I agree that it might make sense to eventually charge for whatever value I'm getting from the data, and then you also have the incentive to drop the data that's not useful for AI.

Mirko Novakovic: No, absolutely, I agree. I've been doing the podcast now for more than a year. A year ago I was very skeptical about all this AI. Now I'm very positive. I see the value, I see the results.

Shahar Azulay: There's a saying that you don't have permission to be skeptical anymore, because we've seen so many things change. So I think it's right, you don't have the ability to be skeptical anymore, even though, I mean...

Mirko Novakovic: You can still be skeptical about AI in general, right? But I think there are use cases where these LLMs work pretty well, and one of those use cases is actually working on OpenTelemetry data. They are very well trained on it, they understand it pretty well, and they can find root causes pretty well. It's really impressive, I have to say, what you can get out of it. And from the first feedback from our customers, it's really working; it really helps. So there's no way around it. It's just a benefit for the customer, so we have to move along that way. I think it's totally changing the way the user experience works.

[00:36:41] Chapter 17: Closing and Takeaways

Mirko Novakovic: Yeah. And I don't like it, right? We have a chat interface for this at the moment, but I don't think it's the final version. It will change.

Shahar Azulay: It will change or something. Yeah.

Mirko Novakovic: Exactly, it will change. It doesn't really make sense to use it that way. It's a good first step, though. What I also like about it is that you learn from your customers, because they type into the chat, and now you have a product where you haven't predefined what the user can do; they can ask questions, and you can see what they're asking and what kind of information they want. So it's a good learning tool. But at the end of the day, the agent and the user interface must be integrated, right?

Shahar Azulay: Yeah. Eventually there are some things that are easier to do directly, right? Click a filter or whatever. It will have to be some kind of augmented interface. Working just in chat for now is a nice transition, I think, while we figure out how these things work, but it's not going to stay like that.

Mirko Novakovic: It was super fun. I'm really a fan of groundcover and what you're doing, and I like that your marketing approach is challenging, loud and opinionated. I love this. So I really wish you all the success, and I hope to see you at the conferences.

Shahar Azulay: See you soon. Probably at one of the Expos.

Mirko Novakovic: Exactly. Thanks, Shahar. Thanks for listening. I'm always sharing new insights and knowledge about observability on LinkedIn; you can follow me there for more. The podcast is produced by Dash0. We make observability easy for every developer.
