[00:00:00] Chapter 1: Introduction and Eli's Code RED Moment
Mirko Novakovic: Hello everybody. My name is Mirko Novakovic. I am co-founder and CEO of Dash0, and welcome to Code RED: code because we are talking about code, and RED stands for Requests, Errors and Duration, the core metrics of observability. On this podcast, you will hear from leaders around our industry about what they are building, what's next in observability, and what you can do today to avoid your next outage. Hi everyone! Today my guest is Eli Cohen. Eli is the co-founder and CEO of Helios, an applied observability platform for security and monitoring insights. Helios was recently acquired by Snyk, where Eli is now Director of Product Management. I'm excited to have you here today, Eli.
Eli Cohen: Thank you. I'm excited to be here also.
Mirko Novakovic: Yeah. And we always start the podcast with one question. And that's your Code RED moment. Do you have any special one in your career? Probably.
Eli Cohen: Yeah, definitely. So, you know, I started my career in engineering, but then I moved to the dark side of product. I've always been a startup guy, and during one of the startups I worked with, a fintech company, we handled millions of transactions a day, right? For billions of dollars. It was a microservices system, everything was fragmented. It was really the new style, like we do today: every service is responsible for one thing, and eventually you hope that they all connect together really, really well. At one point we changed something in the logic of one of the inner services, and we tested it as a black box and as a white box, and it looked quite good. We tested the service separately and everything worked well. But when we deployed the code and it reached production, we suddenly realized that many transactions were going wrong, and we had no idea what was going on.
Eli Cohen: Apparently there was a side effect: we had changed something that looked perfectly fine in service A, but there was a side effect, not even on service B, but on service C. And then we realized, you know, we have to do many, many end-to-end tests of this system, for many different scenarios. So we started to invest heavily in that, because we had been sure that everything would work as expected, but reality, as it is, was much more complicated, because it was an edge case, and you can only see it based on real data. So eventually we had to start replicating all the data from production into a dummy account in our testing environment and simulate everything all over again. This is also the phase where I realized, okay, these are complex systems: with microservices you're going to end up in a very complex system, and you have to do many maneuvers to really make sure that everything is working as expected. And since we were dealing with money, which is what people really care about, it was a bit of a nightmare.
Mirko Novakovic: I can understand that. And was that one of the reasons to found Helios?
[00:03:00] Chapter 2: The Founding and Evolution of Helios
Eli Cohen: Yeah, definitely. So as I said, you know, I'm coming from a background in engineering, and to me it almost felt like such a burden to do all those kinds of tests. My co-founder Ron, who was the CTO, and I realized there is a big pain around it, especially for distributed architectures like the one we had back then in the fintech startup. Meaning, I had to take some of the best engineers in my group and have them write manual tests, and later on automate them, to really, you know, examine all of those edge cases and all of those scenarios, because a distributed system is much more complicated than what you used to have with, like, you know, a single-page application or a monolith. And that was one of the moments when we realized, okay, distributed systems are hard. And we saw an opportunity with OpenTelemetry, leveraging tracing to build your testing architecture automatically based on that. So that was kind of the initial idea.
Mirko Novakovic: I looked at the tool and we had a few discussions on that, right? Because I also like your tracing visualization, it looks pretty nice. And over time you pivoted into different categories as well. But starting with the testing use case: you basically used tracing technology to see what is happening in production, and then it allowed you to basically use the payload of each individual span to create tests, and then replay it with different test scenarios. Correct?
[00:04:38] Chapter 3: Tracing Technology and Pivoting to Observability
Eli Cohen: Correct. So that's definitely what we did. We leveraged the tracing technology that we had, we enhanced all the payloads, and then we provided the user, first of all, with a capability to replay the same scenario, and then to do some fuzzing. And on top of that, and this was before, you know, the generation of GenAI and LLMs, we also generated the test code automatically for the user, but obviously it wasn't comparable to what you can do nowadays.
Mirko Novakovic: So what does it mean? You could copy paste the code out of the tool and then into the IDE.
Eli Cohen: Exactly. So in the first phase, you know, it was very scrappy: you could copy paste it into your IDE. At the beginning it was Python and Node.js. Later on, you know, you could also export it directly, like OpenAPI, check it into GitHub or merge it, you know, run it as part of Jenkins. Definitely.
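To make the replay-and-fuzz idea concrete, here is a minimal sketch of what such a generated test could look like, assuming a captured span payload for an HTTP endpoint. The URL, payload shape, and helper functions are hypothetical illustrations, not Helios's actual generated code.

```python
import copy
import requests  # assumes the service under test is reachable over HTTP

# A captured span payload, as it might be exported from a trace
# (hypothetical shape; real attribute names depend on the instrumentation).
captured = {
    "url": "http://localhost:8080/api/transfer",
    "method": "POST",
    "body": {"amount": 100, "currency": "USD", "account_id": "abc-123"},
}

def replay(payload):
    """Re-issue the captured request exactly as it was observed."""
    return requests.request(payload["method"], payload["url"], json=payload["body"])

def fuzz_variants(payload):
    """Yield simple edge-case variants of the captured body."""
    for amount in (0, -1, 10**12):
        variant = copy.deepcopy(payload)
        variant["body"]["amount"] = amount
        yield variant

def test_transfer_replay():
    assert replay(captured).status_code == 200

def test_transfer_fuzzed_amounts():
    for variant in fuzz_variants(captured):
        # Edge cases should be rejected cleanly, not fail with a 5xx.
        assert replay(variant).status_code < 500
```

The point of the sketch is the workflow Eli describes: the captured production payload is the seed, and the fuzzed variants cover the kind of edge cases that only show up with real data.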
Mirko Novakovic: And you also had, as far as I remember, your own agents that you developed, right? For Go, for example. So yeah, like everyone who's in that space, at some point you invest heavily into some hardcore tech to instrument code and to inject something into runtimes, right? As far as I remember, you and your CTO spent a lot of time on that. I talked to him, and he had some really cool stuff around Go.
Eli Cohen: Yeah, definitely. So what we did back then: we realized that, you know, generating tests is really cool, but with OpenTelemetry, if you want to collect all the payloads, and also to do it from many different systems, you know, like Kafka and GraphQL, etc., then you have to collect the payload yourself, because that's really important for the testing. And we realized that for some development languages, the pain is that the instrumentation takes too much of your time. As time passed, what we realized is that our customers were more excited about our auto-instrumentation capabilities and less and less about the testing. Meaning, they bought us for the testing, but at a certain phase they didn't have enough time, it wasn't that much of a priority for them, and they kept neglecting the testing. But they were really excited about the fact that you could, you know, instrument your entire code base automatically for various languages. I even remember one of our customers, the head of DevOps, telling me that this was the first time they bought a product he wasn't involved in. He was super excited about the fact that he could deploy and instrument everything, and it didn't require any DevOps effort or, you know, effort from the platform team. And that also laid the foundation for the second, I don't know if it's a pivot, but definitely kind of a pivot, to focus more on how we help organizations automatically collect the data and ease the pain in the onboarding phase of tools like OpenTelemetry.
Mirko Novakovic: Yeah. And did you start with OpenTelemetry from day one, or did you start with your own agent and then move to OpenTelemetry?
[00:07:30] Chapter 4: Challenges and Transitioning Interests
Eli Cohen: So we started with OpenTelemetry from day one. At a certain phase, we also developed eBPF capabilities, because we saw that the deployment over there is a lot easier, and for certain use cases it was good enough. So yeah, we kind of leveraged OpenTelemetry and later on also eBPF.
Mirko Novakovic: Yeah, OpenTelemetry makes sense. eBPF, for listeners who are not familiar with it, is a technology that basically works on the kernel level, right? You get all the system calls, and then you can basically instrument each call. You can intercept on the network level, you can intercept calls on a code level, or wherever you want. So it's an interesting technology for doing instrumentation without the need to add an agent per environment, or per runtime. You can really instrument on the kernel level, basically.
Eli Cohen: Yeah. So that's the thing, you can do it on the kernel level. So for all of our customers that were running Linux, it was great. And really the idea behind it is that it's low-effort instrumentation, right? The deployment is a lot easier, and it is considered much more efficient, much more safe. So for certain organizations it was a no-brainer, depending on the use cases. And then we could collect really deep-level data with less burden, let's say it that way.
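For readers who want a feel for the kernel-level hooking being described, here is a tiny sketch using the bcc toolkit in Python. It simply traces execve syscalls and prints a line per call; it is a generic illustration of eBPF kprobes (requiring root and a Linux kernel with eBPF support), not the instrumentation Helios or Snyk actually ship.

```python
from bcc import BPF  # requires the bcc toolkit and root privileges on Linux

# Minimal eBPF program: emit a trace line every time a process calls execve,
# illustrating how kernel-level hooks observe activity without touching
# application code or language runtimes.
prog = """
int trace_exec(void *ctx) {
    bpf_trace_printk("execve observed\\n");
    return 0;
}
"""

b = BPF(text=prog)
# Attach to the execve syscall; the exact symbol name varies by kernel,
# so get_syscall_fnname resolves it for the running system.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")
b.trace_print()  # stream the kernel trace output to stdout
```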
Mirko Novakovic: What you mentioned was interesting: for the testing use case you need the payload, right? Most of the instrumentation libraries, also from OpenTelemetry, do not provide you with the full payload. If you do an HTTP call, for example, and you want all the parameters, you won't get that out of the box. But if you want to do testing, that's the relevant data, right? Because if you want to replay a request without the payload, it doesn't make sense. So you need all the parameters, and then you need to be able to change the parameters and provide different fuzzed data so that you can do the test. So that was probably part of the engineering effort you had to do: collect all the payloads for different protocol levels and different libraries.
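As a rough illustration of that gap: standard OpenTelemetry HTTP instrumentations record method, route and status code, but generally not the request body, so anyone who needs it for replay has to attach it themselves. A minimal sketch of doing that manually with the OpenTelemetry Python API is below; the attribute name and handler function are made up for the example, and a configured SDK and exporter are assumed.

```python
from opentelemetry import trace

tracer = trace.get_tracer("payload-capture-demo")

def handle_transfer(request_body: dict):
    # Auto-instrumentation would record method, route, status code, etc.,
    # but not the body. Here the body is attached explicitly so a test
    # could later replay the exact request.
    with tracer.start_as_current_span("handle_transfer") as span:
        # Illustrative attribute name, not an official semantic convention;
        # real payloads would also need redaction of sensitive fields.
        span.set_attribute("app.request.body", str(request_body))
        # ... business logic would run here ...
        return {"status": "ok"}
```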
[00:09:42] Chapter 5: Entering the Security Domain
Eli Cohen: Yeah. And up until today, some of our contributions to OpenTelemetry, especially in Python, are around that, you know, for example the integration we did for Kafka, later on also GraphQL, and at a certain point we even added support for Temporal, like all the queuing systems, because that's where we saw that the pain was very big.
Mirko Novakovic: The pain, that's interesting. So what was the pain: the pain for the customer to test it, or the pain to really instrument it?
Eli Cohen: So what we realized at a certain phase is that the pain for the customer to instrument was bigger, and also the appetite to really make sure the instrumentation is working was bigger than for the testing phase. Because in the testing phase, for many organizations, unlike my case, where it was financial data and money was involved, they don't have to have this really robust end-to-end system that is being tested nightly. For some of them, yes, but we saw less of an appetite to really generate tests automatically based on that. But they were quite eager about the capabilities we provided them with the auto-instrumentation, either with OpenTelemetry or eBPF. And then we realized there are many other use cases we can solve.
Mirko Novakovic: No, that makes sense. So basically you pivoted from testing towards classical observability, right? And with a focus on tracing, as far as I understand. Yeah.
Eli Cohen: Yeah, the focus on tracing and easy onboarding, a really cool UI. You know, I think some of it is like the things you do today with Dash0, actually. Which is also, yeah, I really like the idea.
Mirko Novakovic: Yeah. As I said, if you're in this space, you always look at what others are doing, and a lot of the things are similar, but then everyone has some cool things. And you had this nice visualization of traces, for sure, where you basically created something that was a little bit like a map, right? With the flow from top to bottom, where you could see all the services that get called in a really nice way, and you could click on each of the nodes and get the details. That was basically your trace visualization.
Eli Cohen: Yeah, definitely. And that's exactly it: you could see all the different components of your system, you could click in, expand, and see all the data. And by the way, later on, when we identified the big gap in the AppSec domain, we leveraged the same technology, and this is now embedded into Snyk today. You can see all the different calls between the apps and really understand, with the context of Snyk, which is where we work today after the acquisition, all the different vulnerabilities.
Mirko Novakovic: Yeah, happy to discuss this more. I mean, I'm not that deep into security, and especially all the acronyms, I don't understand all of them; there are hundreds of them. But what I found interesting: I was at KubeCon recently, last week, and I went to the booth of the company that is also an Israeli company, right, that I think grew from 0 to 400 million in, I don't know, four years or something. So I was quite curious what they are building, and one of the interesting parts, and maybe that's also what you saw, is that they use, I'm not sure if they do it with tracing, but they have a dependency graph of the services. So they basically understand, if there is for example a vulnerability in a service, they can then check: okay, is that for example exposed to users on the internet? By using that graph and understanding the dependencies, they can better judge if a security issue or vulnerability is actually very critical or only critical, or if it only affects internal systems, or if it affects data from customers, right? So I found that pretty interesting: it's something that is pretty common in the APM space, the application performance management space, but it seems to be quite a new approach in the security space to use tracing technology to understand dependencies and use that for scanning and understanding security issues.
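The reachability check Mirko describes can be pictured as a simple traversal over the service dependency graph. The sketch below is a toy version with an invented graph and service names; real tools work on much richer data, but the core question, whether an internet-facing entry point can reach the vulnerable service, is the same.

```python
from collections import deque

# Hypothetical service dependency graph: edges point from caller to callee.
graph = {
    "api-gateway": ["orders", "users"],
    "orders": ["payments", "inventory"],
    "users": [],
    "payments": ["ledger"],
    "inventory": [],
    "ledger": [],
}
internet_facing = {"api-gateway"}

def exposed_services(graph, entry_points):
    """Every service reachable from an internet-facing entry point (BFS)."""
    seen, queue = set(entry_points), deque(entry_points)
    while queue:
        for callee in graph.get(queue.popleft(), []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

# A vulnerability in "ledger" is still reachable from the internet via
# api-gateway -> orders -> payments -> ledger, so it ranks higher than one
# in a service nothing external can reach.
print("ledger exposed:", "ledger" in exposed_services(graph, internet_facing))
```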
[00:14:01] Chapter 6: Acquisition by Snyk
Eli Cohen: Yeah. So I agree with you, and this is exactly the motivation for the pivot that we did at a certain phase. We realized that, you know, the market is getting very, very crowded, but our customers were really excited about the fact that you can deploy our technology with no need to involve DevOps, etc. And, you know, in many cases the head of platform engineering also reports to the CISO. So they saw they could really easily integrate us into their system, and they told us: listen, there is another use case, maybe you can help us prioritize our vulnerabilities. And then we realized there is a big gap in that domain, because all of the traditional, or let's say the main, companies that were statically analyzing your code had no data from the runtime. And as you said correctly, for us, coming originally from the APM domain, this is quite trivial, right? I can tell you whether something is loaded, whether something is publicly accessible. Then we realized that once we integrate those runtime insights into the static scan of the code, we can provide you with valuable insights that help you build, first of all, this entire application graph, all the dependency graph, and also help you understand what you need to prioritize. Because you might have ten vulnerabilities, but only a portion of them are actually loaded into memory, and only a portion of them are really accessible over the internet.
Eli Cohen: And that's very significant when you look at it through the eyes and the lens of an AppSec persona that wants to understand what vulnerabilities and what issues they should tackle today. So that's kind of when we realized, okay, we have a really great technology, but we might be even more appealing in a different space. So we leveraged the technology we built, changed the product a bit, changed the marketing a bit, changed the sales a bit, and then realized that we have a very cool product in the AppSec domain that can really help companies and our customers do better prioritization, based on data they were never exposed to before. And thanks to the fact that our deployment was very easy, just a few clicks, it suddenly became, you know, really appealing.
Mirko Novakovic: Yeah. So did you pivot to security already within Helios, or after the acquisition? Or had you figured out the security part before that?
Eli Cohen: No, we figured it out before that. But then we realized it would be better to do the go-to-market with some of the big players out there. So we started partnerships with some of the big, big players out there that obviously understood that we have a very unique technology and a very unique product. And then we started doing POCs with their customers, our mutual customers, and with Snyk it, you know, quickly escalated into an M&A.
Mirko Novakovic: Tell us a little bit about that, right? I mean, I had the same situation with IBM, by the way. We were also partnering, and then one day somebody called me and asked me if we were open to an acquisition, and basically the thing started. So tell us a little bit how these things work. Did you get a call? I mean, I think Guy, one of the founders of Snyk, was also an angel investor, right?
Eli Cohen: Yeah. So Guy was also an angel investor, though he was out of the company at that phase. But definitely, you know, Snyk had known us for more than two years already, because they had this thesis about how you can leverage runtime signals for prioritization. So during their analysis and their drafting of the thesis, we met with their folks, you know, to try to help them get a better understanding of the domain and of the different technologies, when you should use eBPF, when you should use OpenTelemetry, etc. And then at a certain phase, when we pivoted, we actually called them back and told them: listen, we are now really doing the things that we theoretically spoke about with you, like one year ago or six months ago. Let's go and do a POC together with your customers. And they were super excited about that. And then at a certain phase, you know, I got an invite from corp dev for a meeting on such and such a day, and this is when I realized; I told my wife, I don't think they're going to discuss a partnership with me on a Saturday. And from there, it's a topic for a different podcast, you know?
Mirko Novakovic: But it's interesting, right? I think, as far as I know, one of your rationales was that security is a very crowded space, and you figured out that working together with Snyk, which was already a pretty big company at the time, gives you more leverage to bring your technology into a lot of customers. Was that part of the story, or what was your rationale?
Eli Cohen: Definitely. So we saw that what we were doing makes a lot of sense, but we also saw that there is a big consolidation in the cybersecurity domain, and we identified Snyk as one of the obvious market leaders there. And then for us it felt natural, because Snyk today has more than 3,000 customers, so it was a no-brainer: we knew we could take our technology and product and put it in the hands of millions of users. Joining forces with the market leader that is doing things from the static-analysis perspective, combining our runtime insights, and then giving the customer a full platform of capabilities was almost a no-brainer. And I think they also identified, from their perspective, that they could onboard a team that is very strong and can do a lot of innovation and help them close gaps that otherwise would have taken them probably four to five years, you know.
[00:19:32] Chapter 7: The Role of AI and Future Outlook
Mirko Novakovic: And tell me a little bit about what Snyk is doing. I mean, I know Snyk from the beginning, because we were funded by the same investor; we were basically in the same fund, Accel, and that's how I got to know Guy at the beginning of Snyk, when I was at the beginning of Instana. At that time, if I remember correctly, they started more with JavaScript dependencies, right? They were checking libraries and comparing them to a vulnerability database. So basically during, I would say, compile time, in the CI/CD pipeline, they checked the dependencies and would tell the developers: hey, you are using a library here in version XYZ that has some vulnerabilities, so we recommend you update to a newer version of the library or a fix. And that's how they started.
Eli Cohen: Yeah.
Mirko Novakovic: Correct. Yeah.
Eli Cohen: So that's how they started: from JavaScript and helping you identify vulnerabilities in open source packages. Then, you know, it evolved into first-party packages, meaning scanning your own libraries and your own code and helping you identify vulnerabilities and security gaps. Then they also got into IaC and also into containers. And the last product, speaking of acronyms, is ASPM, application security posture management, which gives you a control plane over all of your different products and lets you run an effective application security program, meaning you can really know what to tackle first, where your gaps are, where your blind spots are, what you need to chase your developers to fix, and obviously to close the loop, meaning to help you automatically open pull requests for your developers and fix all the issues that we find. And this is also where our product fits in: helping your application security people do better prioritization, leveraging the runtime insights.
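The original dependency check Mirko and Eli describe, comparing the libraries a project uses against a vulnerability database at build time, can be sketched in a few lines. The advisory data, package names and versions below are entirely invented; a real scanner like Snyk queries a curated database and parses lockfiles and manifests.

```python
# Invented advisory data; a real tool would query a vulnerability database.
advisories = {
    "lodash": [{"vulnerable_below": (4, 17, 21), "id": "EXAMPLE-0001"}],
    "minimist": [{"vulnerable_below": (1, 2, 6), "id": "EXAMPLE-0002"}],
}

# Dependencies as they might be read from a lockfile (name -> version tuple).
dependencies = {"lodash": (4, 17, 4), "minimist": (1, 2, 8)}

def check(dependencies, advisories):
    """Return (package, version, advisory id) for every vulnerable dependency."""
    findings = []
    for name, version in dependencies.items():
        for advisory in advisories.get(name, []):
            if version < advisory["vulnerable_below"]:
                findings.append((name, version, advisory["id"]))
    return findings

for name, version, advisory_id in check(dependencies, advisories):
    print(f"{name} {'.'.join(map(str, version))} is affected by {advisory_id}; "
          f"upgrade recommended")
```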
Mirko Novakovic: The first part, as far as I understand, is more or less static, right? You run it against your code base, or against your containers or images, and you scan the code, you scan the packages, and compare them to vulnerability lists or, I think, patterns that you are looking for. And you do this during your CI/CD process. But your product is actually running in production, right? It scans, or it instruments, the code that's running in production, and it gets a view of the production code. So how do you map those production insights to the insights you get on the static libraries?
Eli Cohen: So it's a great question, and that's really quite a challenge, quite a technical challenge that we have. But, you know, we have some IP inside the company today, and we know how to correlate between the assets that we find during production, during the runtime, and how to trace them all the way back into your code, into your static code. And then we can tell you: you remember this package and this vulnerability that we identified in repo X? Okay, we also saw it running in runtime, it is accessible over the internet, it is deployed, it is loaded into your memory. So it is quite a difficult question, but we have a lot of technology and a lot of IP today, and we know how to correlate it. And with this data we can actually tell the AppSec people: okay, you don't really need to handle all of your vulnerabilities right now, let's start with the top 50 that are actually, you know, more exploitable.
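A toy sketch of that prioritization step is shown below: static findings are re-ranked using runtime signals such as "loaded into memory" and "reachable from the internet". The data shapes, package names, and weighting are invented for illustration; the actual correlation logic Eli refers to is Snyk's own IP and is certainly more involved.

```python
# Static findings as a scanner might report them (invented data).
static_findings = [
    {"id": "VULN-1", "package": "libfoo", "repo": "payments", "cvss": 9.8},
    {"id": "VULN-2", "package": "libbar", "repo": "payments", "cvss": 7.5},
    {"id": "VULN-3", "package": "libbaz", "repo": "reports",  "cvss": 9.1},
]

# Runtime observations keyed by package, e.g. derived from eBPF or tracing data.
runtime = {
    "libfoo": {"loaded": True,  "internet_exposed": True},
    "libbar": {"loaded": True,  "internet_exposed": False},
    "libbaz": {"loaded": False, "internet_exposed": False},
}

def priority(finding):
    signals = runtime.get(finding["package"], {})
    score = finding["cvss"]
    # Boost issues that are actually loaded and reachable from the internet;
    # a high-CVSS issue that never runs drops down the list.
    if signals.get("loaded"):
        score += 2
    if signals.get("internet_exposed"):
        score += 3
    return score

for finding in sorted(static_findings, key=priority, reverse=True):
    print(finding["id"], round(priority(finding), 1))
```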
Mirko Novakovic: So it's really a little bit like what I saw there, right? Getting the runtime view and then using that runtime data to prioritize among the vulnerabilities, maybe hundreds or thousands of them, that you find.
Eli Cohen: Definitely.
Mirko Novakovic: And so it's used for prioritization based on things that are either exposed to critical users or to the outside, or prioritizing what's really loaded into memory. Right?
Eli Cohen: Definitely. Yeah.
Mirko Novakovic: That's interesting. That's interesting. And is it already delivered, or are you still in development? Is it a product that I can buy, or...?
Eli Cohen: No, no, it's a product that you can use. It's our offering called App Risk; this is our ASPM product. Yeah, and it's available for the market to use. Definitely.
Mirko Novakovic: Oh, interesting. And how does it work? Is it using eBPF these days, or is it based on your agent technology?
Eli Cohen: Depends on the customer. We mainly leverage eBPF, but for some cases we can also use, you know, our OpenTelemetry instrumentation. It really depends on the architecture. We want it to be relevant for every customer; that's why we don't tie ourselves to one specific technology, but actually know how to tailor the solution for the specific customer.
Mirko Novakovic: One of the things you can see in the space is that observability and security are, in parts, merging together, right? You can see that now that Snyk has acquired you guys, coming from the security side and acquiring observability technology. But you can also see that companies like Datadog or Dynatrace are adding more and more security features to their platforms. So it's definitely something you can see happening in the market, and it makes sense, because, as we just discussed, we are using parts of the same technology stack and the same agents and the same data. But I was always wondering about the user. We always talk about shift left and what the developers need to do. Is that something developers really care about, or who is the user for these tools?
Eli Cohen: Yeah, definitely. So I will tell you that, you know, Snyk pretty much started from the shift left and from the notion that developers want to work fast, but really not to compromise on security. So in many cases there is the user and the buyer, obviously, and the buyer is the CISO or the AppSec persona, but many of the users that interact with Snyk on a day-to-day basis are the developers that eventually, you know, need to fix all the issues that we find. And I also think that, generally speaking, we see in many organizations, partly from the fact that the head of platform engineering eventually reports to, you know, the CISO, etc., that there is a consolidation also in terms of who owns those domains. But I think, you know, the plate is big enough for everyone. Meaning, within the observability domain alone there is enough to do, and there is enough consolidation only within observability, and enough consolidation only within cyber. There are companies that intersect both, but I think mainly it really depends on the user and the buyer, and they are not always the same.
Mirko Novakovic: Yeah, makes sense. And if you look at the data, are you also looking at the payload these days? Are you looking at the kind of data that is sent? Would you see whether it is, like, user data or credit card information or anything? Is that interesting for you too? Because there's also data observability these days, right, which really looks at what kind of data is sent and where it goes, and also from a security perspective that can make sense. Are you leveraging that, or is it more about dependencies and these things?
Eli Cohen: So for now we are not leveraging that. Obviously that's something that can be done using that technology, and we can examine it. But as I said before, eventually those are different use cases, and our philosophy is to be the best at the use cases that we target and that our customers care about. Is it a possible way to expand? Definitely. But I don't think it will be relevant for the near future; we have enough use cases to solve that are much more near-term, let's say it that way. But I do see startups over there, like in the areas of leveraging payloads and tracing data for data observability and for data security also. Definitely.
Mirko Novakovic: Yeah, it's a big space, right? There are many startups and companies now that do only that, that are looking at the data and payload and observing changes and vulnerabilities there. And you mentioned LLMs when it comes to creating the test cases, etc. And I think it was yesterday or two days ago, I saw that Guy's new company, Tessl, raised, I think, $125 million, and he's super bullish when it comes to AI and LLMs. I also know that he had a lot of ideas at Snyk around these topics. So how do you see LLMs and GenAI changing observability and also your product in the future? Are you seeing some disruption, or do you think it's just a tool that will help you somewhere? I'm just curious how you see this space.
[00:28:23] Chapter 8: Future Plans and Closing Remarks
Eli Cohen: Yeah, definitely. So I don't think anyone today can ignore LLMs. Even within Snyk, you know, we leverage them to help you do better fixes and better remediation. So we definitely see it as an opportunity for us, because, you know, I remember I saw an interview with the Google CEO, I think a few months ago already, where he said that probably 25% of the code base is now being generated by LLMs, right? So it's crazy. So definitely it's here to stay. It will change how we do observability, it will change how we do security. But that's a challenge; we need to do it in a secure way. That's part of the things that we do together today at Snyk. And it also, you know, provides us with great opportunities to do things differently, in a better and faster way, and not only via chatbots, you know?
Mirko Novakovic: Yeah. I'm also not a big fan of what I always call the new Clippy, right? Some annoying chatbot that will tell you what to do or help you, and nobody really needs it. And by the way, with that Google AI claim, there was an interesting Hacker News thread around the 25%. One of the Google engineers posted that a lot of what they are counting is basically code completion, right, intelligent code completion, and that is counted as AI-generated code. So in fact, he said, the 25%, which sounded like 25% of Google's code is now somehow generated by AI, includes intelligent code completion, which makes it sound much less like automated, AI-generated code and much more like, okay, it's supporting a developer to be faster, right? At the moment it's sometimes really hard to figure out what is really a cool thing that actually helps me generate things, and what is just, more or less, a little bit of marketing. I'm really not sure, especially in our space, how much has been changed by those LLMs yet. I haven't seen features yet that really impressed me, beside annoying Clippy-type chatbots and using natural language to do queries, etc. I haven't seen the disruption yet, right?
Eli Cohen: Yeah, me neither. I think we also explored it back then, and we didn't find something where we felt the use case really made sense, something that made me say, okay, wow. No wow effect was there, but I believe it will come. You know, the industry is making progress at a crazy pace, and I believe it will come, but I haven't seen something that got me really excited yet.
Mirko Novakovic: Yeah. I had this conversation with Ben Sigelman recently on this podcast, and he said: this is a moment where in two years you can seem very stupid if you say something, right? Because depending on how it evolves, it can change a lot, right? If you hit a point where it's much more intelligent than it is today. So what's next? What do you see as the future of Helios, and what are your plans for the product?
Eli Cohen: So within Snyk, we work really hard on making this product accessible for all of our customers and making sure that every Snyk customer can leverage our, you know, APM capabilities, our runtime and prioritization capabilities. The feedback from the market is really positive. We're really excited about it. We think there is really a very big opportunity within Snyk to become a generational company, you know, and this is what we really focus on.
Mirko Novakovic: Yeah, that makes sense. And what about your nice visualization? Does it still exist?
Eli Cohen: So we are working to make it live again. Definitely.
Mirko Novakovic: I'm looking forward to it, because I think it's really a nice piece of design and, yeah, engineering there.
Eli Cohen: Thank you very much.
Mirko Novakovic: Eli, this was a nice conversation. Thank you for joining.
Eli Cohen: My pleasure.
Mirko Novakovic: And yeah, let's chat when you have released this product into production. Yeah.
Eli Cohen: Definitely. Thank you very much. Mirko. It's been a pleasure.
Mirko Novakovic: Thanks for listening. I'm always sharing new insights and insider knowledge about observability on LinkedIn. You can follow me there for more. The podcast is produced by Dash0. We make observability easy for every developer.