[00:00:00] Chapter 1: Introduction to Code RED and Guest Jonah Kowall
Mirko Novakovic: Hello everybody. I'm Mirko and welcome to Code RED. Code because we are talking about code, and RED stands for requests, errors, and duration, the core metrics of observability. On this podcast, we will hear from leaders around our industry about what they are building, what's next in observability, and what you can do today to avoid your next outage. Today my guest is Jonah Kowall. Jonah is one of the most experienced observability leaders on the planet. He was an end user, moved to covering the space as an analyst at Gartner, and has more recently been building technologies at AppDynamics, Kentik, Logz.io, and Aiven while being a maintainer of Jaeger, a CNCF project. He has great insight on the space and I'm excited to hear his perspective. Jonah, welcome to Code RED.
Jonah Kowall: Thank you for having me on, and I think this is a great podcast that you're kicking off. So I'm excited to listen to all the other exciting conversations you're going to be having. Appreciate the invite.
[00:01:03] Chapter 2: Code RED Moment and Insights into Observability
Mirko Novakovic: So we always start with the biggest Code RED moment in your career. I expect you have many of those, but can you remember one that was really stressful?
Jonah Kowall: Yes, definitely. There's probably one that comes to mind. I worked at a startup that was being acquired by a large company, Thomson Reuters, and we had to do a very large data center migration while we were serving a massive amount of traffic. We were actually one of Akamai's top five customers at the time. And we came down to the wire, where we identified that there was some kind of performance issue. Go figure. And no one could figure it out. We spent six hours on it. We were up all night, and I eventually solved the problem by using Wireshark and doing some packet analysis, figuring out the problem was in our switching. It was definitely a very stressful, very long night, but once we solved the problem, the sense of accomplishment was amazing. And we did it: the migration didn't affect any customers at the end of the day, and everything worked out in the end, but it was extremely stressful. So that one probably comes to mind from my end user days.
Mirko Novakovic: Yeah, I can totally understand that. By the way, I still have Wireshark on my notebook. It's a great tool. And I think we first met, as I remember, at AppDynamics, when you had just joined from Gartner. You covered observability for Gartner, or at that time I think it was still called APM, application performance management. You were also one of the creators of the MQ, the Magic Quadrant, at Gartner, building that and interviewing the companies. And then you joined AppDynamics. Let's talk about that a little bit. How was it being on the Gartner side and building that Magic Quadrant?
Jonah Kowall: Yeah. So I covered a lot of different monitoring technologies. The APM Magic Quadrant was probably the most read out of them, but I also did a network performance Magic Quadrant that was pretty popular and covered a lot more of the networking side, just because I did kind of everything having to do with infrastructure. After working at Gartner for over four years and working with the companies, I really liked the leadership and the vision at AppDynamics. I had spent lots of time with many vendors and just found that their vision of the future, of really understanding the business, resonated with what I thought monitoring needed to turn into, which is what observability is. It's about monitoring things and collecting data that are not just performance metrics; they're really business metrics. And today we all do that normally, but ten years ago it was not very common for most organizations. So I think that's one of the big shifts: thanks to OpenTelemetry and other technologies like Prometheus, it's much easier to collect that business data and really look at it together with your monitoring data.
[00:04:25] Chapter 3: Evolution of Observability Technologies
Mirko Novakovic: Yeah, I mean, I was one of the biggest resellers of AppDynamics. I joined very early, and I can just second that. The term business transaction was a really smartly chosen term at that point in time, and the technology behind it was really groundbreaking, I would say, working without manual instrumentation. So I think Jyoti and Bhaskar did a great job in building that core technology, and I also think that every vendor has this core DNA. At AppDynamics, it's clearly the business transaction, and everything else evolved around it. So how did you see the product, and how do you see it evolving?
Jonah Kowall: Yeah. So it started out with that core understanding of the business transaction, but it was really building the warehouse around it that was the most interesting part of the journey. While I was there, we were able to extend that into really more of a data warehouse of business data. There were even companies before AppDynamics that tried to do this, but they were too complex. You probably remember some of them, like OpTier, which was probably the most cutting edge in that concept of really going deep into the business, but it was just too difficult, and the consulting, you know, was very heavy. At AppDynamics, we made it easy to do the collection. And I think now with OpenTelemetry, it's much further ahead in terms of enabling developers to collect the data that's relevant to their business. That's why OpenTelemetry is such an important change that's happened in observability over the last five years or so.
Mirko Novakovic: I totally agree. And I have to say, after I founded Instana, we had a proprietary agent, an agent that was really built around auto-discovery and auto-instrumentation, doing a lot of magic on the agent side. And when OpenTelemetry came out, and at first it was OpenTracing, I wouldn't say that we were scared, but it was kind of scary, and a really transformative change for this whole industry: the magic of the agent was now public domain, right? It was democratized by this project. And it changes everything. It really changes everything. And as I know, you are part of the community; you are a driving part of some of the projects. So tell me, how did you see the project the first time? What was your thought?
Jonah Kowall: In the beginning, we actually worked on something called trace context. And I know Instana was involved, all the vendors were involved, where we tried to make interoperability better between our proprietary agents. So that was a good first step in trying to normalize things between proprietary solutions. And you also had OpenTracing and OpenCensus, which were two projects that took different angles at trying to solve the problem. The idea of OpenTelemetry was: let's all work together and make these things standardized. So I was involved in a lot of the beginning parts of OpenTelemetry and how that came together, just to make sure it was done the right way. And then all the vendors obviously invested huge amounts of resources to make it happen. But when you look at how much engineering the large vendors, and I'm sure you at Instana were the same, spent on agents, there were probably 5,000 to 10,000 developers across all the companies just building agents. Instead, in open source, we have probably 10% of that, if not less, working on it.
Jonah Kowall: And everyone has the same data and everyone has the same standards, and it's in fact much better for developers with OpenTelemetry than it was with proprietary agents and SDKs and all of the other things. So I think that was a big change. But as you said, it's really disruptive to the vendors that grew up in a different time. They're really struggling with how to deal with this new type of data. How do we charge for this new type of data? Is the value different? Because, as you know, one of the things that makes APM different from distributed tracing tools like Jaeger (I'm wearing my Jaeger shirt today) is profiling: the depth of being able to go into the code and really understand it. That is not something OpenTelemetry has today, but something the community is working on. The real question is, do you need that in a true microservices architecture or not? It's really questionable. And the depth of profiling is what separates an APM tool from a pure distributed tracing tool like Jaeger.
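As a concrete illustration of the trace context work Jonah mentions: the W3C Trace Context specification standardized a traceparent header that any vendor's agent can read and propagate, with the layout version (2 hex), trace-id (32 hex), parent-id (16 hex), flags (2 hex). A minimal parsing sketch in Python:

```python
def parse_traceparent(header: str) -> dict:
    """Parse a W3C traceparent header into its components."""
    version, trace_id, parent_id, flags = header.split("-")
    # An all-zero trace-id or parent-id is invalid per the spec.
    if len(trace_id) != 32 or trace_id == "0" * 32:
        raise ValueError("invalid trace-id")
    if len(parent_id) != 16 or parent_id == "0" * 16:
        raise ValueError("invalid parent-id")
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        "sampled": int(flags, 16) & 0x01 == 1,  # lowest flag bit = sampled
    }

print(parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"))
```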
[00:09:34] Chapter 4: OpenTelemetry's Impact and Challenges
Mirko Novakovic: Yeah, absolutely. And I think what you mean is, if you think back to the AppDynamics times, most applications at that time were basically running on Java or .NET. In Java you had like two or three application servers, like WebSphere, WebLogic, JBoss, or Tomcat, and then you mostly had a relational database behind them. So most of the core code was running inside of one Java virtual machine, and looking inside was really valuable, because most of the problems were inside. And now with microservices, I think that's the thing: when you split up that code into, I don't know, 50 services, then it's not so much about the details of each individual service anymore. It's more about the connections and the interaction between those services, which is what distributed tracing is really good at. Profiling is more focused on the inside of one service. I think both are valuable, but I totally agree that in a distributed world, distributed tracing is the more powerful tool compared to profiling.
Jonah Kowall: Yes, definitely agreed. But it also creates a data explosion, where you have much larger sets of data because of the number of transactions that are occurring. We deal with this problem in Jaeger all the time, where people have a trace that can potentially have 10,000 or more spans, meaning little transactions happening in one trace. And they say, why is my browser so slow when I load this? It's like, well, you know, 10,000 different spans on a trace, and some are even bigger. It's hard to visualize that; it's hard to really make sense out of a single transaction like that. So it can get really crazy in terms of people's architectures and the way the transactions flow.
Mirko Novakovic: Yeah, absolutely. But just to second what you said about the vendors and the agent: it's no longer about comparing performance, overhead, and so on. Now, I think the whole agent side of the story is really covered by OpenTelemetry, and I can say there's zero chance we will develop any kind of proprietary agent anymore. We will 100% rely on the OpenTelemetry agents, which, now five years after the foundation of the project, are pretty stable. They support traces, metrics, and logs for most of the languages. As you said, at Instana we had probably more than 50% of our developers working on agent technology, for the different languages and different technologies. So now this is solved. It's good for the end user, but it's also good for the vendors. The focus is just shifting more towards what you do with the data, and it's no longer about how you get the data.
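For readers who want to see what relying on the standard SDK looks like, here is a minimal sketch using the OpenTelemetry Python SDK. It assumes the opentelemetry-sdk and OTLP gRPC exporter packages are installed; the service name and endpoint are placeholder values:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# service.name is an OpenTelemetry semantic convention attribute.
resource = Resource.create({"service.name": "checkout"})
provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("charge-card"):
    ...  # business logic; the span is batched and exported via OTLP
```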
[00:12:32] Chapter 5: Visualization and User Experience Innovations
Jonah Kowall: Exactly. It becomes more of a classical analytics problem and less of a computer science problem of how do we, you know, extract information out of something that's running in production, which used to be the big problem to solve. So it's kind of shifted away. But one of the things that I had a lot of respect for at Instana, that you built, is that visualization of the data center, being able to look at it like a 3D model. That is the type of thing we can use to help push the envelope: making sense of data is really about how you visualize it, how you help people grasp the complexity and really create that additional value. I know that probably wasn't a feature that people used day in and day out, but in a demo, that's the kind of thing that makes people say, wow, that's really, really impressive. We had the same thing at AppDynamics with the flow map. It was the first thing that people saw, and they said, wow, that's what I want. Do you need it every day? Do you use it every day? Not really, but it gives you a view of the data that's really unique and differentiated.
Mirko Novakovic: No, absolutely. I think on the UX side, there's a lot of room for innovation. As you said, if you have a trace with thousands of spans, how do you visualize that? And to be fair, if you look at the industry today, a lot of the visualizations are copy-paste. There's not too much innovation. If you take the trace views of ten different vendors, sometimes the colors are different and the nodes look a little bit different, but there is not too much innovation on the UX side. That's definitely something we see as a really interesting part of innovation for observability: how do we visualize all that data, because there will only be more of it, and how do you get the right information out of that data?
[00:14:30] Chapter 6: AI's Role in Observability
Jonah Kowall: Yeah, definitely. And there have been a lot of attempts at this; obviously we're going to talk a little bit about AI and how that helps solve this problem. But even if you go back quite some time, companies like Dynatrace, which have amazing products, started building an intelligent agent that they called Davis many, many years ago, and they've evolved it over time with their own models to try to make it, you know, think a little bit ahead and start helping the user. Does it work that well? It's okay. But it's definitely a great selling point, because it can help augment your team and make your engineers solve problems faster. That whole concept that they've been working on for a decade or even more is something that's now starting to become mainstream thanks to LLMs.
Mirko Novakovic: Absolutely, but also thanks to other AI technology that is used. And I agree, I think Dynatrace has really done a great job for years now building that technology. But before we talk about LLMs and what they could do, I would also bring up the term AIOps. Maybe when you were at Gartner you had a better understanding of what that term really means, but I always struggled to really understand what AIOps meant. There was a huge hype in the industry that it would solve a lot of problems and automate a lot, and I think it never really delivered. My feeling is that the term still exists, but it's kind of gone. It's no longer the big thing everybody's talking about.
[00:16:17] Chapter 7: The Rise and Fall of AIOps
Jonah Kowall: Yeah. So we had a lot of debates internally, because that research came out while I was at Gartner, and one of my colleagues kind of spearheaded it. The idea behind it is to take a bunch of AI and machine learning technologies and apply them to operational data in general. Examples of that could be things like clustering log messages that look similar. That's pretty basic; it's a good feature, it makes sense, and you're using some machine learning. But it could also be correlating events together. A lot of organizations have a lot of monitoring tools, and they struggle with making sense out of all the noise of the alerts. When you have a major Code RED outage, you probably get alerts from four or five different monitoring tools. Which one do I act on? Which one is the most important? Where's the problem? You're just flooded with alerts and issues and you don't really know how to deal with it. These technologies still exist today, but it was never really a market; it was more like a set of capabilities inside other tools. You'd see them in APM tools, in event management tools, in log analytics tools, in security tools. In general, it was about making sense out of the noise, and today, with AI and some of the models that we have that we didn't have even a couple of years ago, I think we can do a much better job as an industry in solving some of those problems.
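To make the log-clustering capability concrete, here is a rough sketch of the basic idea: mask the variable parts of each message so that structurally identical logs group together under one template. The masking rules are illustrative; real tools use far more sophisticated template mining:

```python
import re
from collections import Counter

def template(msg: str) -> str:
    """Replace variable tokens so similar messages share one template."""
    msg = re.sub(r"\b\d+(\.\d+)*\b", "<NUM>", msg)   # numbers, IPs, versions
    msg = re.sub(r"\b[0-9a-f]{8,}\b", "<HEX>", msg)  # long ids and hashes
    return msg

logs = [
    "user 1042 login failed from 10.0.0.7",
    "user 2211 login failed from 10.0.0.9",
    "disk usage at 91 percent on node 3",
]
clusters = Counter(template(line) for line in logs)
for tpl, count in clusters.most_common():
    print(count, tpl)  # the two login failures collapse into one cluster
```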
[00:17:53] Chapter 8: Data Management and Warehousing Evolution
Mirko Novakovic: Yeah, I totally agree. But let's start with the data problem. You mentioned the warehouse at AppDynamics, and I really remember when this whole warehouse thing started. It sounds so obvious today, but only about ten years ago, I think most of the vendors and most of the tools in the APM space used relational databases at the back-end, a MySQL or Postgres. And then, if I remember correctly, it was kind of a Hadoop-based model at AppDynamics at the beginning. So the first thing was building a warehouse, which, if you look at it today, is pretty much conceptually what people still use: splitting up the data, no indices, and doing a MapReduce when you search and analyze the data. So I think it was pretty well thought through, but at the time it was really complicated, because everything was on-premise, or mostly on-premise, so you had to manage that complicated stack on the customer side. After that, most of the vendors moved to the cloud, and I think today most of the vendors are 100% cloud-based. And with all the technology shifts there, cheap S3 storage and different database models like Clickhouse and others, that has really created the basis for what you can do today: keeping a lot of the logs, metrics, and traces and being able to analyze them.
Jonah Kowall: There was a lot of evolution. I would say Hadoop was back more in the days of OpTier and those companies, but AppDynamics and others adopted Elasticsearch, which today is OpenSearch, and that also has challenges with scale and cost. So then people moved on to other kinds of warehousing technologies that are more efficient, like Clickhouse. The challenge you run into is that a lot of those systems that are great at storing events and unstructured data are not great at dealing with metrics. So often you have metrics stored differently than, let's say, traces. We deal with this in Jaeger today too, where we store metrics in Prometheus and our traces are stored in Cassandra, OpenSearch, Elasticsearch, or Clickhouse. That's one of the challenges that people deal with. Clickhouse is so fast it can be used for metrics, and it works well, but it doesn't really conform and can't do the same kind of queries that you can do with Prometheus, which everyone's really happy with today. So yeah, there are always challenges there.
Mirko Novakovic: As a side note, we actually combined both. We basically took the PromQL layer and put it on top of Clickhouse, so that you can perform normal PromQL queries against the Clickhouse layer. So we have all the data in one warehouse, which especially allows us to use the semantic conventions, which I think are one of the really interesting parts of OpenTelemetry, too. Because, I mean, you remember that with tagging, everybody was using tags differently, even inside of one customer. One developer names the hostname host_name, the next hostname, the next just host. And now we have a standardization across hundreds of different technologies and tags that you can use to aggregate and create context, which is a really powerful feature of observability.
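A small sketch of the normalization problem semantic conventions solve: map ad-hoc tag spellings onto one standard attribute key so aggregation works across teams. The alias table here is illustrative; host.name is the actual OpenTelemetry semantic convention key:

```python
# Illustrative aliases; in practice each team invents its own spellings.
ALIASES = {"host_name": "host.name", "hostname": "host.name", "host": "host.name"}

def normalize(attrs: dict) -> dict:
    """Rewrite known aliases to the semantic-convention key."""
    return {ALIASES.get(k, k): v for k, v in attrs.items()}

spans = [
    {"host_name": "web-1", "latency_ms": 12},
    {"hostname": "web-1", "latency_ms": 840},
    {"host": "web-2", "latency_ms": 15},
]
for span in spans:
    print(normalize(span))  # every span now aggregates under host.name
```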
[00:21:38] Chapter 9: Natural Language Processing in Observability
Jonah Kowall: Yeah, totally agree. It's definitely a good approach to solving that problem, which definitely exists. But languages are always a challenge. I also work on OpenSearch, which is the open source version of Elasticsearch, and it has a very open query language processor where you can query it not only in the Lucene query language, but also in SQL and natural language, and you can actually combine them. So I do think that flexibility in query language helps users. And ultimately, if we put an LLM in front of it, which we actually have on OpenSearch, you can let the user just ask questions, and it translates them to the query and pulls back the data. So it's great to see that innovation happening.
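As an illustration of the SQL interface Jonah describes, here is a hedged sketch of querying OpenSearch through its SQL plugin endpoint. It assumes the plugin is enabled, and the host and index names are placeholders:

```python
import requests

# OpenSearch's SQL plugin accepts queries at _plugins/_sql;
# "logs" is a hypothetical index used for illustration.
resp = requests.post(
    "http://localhost:9200/_plugins/_sql",
    json={"query": "SELECT level, COUNT(*) FROM logs GROUP BY level"},
)
print(resp.json())  # aggregated counts, no Lucene/DSL syntax required
```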
Mirko Novakovic: You just mentioned natural language as a way to ask the right question of OpenSearch, and I like that idea. But I also see a few challenges. I do think that LLMs and natural language will become one of the front ends, or modes, for querying an observability system, because it just makes sense. But I also see that sometimes the queries are really complicated and kind of mathematical, and I think it's sometimes even easier, if you know the query language, to write the query in that language than to try to put it into natural language, because it's just mathematically complicated. How do you see that? Do you think it will be a system that mostly replaces the query languages for users, or is it just an additional helper?
Jonah Kowall: I think you'll probably see both. I've in fact seen both of those approaches taken. And when you play around with LLMs and ask them math problems, they're actually pretty good at it. The nice thing is that they visualize the math for you and give you the logic of what they went through to get there, which I like. The challenge with a calculator is that it can give you a wrong answer and you don't know; with an LLM, it explains what it's doing and how it got to the answer, which I like, because then I can say, oh, you missed this, or maybe I missed telling you about a point that I wanted you to incorporate into that formula or calculation. So I think, based on the user's comfort level, query languages are always going to be there. But going back to the example: if I speak PromQL and your system speaks Clickhouse, the LLM can translate that query for me. I don't necessarily need to write the code to do it. There are also ways where, if I like the Influx query language and I'm running on Prometheus, I could query in the Influx language and get the answers back using an LLM, for example. So I think there are a lot of use cases, all the way from technical users who prefer a particular language to business users who don't know a query language and don't care to learn one.
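A sketch of that translation pattern. The complete() function is a hypothetical stand-in for whatever LLM client you use, not a real API; the prompt and example queries are likewise illustrative:

```python
def complete(prompt: str) -> str:
    """Hypothetical LLM call; wire up your actual client here."""
    raise NotImplementedError

def translate(query: str, source: str, target: str) -> str:
    """Ask the model to rewrite a query from one language to another."""
    prompt = (
        f"Translate this {source} query into {target}. "
        f"Return only the query.\n\n{query}"
    )
    return complete(prompt)

# e.g. translate("rate(http_requests_total[5m])", "PromQL", "ClickHouse SQL")
```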
[00:24:51] Chapter 10: Choosing Query Languages and Their Benefits
Mirko Novakovic: By the way, you can imagine we had a long discussion about which query language we would use for Dash0. There are many options. You can take something like a Kusto type of language, one of these pipe languages, which is kind of nice. You can use SQL, you can use PromQL, or you can build your own query language. And one of the interesting arguments for PromQL was that there's a lot of documentation, Stack Overflow, and other material available, which actually makes LLMs really powerful with it, because they learn a lot about it. I just got feedback from a customer who said: when I do PromQL queries, I use ChatGPT and it works pretty well. With others it's different. For LogQL, for example, they said there are a lot of hallucinations, because there's just not as much documentation and material available on the internet. So this will become a competitive advantage: using a query language that has a lot of documentation and information available.
Jonah Kowall: Yeah. And I think you'll see other vendors create small models, with their own proprietary learning, to augment the large models. Going back to Dynatrace: because they've been building their bot for so long, they actually have a huge knowledge base behind it of things they've learned about applications over the years, and that becomes kind of their small model that helps them answer specific things. I think that pattern is going to repeat itself, not just in observability but in general, where the small model that you decide to build, which has your proprietary information or perspective, is going to make the value substantially higher than what you would get from a public LLM trained on general purpose data.
[00:26:43] Chapter 11: Tailored AI to Enhance Observability Tools
Mirko Novakovic: Absolutely. You could even train it for a single customer, right? On customer-specific things.
Jonah Kowall: Yeah. And you'll see that. It could crawl their internal documentation and build a model on that and help answer questions in context. There are a lot of great things you could do there operationally, like asking who owns an application or which team owns an application, right in your observability tool, and it might pull that data from the knowledge base of the company that exists in that private model. So there are a lot of things that could really shortcut some of our troubleshooting, which is often gated by people and Slack messages, the how-long-does-it-take-for-this-person-to-reply-to-me kind of thing, just to do the discovery.
[00:27:32] Chapter 12: Innovative Observability Solutions and AI Applications
Mirko Novakovic: Yeah, totally. For us, it's a very interesting topic. And there are some vendors, like Flip AI, who are really focused on this and have also published a few interesting papers on the topic. So I think the space will rapidly evolve and change a lot, especially the user experience, troubleshooting, pattern detection, and so on in observability. What I also like is that you can do things even without AI that have a lot of value for the customer. I'm a big fan of the BubbleUp feature of Honeycomb, for example, which is super powerful. If you select something that seems wrong, say slow requests, it will compare that set of data against the rest of the data to see what is different between the two sets, and it will pinpoint it. For example, it would say: all the slow requests have the country code China and the rest do not, so there seems to be a problem in China. And this is just comparison; it's basically mathematics and not really AI, but it's really powerful. So I do think we shouldn't forget about really solving some issues for the user.
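Here is a back-of-the-envelope sketch of that comparison idea: count how often each attribute value appears in the selected (slow) set versus the baseline, and surface the biggest differences. The data shape is illustrative:

```python
from collections import Counter

def bubble_up(selected, baseline, key):
    """Rank attribute values by how much more common they are in `selected`."""
    sel = Counter(s[key] for s in selected)
    base = Counter(s[key] for s in baseline)
    diffs = {
        v: sel[v] / len(selected) - base[v] / len(baseline)
        for v in set(sel) | set(base)
    }
    return sorted(diffs.items(), key=lambda kv: -abs(kv[1]))

slow = [{"country": "CN"}, {"country": "CN"}, {"country": "DE"}]
fast = [{"country": "DE"}, {"country": "US"}, {"country": "DE"}, {"country": "US"}]
print(bubble_up(slow, fast, "country"))  # CN stands out in the slow set
```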
Jonah Kowall: Yeah. And we have a compare-traces feature in Jaeger that's also very useful and lets you do similar things. It highlights the differences in the traces. So you could be comparing different traces from the same microservice: one is slow, one is not, show me what's different, and it will basically highlight the differences there.
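A simplified sketch in the spirit of that feature: aggregate span durations by operation name in each trace and diff the totals. Real traces carry tree structure, which this flattens for brevity:

```python
from collections import defaultdict

def by_operation(trace):
    """Sum span durations per operation name."""
    totals = defaultdict(float)
    for span in trace:
        totals[span["operation"]] += span["duration_ms"]
    return totals

def diff(slow_trace, fast_trace):
    """Rank operations by how much extra time they cost in the slow trace."""
    slow, fast = by_operation(slow_trace), by_operation(fast_trace)
    ops = set(slow) | set(fast)
    return sorted(
        ((op, slow.get(op, 0.0) - fast.get(op, 0.0)) for op in ops),
        key=lambda kv: -kv[1],
    )

slow = [{"operation": "db.query", "duration_ms": 900},
        {"operation": "render", "duration_ms": 40}]
fast = [{"operation": "db.query", "duration_ms": 25},
        {"operation": "render", "duration_ms": 38}]
print(diff(slow, fast))  # db.query accounts for almost all of the gap
```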
Mirko Novakovic: Oh, I love that feature, by the way. It's also pretty unique. I would have loved to have that at Instana: a comparison feature for traces, right there in the UI. I think you did a really good job in Jaeger showing the differences between traces in one trace view. Yeah, I like it.
Jonah Kowall: Yeah. At AppDynamics there was some similar functionality, but we just didn't spend a lot of time enhancing it after the early days. It would even compare things like static configuration values, because one of the big challenges back then, and it still happens even with microservices, is that one of the services is misconfigured. So you see a random slow request, and it's because that request happens to hit the one load-balanced instance that has the problem. Those things are really tricky to figure out when something was manually configured or misconfigured on one of the nodes. It's really hard to track those down, and they can be really frustrating to deal with operationally. So comparison tools help, where you say: oh, everything slow seems to be going to this host, that's where the issue seems to be.
[00:30:23] Chapter 13: Future Trends and Innovations in Observability
Mirko Novakovic: Yeah, absolutely. So where do you see the big innovations in the next few years in the observability space?
Jonah Kowall: I'll probably break it down into three main areas. The first one is simplicity: how do we make it easier? All of the tools are very complex. The second one, which is somewhat related, is the complexity of OpenTelemetry itself. We get so many questions on Slack about configuration: why doesn't this work, how does this work, what's the best practice for this processor, and it goes on and on. Questions of config and best practices are just not well known. So that's the second challenge: managing OpenTelemetry as it gets bigger and more complicated. People need help, they don't want to do this stuff themselves, and that's why they go to vendors. And the last one is obviously related to AI and machine learning, where today everything is about LLMs. But I do think there's going to be significant enhancement in time-series-based models as well, where we'll have better predictive analytics and better types of analyses on time series, which I think are long overdue. Sure, we can do statistics; it's easy to do that in Prometheus. But how do you take it to the next level, where you're comparing lots of different metrics together and asking questions like, where is the anomaly, or why is this anomaly happening? I think we need much better models on time series. I've seen an early-stage attempt at something like a large language model for time series out there, and that's an area I'm going to keep watching, because I think it can make a big difference. Those are probably the main three trends that I would point out.
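For contrast, here is what the plain statistics Jonah calls easy can look like: a robust outlier check on a latency series using the median absolute deviation, which, unlike a mean/stdev test, is not skewed by the outlier itself. Anything beyond this, seasonality, multivariate correlation, forecasting, is where better time-series models would come in:

```python
from statistics import median

def anomalies(series, threshold=3.5):
    """Flag points whose robust z-score (via MAD) exceeds the threshold."""
    med = median(series)
    mad = median(abs(x - med) for x in series)  # median absolute deviation
    return [(i, x) for i, x in enumerate(series)
            if mad and abs(x - med) / (1.4826 * mad) > threshold]

latency_ms = [21, 22, 20, 23, 21, 22, 250, 21, 20, 22]
print(anomalies(latency_ms))  # only the 250 ms spike is flagged
```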
Mirko Novakovic: Yeah. I think that's the interesting part of the whole observability space: in, I would say, five-to-ten-year cycles, the whole industry reinvents itself all over again. And I can see that now, with the technology shift, with OpenTelemetry, with LLMs. I think there are so many possibilities to really change things and make troubleshooting easier. Coming back to your initial story: you have this stressful moment where your system is not working and everybody's looking at you, and basically what we want to do is help you fix that issue and find out what the problem is quickly. I think we have a really good opportunity here to make this better.
Jonah Kowall: So the one thing that we didn't talk about at all is cost. The other trend that I think is out there today is: how do I lower my cost? These pipeline tools, I think, are doing great things to help make observability data more compact, which obviously reduces cost. And I think that's another big trend with these tools. You've probably seen the same Twitter discussions that have happened around, what was it, the $20 million Datadog bill or something crazy, or 30 million?
Mirko Novakovic: 60.
Jonah Kowall: 60 million. Sorry, I was being conservative. So I think everyone deals with cost. And cost is a big driver, especially in economic times where people are being more conservative and cautious. So I do think that there's an opportunity to greatly reduce costs by reducing data and moving to tools that are more cost effective and more efficient. So that's the other thing we didn't talk about. That's a clear macro trend.
[00:34:15] Chapter 14: Addressing Costs in Observability
Mirko Novakovic: It's a very interesting category. Tools like Cribl, which are really taking off, are basically there to reduce the data that you put into the tools. And I was discussing that; my theory is it's basically an innovator's dilemma. If you are a vendor and you do $2 billion in ARR, and now you put out a feature that reduces the amount of data by 30%, you are churning ARR. So you as a vendor basically can't do that. That created the opportunity for tools like Cribl to sit in front like a proxy and do that job. I think it is very interesting that this is not really covered by the vendors.
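A toy sketch of the pipeline idea: a filter that sits in front of the backend and cuts volume before it gets stored and billed. The rules here, drop DEBUG and sample INFO at 10%, are purely illustrative, not what any particular product does:

```python
import random

def reduce_volume(events, info_rate=0.1):
    """Drop DEBUG, sample INFO, pass everything else through untouched."""
    for e in events:
        if e["level"] == "DEBUG":
            continue                              # drop entirely
        if e["level"] == "INFO" and random.random() > info_rate:
            continue                              # keep ~10% of INFO
        yield e                                   # WARN/ERROR always pass

events = [{"level": lvl, "msg": f"event {i}"}
          for i, lvl in enumerate(["DEBUG", "INFO", "ERROR"] * 5)]
kept = list(reduce_volume(events))
print(f"kept {len(kept)} of {len(events)} events")
```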
Jonah Kowall: Well, I mean, Datadog does have a similar service. They've had it for a while; I haven't heard a lot about it. Obviously Chronosphere acquired Calyptia, the folks behind Fluentd and Fluent Bit, and they want to build that. But I do think that if you're an innovator, and AWS does this all the time, whether it's coming out with new processors or new storage or new services, you have to disrupt yourself, because otherwise someone else is going to do it. So I think it's shortsighted of the legacy vendors to not think ahead, not think about the problem, and leave the door open. And Clint and his team at Cribl have done a great job in solving the customer problem. It's about the customer; it's not just a vendor thing. I think a lot of great companies are not afraid to disrupt themselves, and if they don't keep doing that, they're going to end up having to acquire these technologies over time.
[00:36:00] Chapter 15: Conclusion and Closing Thoughts
Mirko Novakovic: Absolutely. But I am pretty grateful that they leave the door open because it creates opportunity to build something new and innovate. And so this is one of the angles that we will take here at Dash0. But you're right. I think at the end of the day you have to disrupt yourself in this very fast changing world. And it was really fun, Jonah, talking to you and having you on this podcast. It's obvious that you have a lot of knowledge in this space. And yeah, maybe we do this another time in the future.
Jonah Kowall: I'm happy to do that and appreciate you having me on. And best of luck with the new podcast.
Mirko Novakovic: Thank you.