[00:00:00] Chapter 1: Introduction to the Podcast and Guest
Mirko Novakovic: Hello everybody. My name is Mirko Novakovic. I am co-founder and CEO of Dash0, and welcome to Code RED: "code" because we are talking about code, and RED stands for Request rate, Errors and Duration, the core metrics of observability. On this podcast, you will hear from leaders around our industry about what they are building, what's next in observability, and what you can do today to avoid your next outage. Hello everyone! Today my guest is JJ Tang, co-founder and CEO of Rootly, an enterprise incident management solution that is used by companies like LinkedIn and NVIDIA. I'm happy to have you on the show today.
[00:00:46] Chapter 2: JJ Tang’s Code RED Moment
JJ Tang: Hey Mirko excited to be here.
Mirko Novakovic: No, I'm super excited. I just told you before the show: it's the 19th episode, and it's the first time that I didn't know the guest before, so I get to know you on the show. That's great. Before I ask you a ton of questions, I will start with what is always my first question: what was your biggest Code RED moment in your career?
JJ Tang: Biggest Code RED moment? It's a great question. It was when I was working at Instacart previously, which is where Rootly was born out of. I was there for the very beginning of Covid. Overnight, the lockdowns started in Canada and the US, and what had been a luxury service, grocery delivery to your door, suddenly became this essential service. Overnight, our demand grew by 600 or 700%. Everything began crashing; our servers were unable to handle the load. Things that you didn't think would break at that scale started to be overloaded, and we were dealing with ten, 20 incidents at any given second, all of which were mission critical. That was probably the most insane time of my career, working 24/7 for a month, two months straight. The moment that stands out the most to me: everyone during Covid wanted to buy toilet paper. Everyone was obsessed with buying toilet paper. The shelves were empty, and we were sending Instacart shoppers to these grocery stores trying to buy toilet paper and nothing else, and they were coming back empty-handed. So we started trying to develop a real-time inventory algorithm to say, hey, don't send a person here. But there was no time, everyone was ordering. So what I did instead was we just deleted toilet paper off the catalog. For three months, no one could buy toilet paper. But that was certainly a pretty Code RED moment for us.
Mirko Novakovic: And by the way, I lived in San Francisco during that time, and I remember trying to buy toilet paper. I found a store right around the corner, and they were selling one roll of toilet paper for, I think, $8. At that time, if you had toilet paper, you could make a fortune, right? It's so interesting. And what I also found interesting: when I moved back to Germany, they had the same problem, no toilet paper. I never read a good summary of why that happened. Why was toilet paper the thing that people bought when they went into a lockdown? That's pretty interesting.
JJ Tang: The funny story also involves my co-founder and CTO, who I met at Instacart. When I went over to his house, I looked in his bathroom, and the cabinet was just stocked to the brim with toilet paper. And I said, dude, you were part of the problem. You're the one that caused this incident.
Mirko Novakovic: That's a Code Red.
JJ Tang: Well, everyone else was buying it.
[00:03:42] Chapter 3: Founding Rootly Amid Covid
Mirko Novakovic: Great. Okay, JJ, I saw that your company was founded in 2021, so essentially you founded it during Covid. Was that one of the responses to what you had experienced during that time? How did you come up with the idea of building a new incident response tool?
JJ Tang: Before starting Rootly, I worked at Instacart myself, with my co-founder Quintin. He was the first SRE there, a very early employee. We knew that our incident response process was not great, and it was very much exacerbated by the onset of Covid. And who better to complain to about your incident response process than the first SRE at Instacart? I was on the product side; I owned about a quarter of our revenue and built Instacart's 0-to-1 business. So Quintin and I began hacking together some of the first versions of Rootly on the side as an internal tool. We saw good adoption throughout the organization, and we honestly fell in love with it. So we started working more and more on it on the weekends, and just started talking to other companies, and very quickly realized the way that we were solving this problem was not unique to Instacart; it would be applicable to many other companies. Around the same time, my manager, or my manager's manager, came to me and told me to stop building this. Incident management tools were not our core competency as a grocery delivery company. Go buy something, go see what's out there and figure it out. And when doing my search, I realized as well that no one was really solving it the way that we were, or with the excitement that we had about it. So we decided to quit and start the business despite being in Covid. We worked pretty remote. The company is headquartered out of San Francisco, where my co-founder lives; I'm in Toronto, Canada. So it was very much born in this remote world. And luckily, it's gone decently well so far.
[00:05:52] Chapter 4: Distinguishing Rootly in the Market
Mirko Novakovic: Let's talk a little bit about the tool. I checked out the website and read everything I could find. There are obviously some big players out there like PagerDuty, Opsgenie, and now incident.io, which are in the same category of incident management. I saw you have incident management, on call, and status pages, which could also be seen as different types of tools, but you see them more and more in one platform, right? So talk about your investigation: what was the thing that was missing, and what did you do differently? I saw the tool that you have built is basically Slack native, right? But what was the thing that you were missing, and that made Rootly different?
JJ Tang: We knew from the beginning we wanted to be the end-to-end reliability platform that teams and companies depended on whenever things went wrong. But we also knew the first step in doing so was that we had to build a wedge into a company's existing stack. Just like you mentioned, there were many incumbents out there. The Datadogs of the world, the PagerDutys of the world existed, and companies trusted and relied on them. The gap that we first identified in the market was that a tool like PagerDuty or Opsgenie does a very good job at waking you up in the middle of the night. It's a pretty expensive phone call, but it'll wake you up, and it does it fairly reliably. Everything afterwards, how you orchestrate, communicate, and collaborate on the incident, is kind of choose your own path. Every company was doing it somewhat differently, and the problem these organizations ran into was that it was very hard to drive consistency or reduce their MTTR effectively. So we saw the opportunity to start from a collaboration standpoint. The company was born as a more Slack-native tool, and what we told companies was: hey, we help you manage incidents in Slack. So we were this very, very hyper-focused Slack tool, and we began building a ton of traction. We started working with companies like Webflow and Repl.it and Patreon, and soon we started working with enterprises like LinkedIn and NVIDIA and so on.
[00:08:08] Chapter 5: Rootly’s Incident Management Process
JJ Tang: And over time, it became important to us that we needed to build a platform. We needed to build more horizontally; we needed to deepen our moat with these organizations. So we started doing the other parts: the insights, the learning, the postmortems. And subsequently, last year, we launched our on-call product, which is our fastest growing product. So now we've built this end-to-end reliability platform, but it's still very reactive. The area that we're investing in deeply now, and effectively what all of the roles that we're hiring for are about, is how we turn that reactive reliability into more proactive reliability and make the shift in that direction. And when I think about some of the other tools in the space, like the ones you mentioned, I think it's important to not view incident response as a single-player game. The second you view it as this SRE-only task for on-call engineers, you lose sight of how these incidents are actually run. They're run by engineers, by support people, or customer success, or account executives, or legal, or finance. Everyone gets involved. Focusing on a very multiplayer experience and platform has fueled a ton of our success now. But at the beginning, it was just important to get our foot in the door.
Mirko Novakovic: Yeah, it makes sense. And let me try to understand how this all works. Let's walk the people who are listening through a use case. Let's say you have something like Dash0, not Datadog, and we trigger an alert that says, oh, the number of errors of service ABC is high. Then we would integrate with your solution, and what you would do is orchestrate the incident. So you would open an incident, you would categorize it and say, oh, this is actually an incident. You would probably select something like an owner of the incident, someone who is responsible for it. Then you bring the team together, right? You need multiple people to work on it, and you have a workflow until you fix it. Then you fix it, you have a postmortem, and meanwhile you probably also have to inform your customers by updating the status page, right? So walk us through how this process looks and how Rootly supports it.
JJ Tang: Before companies use Rootly, it looks very chaotic and different and fairly fragmented. But when companies start implementing our tool, they have a very consistent way to respond to incidents that works seamlessly with their whole stack. So, for example, exactly as you were mentioning, we'll get some sort of alert from an amazing observability system like Dash0, and the system realizes that something is broken and that we need to wake someone up in the middle of the night to go fix it. So that alert flows into Rootly, we call your mobile phone, and we say, hey, something is broken. Rootly will wake you up in the middle of the night, and we'll do it very reliably. Once you realize, hey, this alert is real and I need to start responding to it, we will help you create that incident as well. For example, if you're a company that uses Slack, we'll spin up a Slack channel, and around the same time, we also announce to all of the relevant parties and stakeholders, your leadership or other engineering teams, that this is now happening. And while that incident is being spun up, we'll do a few things. We will automate as much of the manual admin work as possible. We will loop in the different teams that need to help you respond to the incident, or maybe you have a communications liaison that needs to drive things like the status page. We'll spin up integrations like a Zoom room for you to hop into and have a conversation around the incident, or create a record inside of Jira to start logging and documenting the incident, or surface any relevant playbooks that the team should be following.
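The orchestration flow JJ describes can be pictured as a small pipeline: an alert arrives, someone is paged, a channel is spun up, and the admin work is automated. The sketch below is purely illustrative; the types and function names (`Alert`, `open_incident`, the task strings) are invented for this example and are not Rootly's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str   # e.g. the observability tool that fired it, like "dash0"
    service: str
    summary: str

@dataclass
class Incident:
    alert: Alert
    severity: str
    channel: str = ""
    tasks: list = field(default_factory=list)

def open_incident(alert: Alert, severity: str = "sev2") -> Incident:
    """Hypothetical sketch of the alert-to-incident orchestration steps."""
    incident = Incident(alert=alert, severity=severity)
    # 1. Wake up the on-call responder (the "expensive phone call").
    incident.tasks.append(f"page on-call for {alert.service}")
    # 2. Spin up a dedicated Slack channel for collaboration.
    incident.channel = f"#inc-{alert.service}-{severity}"
    # 3. Automate the manual admin work around the incident.
    incident.tasks += ["create zoom room", "create jira ticket",
                       "notify stakeholders", "surface playbooks"]
    return incident

inc = open_incident(Alert("dash0", "checkout", "error rate high"))
print(inc.channel)  # "#inc-checkout-sev2"
```

The point of the sketch is the ordering: paging comes first and must be reliable, while everything after it (channel, Zoom, Jira, stakeholders) is the consistency layer the incumbents left as "choose your own path".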
JJ Tang: One of the interesting areas was thinking about how we can tastefully incorporate AI into our tool as well. For example, if you run into, let's say, a website outage that's caused by some sort of memory issue, and we've seen the same incident happen to you three months ago, we will surface that as a related incident so you can start learning from your past, even if you weren't part of that previous incident yourself. We'll let you know. And as this incident progresses, we'll help you do a few things. We'll help you start documenting your post-mortem timeline in real time as it occurs. As new people join the incident, maybe an hour or two hours late because they were having fun recording a podcast, we will help them catch up to speed. They can run a quick command like "slash Rootly catch up", and we will use AI to scan through the incident channel and all of the Zoom conversation and say: here's what's happening, here's who's doing what, here's what needs to happen next. And as you get closer to the resolution of this incident, maybe you say, hey, we need to roll this back. We'll automatically pull in the PR, we'll give all the context related to it, and as you resolve it, we'll mark it as done.
[00:13:29] Chapter 6: Leveraging AI for Incident Response
JJ Tang: And when you mark it as done, we can automatically communicate out to your status pages or whatever you need to do from there. Then you're ready to learn from the incident. So you've been alerted, you've responded, and now you can learn. That learning portion happens inside of our tool or inside of another platform like Google Docs or Confluence; we're not too picky about it. But that postmortem is largely written for you. We can pull all of the context, what we believe were the probable root causes, and the things that we think you might have missed as action items, and surface that to you, and you can say: yep, yep, nope, yep, that looks good. So really, the responder only needs to focus on sharing the key knowledge that they know, not worrying about formatting or generating timelines and all of that good stuff. And of course, we pull a ton of insights around this, so you can see how you're trending and the hotspots in your systems. You can even detect things that are a lot more interesting, things humans generally struggle with: any time I have an incident related to, say, my database, where do people commonly get stuck? What are people asking the most questions about? Where are people getting the most frustrated? Those can be good indications of how you can improve your process.
Mirko Novakovic: You said that you show me related incidents when I have a new incident. How do you utilize the postmortem in that context? When I had this incident and I learned about it in my postmortem, is that linked from the incident, or are you somehow incorporating it into the process? How do you use that postmortem in that incident case?
JJ Tang: That's the great thing about using a single platform to do it all: we have very good visibility into and access to that sort of data. We have access to your postmortem data, where you've described what the ultimate contributing factors were. You've written a detailed description, you have a timeline, you have action items, you have more information and context added on from your review meetings. So that's one data source. The other data source is all of the metadata that you're not documenting as a human. That metadata can be who is joining the incident, or the service dependencies that are being mentioned inside the conversation. Maybe it's not relevant for the postmortem, but it's certainly relevant for us. Or even the live conversations, where people are talking about who they're paging, what we think this service could be doing, the suspected problems. We combine all of that information, and we're able to paint a picture of whether the current incident you're in looks similar to a previous incident. We use AI to do a lot of that matching for us, and we use our existing data; we chunk it in very intelligent ways to present it back to the users. So we'll give the users, or the responders, I should say, a quick summary of what it is and why we think it's related. And we'll also ask them: hey, here were the key people that helped fix it last time. Do you want us to page them and bring them into the incident? And you can say, yes, these are the people I want to help me. What we've noticed over time at a lot of companies, and the reason we built this, is that when they don't know if an incident is related to something else, they waste a lot of time there, and they also just start paging random teams until they find the right one that can help them. We want to shortcut that inefficiency as well.
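The matching step JJ describes, chunking past incident data and ranking it against the current one, can be sketched with similarity scoring. Real systems would use learned embeddings rather than the toy term-frequency vectors below; the function names and data here are invented for illustration, not Rootly's implementation.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Toy stand-in for an embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def related_incidents(current: str, past: dict[str, str], k: int = 1) -> list:
    """Rank past incidents (id -> summary) by similarity to the current one."""
    cur = vectorize(current)
    scored = sorted(past.items(),
                    key=lambda kv: cosine(cur, vectorize(kv[1])),
                    reverse=True)
    return [incident_id for incident_id, _ in scored[:k]]

past = {
    "INC-101": "website outage caused by database memory pressure",
    "INC-102": "tls certificate expired on the status page",
}
print(related_incidents("outage from memory pressure on the database", past))
# ['INC-101']
```

The same ranking, applied over postmortems plus the undocumented metadata (who joined, which services were mentioned), is what lets the tool suggest "here were the key people that helped fix it last time" instead of paging random teams.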
Mirko Novakovic: I remember so many conversations with customers describing this problem: we are a large enterprise, we have one team, they had an issue with a database, they fixed it, and it took a lot of time. Then a different team runs into exactly the same problem. How do they know that a different team in the company has already fixed it? They used Confluence, a wiki page, or something like Notion, but people don't search for it or can't find it. And you're actually fixing exactly that problem, right? If a problem already occurred and was fixed, you will automatically notify me, as far as I understood, using AI, that it already existed in the company, and you will also tell me which people were involved, so I could involve them in helping me. That's actually really cool, I think. Yeah.
JJ Tang: I think the exciting thing is how far you can take something like this. The first iteration that we went through really stares at your existing incident data, everything that happens after that alert is fired. The part that we are investing the most time in now is how we gain insight from all of your different potential sources of information: your logs, your telemetry information from your observability tools like a Dash0, for example, your code repository, all of that. How do we mesh that data together with your incident data and paint an even brighter picture? We want to be able to help you course correct throughout the incident. As time goes on and new information is discovered, the incident could start one way, and you quickly realize it was totally unrelated to X and we think it's actually Y. Over the course of that incident, we want to keep presenting you with relevant context from these data sources. Our thinking is that you're going to be most effective when you're not switching between 30 or 40 different tabs; you're going to be most effective when we can bring that information to your fingertips. In this world of data overload, let us help you parse through that needle-in-the-haystack type problem and find whatever is most relevant to the incident. But of course, it is very much predicated on great data. I think with any AI tool you innovate on effectively one of three things. You either innovate on a better model; I don't think I'm going to build a DeepSeek tomorrow. You innovate on compute; I don't think we're going to build an NVIDIA, although they're a customer. So I need to innovate on better data at the application layer. That's the way that we think about it, and obviously, sources like a Dash0 feeding into our systems are very critical.
If that data is false or inaccurate, no LLM is going to be smart enough to fix the results we ultimately provide.
Mirko Novakovic: I always thought about this, and it also ties into this proactive thing. I'm not sure if you train your models only on the data of one customer. I was always thinking about what you could do if you have a lot of customers and could train something on that data. So let's assume somebody updates to the newest, let's say, Java version, and that creates an incident. Then you have another customer upgrading, and it creates an incident. So you learn: oh, once you update your Java version, you will run into that problem. In that context, you could proactively inform all the other customers: hey, we have seen this with our customers. If you upgrade from version, I don't know, 15 to 16, and you are running this database, then you're running into this incident, and this is how you avoid it. But for that, you would basically need to train the model on the data of all the customers, which is sometimes tricky, right? Whether it's possible or not. How do you see that?
[00:21:10] Chapter 7: The Future of Incident Response with AI
JJ Tang: Yeah. We have a very unique set of data. If you think about when you see something get updated on a status page, let's say Slack goes down and it gets updated on their status page, we've typically known about that for over an hour already, or sometimes even prior to it. The information we've gathered over hundreds and hundreds of customers serves as a really good early detection system. I remember when CircleCI had their incident. I believe it was something related to needing to rotate your API keys; otherwise it was this big security vulnerability. We saw the difference between customers that were able to fix that within ten minutes and customers that struggled to fix it within two hours. But we saw all of that start occurring within our system. The spikes in usage in our system happened before anything was talked about from CircleCI's standpoint. Not to say they're not a great company, but we have an incredible map of the internet, and thinking about how we can leverage that is very tricky, because privacy for us is very important. One of our biggest engineering and product principles, and I'm sure it's probably not too different at Dash0, is that we're in the business of doing the boring things right. When we built on call, we thought about reliability, not the flashy scheduling features. When we build any sort of AI into our tool, we think about enterprise-grade adoption, privacy, and governance. Customers trust us a lot to ensure their data is kept safe and doesn't transit the wire unnecessarily. But I think there is opportunity in a world where you can help each other effectively while keeping everything safe and locked down and incredibly secure. In full transparency, we haven't ventured as much into that territory, but all the information exists now to take advantage of, for sure.
Mirko Novakovic: Same for us. We have never used data of different customers to train models; we always kept data very private to the customer. But I was always curious how you could leverage all that data across different customers. Probably the solution is somehow transparency, right? To allow the customer to select which data they want to share, and they get some kind of benefit for sharing it. They can decide what they want to share, maybe even anonymously: just sharing, hey, I did an update of that version, or I had this outage with CircleCI, I can share that. And for that, I get some extra feature or feedback out of that data. I don't know, I think it only works with transparency, right? You can't just use the data non-transparently to train models.
JJ Tang: I think over the course of the last year, we've seen a lot of companies get into hot water because of that. They assume opt-in on their AI features and such, and in my type of business and yours, trust is everything; without care, it's quickly lost, and I think the business would effectively tumble without it. But we see a lot of interesting use cases, even for ourselves. A winter or two ago, we had recently launched the AI features in our product, and we were using them ourselves first. Because we're in Canada, one of the pipes in our office burst and started flooding the office, and we thought, okay, well, none of us have ever encountered this before. So we declared an incident on our own platform and started asking our copilot tool within it: what should I do? What do I need for this insurance claim? What should I document? What are the first steps? Help me draft an update for my employees that need to come in, draft an update for my HR team, have my recruitment team reschedule some of the interviews that we were doing on site, find a location. It was very helpful in helping us understand an incredibly novel problem that we'd not encountered before. And I can imagine the same applies in the software world that we were talking about as well.
Mirko Novakovic: Yeah, absolutely. But in that case, you actually had a model that was trained with data from people who have experienced that problem outside of your organization, right? This is a problem where a lot of public data probably exists that the model was trained with. But it would be cool if that also existed for concrete incidents in our space. I think it will come, right? The models will get better and better, and they'll have that data included naturally, somehow. But I totally like the idea. So what do you see as the next evolution of incident response and AI? Is it a lot of automation? You talked about being proactive instead of reactive. How do you see that evolving?
JJ Tang: I see it evolving toward the world of proactivity, honestly, kind of slowly. The folks that we work with, largely infrastructure folks, SREs, DevOps and platform engineers, probably no different than at Dash0, tend to be very skeptical about big, wide promises. We certainly don't want to go out there and say, hey, this whole self-healing software stuff, we fixed it. But we effectively run an AI lab internally that tests all of these different models, our own experiments, and the product that we are investing in and building, and we see a ton of promise in how LLMs can be leveraged in a safe way, in a confident way, without trying to promise the whole world. One of the first things that we are really doubling down on is how we can help you conduct real-time investigations very quickly. You can imagine a world where you have 30 different data sources plugged into your system: Dash0, Sentry, other telemetry logs, postmortem data in Confluence, even conversations from your team meetings last week recorded on Zoom. All of that information can play into an incident, and when an alert gets fired, we want to help you understand all of the relevant and right context, the probable root causes that are creating this incident, and what you should be doing next.
JJ Tang: The world of proactivity is quite exciting. I'll give you an example. You can have a very low-level alert come in. Maybe it's a P4; it doesn't look like anything too suspicious or concerning. Let's say your database is under a specific amount of pressure. But the one thing that maybe the human doesn't know, based on their experience, or they just don't see it, is that tomorrow you actually have this big sales event happening, and this P4 can actually end up being a P0. And there are also new system dependency changes that your infrastructure team announced eight days ago, inside of a meeting that was documented somewhere in Confluence, and you didn't realize that, although this alert is quite low level, there's also a new dependency, and that system is not ready for the load that we're going to see tomorrow. Piecing those two parts of the puzzle together is actually incredibly difficult, unless you were there in all of those meetings and you also had the foresight to pattern match it together. But that is something an LLM can do in a split second, and that's the impressive thing that we are getting towards. The results that we've been seeing internally and with our design partners and customers have been very impressive in detecting that. What we are really doubling down on and focusing on is that we need to reveal that chain of thought, how we are getting to the answer, almost like a student showing their teacher the math homework and how they got to the answer, proving that they didn't cheat to get there.
[00:29:40] Chapter 8: Personal and Sectoral Use of AI
JJ Tang: When DeepSeek was announced, the thing that blew everyone's mind was the chain of thought: here's how a machine actually thinks and talks to itself. The part of this system that we're very excited about revealing is how we are getting to these potentially magical answers, and I think only in doing that are you building trust with your customers. But the world of proactivity, I think, is very real. How companies will leverage LLMs will become very real, and all of these skeptics that I've worked with over the last four or five years are very eager to see how real it is. Maybe they're not ready to adopt today or tomorrow, but they certainly want to be proven wrong. Everyone wants to be proven wrong. I haven't seen many people be outspoken that this is completely not useful, because they're seeing real-life applicable use cases of how effective ChatGPT or Llama can be in their day-to-day lives. So they think: hey, maybe the application I'm waiting for isn't there yet, but there's something there, and that something is enough for us to investigate and go deep on.
Mirko Novakovic: I totally like that; I always liked the idea. I was just thinking, because I always said we have a lot of situations where people say, oh, the marketing team ran a Super Bowl ad and my application exploded, and we were not even aware that that was what happened. There are many of those situations, and now you can take this new set of data and actually digest it with LLMs, by having all the meetings recorded as transcripts. Because you are parsing the marketing meeting's transcript, you know that there will be a Super Bowl ad, and now the LLM can match that together. That's amazing. You could see that information and actually be proactive. I like that much more than framing proactivity as some technical automation thing, because I totally agree with you that SREs and engineers don't really like that. They are suspicious about machines taking over control and automatically restarting or doing things without their interaction. Runbooks are okay, but they have to be manually triggered. That's kind of what I've learned in the past: people just don't like too much magic and automation. But the way you described it, getting this different type of information into those systems makes a ton of sense. That's really, really cool.
JJ Tang: Just out of curiosity, how do you use AI in your day-to-day life as you're building a very complex and impressive company? I know you're much smarter than me, so I'd be curious to learn from you.
Mirko Novakovic: At the moment, to be honest, I'm not a big fan of these chatbots in observability tools, or of using natural language to do queries. So what we do is use AI in situations where we can simplify the process for the user. I'll give you an example. You put in an unstructured log that has a status code, a URL, a date in it, but it's just text. What we are doing is using an LLM to parse that log and understand that this is actually an HTTP status code. Then we translate that not only into a status code, making the unstructured log structured; we also translate it into the correct semantic conventions of OpenTelemetry. And that creates context, because now you can match the status code to an error or something. So we use it like that: making these little things work that seem trivial but normally cost a lot of manual work to define those patterns. Then, out of the box, we can create a bigger context for a problem, understand data better, and help the users find the needle in the haystack. Or we use it to reduce trace trees. Traces can be very complex; there could be thousands of spans in a trace. And maybe you have a pattern: something calls and creates 50 database calls, and you see this actually belongs together and it happens 20 times, and instead of 400 spans you only have five. These are the ways we use it at the moment, really to fix day-to-day problems that you normally have and that can be fixed by using an LLM.
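The log-structuring step Mirko describes can be sketched in a few lines. In his description, an LLM infers the pattern of the unstructured log once; the hand-written regex below stands in for that inferred pattern, and the attribute names follow OpenTelemetry's HTTP semantic conventions (`http.response.status_code`, `url.path`). The `error` flag and the example log line are invented for illustration.

```python
import re

# Map extracted fields to OpenTelemetry semantic-convention attribute names.
OTEL_KEYS = {"status": "http.response.status_code", "path": "url.path"}

# Stand-in for the pattern an LLM would infer from the raw log text:
# a URL path followed somewhere later by a 3-digit HTTP status code.
LOG_PATTERN = re.compile(r"(?P<path>/\S*)\s.*?\b(?P<status>[1-5]\d\d)\b")

def structure_log(line: str) -> dict:
    """Turn a free-text log line into typed, semconv-named attributes."""
    match = LOG_PATTERN.search(line)
    if not match:
        return {}
    attrs = {OTEL_KEYS["path"]: match.group("path"),
             OTEL_KEYS["status"]: int(match.group("status"))}
    # With a typed status code, downstream rules can now flag 5xx
    # responses as errors and correlate them with traces.
    attrs["error"] = attrs["http.response.status_code"] >= 500
    return attrs

print(structure_log("GET /checkout/cart returned 503 at 2024-05-01T12:00:00Z"))
# {'url.path': '/checkout/cart', 'http.response.status_code': 503, 'error': True}
```

This is the context-creation point in Mirko's example: once the status code is a typed attribute under a shared naming convention, it can be matched against errors and traces without anyone hand-writing a parser per log format.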
JJ Tang: I think that's a very thoughtful approach. Those are the problems that these engineers are facing day to day. I certainly have been in those shoes and know how frustrating parsing through some of that stuff can be, so I do love that. What's also nice is that, similarly, you aren't just saying, hey, this is a magical wand, it's your end-all-be-all. It's: hey, we built a very core, foundational, amazing observability tool, and because you're working from such great foundations, people are going to be a lot more encouraged to adopt what you have, because you've taken a thoughtful approach to it.
Mirko Novakovic: Though I have to say, at the moment things are moving so fast. If people ask me what observability will look like in two years, it's really hard to say, right? It's really hard to say how far this will go, how much we can do. And it's very exciting. So I'm not scared about it; I'm really excited about all the possibilities. And this conversation again created so much food for thought, by using different data sets as you do. This is super smart.
[00:35:43] Chapter 9: Closing Remarks and Future Prospects
Mirko Novakovic: JJ, this was so fast. It was so interesting to talk to you. Maybe we have to follow up and do another conversation. And I definitely want to create an integration into your tool with Dash0. That would be perfect.
JJ Tang: That sounds like a great plan, and I think we'll do something pretty special there. I appreciate you having me on the show, and I think when we do our follow-up, the landscape will look completely different again.
Mirko Novakovic: Absolutely. Thanks for the conversation. JJ.
JJ Tang: Thank you so much.
Mirko Novakovic: Thanks for listening. I'm always sharing new insights and knowledge about observability on LinkedIn. You can follow me there for more. The podcast is produced by Dash0. We make observability easy for every developer.