[00:00:00] Chapter 1: Introduction to Code RED Podcast
Mirko Novakovic: Hello everybody. My name is Mirko Novakovic. I'm co-founder and CEO of Dash0, and welcome to Code RED: code, because we are talking about code, and RED stands for requests, errors and duration, the core metrics of observability. On this podcast, you will hear from leaders around our industry about what they are building, what's next in observability, and what you can do today to avoid your next outage. My guest today is Luca Forni, co-founder and CEO of Akamas, which focuses on autonomous performance optimization. We will talk later about what exactly that is. Beyond that, we have a very similar history: Luca also worked in consultancy, worked with the major brands in monitoring, APM and observability, and did a lot of performance optimization projects. Luca, happy to have you here on my podcast.
Luca Forni: Thank you, Mirko. Thank you for having me here, it's great to be on this podcast. I really loved and enjoyed all the other episodes, so I'm really excited to be here. Thank you.
[00:01:08] Chapter 2: Challenges in Performance Optimization
Mirko Novakovic: Yeah. And as you know, if you listen to them, the first question is always: what was your biggest Code RED moment?
Luca Forni: As you mentioned, I've been dealing with performance optimization, troubleshooting and so on as a consultant for more than 20 years now. The worst one, I think, involved a mainframe. A customer asked me: hey, we moved the mainframe to a different location to consolidate everything, and now everything has gone crazy on the end-user side, but we can't understand why. In the end, after a lot of troubleshooting, it turned out that moving the mainframe added a little bit of latency to every single call. We're talking about microseconds, but the number of calls to the mainframe was something like 3,000,000 per day. The result was that the front end completely crashed because of all the requests queuing up on the front-end side. It was very hard at that time because, yeah, 15 or 17 years ago there was no APM tool that gave you tracing from the mainframe to the front end. So we had to do that manually, trying to correlate logs, because Splunk was not really in the field at that time. That was the first moment where I said: we need to think about full stack. We need something in the observability world, monitoring, APM, that really gives you all the metrics from the mainframe, from the hardware, all the way to the final end user and the rendering phases of the application, because otherwise you can't really understand how the system works, and you spend a lot of time dealing with these kinds of issues.
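To put rough numbers on the effect Luca describes, here is a small back-of-the-envelope sketch. All figures in it (calls per request, latencies, thread-pool size, arrival rate) are invented for illustration, not data from the actual incident:

```python
# Hypothetical numbers, purely illustrative: a microsecond-level latency increase per
# mainframe call, multiplied by many calls per front-end request, shrinks the capacity
# of a fixed thread pool until requests queue up.
calls_per_request = 50             # assumed mainframe calls per front-end request
added_latency_per_call_s = 800e-6  # assumed extra latency per call after the move
base_request_time_s = 0.200        # assumed original front-end request time
threads = 100                      # assumed front-end worker pool size
arrival_rate_rps = 450             # assumed peak requests per second

new_request_time_s = base_request_time_s + calls_per_request * added_latency_per_call_s
capacity_before = threads / base_request_time_s   # max requests/s the pool can serve
capacity_after = threads / new_request_time_s

print(f"Request time: {base_request_time_s * 1000:.0f} ms -> {new_request_time_s * 1000:.0f} ms")
print(f"Pool capacity: {capacity_before:.0f} rps -> {capacity_after:.0f} rps "
      f"(arrivals: {arrival_rate_rps} rps)")
# Before: ~500 rps capacity > 450 rps arrivals -> stable.
# After:  ~417 rps capacity < 450 rps arrivals -> the queue grows without bound and
# the front end eventually falls over, even though each call only got slightly slower.
```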
Mirko Novakovic: I love that you have been in this space for 20 years and that you talk about mainframes, because now I'm not feeling that old anymore, right?
Luca Forni: Yeah. Yeah.
Mirko Novakovic: I did a lot of projects with customers, especially in the banking and insurance sector, where the mainframe is still part of the back end, or still the main back end, with DB2 and IMS transactions, and you call them from a web front end or whatever, and you need to trace through it. There are not many solutions. By the way, when we sold Instana to IBM, one of the biggest projects was that IBM added all the tracing for mainframes so that they can handle that traffic for their customers. I think it's an amazing tool today, so that you can trace from the front end.
Luca Forni: Exactly.
Mirko Novakovic: To the mainframe in one platform.
Luca Forni: I'm not sure how customers' requirements for observability have evolved in the last five years, because we moved into a different area of observability, maybe we'll talk about that. But yeah, I definitely still remember that this customer wanted to have something that covered all the different technologies, everything, because they needed a clear picture and they needed something that could correlate everything. Because sometimes it's like the butterfly effect: something that happens in a specific part of the data center, or the cloud, or your system, which maybe is distributed, can impact users in a different country, in a different location. And it's very hard to follow the path. Full stack is a matter of having full-stack observability that mixes very old technology with the latest microservices, Kubernetes and cloud-native solutions.
[00:04:48] Chapter 3: Akamas and Autonomous Performance Optimization
Mirko Novakovic: Absolutely. Yeah. But you have an even harder job, right? We observe things, and you say you optimize them, which is the next level from my point of view. I think we will talk about it, but I guess you get the data from an observability tool somehow. When we had some email interaction, in one mail you said you're basically the next-gen Turbonomic, right? I had Ben Nye on the podcast, and Turbonomic also got sold to IBM. So I guess it's time for a next-gen player in this space. And I know how Turbo works, right? It was initially built especially for VMware and optimizing virtual machines. So it's a full-stack thing: you look at the lowest level, the infrastructure and host level, and then you also get the metrics from the application, because you actually want to optimize the performance of the application while minimizing, or optimizing, the hardware and infrastructure usage.
Luca Forni: Yeah, it's a matter of resource efficiency, which in the end means improving performance while reducing cost. And I totally agree with what you said in the previous episode with Ben: I would say sometimes, or almost all the time, performance is the key, not really cost optimization, even if, for go-to-market reasons, sales reps say: hey guys, I can save you cost, it's easy. But, you know, there are several ways to do this kind of job. Cost reduction is just one addendum of the full equation. The real value is that if you are able to observe all the data and create correlations between all the data you collect at the full-stack level, across all the pieces of the puzzle that is your stack, and sometimes there are hundreds of pieces in this very complex puzzle, then you can really understand how the system works, and then optimize the things that need to be optimized in order to create the perfect balance between all the parts. It's like an F1 car. I don't know if you have ever heard about Jackie Stewart, the F1 driver. The legend is that he created this concept called mechanical sympathy. He was not the best driver, he was a very good driver, maybe not the best one, but he was very close to the engineers and all the mechanics. He knew how to drive the car because he knew how the brakes worked, how the gearbox worked, how the steering wheel worked, how the engine worked.
Luca Forni: So he was able to adapt his way of driving the F1 car in order to make all the parts of the car work smoothly together, in balance. That is exactly what we tried to do when we invented Akamas: create something that is able to deeply understand how the system works in terms of all its mechanical parts. A cloud-native application can be compared to an F1 car that needs every single element to work perfectly, and sometimes those are external, third-party elements, so you don't have clear control over them. The result is that you can't really do that with just your brain, even if you are an expert. So we need AI. And the spoiler is that it's not GenAI, because there's no way to train a model for that. When we started Akamas in 2017, we started from scratch with a totally different approach to AI. We patented the AI, and that's one of the parts we are so proud of, because it's a totally new approach to solving this kind of problem. And in the end, the problem is solved by improving performance, and as a side effect it reduces cost, because you improve efficiency. The same system, the same hardware, the same unit of, you know, computation can give you more transactions, more throughput at the same cost. So in the end it's a matter of efficiency, and therefore cost reduction.
[00:08:59] Chapter 4: How Akamas Operates
Mirko Novakovic: Yeah, makes sense. Can you explain a little bit how Akamas works? If I have an application with my stack, do I have to install something, or do I have to have an observability tool that sends the data to you? And then what are you actually doing? How do you optimize autonomously?
Luca Forni: The idea is that, because we have been in this area for 20 years, myself and Stefano, my co-founder and CTO of Akamas, we said: we don't want to reinvent the wheel. We don't want to address the observability market, which is full of great technology, and at the same time we don't want to build the automation part, because Akamas wants to autonomously change things on your system. So we said, okay, we need to rely on what the customer already has, or suggest something to the customer. It comes from a consultancy mindset: we continuously try to suggest to the customer the best tool in each area. So in the end we tell the customer: we require an observability tool in place, and if needed we have partners and professional services that can help you choose or implement the best one for your needs, and the same for automation. Our AI acts like the brain that observes the data, grabbing it from the observability tool. So first of all, there is no agent at all to install on your system. We can grab data directly from your environment, but 99% of customers already have something in place and don't want to give you access to their systems. Then our AI analyzes the data coming from the observability tool to understand how the system works. That's why it's very important to have a full-stack observability tool: because the AI works, let me say, by trying to get that mechanical sympathy.
Luca Forni: It tries to create the balance in how all the pieces of the puzzle are configured, and then we act, via automation. We don't want to change things in your system directly; we use what you have in place, relying on your pipeline, on Ansible, Chef, Puppet, Terraform, whatever you already have. Also in terms of policy and security, it's exactly what the customer already does. Usually, an expert analyzes the data and suggests things, and the changes are then manually implemented through the pipeline. Now it's the AI that, in a snap, analyzes the data and creates the model live, so it's not a pre-trained model, applies the change, and then gets the feedback from the observability tool, because a change in the system results in a change in the metrics in the observability tool, and the AI learns and continues working this way. So it is autonomous, because it's a never-ending cycle that continuously analyzes data, calculates what changes to make in the system, applies the change via automation, and analyzes the feedback. So it's reinforcement learning. Of course, the important part of this story is that we do not change any code on the system, we only work on configuration. Why configuration? Because it's something that was never approached by any other tool. There are a lot of tools that try to improve the code, and especially now with AI, improvement of the code can be done.
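To make the loop Luca describes more concrete, here is a minimal sketch of an observe-recommend-apply-learn cycle. It is not Akamas code; every function and parameter name is a placeholder for whatever observability tool and automation pipeline are actually in place:

```python
import time

def fetch_metrics(window_minutes=15):
    """Pull full-stack metrics (latency, errors, throughput, CPU, GC time, ...)
    from whatever observability tool is already in place."""
    return {"p95_latency_ms": 120.0, "error_rate": 0.002, "cpu_util": 0.45}  # placeholder values

def propose_configuration(history):
    """The 'brain': given the (config, metrics) pairs seen so far, choose the next
    configuration to try across JVM, Kubernetes, database parameters, etc."""
    return {"jvm.max_heap_mb": 2048, "k8s.cpu_limit": "1500m"}  # placeholder choice

def apply_via_automation(config):
    """Never touch the system directly: commit the change and let the customer's own
    pipeline (Ansible, Chef, Puppet, Terraform, ...) roll it out."""
    print("applying", config)

history = []                                                   # memory of what was tried
config = {"jvm.max_heap_mb": 4096, "k8s.cpu_limit": "2000m"}   # current baseline

for _ in range(3):                             # in reality a never-ending cycle
    metrics = fetch_metrics()
    history.append((config, metrics))          # learn from the feedback
    config = propose_configuration(history)    # decide the next change
    apply_via_automation(config)               # act through existing automation
    time.sleep(1)                              # in reality: wait for metrics to settle
```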
[00:12:36] Chapter 5: AI's Role and Unique Approach at Akamas
Luca Forni: But you know, if you think about a standard application that runs on Java and Kubernetes: the JVM alone has more than 600 parameters. Of course not all of them are related to performance, but a good part of them are. Kubernetes itself has a lot of different parameters you can set, and then there are databases and caching systems. You can have several runtimes, several microservices running in pods, each with its own components: MongoDB, Redis caches, Spark itself. All of these tools bring with them hundreds, sometimes thousands of parameters, and you need to tune them with a mechanical-sympathy approach. You need someone who is able to say: okay, because I changed something over here, I need to change something over there, otherwise I end up with the butterfly effect, where a change here creates a mess over there. But if you have a brain, the AI brain, that is able to balance everything, you can really have a system that works perfectly, very smoothly, with maximum efficiency. And the result also gives you a good, you know, ESG flavor, because in the end the carbon footprint is very low, since you are at the maximum efficiency possible for your system. That's the idea behind Akamas, and it's a very simple idea.
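As an illustration of the size of that parameter space, here is a tiny slice expressed in Python. The JVM and Kubernetes parameter names are real flags, but the ranges and the selection are invented for the example:

```python
# Illustrative slice of the kind of joint search space being described.
# A real system exposes hundreds more parameters per component.
search_space = {
    # JVM (a handful of the 600+ flags)
    "jvm.-Xmx":                 ["1g", "2g", "4g", "8g"],
    "jvm.gc":                   ["G1GC", "ParallelGC", "ZGC"],
    "jvm.-XX:MaxGCPauseMillis": [50, 100, 200],
    # Kubernetes pod resources
    "k8s.resources.requests.cpu":    ["500m", "1", "2"],
    "k8s.resources.requests.memory": ["1Gi", "2Gi", "4Gi"],
    "k8s.replicas":                  [2, 3, 4, 6],
    # Downstream components
    "mongodb.wiredTiger.cacheSizeGB": [1, 2, 4],
    "spark.executor.memory":          ["2g", "4g", "8g"],
}

combinations = 1
for values in search_space.values():
    combinations *= len(values)
print(f"{combinations:,} combinations from just {len(search_space)} parameters")
# Even this tiny slice yields over ten thousand combinations, and the parameters
# interact (heap size vs. container memory, GC choice vs. pause target), which is
# why tuning them one at a time by hand breaks down.
```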
Mirko Novakovic: That's very interesting. I'm always trying to understand what that means in practice. So you take the data from an observability tool, all the data, like metrics, logs, tracing, and then you probably have to build something like a graph of the dependencies of the components, right? If a JVM is running in a pod in Kubernetes on a few hosts, you have to understand that relationship, because you kind of have to tune all the parameters of that JVM and the underlying infrastructure. And then you have to change the configuration and do some sort of A/B testing, right? For example, you optimize the garbage collector and then you look at the metrics to see whether they improve. That's how I see it. Or is it totally different?
Luca Forni: Yeah, generally speaking, that's the main concept we started with. But in the end, creating a graph with all the relationships, you know, in a CMDB way, or like Smartscape, the Dynatrace one, would be so heavy in terms of computation. So we pushed our research team to try to avoid that. Part of the patents we got, we currently have five patents, is the fact that we created an AI with a total black-box approach, so it does not require you to have this kind of relationship written down somewhere; it's something that is discovered live. The model is created live and includes, in a very lightweight way, all these kinds of relationships as well. Of course, what we understood and verified is that having these relationships at the beginning gives a boost in how fast the AI gets to the best possible results. But our AI is also able to do it without this kind of information. So one of the claims of Akamas, which can sound very strange to a customer, is that we can address any technology, even if the AI has never encountered that technology before.
Luca Forni: Why? Because for us, every technology is, let me say, just a matter of new parameters that need to be tackled, and then it's the AI that understands that these parameters, which maybe come from the same technology, are somehow related. That's part of our mantra: avoiding the need to be strictly tied to a CMDB or something like that. Of course, as soon as Akamas started being used on the customer side, we created what we call optimization packs. These are like libraries, something we provide out of the box, but customers can create them too, and the community is creating them and supporting us this way, that for a specific technology give you the full set of parameters and any dependencies between the parameters. Sometimes, you know, if you set a specific garbage collection type, then you can set some parameters that are related to that main choice. These are simple YAML files that capture exactly these rules, but they are super lightweight, and customers, ourselves, or system integrators can create them.
Luca Forni: We ask the community to contribute them, so we bring them all together, and the idea is to improve the speed of the AI so it gets results as fast as possible. But consider that we have some enterprises that, thanks to Akamas, in literally hours, and that means sometimes 20 hours, less than one day, were able to reduce their cloud costs by 80%. Reducing cloud cost means improving the efficiency, the number of transactions served on the same EC2 instances, which allowed them to turn off eight out of ten machines while serving exactly the same workload. So yeah, getting back to performance: it's a matter of improving performance so dramatically that in the end you get a lot of money back, because you are on a, you know, pay-per-minute model. If you can turn something off, great; otherwise you can say, okay, we can serve more and more load on the same hardware, and you have free space for other load, other applications and so on. So it's not only a matter of improved performance.
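Coming back to the optimization packs Luca described, here is a guess at the kind of information such a pack could capture: a parameter set plus a lightweight dependency rule. This is illustrative only and not the actual Akamas file format:

```python
# A sketch of what an "optimization pack" could capture for one technology.
jvm_pack = {
    "technology": "java-openjdk-17",
    "parameters": {
        "gcType":            {"values": ["G1GC", "ParallelGC", "ZGC"]},
        "maxHeapMB":         {"range": [512, 16384]},
        "maxGCPauseMillis":  {"range": [10, 500]},
        "parallelGCThreads": {"range": [1, 32]},
    },
    "rules": [
        # Some parameters only make sense given an earlier choice, exactly the
        # "if you set a specific garbage collector..." case mentioned above.
        {"if": {"gcType": "G1GC"},       "enable": ["maxGCPauseMillis"]},
        {"if": {"gcType": "ParallelGC"}, "enable": ["parallelGCThreads"]},
    ],
}

def active_parameters(pack, chosen):
    """Return the parameters that are tunable given the choices made so far."""
    always_on = set(pack["parameters"]) - {
        p for rule in pack["rules"] for p in rule["enable"]
    }
    for rule in pack["rules"]:
        if all(chosen.get(k) == v for k, v in rule["if"].items()):
            always_on |= set(rule["enable"])
    return sorted(always_on)

print(active_parameters(jvm_pack, {"gcType": "G1GC"}))
# ['gcType', 'maxGCPauseMillis', 'maxHeapMB']
```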
[00:19:07] Chapter 6: Potential and Impact of Akamas Implementation
Mirko Novakovic: And this is done totally autonomously?
Luca Forni: The optimization, yes, is done autonomously. You give us your application, meaning the way Akamas can interact with the parameters of the application, and the observability tool. We now support the most common tools, but also a CSV file if you want, because sometimes the best way is to have everything in one unique place, like a NoSQL database or a time-series database or something like that. We have a lot of customers with a data lake where everything coming from different data sources is collected. We grab data from the exporters in some way, continuously get the results from these metrics, analyze the data, change things, get the feedback, and so on.
Mirko Novakovic: And then the automation: do you write the scripts, or are you reusing the existing scripts of the customer? So if I have Puppet or Chef, you mentioned those, or Terraform.
Luca Forni: We usually say: if you want to change something, treat it like a new release. It's not a code release, it's a configuration release. If you want to change a parameter in your database, how do you usually do that? I open a PR, or I change something in my repo, in an XML file, and there's a trigger so that as soon as the change occurs, it triggers the new release through the pipeline, once the policies allow the release to start, and the release gets applied. So it's something where we can just invoke an API: what you need to do is change this property in the XML file, and that's it. We tend to avoid changing the current process, for security reasons, for policy reasons, and also as a matter of trust. Because, you know, if you say to a customer, my AI is able to change things in your production system, in your bank, they are a bit scared: I can't trust that, of course. So instead we say: okay, the process stays exactly the same. What is different?
Luca Forni: We have an AI that is like 1000x more expert than your most expert person, because it's able to analyze tons of data in a matter of seconds, instead of having big war rooms with all the people around a table discussing the impact of a specific change. It's something the AI can learn very fast, and then it can also decide how to make a change to create the best balance, especially to avoid crashing the system. One part of one of the patents is the safety, that is, the guarantee. We guarantee that our system won't crash your application. Sometimes it can create a little bit of degradation, because that's part of the learning process, but it happens live, and it's a way to let the AI learn that this kind of change in that direction improves things, or that changing some parameters creates something that is, you know, one millisecond slower, so it's not the best direction to go. In the end, the AI immediately understands that, steps back, and keeps learning this way.
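A minimal sketch of what "the process stays the same" can look like in practice: the optimizer edits a configuration file in the repo and pushes a branch, and the customer's existing pipeline does the rollout. The repo path, XML structure, and branch name below are hypothetical:

```python
import subprocess
import xml.etree.ElementTree as ET

def propose_change(repo_path, new_pool_size):
    config_file = f"{repo_path}/config/datasource.xml"   # hypothetical config file

    # 1. Change the parameter in the tracked configuration file (not in production).
    tree = ET.parse(config_file)
    tree.getroot().find("connectionPool/maxSize").text = str(new_pool_size)
    tree.write(config_file)

    # 2. Commit and push on a branch; policies, reviews, and the normal pipeline
    #    (the trigger the customer already has) do the actual rollout.
    subprocess.run(["git", "-C", repo_path, "checkout", "-b", "tuning/pool-size"], check=True)
    subprocess.run(["git", "-C", repo_path, "commit", "-am",
                    f"tuning: set connection pool maxSize={new_pool_size}"], check=True)
    subprocess.run(["git", "-C", repo_path, "push", "-u", "origin", "tuning/pool-size"], check=True)

propose_change("/srv/app-config", new_pool_size=40)
```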
Mirko Novakovic: And how do you deal with pricing for this kind of service? Because I could imagine: you come in, in 20 hours you optimize 80% of my infrastructure, and I say, oh, look, that was nice, thank you. Now you have optimized my infrastructure, I don't need you anymore. Right?
Luca Forni: To be honest, that's a continuous fight with my sales reps, of course, because I personally invented the pricing model, and I still have sales reps complaining to me, because, you know, it's very hard to tie the price to the infrastructure when you are the one changing that infrastructure. Sometimes the question becomes: which is the real infrastructure, the first one, or the second one that we were able to reduce? So we started from a very simple pricing model: the number of concurrent optimizations. An optimization is this cycle applied to an application, where application means whatever the customer considers an application. Why? Because of course you could say: oh, my full data center, my full set of applications, is one application, and I want to pay just one license. Okay, but if you bring a lot of different elements with hundreds of parameters, the AI may require two months to start giving you interesting results. If you split your big application into sets of microservices, pieces of the stack, you may get results in a few hours. So you pay more in terms of licenses, because every single license is a one-year subscription for every single concurrent optimization running.
Luca Forni: But in the end, you get the savings in a matter of hours, so the total result is better than having just one single license, where you pay Akamas less but you need to wait, you know, two months to get the benefits from applying Akamas. Unfortunately for us, it sometimes happens that a customer with just one license saves millions in a matter of days. One case was a very large big data system where Akamas changed the Kubernetes and Spark runtimes, and in a matter of hours, literally a matter of hours, it was able to reduce the total cost of the application to one third. It was 3 million per year; in the end it was 1 million per year, in a matter of hours, with the savings measured per minute by the cloud provider. So yeah, that happens. I think in the future maybe we need to think about something else, but currently it is also a way to start addressing the market: make everything simple and let them trust your technology.
Mirko Novakovic: So what's your experience with a typical customer? If I were to use it, or whoever, what is the typical optimization potential you see with Akamas, that you can really get out of the stack? Is there kind of a rule of thumb? I would say 80% is probably a lot, right? But is it more like 20%, or...?
Luca Forni: That's a super nice question, of course, but it's very complex. We have a customer, a giant in the United States, that told us: if you are able to save me one or two percent, that's a tremendous result, because we spend months or years manually fine-tuning the code, fine-tuning everything, but we serve millions of customers, so even that 1% matters. There, we were able to achieve something like a 13% improvement in performance, which they calculated is worth millions. You might say 13% is not so impressive, but this is a giant that says they usually get something less than 1%. On the other side, we have customers that did an old-style lift-and-shift to the cloud, and things do not really work well. You know, they sized their own EC2 instances with a rule of thumb, really. And because I really believe in the infrastructure-as-code approach, Akamas can also consider other parameters: the size of the EC2 instances, the family of the EC2 instances, the number of EC2 instances. So we include those kinds of parameters too.
[00:27:11] Chapter 7: Customer Experience and Feedback
Luca Forni: Akamas was able to change the way the architecture was built. Instead of having three big machines for the databases, it ended up with four medium ones and eight smaller ones for the rest, completely rebalancing the front end, the back end, and the middle tier by changing the family and type of the EC2 instances. It was able to save something like 91% of the cloud cost. 91%! Just by changing the family, the number, and the size of the machines, without really touching the parameters higher up the stack. Then, when we added that layer as well, at the real full stack, the improvement was even greater. But it was mostly a matter of the customer not knowing which option was the best one; they had somehow read in some forum that for databases this kind of machine is good. So if you address customers that are, you know, just starting their cloud-native journey, sometimes the results are dramatic. If you deal with very, very mature customers, sometimes it's a matter of 10%, 20%, but they are so big that the result is tremendous anyway. So that's cool.
Mirko Novakovic: I mean, even 10% is really good, right? Especially if you basically don't have to do a lot: it's the AI and the tool doing the optimization for you. And as far as I understood, the risk of doing something bad is very, very low, right?
Luca Forni: We combine two different approaches. If you don't want to work directly in production, we can use a pre-production environment or a clone, because our AI is able to drive stress testing. We didn't want to invent yet another stress-testing tool. We started this way because, you know, we started in 2017, and if you start by telling a customer that we change things in production, they will never trust you. So in the end we said: okay, pre-production; we integrate with the load-testing tool you already have in place, LoadRunner, JMeter, k6, NeoLoad, whatever you have. We also integrate with your observability tools, which is interesting because we push the customer to install observability tools in the pre-production environment as well. That's also why observability companies love us: we push the customer to extend their coverage, full coverage of the pre-production environment too, which is not so usual, especially for cost reasons. But that's cool, because now we have full coverage of the pre-production environment. As soon as Akamas finds the best configuration, we can move it to production. So that's the first way to be very safe. If we work directly in production, Akamas works in a slower way, so it's not so aggressive.
Luca Forni: There are also parameters you can set for the aggressiveness of the changes. So even if it immediately understands that you are overprovisioning the memory and it could cut the memory in half, in production it usually starts by reducing the memory by two or three percent, checks that everything is good, and then moves step by step. Of course it's slower: maybe it's not a matter of hours but a matter of a week. But you are working in production, you have the guarantee that everything is safe, and you get tremendous results without having to go through the whole process of cloning the environment and working in pre-production. So we tell the customer: it's up to you. The license is exactly the same, so an optimization running in pre-production or in production consumes the same amount of license; there's no issue about that. And then we have customers that mix the approaches: for very critical applications they prefer to use Akamas in pre-production first, then move to production and reuse what was learned in pre-production. Then the AI is already a little bit trained, and that's another way of having the guarantee that you don't crash anything.
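Here is a small sketch of that cautious production mode: shrink an over-provisioned setting a few percent at a time and back off as soon as user-facing metrics degrade. The thresholds, step size, and the metric/apply hooks are assumptions for illustration:

```python
STEP = 0.03            # change at most 3% per iteration
LATENCY_BUDGET_MS = 250
ERROR_BUDGET = 0.01

def read_user_metrics():
    """p95 latency and error rate from the observability tool (placeholder values)."""
    return {"p95_latency_ms": 180.0, "error_rate": 0.001}

def apply_memory_limit(mb):
    print(f"requesting memory limit of {mb} MB via the usual pipeline")

memory_mb = 8192                      # current, over-provisioned limit

for _ in range(10):                   # a slow loop over days or weeks, not hours
    candidate = int(memory_mb * (1 - STEP))
    apply_memory_limit(candidate)
    metrics = read_user_metrics()     # in reality: wait, then read a full window

    degraded = (metrics["p95_latency_ms"] > LATENCY_BUDGET_MS
                or metrics["error_rate"] > ERROR_BUDGET)
    if degraded:
        apply_memory_limit(memory_mb) # roll back to the last known-good value
        break                         # and let the optimizer try another direction
    memory_mb = candidate             # small step accepted, keep going
```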
Mirko Novakovic: No, it makes sense. I just listened to an interesting podcast about Cursor AI, I don't know if you have heard about it. It's this AI code editor that suggests code you should write, but it also optimizes things, and they have this idea of shadow repositories: they basically clone the file system, and then an AI agent will, maybe through the night, work on your code and test things out, because it's just a clone. And if it works, or if it's better, they will tell you the next morning: look here, I found a bug and I optimized this code, I refactored this. I found that idea pretty interesting. And as far as I understand, this is similar here: you're taking a non-production environment to make these changes, and once it works in the load-test environment, your kind of shadow environment, then you can apply those changes to production.
Luca Forni: Yeah, it's exactly the same approach. I know there are a lot of interesting, similar, complementary approaches to Akamas that work on code; I think that's a totally different story, and AI can help a lot there. But we are pretty much 100% sure that GenAI cannot help in our specific domain. It was funny, because when Bernd, the CTO of Dynatrace, introduced the different AIs working inside Dynatrace, he said exactly the same thing. I loved that, because he said that for specific topics in the ops area there are some things GenAI cannot deal with, and one of them is optimization. It was perfect, because it's exactly what we are claiming. He said that their AI is composed of several different AIs that address different areas of operations, and I agree. For the optimization part, every single customer is totally different, every single application of every single customer is totally different. So you cannot reuse the same training, you can't reuse what an AI learned elsewhere. That's why we need to learn instantly, exactly on the fly. And this is also very small in terms of footprint, because there is no training model; it's just a set of microservices that work very close to your environment, connected to your environment, so it can be deployed on Kubernetes as a set of pods.
Mirko Novakovic: So you can be deployed on-premises, or is it a cloud service, or both?
Luca Forni: Currently it's only on-prem. Why? Because customers are too scared to have an AI in the cloud that can change things in their production system. So we decided to start on-prem and said: as soon as the market pushes us to move to SaaS, we are ready to do that, but we don't want to yet. You know, we are a small team currently, less than 20 people, so I don't want to waste time on it if there's no need from the market to be on SaaS. And to be honest, currently it works, because one of the first questions I get from customers is: hey, come on, is it on cloud? No, we are not on cloud, we are only on-prem. So it's up to you: if you want to install it in your cloud environment, that's fine, it runs on your Kubernetes or your EC2 instances in the cloud. But it's not us offering a SaaS service currently. Maybe in the future things will change.
[00:34:51] Chapter 8: Pre-Production Testing and Risk Management
Mirko Novakovic: But yeah, I can see that. And it's probably also that your customers are more the larger enterprises at the moment, right? The ones with larger data centers and applications.
Luca Forni: Yes, we are mostly addressing large enterprises. Why? Because they feel the pain, or they have already tried other tools. And what I always come back to is the full-stack mantra we have, because there are other tools that work in this space, and I love them, Turbo itself is in some ways similar to Akamas, but the big difference with all of these tools, and the unique thing about Akamas, is that we really use all the metrics, at the full-stack level, in our AI. So we need all the metrics. That's also, you know, sometimes a showstopper: if a customer only has system metrics, so their observability is just old-style monitoring that only collects CPU, RAM usage and so on, and they have no end-user metrics or high-level Java virtual machine metrics, then Akamas usually can't work well. We suggest the customer first ask a consulting company to improve their observability approach, because otherwise Akamas is blind. It will work, maybe, but there are also other open-source tools that can give you exactly the same thing: a little bit of recommendation based only on CPU and memory. And it's very risky, because if you only consider memory and CPU and you don't look at the end-user response time, the errors, or some higher-level metrics, you can create a mess in your system. So that's the full-stack mantra. And that's also why I love the OTel approach, because the promise of OTel is that every single technology will expose its internal metrics in a standard way. That's exactly the dream: all the technologies, all the pieces of the puzzle, exposing data in a standard way, without needing an agent to grab data from them. It's the best way for us to have everything in one unique place, in the same standard, without building parsers and so on.
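As a concrete example of that promise, here is a minimal sketch using the opentelemetry-python SDK to expose an application metric in the standard way (assuming the opentelemetry-sdk package is installed; the metric name and attributes are illustrative):

```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import ConsoleMetricExporter, PeriodicExportingMetricReader

# Export to the console here; in practice this would be an OTLP exporter pointing
# at whatever backend the customer already uses.
reader = PeriodicExportingMetricReader(ConsoleMetricExporter(), export_interval_millis=5000)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("checkout-service")
latency = meter.create_histogram(
    "http.server.duration", unit="ms", description="Server-side request duration"
)

# Any tool that speaks the standard protocol can now read this without a bespoke
# agent or a custom log parser.
latency.record(12.3, attributes={"http.route": "/checkout", "http.status_code": 200})
```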
[00:37:09] Chapter 9: Conclusion, Reflections and Closing Thoughts
Mirko Novakovic: So we both root for OpenTelemetry. Luca, thank you a lot. This is one of those stories I really love. It's a European story, which I like: technology, AI technology, out of Italy. And the stories that you have shared sound so amazing that I would definitely try it, right? I want to see how it works. It sounds so amazing that you sometimes can't believe it works out of the box.
Luca Forni: Right. That's the feedback we get from customers: they say, this is a dream, can it really be possible? So we say: okay, try it. You need to be quite mature in automation, and quite mature in monitoring and observability. But if you are mature enough to have these kinds of tools and processes already in place, Akamas can be installed in a snap and give you results in a matter of hours. So it's just a matter of trying it.
Mirko Novakovic: Yeah. Thanks, Luca. It was nice having you here. Thanks for joining us on the podcast.
Luca Forni: Thank you. Ciao. Have a nice day. Bye.
Mirko Novakovic: Thanks for listening. I'm always sharing new insights and insider knowledge about observability on LinkedIn. You can follow me there for more. The podcast is produced by Dash0. We make observability easy for every developer.