Episode 26 · 33 mins · 5/29/2025

#26 - Rethinking Query Standards: Jacek Migdał on SQL Translation, Portability and Observability

Host: Mirko Novakovic
Guest: Jacek Migdał

About this Episode

Quesma CEO Jacek Migdał joins Mirko Novakovic to explore the future of querying and database interoperability. They dive into how Quesma rewrites SQL on the fly for cross-system compatibility, and they tackle the lack of query language standards in observability. Jacek also shares his thoughts on AI-assisted querying and why decoupling applications from databases is key to long-term flexibility.

Transcription

[00:00:00] Chapter 1: Introduction and Guest Overview

Mirko Novakovic: Hello everybody. My name is Mirko Novakovic. I am co-founder and CEO of Dash0. And welcome to Code RED. Code, because we are talking about code, and RED stands for Requests, Errors, and Duration, the core metrics of observability. On this podcast, you will hear from leaders around our industry about what they are building, what's next in observability, and what you can do today to avoid your next outage. Hello everyone! Today my guest is Jacek Migdał. Jacek is the CEO and founder of Quesma, a database gateway tool that helps developer teams route, translate, and optimize their queries. Before that, he spent a decade at Sumo Logic, where he led software engineering when the company went public in 2020. I'm excited to welcome him here at Code RED, and I always start the conversation with my initial question, which is: what was your biggest Code RED moment?

[00:01:01] Chapter 2: Jacek Migdał’s Code RED Experience

Jacek Migdał: Probably, you know, my biggest Code RED moments were outages, that kind of big production issue. And there was one where I was guilty, because I wrote the code that was buggy, but I had to deal with the aftermath. It was once upon a time, when S3, the blob service from AWS, was still eventually consistent. We kind of decided, okay, this is the mechanism for our encryption keys, we'll just use it to store them. And the plan was perfect until the moment we realized, okay, this is eventually consistent. If you create those encryption keys, you want to make sure that you have exactly one encryption key, right? And of course that was a little bit complicated, but we take a lock in ZooKeeper, a leader generates them once per day, and this looks like a perfect plan. I wrote the original code. It worked for a few years, until one day in the middle of the night something broke down and there were a lot of errors, security issues in our service, a flood of alerts.

Jacek Migdał: And people got spooked because somebody might have been hacking us. Luckily, it did not look like a hacking attack, but like data corruption, which is a little bit less bad, but still quite spooky. The story was that we were not able to decrypt data while it was going on. And it showed that concurrency and consistency are really hard: we had double-created those keys, because even with the hardening around the ZooKeeper lock, a machine could lose it, so there was some rare edge case where this could happen. We hit those two issues, and it ended up being a very hectic outage with several hotfix deploys and, later, writing custom data procedures to re-encrypt data with the right fix. Very hectic coding. It took a few days, but luckily we did not lose a byte of customer data.

[00:03:10] Chapter 3: KubeCon Experience

Mirko Novakovic: That's always good. And we met last week at KubeCon in London at the Dig Ventures breakfast. Tell me a little bit about how KubeCon was for you. What stood out for you? I also know that you did a talk. Tell me a little bit about your experience.

Jacek Migdał: So KubeCon is one of our primary audiences, I believe. Even though we deal with databases, a lot of decisions are still made by DevOps, and they feel the pain. We are a little bit earlier than Dash0, but we got an awesome booth, and we enjoyed a lot of those conversations with early adopters, early users, or prospective early users. It's also a good audience for a talk, to educate people about topics we are passionate about, like observability and query languages.

Mirko Novakovic: And talking of it, I mean, you have been at Sumo Logic, one of the big public players. I mean, not public anymore, right? It got acquired by a PE firm. But by the way, yesterday I had a conversation with Ramin, the former...

Jacek Migdał: Oh, nice.

[00:04:11] Chapter 4: Genesis of Quesma

Mirko Novakovic: CEO. He's now at DFJ, right, the venture capital fund. And we were chatting. So, you have been there for ten years? Is that kind of where the initial idea of Quesma came from? Did you run into that problem, or how did you come up with it? Because I saw you spent ten years there, then you took a break, and then you founded Quesma. So it must have stood out to you as a problem you wanted to attack?

Jacek Migdał: Yeah, I wrote a blog post about how I came up with it, and it has two roots. It turned out that I had been seeing a lot of data stacks since I was a teenager. My dad was introducing Oracle everywhere, and I saw how it was awesome at first, and now how it's bad. At Facebook, I changed a data format, and some of my most successful projects at Sumo Logic were data migration projects, which could bring enormous efficiency gains or data insight gains. But they are very tricky to do. And I was so surprised that there's zero product offering around that; it's almost all services business, how to do data migrations or make changes in data. Most of those projects go terribly, and engineers hate them. I figured out that I like them, sometimes; I have a love-hate relationship with them. But I figured this is very important to solve.

[00:05:30] Chapter 5: Quesma’s Core Functionality

Mirko Novakovic: Before we deep dive a little bit into observability and querying observability data, let's start with what Quesma does, what problems it solves, and how it works for me as a developer.

Jacek Migdał: Yeah. So for you as a developer, the problem is usually making any changes to your database; everything is very coupled. Especially on the analytical side. In the transactional world you may be lucky, because it's one application, one database, but in analytics you usually have a lot of integration points: how you put data in, and how many clients query it. Any time you try to make any change, something may break. So the vision of Quesma is to put a layer between your many applications and your data warehouse or analytical database, which will allow you to make changes or add extra functionality, and to make that easier by decoupling it into smaller problems.

[00:06:30] Chapter 6: Data Interoperability and PromQL

Mirko Novakovic: Okay, but it's not ETL, right?

Jacek Migdał: It's not ETL, it's more like rewriting SQL on the fly.

Mirko Novakovic: Okay, so if you query the database with the old query, you could, in that layer, update the query to fit some sort of new data.

Jacek Migdał: You could update the query, you could make changes. Sometimes, when you change technology, you could keep the same queries and update them in flight, debug them, or verify what would happen if something changed: if you rewrite it, would it work? Would it not? Would it return the same data? It's kind of like a blue-green deployment for data warehouses.
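As a minimal sketch of the kind of in-flight rewrite such a layer might perform, consider a query written against PostgreSQL being redirected to ClickHouse. The table and column names here are hypothetical, and this is an illustration rather than Quesma's actual output:

```sql
-- What the application sends (PostgreSQL dialect):
SELECT date_trunc('day', created_at) AS day,
       count(*) AS orders
FROM orders
WHERE created_at >= now() - interval '7 days'
GROUP BY day
ORDER BY day;

-- What the gateway could forward to ClickHouse, where the
-- date-truncation function and interval literal are spelled differently:
SELECT toStartOfDay(created_at) AS day,
       count(*) AS orders
FROM orders
WHERE created_at >= now() - INTERVAL 7 DAY
GROUP BY day
ORDER BY day;
```

The core of the query is identical; it is exactly the small dialect differences at the edges that break naive migrations.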

Mirko Novakovic: And you could also translate between different query technologies?

Jacek Migdał: Yes, that's very common; people are asking for that. It's part of the vision that the core of SQL is the same, but there are a lot of little things here and there, so either you need to make some changes, or sometimes there's a tiny semantic difference. For some people that's fine, and for some people that's the one thing that will break their migration.

Mirko Novakovic: Okay. So what we did, for example, just as an example: we use PromQL as the query language, and we will discuss that, but we use ClickHouse as our database storage. So what my developers essentially did was take the PromQL query language layer, fork it, and then translate all the access to the database, from the PromQL storage layer to ClickHouse. So we basically use exactly the same PromQL layer, but now the data store is ClickHouse. Is that something where we could essentially put Quesma in between and do exactly that?

Jacek Migdał: Yeah, that's actually one of the good things, because this is the interoperability use case. A lot of people ask for it: oh, I would really love to use PromQL with, let's say, Snowflake or Databricks, can you do this? And the story is: yes, you can do this. Of course, sometimes there's a threshold, but for a lot of use cases it just works fine. So one of the use cases is this interoperability. We have not implemented this particular PromQL case yet, but we implemented something very similar: Elasticsearch on top of ClickHouse. People want to use the Elasticsearch APIs, but either they don't want to migrate, or they use some tools that just work with the Elasticsearch APIs, and that layer can transparently translate them.
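To make that concrete, here is a hedged sketch of what such a translation could look like. The index, field names, and resulting SQL are made up for illustration and are not Quesma's actual mapping:

```sql
-- Intent of an Elasticsearch-style request (query DSL, shown as a comment):
--   GET logs-*/_search
--   { "query": { "bool": { "filter": [
--       { "term":  { "service.name": "checkout" } },
--       { "range": { "@timestamp": { "gte": "now-15m" } } } ] } },
--     "size": 100 }

-- One plausible ClickHouse SQL equivalent the gateway could produce:
SELECT *
FROM logs
WHERE service_name = 'checkout'
  AND timestamp >= now() - INTERVAL 15 MINUTE
ORDER BY timestamp DESC
LIMIT 100;
```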

[00:09:00] Chapter 7: Standardizing Query Languages

Mirko Novakovic: This whole podcast came from a discussion we had on LinkedIn, right, about standardization of query languages, especially for observability data. And there is actually no standardization today, right? There's a lack of standards. In OpenTelemetry, there is an idea to standardize it. I also know, and today I read everything about what happened at KubeCon, there were a few discussions, from people from eBay and others, about how to use, for example, SQL as a standardized language for OTel data. But I think it's very early, and it will be, from my point of view, very hard to have all the vendors agree on a query language. But tell me a little bit about your opinion: does it make sense to standardize on a query language, and how do you see that?

Jacek Migdał: Yeah. To be fair, it's a very hard problem with people. There's been a committee for over two years and there's no specification yet, and any time you try to make a forward move, there are a lot of architects with a lot of opinions. So it's not an easy problem. I would argue PromQL is a little bit of a standard for metric querying; there are still other formats, but on the logs side the landscape is completely wild, with a dozen languages for no good reason. My thinking is: yes, it should be more standard, but not a standard that specifies everything, because that would be terrible for innovation; then you cannot add new functionality, and I believe that would be a futile attempt. I believe in a standard more like SQL, where there is some core that works the same everywhere, and that core is table stakes for any observability vendor and not hard to implement, but without limiting people from innovating on top of that core or in the other parts. Right? So it's more like SQL: people will still diverge a little bit, but at least you could write applications on top of many observability vendors.

Mirko Novakovic: Yeah, makes sense. When we looked at query languages, when we started Dash0, we basically looked at four different options. We looked at building our own query language, which a lot of vendors do, and we said, okay, let's not do that; let's look at existing standards. So we looked at SQL. We looked at Kusto-type languages, the piped languages that Splunk uses, or also Elastic, I think, today. And then, number three, we looked at PromQL. We decided on PromQL, and we can discuss why. But I think there are pros and cons to all of these approaches, and some vendors do multiple of them.

Jacek Migdał: Well, I'm kind of thinking that SQL will need to evolve. Yes, I'm definitely a big SQL fan for life. I would say the fact is just that SQL tends to lag: there was JSON, and people said, oh, maybe SQL will be irrelevant because of JSON and document data. But what usually happens is that SQL lags a little bit, and then people find ideas for how to incorporate it back into SQL. So I don't believe vanilla SQL will do it, but I believe there is a good idea, especially from the Google BigQuery team with their pipe syntax: how can we evolve SQL to be as good as Kusto and the others, to serve this use case really well?

[00:12:21] Chapter 8: The Kusto vs. SQL Debate

Mirko Novakovic: Yeah, you told me about the pipe specification in SQL at the conference. Can you elaborate a little bit on it? What does it do? Is it a little bit like mixing Kusto with SQL?

Jacek Migdał: So, if you look at all of these log pipeline languages, they all look like Kusto. Whether it's Splunk or Sumo Logic, they are incompatible. So the idea was: okay, we would like to have a language like that, and even some ORM wrappers for SQL, like LINQ from .NET, got this idea that you should do it in more of a pipe manner. The reason is that it's a more natural order for how you query data: here is the data set, and I want this thing, then I want this filtering. You add one operation after the other, which is especially more intuitive for iterative debugging, which is typical. That's the flaw of SQL. But on the other hand, SQL is like English, right? Everybody uses it. There are tons of people who just know SQL, and asking everybody to learn a new language is like asking everybody to run a sprint: in theory, it makes sense; in practice, people are not going to do that. They are not going to rebuild their ecosystem. So how can we adapt SQL, fully backward compatible, to this new era? They invented a kind of syntactic sugar, equivalent to SQL, which adds this pipe and makes it look like Kusto.
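For illustration, here is roughly what that looks like in the pipe syntax the BigQuery team proposed, using made-up table and column names; the `|>` operator chains steps in the order you think about them:

```sql
-- Classic SQL: you write SELECT first, even though it is
-- logically one of the last steps to happen:
SELECT service, count(*) AS errors
FROM logs
WHERE severity = 'ERROR'
GROUP BY service
ORDER BY errors DESC
LIMIT 10;

-- Equivalent pipe syntax: each step follows the previous one,
-- the way you would iteratively build up the query:
FROM logs
|> WHERE severity = 'ERROR'
|> AGGREGATE count(*) AS errors GROUP BY service
|> ORDER BY errors DESC
|> LIMIT 10;
```

Because the two forms are equivalent, existing SQL keeps working and the pipe form is pure opt-in sugar.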

Mirko Novakovic: Yeah, it makes sense. By the way, we also came to the conclusion that Kusto is the most intuitive way of doing it, right? Piping. Because, as you said, you have one set of data, and then you filter it, and then you filter it even more, and that's essentially how our brain works, right? So I find it really interesting; I have to deep dive a little bit more into how SQL with pipes works, but I can see that working. By the way, we decided on PromQL because we wanted to be very open and also reuse a lot of the things that exist in the community. For example, there are Prometheus alerts, and there are projects like Awesome Prometheus Alerts where you have hundreds of predefined alerts, and they are all based on PromQL. So we thought, okay, which one is currently closest to being a standard in terms of adoption? That's why we chose PromQL, though we know that, from a language perspective, it's not close to being as intuitive as, for example, Kusto, right?
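For context, the entries in collections like Awesome Prometheus Alerts are short PromQL expressions of roughly this shape; this one is illustrative rather than quoted from the project, and the metric name assumes a standard node_exporter setup:

```promql
# Fire when average CPU usage on a node exceeds 80% over five minutes:
100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))) > 80
```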

[00:14:58] Chapter 9: Role of AI in Query Optimization

Jacek Migdał: Yeah, I believe with PromQL you made the right choice, because you are a busy founder: you need to get a product to market, and it seems like the obvious choice, the most standard one. To some degree it's like regular expressions: they're easy to write, they have their flaws, but they're still so ubiquitous that people always put them somewhere in their YAML, here's my PromQL-based alert. On the other hand, the biggest problem with PromQL is that it doesn't seem to scale that well. The attempts to write log-based languages like LogQL, that language is not really popular, even though Grafana tried for a while. It's very hard for people to write anything more than a one-liner in PromQL, and it's more a language optimized for writing than for reading. A lot of people have trouble reading PromQL unless they were really trained to read it, so a lot of people need a GUI to read PromQL, while with some of the other pipe languages that's much easier. It's also much easier to visualize step by step what's happening: if you have it line by line, you could show, okay, the problem is in line five here, this filtering filtered out all of your data; do you really want that? PromQL is sometimes too concise to see what's going on.

Mirko Novakovic: But what would you say from experience at Sumo? My point of view is, I don't want to put an exact percentage on it, but in more than 90% of the cases people shouldn't use any query language, right? The UI should be so good that behind the scenes you basically generate the query language, by providing a user experience that is essentially a query builder or something very easy. And the query language itself should be for when your query is really complicated; you will probably not be able to create a UX that can express everything for you, and then you switch into an expert mode and write the query. So do you think, from your experience, that the query language should be something for 80% of the use cases, so easy that it's natural to use it? Or do you also think it's more an expert tool?

Jacek Migdał: Here I have a convoluted answer. I believe most people should not write the query language, because, as you say, in many use cases you could just generate it for them. But on the other hand, most people should be able to read this query language and explain what's going on, right? I've seen this in many cases: yes, there are some advanced users in security who will always write it, or if you want to build some UI tool or something, this language is a very useful representation of what's going on, of what the GUI is doing. And a lot of people get confused by vendors' UIs, especially in some edge cases. Or your AI agent has some hypothesis: it should always give you an explanation of why it came to that conclusion, written in a formal language. So I believe this language needs to be optimized for people to read, not to write.

Mirko Novakovic: I agree. By the way, we implemented a feature because we have the same issue, right? I'm not an expert PromQL user. I'm not using it every day; I did the course, I experimented with it, I understand it, but any time I see a more complex PromQL query, I look at it and go, oh my God, right? I have to kind of relearn it. So what we did, and this comes to a topic that's probably interesting in this space, is we actually built an AI-based translation feature where you can click an explain button, and then we use an AI to explain the PromQL query to you: oh, this actually queries the number of services for this attribute, etc. So how do you see AI going in the field of queries? Because if you look at all the coding editors and such, it seems to be a perfect field for AI, if it's well documented.

[00:19:14] Chapter 10: Natural Language Processing and AI Agents

Jacek Migdał: Yeah, and to be fair, that's even in this pipe syntax paper. Usually the AI needs good context. Asking AI to write a long query is a hard exercise. There are a lot of papers showing that AI is really good if it writes queries the way humans do: write a little bit of the query, add a little filter, and then sometimes an even much simpler AI can do this work. There was a paper by Snowflake, there is a paper by Google: don't ask AI to write a page of SQL; ask it to write a piece, give it feedback, and do these iterative loops. And the story is that with pipes you can pretty much give the AI intermediate results, samples from each stage of the query, so it could reason about each stage of the result and figure out: oh, this operation you are doing actually doesn't make sense, because this column is almost always null, right? And then you could highlight to the user: okay, the bug was in this line. Or automatically show: now I'm trying to rewrite this line. So yeah, I believe it's happening, and I believe that's also one of the motivations: a different kind of language would be good for collaborating with AI, versus a language just for humans.
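A sketch of why the pipe form helps here, again with hypothetical names: truncating the pipeline after any step yields a runnable query, so an agent or a UI can sample every intermediate result and point at the exact step that went wrong.

```sql
FROM logs
|> WHERE severity = 'ERROR'       -- stage 1: sample still returns rows
|> WHERE region = 'eu-weast-1'    -- stage 2: sample comes back empty;
                                  -- the typo in the region name filtered
                                  -- out all data, so the bug is here
|> AGGREGATE count(*) AS errors GROUP BY service;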

Mirko Novakovic: But would you see AI generating the query, essentially? Or would you say you'd query in natural language, right? So I don't ask the AI to generate a query for me, but I ask the question directly in natural language. How would you see that?

Jacek Migdał: I think AI will always translate into some formal language. So you would ask, okay, why are my servers crashing? And then it says: oh, I looked for this, this, and this thing; that's how I translated your request; I ran this query; here's what I found. Right? There will always be some formal language explaining what even the AI used to ask about the data, because natural language, I don't believe, will ever be precise. Especially in outage debugging, in a lot of cases something bad happened and it was because of a time zone shift, right? Things like that. So a little thing in the query makes a huge difference.

Mirko Novakovic: In precision, I can definitely see that. I'm always asking myself if that precision is needed. If you ask a question, can the AI figure out the right precision that's needed? So not saying, hey, give me the logs of that service from yesterday, 6:00 pm for twelve minutes, but just saying, hey, give me the logs that were problematic yesterday evening during the outage. And then you don't need the exact precision, because the AI will understand what you're looking for.

Jacek Migdał: Yeah, but the biggest advantage the AI has behind the scenes is that it can run dozens of queries, not one like a human. And then it looks at all of them: okay, these results were not interesting, so we don't even need to bother you, Mirko, with them; but here is one query I ran that shows some interesting trend, I dug down, made a follow-up one, and with this query I suspect this one Kubernetes pod is misbehaving, right? And I believe in high-stakes scenarios that's what happens in real outages with real humans too: we sometimes ask, why do you want to do this risky operation, and somebody gives you the reasoning for how they came to that conclusion, and the other engineers say, yeah, that sounds reasonable, let's delete that pod. Or they come to the conclusion: no, that's actually not the problem, that would make it worse.

Mirko Novakovic: So you are basically talking about agentic AI, right? Giving an agent that does the queries for you, evaluates them, then gets more data, like a human, essentially. Right?

Jacek Migdał: I'm not a big believer in text-to-SQL. That's another problem that turned out not as big as people thought, because, as you mentioned before, there are a lot of query builders, so you don't even need text-to-SQL; you could just click through, group by this, in any BI tool or any observability tool. But I believe the biggest value is in AI that can ask several queries, look through different dashboards, do some additional filtering on problematic values, and then come back with, like, three steps that show the path to figuring out what has happened.

[00:23:41] Chapter 11: Quesma's Current Use Cases

Mirko Novakovic: Yeah, which is essentially more than querying then, right? It's really giving it a task, like searching for the root cause of the problem, and then it does the queries and figures out by itself how to do it.

Jacek Migdał: Of course, I believe humans will need to supervise those systems for some time. So you'd like to have some visibility into why something is happening, during an outage or after it has happened: what have you actually tried, and are your conclusions right? Sometimes these tools will be right, but expecting them to be 100% right, no; even humans are not 100%. So they'll make a lot of mistakes or have wrong insights along the way.

Mirko Novakovic: Getting back to Quesma, and how you see AI and your current use cases: I saw that you also have different Quesma stacks. For example, you have Quesma for the Elastic Stack, right? Tell me a little bit about your current use case. What are you doing with Kibana, ClickHouse, and Elastic?

Jacek Migdał: So, I would say Dash0 is attacking, I would say, rather the people who want a really sophisticated experience but are charged a bit of a premium, like with a Datadog. But I'm also seeing the other end of the market: telecoms, gaming companies, big brokers of some kind, which have a huge amount of logs. They can't really afford many solutions because it's too painful, but logging for them is still mission-critical. And currently a lot of them sit on Elasticsearch, sometimes even self-hosted Elasticsearch. It's just not the right technological solution, because Elasticsearch was designed for Google-style searches, where you want the top ten results, while what they need is really fast analytical searches. That's even why Dash0 is using ClickHouse, not Elasticsearch, under the hood: there are dozen-fold gains in many use cases. But for these other people it's a painful migration, because they got used to Kibana, they already built some integrations, they even pulled many queries out of Kibana and built their own software using those queries. And even though they sometimes spend millions on that system, they don't like maintaining it, and they know there is a better way, but the change is very uncomfortable, and ClickHouse seems, to a lot of people, a very low-level tool to deal with. So a lot of people say: I would like all the benefits of that technology, or a similar SQL engine, but I would like to keep my ecosystem, because it actually works for me.

[00:26:19] Chapter 12: Observability Storage Solutions

Mirko Novakovic: And then you put Quesma in the middle.

Jacek Migdał: Yeah. And then the Quesma intermediate layer can provide this: all of your previous tools seem to work right out of the box, but you get a way easier platform, ten times cheaper, easier to maintain. You get standard SQL. You could use some data lake, which people like a lot.

Mirko Novakovic: And what are you seeing? I mean, you probably see a lot of customers doing migrations. Do you see ClickHouse there? There's also this company Quickwit, right? So what are the technologies that you are seeing evolving in that space in the market, or is it something self-built on top of S3, serverless? What do you see as the stacks that people currently move to from something like Elastic, towards these massive, highly scaled and performant systems?

Jacek Migdał: So I'm definitely seeing all of those big users' desire to leave Elasticsearch. There are quite a lot of options. I believe Quickwit was quite good, but around the time they tried to make money, they got acquired. So it seems to be great technology, but kind of a dead end at this point. A lot of people look at vendors like Hydrolix. There are people sometimes building their own system using DataFusion. There are some people willing to use Databricks. But generally the idea is: use a fast analytical SQL engine, right? ClickHouse is, I would say, a popular choice, but there's also what people don't like about ClickHouse open source: they really like compute-storage separation, which is gated behind the enterprise license, while in a lot of other solutions, like Hydrolix or data lakes, you get it as a natural fit.

Mirko Novakovic: Yeah. And do you see Iceberg as a standard in that space?

Jacek Migdał: I believe there are two ways. Either people will go, long term, to Iceberg with Parquet, and a lot of people will; it's not there today, the tooling is a little bit too painful for the observability use case. For example, the Iceberg schema is only recently evolving to have a variant type, a less rigid schema, right? And in a lot of observability use cases, oh, I want to have custom fields, and Iceberg was a little bit painful with that. And then there are the other people who say, oh, I actually own the data format; but then you are still obliged to sometimes provide SQL access or Spark access. You can do whatever, but people still want access to that underlying data platform.

Mirko Novakovic: Okay. And most of them, I mean, I think ClickHouse, Spark, Snowflake, they're now supporting Iceberg.

Jacek Migdał: So yeah.

Mirko Novakovic: So there's probably a migration path. Or does the query change with that, and Quesma could help with that?

Jacek Migdał: Yeah, I believe the query is changing. And I believe Iceberg, for example, has a huge issue with real-time data. Okay, you create files, but it has optimistic updates, so with the default implementation you have data 5 or 10 minutes behind, because you create those blobs, right? And that is not really good for observability, where you have a system down and those ten minutes may actually be the most important ten minutes in your data set. So I believe Iceberg seems to be more of a cold storage; you still need some hot storage, even just for the last 15 minutes. So sometimes you will need some kind of data stitching business.

[00:29:39] Chapter 13: Cold vs. Hot Data Management

Mirko Novakovic: And the hot storage would then not be in Iceberg format. It would be like a caching technology?

Jacek Migdał: Yeah, it may be that, natively, your database will take data from Iceberg but also keep another Parquet file, or recent data from somewhere else.
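A minimal sketch of that stitching, assuming a hot table for recent data and an Iceberg-backed table for history; the table names, the 15-minute boundary, and the SQL dialect are all illustrative assumptions:

```sql
-- Serve the last 15 minutes from the hot store and everything
-- older from the Iceberg-backed cold store, stitched into one result:
SELECT timestamp, service, message
FROM hot_logs
WHERE timestamp >= now() - INTERVAL 15 MINUTE

UNION ALL

SELECT timestamp, service, message
FROM iceberg_logs
WHERE timestamp < now() - INTERVAL 15 MINUTE;
```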

Mirko Novakovic: We did that at Instana, and also at Dash0. We have some data, the last, I don't know exactly, 12 to 24 hours, basically on very immediate storage, where we can access it quickly. Because that's where, as you said, I have a problem, I have an incident, I want access to the data in real time. And after those 24 hours, we put the data on a, yeah, not cold, but colder storage, which probably takes a little bit longer.

Jacek Migdał: Exactly. And it's very common: if you look at so many observability use cases, the last 24 hours is like 99% of queries. All of your alerting, all of your security SIEM business is just within that window. You still want to access what happened when you do a postmortem, but it's not something that you are constantly willing to pay a lot of money for. It's more like: okay, if an investigation comes, you spin up some extra clusters, but the moment it stops, you want to tear them down. While the load for the last hours tends to be quite consistent, because there are always some alerting checks going on.

Mirko Novakovic: Great. And because we are coming to an end here: what are the next steps for you and your team? What do you want to build? Where do you see Quesma in one, two, three years?

[00:31:13] Chapter 14: Future of Quesma

Jacek Migdał: We started with one use case, and I wish we would evolve to more use cases. That's what we are hearing: okay, nice that you do a one-time migration, but this seems to be a narrow problem. I wish we would have many use cases that get market traction. I also believe these days interoperability is kind of table stakes; it's a good enterprise-motion business, mostly big companies. I believe we should also have some features that just give better insights. And it would be my dream to also build some protocol, something where, yes, there will be an open source part, but also a way for people to build on top of it: oh, I'm just providing threat intelligence; or maybe companies could share how they debug outages, so we could have some common data sets for AI agents. I believe a lot of innovation like that is possible, but we just lack the protocol, or some standard, for how to make it happen.

[00:32:16] Chapter 15: Conclusion

Mirko Novakovic: Yeah, sounds good. I really enjoyed the conversation, because it was a real deep dive into a certain part of this whole technology stack, and you are very much into the details. I love it, I learned a lot. Jacek, thank you for the conversation.

Jacek Migdał: Thank you so much, Mirko, for your time. It was a pleasure to talk to you.

Mirko Novakovic: Thanks for listening. I'm always sharing new insights and insider knowledge about observability on LinkedIn. You can follow me there for more. The podcast is produced by Dash0. We make observability easy for every developer.
