AWS Bites Podcast

26. What can you do with Kafka on AWS?

Published 2022-03-03 - Listen on your favourite podcast player

Luciano and Eoin explore the wonderful world of data streaming using Kafka on AWS. In this episode we focus mainly on Managed Streaming for Kafka (or MSK) and discuss the main differences between MSK and Kinesis. We also explore the main features that MSK provides, its scaling characteristics, its pricing and, finally, how MSK works in conjunction with other AWS services.

We conclude the episode by providing a decision tree that should help you to decide whether you should use Kinesis or MSK or avoid streaming services entirely in favor of something like SNS or SQS.

In this episode we mentioned the following resources:

Let's talk!

Do you agree with our opinions? Do you have interesting AWS questions you'd like us to chat about? Leave a comment on YouTube or connect with us on Twitter: @eoins, @loige.

Help us to make this transcription better! If you find an error, please submit a PR with your corrections.

Eoin: Hello, today we are going to answer the question, what can you do with Kafka on AWS? We're gonna take you through managed streaming for Kafka or MSK and the main differences between MSK and Kinesis. We're also gonna talk about all the features MSK provides and the advantages over other Kafka options. We'll talk about scaling characteristics, pricing, and then how MSK works with a lot of other AWS services.

My name is Eoin, I'm here with Luciano and this is the AWS Bites podcast. This is the final episode in our AWS event series. So last time we talked about Kinesis and streaming data, and we're continuing to talk about streaming data today with Kafka. So I think, Luciano, last time we said that streaming is all about processing batches of messages that are retained in a stream for a longer period of time so you can replay them.

And it's good for a lot of use cases. Like we talked about real-time analytics, stream processing, cool things like event sourcing and then audit logs, that kind of thing. I guess Kafka became very popular for microservices communication as well, because it has such low latency, good delivery guarantees and now a really rich ecosystem. One thing to say before we get into the details is that we don't have as much experience with Kafka as we do with the other services we've talked about in this series. We've both used it in the past, but a lot of the features we're gonna talk about today, particularly around MSK, aren't things we've used in production. So we have done the research and evaluated MSK in various different ways, but we're really interested if you have any hands-on experience with Kafka and MSK. If you want to share your thoughts and opinions on how it compares to the alternatives, please reach out.

Luciano: Yeah, that'd be awesome.

Eoin: What are the options? Because this is an AWS podcast we're mainly gonna focus on MSK, but there are other options out there for cloud-based Kafka if you don't want to manage it yourself, is that right?

Luciano: I know of at least two. One is Confluent Cloud, which was historically probably the first managed Kafka service of this kind, and Confluent are the experts in this market. They're all about Kafka, they build a number of plugins, they contribute to the project itself, so they definitely know their stuff. But there is also a very new one from Upstash. We mentioned Upstash previously regarding their serverless offering for Redis, and they recently launched an MSK equivalent as well, let's say, so managed Kafka on Upstash's servers. So you might want to look at these other alternatives; maybe they have a different feature set, maybe different pricing. So if you're looking for managed Kafka, don't limit yourself to AWS. Yeah, do we want to have a quick walk through the features of Kafka?

Eoin: Yeah, let's do that. So I think with Kinesis you've got, obviously the AWS API, the SDK for putting messages. We talked about all that the last time around. And I know that Kafka has like a producer API, which does what you would expect. It's for producing messages and a consumer API, right? So those are fairly similar concepts to what we talked about with Kinesis, but it's got some other APIs as well. What are those?

Luciano: Yeah, there is a Streams API, which is like a consumer on steroids and allows you to build processing pipelines in real time where you can do aggregation, filtering and transformation. This is probably an alternative to Apache Flink. I'm not really sure how it compares pound for pound to Flink, but it seems that there is a good overlap in the feature set and the things that you can do with it.

Then there is also a Connect API, which is a kind of simplified way to put data into your Kafka topics, or to read and consume that data from the topics and maybe move it somewhere else. Examples are, for instance, getting data from S3 or writing data from Kafka to S3, or integrations with Elasticsearch, maybe for implementing search features, or Debezium, which is a change data capture system that allows you to store all the change logs from your databases in Kafka. Then you can build event-based systems from the changes happening in your databases.
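
To make the producer and consumer APIs mentioned above a little more concrete, here is a minimal sketch using the kafka-python client. The broker address, topic name and payload are hypothetical placeholders, not anything from the episode:

```python
# Minimal producer/consumer sketch with kafka-python (placeholders throughout).
from kafka import KafkaProducer, KafkaConsumer
import json

BOOTSTRAP_SERVERS = ["broker1.example.com:9092"]  # hypothetical broker endpoint
TOPIC = "orders"                                   # hypothetical topic name

# Producer API: publish JSON-encoded events to a topic.
producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP_SERVERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 123, "amount": 42.0})
producer.flush()  # block until buffered messages are delivered

# Consumer API: read events as part of a consumer group.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BOOTSTRAP_SERVERS,
    group_id="orders-processor",     # the consumer group tracks committed offsets
    auto_offset_reset="earliest",    # start from the beginning if no offset exists
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.topic, message.partition, message.offset, message.value)
```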

Eoin: Yeah, that's cool. So there's a lot more in terms of the rich feature set around Kafka than Kinesis, which is, I suppose, more of a single purpose streaming, right? It's just about producing and consuming events.

Luciano: Yeah, I think there is a little bit of an overlap with Kinesis Firehose, but Firehose only deals with moving the data somewhere else once it's already in your stream. Here you can also have data sources and let them push data into your streams. So I think it's a richer ecosystem with more use cases being supported.

Eoin: Yeah. And with Kafka, you have the admin API as well, which I suppose is worth mentioning, for creating topics and managing your cluster. Of course, with AWS and MSK, you can also use the AWS SDK and API for managing those things, but it doesn't allow you to create topics. That's something you would do with the Kafka API. I think Kafka is often closely associated with the enterprise Java ecosystem and the Java-based community, so there's a lot of Java-based and Scala-based libraries which provide really rich capabilities, a lot more than you would get with just the consumer and producer sending and receiving messages.
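
As a rough illustration of that split, here is a hedged sketch of creating a topic through Kafka's own admin API using kafka-python; the broker address, topic name, partition count and replication factor are illustrative assumptions:

```python
# Topic creation goes through Kafka's admin API, not the AWS MSK API.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers=["broker1.example.com:9092"])
admin.create_topics([
    NewTopic(
        name="orders",          # hypothetical topic name
        num_partitions=6,       # spread load across 6 partitions
        replication_factor=3,   # replicate each partition across 3 brokers
    )
])
admin.close()
```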

Luciano: Yeah, absolutely.

Eoin: I know you've used like the Node.js client and there's other APIs or software packages out there for working with Kafka. They're probably not as rich, I guess, as the Java ones, right?

Luciano: Yeah, Java, I think is kind of the default and the one that gets all the new shiny features before everyone else. But I suppose, depending on your use cases and the languages that you are using for your project, you'll find good enough clients for pretty much most of the mainstream programming languages. Yeah, probably maintained by the community, I guess, right?

Eoin: Rather than the Kafka core teams. So what about the different terms? I know Kinesis and Kafka tend to use different terms for the same thing. So what are the one-to-one mappings here?

Luciano: Yeah, that's an interesting topic, because if you are coming from Kinesis and looking at Kafka, or vice versa, coming from Kafka and looking at Kinesis, it might be a little bit confusing to get used to slightly different terminology for similar concepts. The first concept that exists only in Kafka is the idea of a broker, which is not really applicable to Kinesis Data Streams because that concept is totally abstracted away from you.

A broker is literally an instance that is part of your Kafka cluster, and we don't get to see that in Kinesis because AWS is hiding all of that complexity. Then we have the concept of a topic in Kafka, which is pretty much equivalent to a stream in Kinesis. So the idea of one logical stream where you put your data: you call that a topic in Kafka. Then we have the idea of a partition.

Again, partition is the Kafka term, but that's called a shard in Kinesis. So once you have a topic or a stream, how do you distribute the data in that topic across multiple instances? Then we have the concepts of producers and consumers, which surprisingly is the only terminology that matches in both systems. And finally we have offset, which is Kafka terminology, and the iterator is the equivalent in Kinesis. The idea is that as you consume the data, you are reading something like a transaction log, so you have a pointer that keeps track of where you are in reading all this data. The data is always coming in, so you're trying to catch up and process it in real time. That offset or iterator is what tells the whole system what to read next, basically.

Eoin: Interesting. And I know we were talking about watching the iterator age when you were talking about Kinesis. I think there's also like this offset lag metric in Kafka, which is, I guess, pretty much one-to-one.

Luciano: Yeah, probably it is, yeah. Okay, so should we maybe mention more differences with Kinesis? Is there something else that comes to mind for you?

Eoin: Yeah, I think when we were talking about this earlier, you made the point that comparing Kinesis and Kafka is like comparing SQS to RabbitMQ. One is much simpler, the other is much more feature rich. So Kafka has a lot of features and configuration options, but in exchange for that richer set of features, you get increased complexity, as you might expect. And you also talked about these brokers. So Kafka has this cluster provisioning model where you need to scale brokers and think about disk size and memory and network, all those wonderful things. You can create as many topics as you want, you just need to scale your resources accordingly, and you can also create lots of consumers. There's a whole complexity around managing cluster state as well, and I want to go into that. There's this whole duality with Kafka where you need to think about your Kafka configuration and your Zookeeper configuration. What's that all about?

Luciano: Yeah, so basically, because Kafka is of course a distributed system, it needs to replicate data across multiple nodes. You also have consumers, and the system needs to keep track of the state of each consumer. So there is a lot of information that is distributed and needs to be kept in sync across different nodes. And all of that, as with many other Apache projects, is managed by another system called Zookeeper.

And Zookeeper is something that needs to be provisioned in a multi-instance mode as well, and we need to make sure it's highly available and resilient, because of course the cluster is healthy only if Zookeeper is available all the time. So it's an additional piece of complexity that you get with Kafka. But the interesting thing is that in MSK, all this complexity is managed by AWS for you. And the pricing, this is actually the interesting bit: you don't pay any additional cost for Zookeeper.

So it's somewhat included in your MSK offering. AWS is kind of absorbing that cost for you, or abstracting it in different ways across the whole MSK offering. But it's not something you need to think about in terms of how many instances you're gonna use for Zookeeper, what size, and how that is gonna impact cost. It's not really affecting the cost scheme in MSK. An interesting thing is that there has been a long-running conversation in the Kafka community on whether they should eventually get rid of Zookeeper and have an internal mechanism to synchronize all this data. As far as I can tell from reading some blog posts, there has been a lot of progress, and since version 2.8, I think it starts to be feasible to run a Kafka cluster without Zookeeper at all. I don't think it's the recommended approach so far. And also in MSK, it's not really clear what happens if you use 2.8. I think it still uses Zookeeper, but you don't get a flag to choose whether to use Zookeeper or not.

Eoin: It doesn't matter, I guess, with MSK anyway, really, right? If it's all managed for you. Exactly. Yeah. Okay, cool. I know that, as with all AWS services, Kinesis uses HTTP, but Kafka has its own TCP-based protocol, and I guess some efficiency can come from that. There's also a difference in the delivery guarantees. We've talked a lot across this whole series about at-least-once processing and at-most-once processing.

Kafka is one of the rare things that actually has support for exactly-once delivery of messages. But I think this doesn't work for all consumers. You need to be really sure of what you're doing and understand how Kafka transactions work. But it is supported in things like Kafka Streams. So that can be important, right? If you don't want to have to build in idempotency and you really need those guarantees.
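
For anyone curious what that looks like in code, here is a hedged sketch of Kafka's transactional producer using the confluent-kafka Python client; the broker address, topic names and transactional.id are assumptions, and consumers would also need to read with read_committed isolation to get the full exactly-once behaviour:

```python
# Transactional producer sketch: either all messages in the transaction
# become visible, or none do. All connection details are placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "broker1.example.com:9092",
    "transactional.id": "orders-writer-1",  # a stable id enables transactions
})

producer.init_transactions()
producer.begin_transaction()
try:
    producer.produce("orders", key="123", value=b'{"order_id": 123}')
    producer.produce("order-audit", key="123", value=b'{"event": "created"}')
    producer.commit_transaction()  # both messages are exposed atomically
except Exception:
    producer.abort_transaction()   # neither message is exposed to consumers
    raise
```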

So the provisioning model then, we talked about brokers and everything. Kinesis uses throughput provisioning, we talked about that, and it's very clear: the number of shards, and a single shard has very clear throughput limits. If you want more throughput, you need more shards. But you have limits on the number of consumers then, right? Because if you've got a consumer, you can only read like a megabyte a second from that shard, and you've got these enhanced fan-out consumers to help a little bit. But with Kafka, you can really have as many consumers as you want, right? You just need to, again, make sure you've got the CPU, storage and partitions set up.

Luciano: Yeah, I think Kafka is a little bit more of a traditional way of thinking about a cloud service, where you have a set of instances that are taking the heat for everything you want to do with them. And you might have, I don't know, very many small topics and very few big topics, and maybe your cluster will be able to deal with all of them at the scale you need. So you don't really get to think in terms of topics, but more: is the system under stress? Is the CPU or the storage enough for my workload? So you need to look at all these metrics, rather than having a fixed unit and just scaling proportionally on that unit. So it can be much more complicated, but also, I suppose, more flexible if you have very diverse types of topics and very diverse throughput profiles across different topics.

Eoin: Okay, okay. And what about retention? Because with Kinesis they actually increased the maximum retention from seven days to one year not so long ago, but Kafka doesn't have any limit, right?

Luciano: Yeah, I suppose the idea with Kafka is, again, it's up to you to decide. If you have enough disk space, you can store the data as long as you want. There is no intrinsic limit after which the data is lost.

Eoin: Yeah, there's a clear benefit there if you're using it for event sourcing and you want to rebuild your state at any time in the future. Are there any other differences between Kinesis and Kafka that we should cover off?

Luciano: Yeah, an interesting one we mentioned a little bit already is that Kafka is an open source project that has been around for a long time. It was probably the first real project in this space, the one that then maybe inspired Kinesis and everything else. There is a lot of history there, and of course the ecosystem is really good and there are a lot of open source tools. For instance, you can find all sorts of different admin UIs that help you view the data in a cluster and understand what's going on. There are also tools that allow you to define a schema and validate that all the data you ingest into Kafka complies with the schema you defined, or to do schema discovery, where if you have been ingesting different messages, the tool will infer the schema from those messages and you can easily visualize it. All this kind of interesting stuff is available to you because there is an entire community building tools and products on top of Kafka and sharing what they learn.

Eoin: Okay, yeah, I guess that about covers it. In terms of other characteristics, I think they both have pretty low latency, you know, around 100 milliseconds, so they're pretty similar in that regard. So let's talk about how you get going with MSK and how you set it up. There are two modes we're gonna talk about, because we've got MSK as it has existed for a couple of years now, but we've also got the preview of MSK Serverless.

So for the brokers first, we talked about what you have to scale. When you set it up, you have to select your instance type, so you get a number of options there, and the minimum kind of production-level one is an m5.large. They do also offer a smaller one for development workloads, but generally, because it's a distributed system and it needs a quorum to make sure that state is reliable, you want three brokers minimum.

You probably wanna set up three brokers across three availability zones, so you might think about that. I'm not sure what the story is with inter-AZ traffic; usually that's something you have to pay for, so I would be watching that if I was running a cluster in AWS with a lot of traffic, and thinking about the cost there. And then you can set the number of brokers in each AZ, so you might end up with a six-broker setup by default.

You can probably get away with three. You then need to think about EBS volumes for your data storage, right? If you're gonna use very long retention, you need to know where that data is gonna be held. And of course, because it's an instance-based thing, you need to set up a VPC; it needs networking, security groups, private subnets. So there's all of that to set up. In terms of security then, you'll have to select which authentication mechanism you support, and it supports five options.

One of them is no authentication at all, so I can probably exclude that one: don't do that. But you have username and password authentication, using the SASL protocol, and as you would with RDS, you can put the username and password in Secrets Manager and MSK will use that; then you can use it from your clients. You can also use TLS authentication. And the interesting one with MSK, which is different to the other options, is that you can use IAM authentication. You can imagine AWS have patched Kafka to support IAM as an authentication mechanism. So that's setting it up. And then once you've set it up, you would create a topic. This is a slight difference, because with Kinesis you would configure a stream as an AWS resource. With MSK, you don't, right? You configure the cluster as the resource and then you use the Kafka API to create your topics. And when you create a topic, you can specify how many partitions it has and how many brokers you need to replicate it across. So how does that sound? There's quite a lot in there.
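
As a rough sketch of that provisioning flow, the snippet below creates a three-broker cluster with IAM authentication using boto3 and then fetches the bootstrap brokers. The subnets, security group, sizes and version are placeholders, and this is our reading of the MSK API rather than a definitive setup, so check the current documentation before relying on it:

```python
# Hedged sketch: provision an MSK cluster with boto3, assuming an existing
# VPC with three private subnets and a security group (all IDs are fake).
import boto3

kafka = boto3.client("kafka")

response = kafka.create_cluster(
    ClusterName="demo-cluster",
    KafkaVersion="2.8.1",
    NumberOfBrokerNodes=3,  # one broker per AZ as a minimum quorum
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": [
            "subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333",
        ],
        "SecurityGroups": ["sg-0123456789abcdef0"],
        "StorageInfo": {"EbsStorageInfo": {"VolumeSize": 100}},  # GiB per broker
    },
    ClientAuthentication={"Sasl": {"Iam": {"Enabled": True}}},  # IAM auth
)
cluster_arn = response["ClusterArn"]

# Once the cluster is ACTIVE, fetch the broker endpoints for your Kafka clients.
brokers = kafka.get_bootstrap_brokers(ClusterArn=cluster_arn)
print(brokers.get("BootstrapBrokerStringSaslIam"))
```

Topics would then be created against those bootstrap brokers with the Kafka admin API, as in the earlier sketch, not through the AWS API.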

Luciano: It feels, again, a little bit more traditional, like comparing RDS to DynamoDB, where with RDS you provision, say, an instance of Postgres, but you don't create any tables in it as AWS resources, right? Creating a table means you connect to the database, run SQL and create the tables. While in DynamoDB, when you decide to create a table, you are creating an AWS resource that represents that table. So I think it's a similar kind of mindset when it comes to comparing MSK with Kinesis.

Eoin: Yeah. So to make all this easier, last year we had an announcement that MSK Serverless was in preview mode, and it's still in preview. When I used MSK Serverless, it was only available in us-east-2 (Ohio), but hot off the press, it's now available in eu-west-1 (Dublin, Ireland) as well. So what kind of a difference do you think that will make, and how does it work compared to the laborious configuration we just talked through for the provisioned mode?

Luciano: Yeah, my expectation is that MSK Serverless will try to remove a lot of the concerns we just discussed: how do you even get started, what do you do before you can even create a topic? So I think it will give you a more immediate way of using and provisioning MSK that is probably similar to the user experience you get with Kinesis. And in fact, there is a very clear unit of scale, which is the write throughput. You also have limits that are more set in stone, because of course AWS will take on a lot of the work for you, so they need to work within certain limits. You have storage limits, I think it's 250 gigabytes per partition, one-day retention, and then a maximum of 120 partitions, I believe, which maybe can be increased. Correct me if I'm wrong.

Eoin: Yeah, I think this is probably just because it's in preview mode and they just put a cap on it, but yeah, you would expect all those limits to increase because they're not particularly high.

Luciano: Yeah, yeah, definitely. And then you have IAM authentication only. One interesting thing is that you might think: okay, I want to work with Kafka because this is what I'm using for my product, I'm migrating to AWS, and probably the safest bet is to start with MSK. Then, once you're doing okay, you might eventually want to go serverless because that will remove a lot of the complexity for you. What you can probably do in that case is start with MSK and then transition to MSK Serverless by migrating all your data. One of the most commonly used tools in Kafka for moving data across Kafka clusters is MirrorMaker, so you can probably use that to migrate your data from traditional MSK to MSK Serverless. Yeah, should we maybe talk about monitoring? What do you do once you have everything up and running? How do we make sure it's actually doing what we want and is healthy?

Eoin: Yeah, well, more configuration will inevitably mean more things to monitor, and that's something you'll get with Kafka. So you can configure the monitoring level with three different options: you can set it to be per broker, per topic, or per topic per partition. And that will give you somewhere between around 40 and 90 metrics to monitor with MSK, depending obviously on the monitoring level you have set.

So the fact that you have up to 90 metrics to monitor gives you some indication of the kind of infrastructure and maintenance complexity of a traditional Kafka setup. It's also worth mentioning that MSK has built-in support for open monitoring with Prometheus, which a lot of people will be using. And then in terms of logging, you can set up your broker logs to go to CloudWatch Logs, S3 or Kinesis Data Firehose.
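
As one hedged example of wiring those metrics into alarms, the sketch below puts a CloudWatch alarm on a broker disk metric. The namespace, metric name and dimension names shown here are our best understanding of the MSK defaults, so verify them against the metrics your cluster actually emits:

```python
# Alarm on one MSK broker disk metric via CloudWatch (names are assumptions).
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="msk-broker1-disk-used",
    Namespace="AWS/Kafka",                 # assumed MSK metric namespace
    MetricName="KafkaDataLogsDiskUsed",    # assumed per-broker disk metric
    Dimensions=[
        {"Name": "Cluster Name", "Value": "demo-cluster"},
        {"Name": "Broker ID", "Value": "1"},
    ],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80.0,                        # alert at 80% disk usage
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],  # placeholder SNS topic
)
```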

So obviously you want to create a lot of alarms and keep an eye on all those metrics. And there's also a lot of integrations with MSK; for something that's relatively new, the list of integrations is pretty impressive. I know that Kinesis Data Analytics doesn't just work with Kinesis Data Streams; you can use it with Kafka as well, so you can do stream processing there. If you don't want to use the Streams API in Kafka, you could use Flink on Kinesis Data Analytics, because that is essentially a managed Flink. You can also run Flink on EMR, so you can integrate your streams with EMR. And when you were talking about schema registry support around Kafka, I also noticed there's a product called the Glue Schema Registry, which is essentially a schema registry for real-time streaming. So you can have Avro schemas, JSON schemas and Protobuf schemas and enforce the data structure on the producer side and the consumer side using that.

Luciano: But I think the most interesting is probably Lambda, right? Lambda integration.

Eoin: Yeah, yeah. And again, this is something they've put a lot of work into, because they haven't just added Lambda integration for MSK, they've also added support for Lambda integration with your own self-managed Kafka. So you don't have to use MSK to integrate with Lambda. And it uses the same event source mapping feature that we talked about when we covered Kinesis and SQS. It supports MSK as well, but different options are supported depending on what your event source is.

We obviously talked about Kinesis and how you've got your shard-level throughput, but then if you want, say, 10 Lambdas processing messages from each shard, you can set this parallelization factor configuration and get more parallelism. You don't do that with MSK or with Kafka. Instead, you have to think about it a bit differently: by default, you just get one consumer for your MSK topic, and then Lambda can scale up based on the number of partitions in that topic.

But the maximum scaling is one consumer per partition per topic. So again, this is all very use-case specific and it depends on what your partitioning level is and your volume of messages, but it'll only scale up every three minutes. So it might not be reactive enough for your needs, which is a pity, because I think Lambda is the ideal consumer for these things. If you look at the Streams API or the Consumer API, you still have to have something running, something that's pretty stateful, to consume these messages. Whereas with Lambda, you could have a lot of power, but for the scalability, you just have to check if it's gonna work for your use case.
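
To illustrate the consumer side of that integration, here is a minimal sketch of a Lambda handler for an MSK event source mapping. The event shape (records keyed by topic-partition with base64-encoded values) reflects our understanding of the aws:kafka event format and should be checked against the Lambda documentation for your runtime:

```python
# Hedged sketch of a Lambda handler consuming a batch from an MSK trigger.
import base64
import json


def handler(event, context):
    # event["records"] is assumed to be a dict keyed by "topicName-partition".
    for topic_partition, records in event.get("records", {}).items():
        for record in records:
            payload = base64.b64decode(record["value"])  # values arrive base64-encoded
            message = json.loads(payload)
            print(topic_partition, record["offset"], message)
    return {"batch_size": sum(len(r) for r in event.get("records", {}).values())}
```

The mapping itself can be created with the Lambda API's create_event_source_mapping call, passing the MSK cluster ARN, the list of topics and a starting position.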

Luciano: Yeah, exactly. Especially if you don't have a continuous stream of data with more or less constant throughput, but a very bursty ingestion of data, you'll probably suffer because of these three-minute scaling increments, because it will go very slowly at the beginning.

Eoin: Yeah, and then as regards security between Lambda and Kafka, it's kind of different to what you'd expect. For authentication, you can use IAM authentication, but you can also use username and password authentication and TLS to integrate Lambda with your cluster, which is obviously required if you're not using MSK. You also need a VPC for your cluster, but your Lambda function doesn't have to run in that VPC.

So it's slightly different from the mental model you might imagine for this setup. Instead, you need to give the cluster's VPC access to the Lambda service and also the STS service. That usually means you either give it access to the internet with a NAT gateway, which might set off a few alarm bells based on our pricing episode a few months back, or you instead use VPC endpoints and create a direct route between your VPC and the Lambda service. So that's one thing you might not expect: you might think that just because the cluster itself runs in a VPC, the Lambda has to be in that VPC, but it doesn't. It kind of works the other way around. Yeah, that's interesting.
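
Here is a hedged sketch of that VPC endpoint approach, giving the cluster's VPC private access to the Lambda and STS services instead of routing through a NAT gateway. All IDs and the region in the service names are placeholders:

```python
# Create interface VPC endpoints for Lambda and STS in the cluster's VPC.
import boto3

ec2 = boto3.client("ec2")

for service in ("lambda", "sts"):
    ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",
        VpcEndpointType="Interface",
        ServiceName=f"com.amazonaws.eu-west-1.{service}",  # region is a placeholder
        SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,  # so the default service hostnames resolve privately
    )
```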

Luciano: I'm gonna talk very quickly about pricing. The first case is provisioned: you are of course paying for the number of brokers and the size that you pick for those brokers. For instance, if you go for an m5.large, that's about 24 cents per hour, and depending on how many you have, you can do the math and see how much you're gonna pay. But of course there is also storage in the equation.

So there is a certain amount of cost there, I think it's around 10 or 11 cents per gigabyte per month, and that will add to the total. If you go serverless, there are actually a bunch of dimensions that will influence your cost. There is a fee of $0.75 per cluster per hour, so you pay a certain amount per hour just by spinning up a cluster. Then, depending on the number of partitions you have, there is an additional fee per partition per hour.

And of course storage will add to the cost as well, based on the number of gigabytes per month. Then you also pay for data transfer, both data in and data out, per gigabyte. So just do the maths based on your use cases and try to figure out the pricing. The interesting thing that I guess we realized, and it might change depending on your use case, is that instinctively it looks like Kinesis is way cheaper for lower volumes, but MSK can be more cost-effective if you're really running many, many shards, if you're really running data processing at scale and ingesting a lot of data. So that can be interesting. Maybe for a startup you can start with Kinesis, because it's easier to start with and you probably don't have big volumes yet, but if you're running serious workloads, maybe the investment in Kafka could be worth it.
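
As a back-of-the-envelope illustration of that comparison, using the rough rates mentioned above (which vary by region and change over time, so treat them as placeholders rather than a quote):

```python
# Rough MSK cost estimate using the approximate rates from the episode.
BROKER_HOURLY = 0.24          # kafka.m5.large, USD per hour (approximate)
STORAGE_GB_MONTH = 0.11       # EBS storage for MSK, USD per GB-month (approximate)
SERVERLESS_CLUSTER_HOURLY = 0.75  # MSK Serverless base fee, USD per cluster-hour
HOURS_PER_MONTH = 730


def msk_provisioned_monthly(brokers: int, storage_gb_per_broker: int) -> float:
    compute = brokers * BROKER_HOURLY * HOURS_PER_MONTH
    storage = brokers * storage_gb_per_broker * STORAGE_GB_MONTH
    return compute + storage


# e.g. the three-broker minimum with 100 GB per broker:
print(f"provisioned: ~${msk_provisioned_monthly(3, 100):,.0f}/month")   # roughly $560
# Serverless base fee alone, before partition, storage and transfer charges:
print(f"serverless base: ~${SERVERLESS_CLUSTER_HOURLY * HOURS_PER_MONTH:,.0f}/month")
```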

Eoin: Okay, so it's another serverless option that doesn't scale to zero in terms of pricing, which is a bit of a pity. So that's maybe something we can hope for at some point in the future.

Luciano: Should we close this episode by trying to recap why you would use MSK over Kinesis or vice versa?

Eoin: What's the decision tree? Yeah, exactly.

Luciano: So yeah, I'm gonna say that first of all, when you have a large number of consumers, probably MSK is a better solution because you can be more flexible there. Also, if you need Kafka Streams or Kafka Connect because you already built a solution that uses those technologies, or maybe you have expertise with those technologies and you want to leverage that expertise, of course, again, MSK is an obvious winner there.

And similarly, if you just have experience with, or prefer to work with, a technology like Kafka because it's open source and you can easily port it to other cloud providers, again, that's a winner over Kinesis. But of course there are disadvantages. Like we already mentioned, there is more complexity, so you need to take that into account. Also, MSK, so Kafka on AWS, is a relatively new service.

So sometimes you might struggle to find documentation or examples, so that's also something to keep in mind. And it's less serverless, because you need to think about, as we said, VPCs and EC2, rather than just thinking about how much you are scaling: give me one dimension of scale and everything else will be managed for me. Here you really need to think about many concerns and many metrics, so everything becomes more complicated.

And one final remark from me is that, again, you might not even need stream processing at all, so always keep that in mind. Sometimes you can go a long way with just SQS, SNS or EventBridge. If you don't need that level of complexity at all, then whether it's Kinesis or Kafka doesn't really matter, and you can go with SQS, SNS and EventBridge and probably make your life easier. Those services also scale to zero, so you could save a lot of money by using them as an alternative. So try to really nail down your use cases and the technology that fits them.

Eoin: Yeah, yeah. Plus one from me for starting with SQS and EventBridge, for example; you can get a lot done and you can always migrate. So to conclude then, we have a few resources. We've collected a few links here, which we'll put in the show notes, just some really good ones. One of the interesting things you might find is that Amazon actually provide a pricing spreadsheet, an Excel spreadsheet that you can use for pricing your MSK cluster. We have a link to that, and we have some pretty good talks. We've got one from Amazon on whether your startup should use Kinesis or MSK, and even if you're not in a startup, it's a useful comparison between the two. We've got another intro-to-MSK talk from way back at its launch in 2018. And you wanted to highlight the Frank Munz talk as well, right Luciano?

Luciano: Yeah, I really like it because it's not just an intro with a demo on how to use MSK; there is also a little bit of a preamble that gives you a lot of insight about why you would need stream processing, the value of data over time, and the value you can extract if you are able to make sense of that data as soon as it's available in your system. So I really like it not just as a technical introduction to MSK, but also as a way to reason about whether you really need that type of capability or not and what kind of advantages you could get from it.

Eoin: Okay, maybe the last one we can mention to wrap up is another really useful versus comparison, which the Cloudonaut guys do pretty regularly. They've got a comparison table, but also an episode comparing Kinesis and MSK, which we reckon you should check out. It's got a demo of MSK in there as well. I think this wraps up not just this episode, but the whole series on AWS event services.

And next time we'll be back with something completely different. We've really appreciated all the great feedback we've got on this series; it's actually helped to change how we present these episodes. I know some of them have been longer than previous ones, but hopefully it's been worth it. We're really interested in your feedback on whether we should make shorter or longer episodes in the future and what topics you want us to cover. So thanks for all your feedback, we really appreciate it, and we'll see you in the next episode.