Help us to make this transcription better! If you find an error, please
submit a PR with your corrections.
Eoin: AWS has recently launched LLRT, the low latency runtime, a new experimental Lambda runtime for JavaScript. Now, you might be thinking one of two things, either this is amazing, we've got a new runtime for JavaScript, it's going to be faster and cheaper than the existing ones, I'm going to rewrite all of my Lambda functions right now. On the other hand, you might be thinking, oh, no, didn't we just stop publishing new JavaScript frameworks every week, only to start publishing new JavaScript runtimes every week?
Or maybe you're just somewhere in between. So if you're curious today, we're going to give you our perspective about LLRT. There's a lot to talk about with LLRT. There's a lot to love about it. But there are also some concerns that are worth highlighting. And we'll try to describe these in more detail and talk about what LLRT is, how it works, and what the specific problem is that it's trying to solve.
So let's get into it. My name is Eoin, and I'm here with Luciano for another episode of the AWS Bites podcast. AWS Bites is brought to you by fourTheorem, the AWS consulting partner with lots of experience with AWS, serverless and Lambda. If you're looking for a partner that can help you deliver your next serverless workload successfully, look no more and reach out to us at fourtheorem.com. Just to set the stage, let's just do a quick overview of the AWS Lambda service and talk again about what a runtime is.
Lambda is a serverless compute service in the category of functions as a service. You can write your code in the form of a function that can respond to specific events, and AWS will take care of provisioning all the necessary infrastructure to run that function when the event happens.
Lambda supports a lot of different programming languages, and it does that using the concept of runtimes. And every language and language version has a dedicated runtime. And this is logic that AWS maintains for specific languages to bootstrap your Lambda function, orchestrate events and responses, and call your code in between. A Lambda runtime also includes the specific runtime binary, Node.js, Python, et cetera.
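To make that concrete, here is a minimal sketch of the loop that a runtime, official or custom, runs against the documented Lambda Runtime API; the handler import is hypothetical and error reporting is left out:

```javascript
// Minimal sketch of a runtime's bootstrap loop against the Lambda Runtime API.
// Assumes Node.js 18+ (global fetch); './handler.mjs' is a hypothetical user
// module, and reporting to the /invocation/{id}/error endpoint is omitted.
import { handler } from './handler.mjs'

const api = `http://${process.env.AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime`

while (true) {
  // Long-poll for the next invocation event
  const next = await fetch(`${api}/invocation/next`)
  const requestId = next.headers.get('lambda-runtime-aws-request-id')
  const event = await next.json()

  // Call the user's code and post the result back to the service
  const result = await handler(event)
  await fetch(`${api}/invocation/${requestId}/response`, {
    method: 'POST',
    body: JSON.stringify(result)
  })
}
```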
For example, with the Node.js one, you'll get the Node.js binary and all the system libraries it needs as well. Now, it is possible to build custom runtimes, for instance, to support more esoteric languages or specific language versions that are not officially supported. AWS itself uses custom runtimes to provide support for compiled languages such as C++, Go, and Rust. So this should give you a reasonable base to understand more about LLRT as we go on and have this discussion. But if you're curious to know more about Lambda runtimes and how they work, and even how to build your own custom runtime, we have a dedicated episode for that, and that's episode 104. The link will be in the show notes. So given that context, I know you've been looking into LLRT in some more detail, Luciano. What have you found out?
Luciano: Yeah, I think a great place to start is the LLRT repository, and we'll have the link in the show notes, because it gives, I think, a very good introduction to what this runtime is about, why it exists, and a bunch of other interesting things that we are going to try to cover today. So first thing is that this is a JavaScript runtime that is built specifically for Lambda. So it doesn't try to compete with the likes of Node.js, Deno, or Bun, which are much more general purpose.
So this is kind of a very important premise, because some of the design trade-offs make a lot of sense looking at it from this perspective, that it's not competing with all the other ones. It's something very, very specific that makes sense in the context of Lambda. So the first trade-off is that it tries to be very lightweight, which means that the final runtime package that you get should be as small as possible, generally in the order of kilobytes rather than in the order of megabytes. With Node.js, Deno, or Bun you will have 20, 30, 60, 80 megabytes of runtime itself, rather than the few kilobytes that you get, for instance, with LLRT.
Now, why is this important in the context of Lambda? I think we need to remember that Lambda is a very dynamic environment. As you described very well, instances are started only on demand and shut down when not needed anymore. So AWS is going to be provisioning all these necessary resources all the time, bootstrapping and killing those, depending on requests arriving into our account. So it is very important that AWS can do all of that as quick as possible, because every time that you are starting a new instance of a Lambda, the whole process of bootstrapping the infrastructure is called cold start, and it's something that's going to affect the latency of your application.
So the choice of runtime is something that is very relevant when we discuss how to improve cold starts. And the bigger the runtime package, of course, the more time is required for AWS to download all the necessary files and load them into memory. So the bigger the runtime, most likely, the longer the cold start is going to be. So the choice of trying to make the runtime as small as possible is, of course, something that tries to reduce the cold start, which is one of the biggest problems that people always bring up when we talk about problems with Lambda and serverless in general.
So this is definitely a step in the right direction in that sense, and it's a trade-off that makes a lot of sense. Another interesting aspect is that it is built using Rust and QuickJS as the JavaScript engine, and these are two very interesting choices. So I'm going to try to give you a little bit more detail about both of them. Rust is actually not too unusual, because if we look, for instance, at Deno, it's also built in Rust, but if we also look at Node.js, it's written in C++, which is somewhat similar to Rust in terms of most of the trade-offs that the language takes.
And very similarly, if we look at Bun, it's written in Zig, which is another alternative to C++ and Rust. So in that sense, it's nothing special, I guess, but it's still important to try to understand what Rust brings to the table in this particular case. And the first one is that Rust is a language that is built for performance and memory efficiency, and these two dimensions are very, very important in the context of Lambda, because, yes, on one side, you might argue that nobody likes memory-hungry software or slow software, but in the context of Lambda, this is even more important, because these are the two dimensions that are going to affect price.
And it's worth remembering that with Lambda, you pay a unit price that depends on how much memory you allocate for your Lambda function, and then you multiply that unit price by the number of milliseconds used by your Lambda while doing something useful. So while your Lambda is running, you take the number of milliseconds and multiply it by the amount of memory that you have pre-allocated for that Lambda.
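As a rough back-of-the-envelope sketch of that model (the per-GB-second price below is purely illustrative, and request charges and the free tier are ignored):

```javascript
// Rough illustration of the GB-second pricing model described above.
// The price constant is an assumption for illustration only; check the AWS
// pricing page for current figures in your region and architecture.
const PRICE_PER_GB_SECOND = 0.0000166667

function estimateComputeCost(memoryMb, avgDurationMs, invocations) {
  const gbSeconds = (memoryMb / 1024) * (avgDurationMs / 1000) * invocations
  return gbSeconds * PRICE_PER_GB_SECOND
}

// e.g. 128 MB functions averaging 100 ms, invoked 10 million times
console.log(estimateComputeCost(128, 100, 10_000_000)) // ≈ 2.08 (dollars)
```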
So of course, if you can keep the memory footprint very low, and you can be still very, very fast at doing the execution, that means that you are going to be using Lambda in the most effective way from a pricing perspective. So your CFO is probably going to be very thankful, looking at the bill and checking that there was maybe a quite significant reduction in cost when it comes to the Lambda item in the bill.
So faster startup, by the way, is not only to be seen from the perspective of price, which is important, but I think there is another very important aspect, which is power consumption. This is something we are becoming more and more aware of in the industry. Probably we should do even more. We are still at the very beginning of the conversations. But I think it's important to realize that everything we run in the cloud has a cost not just from an economic perspective, but also in terms of environment and sustainability.
So we need to be very mindful that we might be able to do something to reduce that kind of footprint. And every time we have the chance, we should probably take it, because it's something that we will eventually need to care about and be more responsible for. So it's important to see that perspective as well. And having a runtime that can give us very, very efficient compute is something that goes in the right direction in that sense.
And to be fair, serverless is also a very sustainable technology in general. So if we can make it even more sustainable, it's another win that we take from this particular set of trade-offs. Now, it's also worth mentioning that the idea of using Rust or C in order to make code more sustainable is generally kind of a double-edged sword. On one side, you do get the benefit of becoming more sustainable.
But on the other side, there is a huge investment in terms of teams having to learn these technologies, especially if you have teams that are more versed with technology such as Python or JavaScript. That's going to become a very big investment to do. So here, there is an even more interesting trade-off because the promise is that you don't need to learn a new low-level language like C, C++, Rust, or Go.
You can stick with JavaScript, which is probably something much more well-known in the industry, and still get a very good trade-off, with very good performance and energy efficiency. So this is definitely one of the areas where LLRT shines with a very interesting approach. Now, speaking about QuickJS, this is quite a novelty in the JavaScript runtime space. We have a link to the QuickJS website where you can find a bunch of details.
And it's probably worth looking into it if you've never heard about QuickJS. But I'm going to try to explain very quickly what it is and what kind of trade-offs it provides. So QuickJS basically implements JavaScript, meaning that it's able to interpret and execute JavaScript code, and it does it in such a way that it's almost like a library that you can take and embed in other programs. So it doesn't really give you any core library, so to speak.
It's just able to understand the JavaScript syntax and execute it correctly. And this is something that every JavaScript runtime needs in one way or another, but the big ones, Node.js, Deno, and Bun, none of them use QuickJS. In fact, Node.js and Deno both use V8, which is the Google Chrome JavaScript engine, while Bun uses JavaScriptCore, which comes from WebKit, the Apple project used in Safari.
So QuickJS is somewhat novel in the space of JavaScript runtimes, and the reason why I believe it's being used here is, again, because it tries to fulfill that promise of being as small as possible in terms of footprint and as easy as possible to embed in an application. It's also quite modern and feature complete. In fact, it already supports ECMAScript 2023, including ECMAScript modules and other advanced features like async generators, Proxy, and BigInt, and there are even extensions for things that are not in the ECMAScript specification yet.
Another interesting trade-off is that it doesn't have a just-in-time compiler, and this might seem like a negative thing, because I think all the modern runtimes are expected to have a just-in-time compiler, and it's generally something that helps a lot with performance, but I think it's important here to understand the trade-off. So let's try to explain quickly what a just-in-time compiler is. Generally, with interpreted languages, what you do is, as you scan the code, you try to evaluate it, and that's basically running the program.
And of course, this is not going to be extremely efficient, because one of the trade-offs of dynamic languages is that you don't necessarily have strict typing, so the runtime needs to make a lot of assumptions to be as generic as possible and to support a wide range of dynamic functionality. So generally speaking, interpreted languages will at some point introduce a just-in-time compiler that, as it reads and processes the code, tries to figure out the patterns, generates much more optimized machine code on the fly, and starts to swap out parts of your scripting language with actual compiled code that can run much faster on your specific architecture.
Now, while this is very good in the long term, when you have computation that needs to run for a long time, in the context of serverless, where you're trying to optimize for small event-driven pieces of computation, sometimes it's a little bit of a waste to do all of this optimization just to shut down your computation after a few seconds, or even milliseconds in most of the cases.
So here it's a very interesting trade-off, because we are giving up on that just-in-time capability knowing that most of the time we are going to prefer very small and fast Lambdas that do something very quickly, mostly glue logic, and therefore we don't necessarily need that level of optimization, which comes with a bit of an upfront price that you have to pay to do all of that compilation. So I think this is something that makes a lot of sense in the context of LLRT, but I guess we can start to discuss how much performance we are really talking about. Can we figure out what are some numbers or maybe some comparison with Node.js?
Eoin: Well, we haven't had the chance to try it ourselves in any great detail, but there is an interesting benchmark in the LLRT repository, and it's based on a fairly simple Lambda function that puts a record into a DynamoDB table. So even though it's minimal, there's a bit more realism to it than the usual hello world style benchmarks, and it compares the performance of running this function on an ARM architecture, so a Graviton-based Lambda with 128 megabytes of allocated memory, and the other side of the comparison is Node 20.
The results are presented with P100 and P99, so you can see the maximum cold start time and the maximum run time, as well as P50, the 50th percentile, and we can see that for the 95th percentile with LLRT, you're getting 76 millisecond cold starts, which is pretty good. On Node.js 20, they're reporting around 1600 milliseconds of cold start time for 95% of the cases, and then warm start executions are looking at 33 milliseconds for this function with LLRT, compared to just over 100 milliseconds with Node 20.
So the full tables and set of benchmarks are available in the repository. It's kind of interesting that it's only comparing ARM, and it's only using Node 20. I think it would be great to have a more comprehensive set of benchmarks, but in general, what this is showing is that in this permutation, at least, LLRT is noticeably faster than Node 20, particularly when it comes to cold starts. There's another very well-known benchmark, which we've mentioned, I think, before on a few episodes, that tries to compare the cold start, memory footprint, and execution latency of different runtimes, and they recently added support for LLRT in their test suite.
LLRT scores very well in most configurations there, and it's generally the third fastest runtime behind C++ and Rust. It's even faster than Golang in this case. Of course, you have to bear in mind that C++ and Rust are very mature ecosystems, and comparatively Go is as well, while this is still an experimental beta product. In the benchmark, we can also see the difference in memory usage, and if we compare LLRT to Node 20, we have 24 megabytes versus 63 megabytes, so it's about a third of the memory needed for the same Lambda function.
If the performance is the same, it might mean that you can reduce your memory allocation and save cost even further. So this seems pretty exciting, and I've been using Node.js for a long time, so the idea of this kind of explosion in runtimes is a little bit exhausting to think about, to be honest, because so much investment has gone into Node.js, into JITs, into optimizing. I mean, whenever I hear people from V8 team or the Node team talking about the amount of effort they put into optimization of single functions and single libraries, I think, how can these runtimes ever get that same level of maturity? But maybe if they focus on a specific problem, maybe there is a use case where we should be thinking about them. So, Luciano, you're a Node.js aficionado. How does this make you feel? Does it make you think that you should use LLRT for every single Lambda function now, or where do you stand?
Luciano: Yeah, I think that's a great question, and it's a bit difficult to give you a 100% answer. I think we will see what happens to the project as we go, but as it stands today, there are a few things to be a little bit concerned about. First of all, the project itself is labeled as experimental, and we don't know exactly what that really means, but we can make some assumptions and also try to interpret what we can see in the repository.
So the repository marks the release as beta. So, again, not really indicative of any kind of promise, but it gives us a first idea that this is not something we can consider stable right now. So maybe let's not use it for everything we have in production just now. Maybe let's wait to see when it becomes a little bit more stable in that sense. Also, the repo says that it is subject to change, and it is intended only for evaluation purposes.
So, again, don't use it for your most important production workload. Maybe if you have a secondary workload, something that is a little bit less critical to your business, that could be one way of approaching it, but definitely don't use it for the most business-critical case that you have, because you might have unexpected surprises. And I think there is, in general, no guarantee that AWS or the current maintainers are going to invest more in this project as it stands today, and even if they do, maybe they will change everything, or they will change a significant amount of the code base, which might require you to do a significant amount of change on your side if you want to keep using the project.
So that's definitely something to keep in mind as a starting point. There is another problem that is also very important, which is that this project is not Node.js. So it's not packaging Node.js in a smarter way. It's just a totally different implementation of a JavaScript runtime. And the reason why this is important is that on one side, it doesn't come with all the baggage of Node.js, and this is why it can be very fast and very performant, as we described, but on the other hand, it doesn't have all the ecosystem of libraries that Node.js has, which has been built up over, I think, almost 15 years at this point.
So what that means is that you don't have the full Node.js standard library at your disposal, and that means that you might have problems with some of your code. Even if you're using third-party libraries, those third-party libraries might rely on some functionality that exists in the standard library of Node.js that doesn't exist in LLRT yet. And when I say yet, it doesn't mean that there is a promise that eventually LLRT is going to have feature parity with Node.js.
Actually, if you look at the readme, they state very clearly that this is not a goal. They are not going to try to compete for feature parity with Node.js. They have some degree of support, but there is no promise that they will try to improve the percentage of coverage in that sense. So I guess for the foreseeable future, we only have a partial implementation of the Node.js standard library, and another thing to keep in mind is that even for that implementation, there is no guarantee that it matches 100% the same level of functionality that we have in Node.js.
You might have surprises, for instance, subtle differences in how certain APIs actually work in certain edge cases, and that means that you need to be very careful to test all the code you write specifically in the context of LLRT, and not just run your tests with Node.js and assume that everything is going to work as expected when you package it for LLRT. Now, speaking of libraries, you might think, what about the AWS SDK, right?
Because most likely, this is the main library that you will need to use in a Lambda. And actually, interestingly enough, this runtime comes with many AWS SDK clients already baked into the runtime. There is a list on the repository. Last time we counted, there were 19 clients supported, plus the Smithy libraries from AWS. So if you need to use one of these 19 clients or the Smithy libraries, you don't need to install them yourself.
Those are already prepackaged in the runtime. And actually, the repository goes as far as saying that it's not the standard package itself, the one that you would get from npm, because there are extra optimizations that the authors have put in place, replacing some of the JavaScript code that exists in the standard version of the library with some native code, supposedly Rust, I imagine. So I guess that could give you an extra boost in performance when you use these libraries.
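For example, a handler can import one of the bundled clients as if it were the regular npm package. A minimal sketch, assuming the bundled client mirrors the standard @aws-sdk/client-dynamodb API; the TABLE_NAME environment variable is hypothetical:

```javascript
// Minimal sketch: using a DynamoDB client that LLRT ships in the runtime.
// Assumes the bundled client matches the standard @aws-sdk/client-dynamodb API;
// TABLE_NAME is a hypothetical environment variable.
import { DynamoDBClient, PutItemCommand } from '@aws-sdk/client-dynamodb'

const client = new DynamoDBClient({})

export const handler = async (event) => {
  await client.send(new PutItemCommand({
    TableName: process.env.TABLE_NAME,
    Item: {
      id: { S: `${Date.now()}` },           // simple id for illustration
      payload: { S: JSON.stringify(event) }
    }
  }))
  return { statusCode: 200 }
}
```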
Now, they also say that not all the methods are supported. For instance, if you try to get a stream from a response coming from the SDK, maybe... I haven't tested this very thoroughly, but I imagine if you're trying to read a big file from S3, that might be a little bit of a problem if you cannot really stream that output into your program and you need to buffer all the data into memory before you can actually access it.
I'm not really sure if this use case is supported or not, but there might be similar cases like that where not being able to stream the response coming from the SDK might become a limitation in terms of the memory usage, depending on your use cases. So again, it might work in most cases. It might actually be even faster in some cases, but you have to be really careful testing all the use cases that you have in production.
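To illustrate the kind of pattern to watch out for, here is a sketch that buffers an S3 object entirely into memory, assuming the standard @aws-sdk/client-s3 response helpers are available; the bucket and key are hypothetical, and whether a true streaming alternative works on LLRT is exactly what you would need to verify:

```javascript
// Sketch: reading an S3 object by buffering it fully into memory.
// Assumes the bundled client behaves like @aws-sdk/client-s3 and that
// Body.transformToString() is available; bucket and key are hypothetical.
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3'

const s3 = new S3Client({})

export const handler = async () => {
  const { Body } = await s3.send(new GetObjectCommand({
    Bucket: 'my-example-bucket',
    Key: 'reports/latest.json'
  }))

  // Buffering works everywhere, but memory grows with object size.
  // With the regular Node.js runtime you could consume Body as a stream
  // instead; on LLRT, test whether that is supported for your use case.
  const text = await Body.transformToString()
  return JSON.parse(text)
}
```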
Now, last thing, what about tooling? Because this is always the main thing when it comes to new programming ecosystems. It takes a while before the tooling is good enough for you as a developer to have a very good experience and be productive. So what is the starting point that we get here? It's actually not too bad, even though we haven't played enough with it to be confident in saying that. But just looking at it and just playing with it a little bit, there are a few things in place that are already quite useful.
For instance, there is a Lambda emulator that you can use to actually test the runtime locally. So all the code that you write, you can immediately execute it locally and see if it's performing and behaving exactly as you expect, which is great because it kind of reduces the feedback cycle of always having to ship to AWS to be sure that your code is actually working as expected. There is also a tool that allows you to package all your code together with the runtime into a single binary.
So you are effectively building a custom runtime that includes not just the runtime, but also all your code, in one binary. And this is actually the preferred and recommended approach to deploy Lambdas written using this runtime. And the reason why this is convenient is because it's more likely to impact performance positively, since the Lambda service needs to load only one file and then everything is already in place and ready to start.
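Whichever deployment route you choose, your own JavaScript typically gets bundled into a single ESM file first. Here is a hypothetical esbuild-based sketch of that step; the exact settings recommended in the LLRT README may differ, so treat this as an assumption:

```javascript
// build.mjs: hypothetical bundling step for an LLRT function using esbuild.
// Produces one self-contained ESM file and marks the AWS SDK and Smithy
// packages as external, since LLRT already bundles those clients.
import { build } from 'esbuild'

await build({
  entryPoints: ['src/index.js'],          // hypothetical entry point
  outfile: 'dist/index.mjs',
  bundle: true,
  minify: true,
  platform: 'node',
  format: 'esm',
  target: 'es2020',
  external: ['@aws-sdk/*', '@smithy/*']   // rely on the clients baked into the runtime
})
```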
And finally, there is also a Lambda layer available, if you prefer to take a little bit more of an experimental approach where you say, okay, I'm just going to add this layer through the web console and play around with it this way. That could be another approach to start using LLRT and see what it looks like. Now, again, it's worth remembering that this is not an officially supported Lambda runtime, it's a custom runtime.
So what you deploy is effectively a custom runtime and you are responsible for it, meaning that if there is a new update or if there is a security concern and maybe you need to install something to patch a security issue, doing all of that work is on you. So you need to be ready to take over that additional burden that you don't have, for instance, when you use the official Node.js runtime. So what is our recommendation again?
Just to try to summarize all of that: I think this is a great initiative, so it is definitely worth playing with it and seeing what it looks like, and, for your specific use case, how much performance you can squeeze out of it. But again, because it's so early and experimental and it's not really clear what the future of this project is going to be, use it with caution, use it with the idea that you're not going to re-implement everything with this runtime.
Maybe you're just going to implement a few functions that you use a lot, but that are not the main ones for your business. So I guess if all goes well, we will have gained major performance benefits without having to switch to C++ or Rust, which would be a big win for the serverless and the JavaScript community. But again, we'll have to see exactly what is going to happen. It's also an open source project, so if you are really excited about this kind of initiative, you can contribute to it. And at that point, you are also a little bit responsible for the success of this initiative. So this is always a good call to action: if you feel like you want to contribute, and you want to see this project successful, your contribution is definitely going to be useful to achieve that larger goal. Now, what other concerns do we have, Eoin?
Eoin: Well, we already mentioned that it's experimental, and I think that's fair enough because they state that explicitly. As well, if you look at the contributions, it's built mostly by one person. And I think we have to credit the amazing engineering effort here: Richard Davidson is the developer who has done an incredible job. But there's obviously a risk associated with having only one main person behind the project.
So let's see if AWS decides to invest more in the project and form more of a cohesive internal team as the project evolves. It's good to see that in the few weeks since its public release, there have already been contributions from open source members of the community. So we can expect to see that grow, and that will be a healthy thing. The lack of feature parity with Node.js and other runtimes is going to be a concern.
And there isn't really an intention to reach parity, so you just have to be aware of that. You mentioned as well, Luciano, there is some AWS SDK support. I kind of wonder, since there's already the C-based Common Runtime from AWS that's highly optimized, as well as the C AWS SDK, why LLRT wasn't able to leverage those to get complete service support. I suppose as well, QuickJS, being one of the main dependencies, may also be a bit concerning.
It has an interesting history as a project. It was mostly written and maintained by another outstanding engineer, Fabrice Bellard, and Fabrice is also the author of other great projects like QEMU and FFmpeg. Again, same problem with single-owner projects: there's a risk with it. In fact, the QuickJS project didn't receive any great updates for a few years, and the project really looked to be stagnant, with a lot of forks emerging in the open source community, most notably quickjs-ng. There has been some activity of late, but there is an interesting community conversation on, I suppose, whether this project is alive or dead, and we can link to that conversation on GitHub in the show notes.
So there has been a recent spark of activity, as I mentioned, in the repository, and Fabrice has introduced some significant new features, such as support for top-level await, and a couple of new releases have been published. So hopefully, a larger community will form around the project, and that will help to guarantee long-term support, because I think it's interesting. Previously, there were various different JavaScript engines. There was JavaScriptCore, you had V8. Microsoft had their brave effort for a while with ChakraCore, and the idea was that Node.js could use any of these JavaScript engines. That seemed like a healthy thing with good competition, but it seems like everything has kind of converged on the Chromium ecosystem, and that's not a great thing for the future of JavaScript, I feel. Luciano, you've kind of given your recommendations, but what's your final assessment?
Luciano: I think, in general, I'm very happy to see this kind of initiative coming out from AWS, because everything that can make Lambda more efficient and powerful for JavaScript developers is absolutely welcome. I think everyone should be happy about that. It is a very ambitious project, and if it becomes stable, and there is a team maintaining it consistently, it's going to be a win, definitely, for the serverless landscape as a whole.
But I think we need to talk about another problem, which is the JavaScript ecosystem fragmentation. It's something that we have been seeing a lot in the JavaScript community for I don't know how many years at this point, and it seems like it's getting worse and worse rather than getting better. This is sometimes called JavaScript fatigue. It's definitely real, and it used to be associated with the idea of frameworks and libraries.
Now it's being associated even with runtimes, which only makes things worse. It's already hard to pick and learn a single runtime like Node.js. Imagine if you also have to learn Deno or Bun, with all their different core libraries and characteristics, and now there is also another Lambda-specific runtime, which will have its own characteristics and things to learn, and mistakes and patterns. But then imagine that you are a JavaScript library author, and you want to build a general-purpose library that you might want to make available across all of these runtimes.
Node.js, Deno, Bun, the browser, and maybe now even LLRT, right? Because why not allow people to use your library in the context of a Lambda as well? How much work is involved in just testing that everything works with all of them, fine-tuning all the edge cases, maybe patching for all the missing libraries and different behaviors that exist across different runtimes? So this is a problem that's just going to keep getting bigger and bigger if the ecosystem doesn't converge on a more comprehensive standard that all the different runtimes will adopt.
There are some efforts in that direction. For instance, WinterCG, which we can link in the show notes, is an initiative that tries to figure out exactly what a common set of APIs is that every runtime needs to have, especially the ones running in the cloud and on the edge. So there might be, I guess, a bright future there if this kind of initiative is successful. But as it stands right now, as a developer, it's just a very confusing landscape, and there's a lot to learn and so many edge cases.
So that's definitely a problem. Another point that I have, and this is more directed at AWS: it's great to see this kind of initiative emerging from AWS, but at the same time, I would love to see AWS investing more in the larger Node.js ecosystem. We have seen some things that are not super nice. For instance, if you look at the performance of the Node.js 16 runtime and compare it with the Node.js 20 runtime, even though Node.js itself is generally considered faster in the Node 20 version, when it comes to Lambda, somehow the runtime is a little bit slower than Node 16, which is very disappointing, because it looks like they didn't take advantage of the new advancements in Node.js, and maybe they did something suboptimal on their side.
Now, I'm not really sure what's going on there, so I'm not going to comment in too much detail, but I think the message is that I wish AWS would invest more in making sure that Node.js has a bright future ahead, because it's effectively one of the most used languages when it comes to Lambda, so definitely a big revenue stream for AWS, and it would be nice to see AWS reinvesting some of that revenue into the project itself. And it's not just something that relates to Lambda, because Node.js gets used a lot in other kinds of applications too, not just serverless: it will be used in containers, in something like ECS or Fargate, but also on EC2 or App Runner. So if Node.js gets better, I think AWS is still going to benefit from it. So this is kind of a final call for consideration to AWS, if somebody's listening there, to think about this problem and maybe decide to invest a little bit more in the Node.js community.
Eoin: Yeah, we're seeing lots and lots of different ways to optimize cold starts and runtime performance. I'm thinking of SnapStart, currently available for Java, and it might come to more runtimes, and then we see, like with .NET, you've got the new ahead-of-time compiler, which is essentially compiling it to native code. I wonder if the AWS Lambda team are thinking about how SnapStart could be used to optimize existing Node.js runtimes and give us the kind of amazing cold start times we've seen with LLRT or even better, just with existing Node.js and all the compatibility it offers.
So it's definitely a space to watch, and regardless of what happens next, I think we can agree that LLRT is already an amazing software engineering achievement, and a lot of credit has to go to Richard and also to Fabrice, the QuickJS author, too. So if you're a JS developer interested in LLRT, it is important to check compatibility and measure performance with meaningful workloads. We're just seeing, I think, the first set of benchmarks here. But if you have seen some results and you've got some success or you've decided to abandon it for now, let us know what you think, because we're really curious to learn more ourselves. So thanks very much for watching or listening. Please share with your friends, like and subscribe, and we'll see you in the next episode.