AWS Bites Podcast


30. What can you do with 10GB of Lambda storage?

Published 2022-03-31 - Listen on your favourite podcast player

AWS Lambda just got a big upgrade in ephemeral storage: you can now have up to 10 GB of storage for your /tmp folder! Before, this was limited to “only” 512 MB… But is this really useful? What can we do now that we couldn’t do before? Also, is this going to have an impact on price? And how does it compare with the other storage capabilities available in Lambda? Eoin and Luciano are on the case to try to find some answers to these compelling questions, for the greater serverless good!

In this episode we mentioned the following resources:

Let's talk!

Do you agree with our opinions? Do you have interesting AWS questions you'd like us to chat about? Leave a comment on YouTube or connect with us on Twitter: @eoins, @loige.

Help us to make this transcription better! If you find an error, please submit a PR with your corrections.

Eoin: What can you do with 10 gigabytes of Lambda storage? This is a new feature that was released in Lambda, and everyone's very hyped about it. So in this episode we're going to give our take and talk about what it really means to have 10 gigabytes of ephemeral storage, what you can do with this new capability, and finally whether this is really an advantage or just something useful in niche use cases. My name is Eoin, I'm joined by Luciano, and this is the AWS Bites podcast. Luciano, what is this 10 gigabytes of ephemeral storage? What does it mean, and how is it different?

Luciano: Yeah, so one way I would describe this ephemeral storage is: if you have a Unix system, the /tmp folder is generally where you store files that are somewhat transient. They don't have a long duration; you just use them as a temporary storage mechanism. This is something that has been available in Lambda since the very beginning, but the limit was 512 megabytes. Now it has been extended up to 10 gigabytes, so you can store a lot more data in that particular directory.

What that means in the context of Lambda is something we're going to discuss throughout this episode, but one interesting thing I want to mention comes from the characteristics of Lambda. A Lambda is triggered by events, and for a particular invocation you don't really know whether an instance that was already available is going to be reused, or whether a new instance is going to be bootstrapped. Imagining this as a container is probably the easiest way to understand it: either you are running a brand new container, or a container that was already initialized is there and gets reused. So what happens to the temporary storage in those two cases? Every time a new instance starts, you begin with a blank /tmp folder. If an existing instance is reused and you previously saved something in that temporary folder, you will find the same files there again, available for you to use.

One interesting consequence is that you could use this storage across invocations, but of course there is no guarantee that your data will actually be there. It depends on how many Lambdas you are running, whether they run all the time, or whether there are quiet periods between one invocation and the next. Another interesting detail: Yan Cui (we're going to link his article in the show notes, of course) did some experiments and found that there is no cold start overhead from bootstrapping new Lambdas with more storage. So, what do you think in terms of applications? What can we do now that we have this additional capability?
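To make the warm-versus-cold behaviour concrete, here is a minimal sketch of a handler that reuses a file in /tmp when the execution environment is warm and regenerates it after a cold start. The file name and the `expensive_computation` placeholder are our own invention, not anything from the episode:

```python
import os

# Hypothetical cache location; /tmp is the only writable path in a Lambda
# execution environment.
CACHE_FILE = "/tmp/expensive-result.bin"

def expensive_computation() -> bytes:
    # Stand-in for real work (transcoding a segment, aggregating data, ...).
    return b"result-bytes"

def handler(event, context):
    # Warm invocation: the file can survive from a previous run of this instance.
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE, "rb") as f:
            return {"cached": True, "size": len(f.read())}

    # Cold start (or the file was never written): recompute and cache it.
    data = expensive_computation()
    with open(CACHE_FILE, "wb") as f:
        f.write(data)
    return {"cached": False, "size": len(data)}
```

As Luciano notes, this is strictly a soft cache: the code must always be able to rebuild the file, because a new instance starts with an empty /tmp.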

Eoin: There are a few things you could think of, I guess. Video transcoding applications are one of the examples that has come up. If you're producing a video, you can imagine needing intermediate artifacts: say you take some input, maybe some images or an existing video that you need to split into frames, then you process those frames, stitch them back together, and encode them with a video codec. You need somewhere to keep that intermediate data, and having 10 gigabytes of /tmp is going to be pretty useful for that.

Probably one of the more likely use cases for /tmp is ETL or data processing, where you have intermediate data steps as well. In general, you probably don't want to be storing things in Lambda, so it's more of an optimization for the cases where you really need it. If you're processing gigabytes of data, ideally you would just stream it in and stream it out without storing anything, but sometimes you need to do aggregations that require you to read in all of the data, store it in one format, and then process it further, and /tmp might be useful for that.

I've also heard a lot of people say it will be useful for machine learning models, because some models can be quite large; they can run into gigabytes. Being able to pull them down from S3, put them in /tmp, and use them across multiple invocations would be useful. I'd challenge that a little bit, though: your model is almost part of your code, so it might be more suitable to bundle it into a container image and deploy your Lambda as a container image. But if your model changes more often than the Lambda does, then you might do it the other way around and use /tmp for it.

One of the more esoteric options, I suppose, was using Lambda for continuous integration and continuous delivery. There was a tweet from Will Dady suggesting that AWS Step Functions plus 10 gigabytes of ephemeral storage in Lambda could give better continuous build performance than CodePipeline with CodeBuild. I think that's definitely an interesting one; I don't know if I would rush to use it. I did try using Step Functions for continuous build orchestration before, and it's a little bit clunky. It has probably improved quite a lot now that you can call the AWS SDK from Step Functions, and I would imagine that the cold start time for a Lambda function doing a build is going to be significantly less than for a CodeBuild container. But you still have to go and implement your git clone and all of that in Lambda, and deal with the secrets and your access to git and everything, so maybe I'll leave it to someone else to iron out all the kinks before I try that option. Those are some of the applications that have come up, I guess. What do you think, Luciano?
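The model-caching idea Eoin challenges can be sketched in a few lines. Everything here is hypothetical (the path, the helper name, and the injected downloader); in a real function the callable could wrap boto3's `download_file`:

```python
import os
from typing import Callable

MODEL_PATH = "/tmp/model.bin"  # hypothetical cached location

def ensure_model(download: Callable[[str], None], path: str = MODEL_PATH) -> str:
    """Fetch the model only if this execution environment hasn't already.

    `download` receives the destination path; in a real Lambda it might be
    something like: lambda dest: s3.download_file(BUCKET, KEY, dest)
    """
    if not os.path.exists(path):
        download(path)  # the slow part, only paid on a cold start
    return path
```

On a warm instance the download is skipped entirely, which is the saving that the larger /tmp makes possible for multi-gigabyte models.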

Luciano: Maybe we can mention again the use case we alluded to earlier: caching. If you have this storage, and you have produced large files that you would otherwise need to produce over and over again, there is no guarantee that a given file will be available across invocations, but you can check whether it's there. If it is, you don't need to recreate it; you can just use it. In that way, /tmp can act as a soft layer of cache. It's not going to be the most reliable, but since you have it, you can try to use it, and it might give you a bit of a boost in overall computation time across invocations. Again, I don't know if it's the most useful thing, but it's there, and it might give you some small advantages. What can we say instead in terms of pricing? Is this something we need to enable, or is it just available for everyone?

Eoin: So, the existing volume of /tmp was 512 megabytes, and you still get that for free. Anything above that you're charged for, and the unit price is per gigabyte-second, so it scales linearly, just like your Lambda memory. I put together a pricing sheet to compare different function sizes and see how much of an impact allocating the maximum 10 gigabytes would have, and in general my conclusion is that it doesn't make a lot of difference; it's pretty cheap compared to your memory cost. If you've got the maximum 10 gigabytes of memory allocated and you also add 10 gigabytes of ephemeral storage, the difference in cost is almost insignificant, because most of your cost is the memory. Now, if you've got a really frugal function, let's say 128 megabytes of RAM, and you go for the max storage, then it's around a 15 to 20 percent cost increase. So it's still not

Luciano: particularly significant, even if you're using very low memory and high storage. Nice, yeah, that's interesting. I did expect it would be kind of a free feature, that you could just use more space, but it makes sense, because it's still a significant amount of extra disk space.

Okay, let's try to compare how this feature plays against the other types of storage you can use with Lambda. The most obvious one is, of course, S3, which is probably what you will be using most of the time. One of the big differences is that S3 is durable and reliable: when you store something in S3, you are pretty much guaranteed it's going to be there, so it's something you can reliably use across Lambda invocations. But of course, you need to fetch that data into the Lambda again every time, and if it's a big file, expect that to take some time. That time becomes part of your Lambda invocation: something you pay for, and something your users are waiting for.

Another option is EFS, which should be lower latency than S3 for bigger files; I think there is an article, maybe from Lumigo, showing from five to ten times lower latency with EFS. It's a little more complex to set up, though, because it requires a VPC and IOPS optimizations. Otherwise EFS has pretty much the same characteristics as S3: it's durable and it should be reliable, just more complex to set up.

The other two interesting options are Lambda layers and container images, which we already mentioned a little. The idea is that another way to load data into your Lambda is to build the Lambda as a container image and include the data as part of the image, or to put the data in a Lambda layer and load the layer with the function. Keep in mind that in those cases the data is immutable: these are good options when you have artifacts that need to live with your code, maybe assets like images or whatever you need to use inside your Lambda, but not something you can write into. There are also different limits: Lambda layers are limited to 50 megabytes, while container images can go up to 10 gigabytes. So, for instance, as you already mentioned, if you have a big ML model, you can build the Lambda as a container image and include the model together with your code, because most likely the model doesn't need to change while the Lambda executes. So, in the end, what do you feel you will be using? And, maybe as a summary of what we just said, will you use S3 more, or the other types of storage?
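One detail worth knowing when comparing these options: layer contents are extracted under /opt, the deployment package (or container image contents) lives under the read-only task root (usually /var/task), and /tmp is the only writable path. A small lookup helper along those lines (the search order and function name are our own sketch, not an AWS API) could be:

```python
import os
from typing import Optional, Sequence

# Read-only locations first; /tmp last, as the only writable one.
DEFAULT_SEARCH_DIRS = (
    os.environ.get("LAMBDA_TASK_ROOT", "/var/task"),  # code bundle / container image
    "/opt",                                           # contents of Lambda layers
    "/tmp",                                           # data fetched at runtime
)

def find_asset(name: str, search_dirs: Sequence[str] = DEFAULT_SEARCH_DIRS) -> Optional[str]:
    """Return the first path where `name` exists, or None if it is nowhere."""
    for directory in search_dirs:
        candidate = os.path.join(directory, name)
        if os.path.exists(candidate):
            return candidate
    return None
```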

Eoin: For sure. I think the decision tree for this is: with Lambda, avoid storage if you can, and do everything in memory, because you can get up to 10 gigabytes of RAM. Do you really need 10 gigabytes of ephemeral storage if you can just keep everything in memory, streaming what you need in and streaming what you need out? If you do need durability, stream it in and out from S3. The question there just comes down to S3 transfer performance, which maybe we can talk about in a bit. The rest of the decision tree is: if you can't use S3, then use EFS if you need shared storage with more guaranteed throughput and more of a file system model, rather than the object store model of S3. Then, for everything else: if the data doesn't change that often, bundle it into your image. That's one of the real benefits of being able to use container images; you can bundle data into them. So if you've got some sort of model data, something that doesn't change across invocations, bundle it in. And then /tmp is almost the last resort.

I don't want to be too negative about this feature. I would say it's a nice-to-have for those cases when you need to create a reasonable amount of ephemeral data. We talked about the caching use case, which you just covered, Luciano, and it's great that you can cache across invocations, but adding a caching layer is an extra piece of complexity you need to manage. You need to manage your cache capacity so you don't overfill it, you need an eviction algorithm for objects in your cache, and you need to monitor your cache hit metrics and that kind of thing if it's really going to be a proper optimization. So in general, it's much better for Lambda functions to remain stateless, because that's where the beauty and the simplicity ultimately come from. This is just for some of those edge cases where you really need extra disk storage, where you need to do random access and

Luciano: seeks into the local file system, beyond the 512 megabytes. Yeah, you told me that you did a little bit of research into how network speed could affect this decision tree. For instance, if you really need to load big files and you try to do that from S3, how does

Eoin: it really play out for you? Is it better to have something in /tmp in that case, or not? Yeah, this is a good question, I think, because when you say it's an optimization that you can keep across multiple invocations, the question becomes: how does the data get there in the first place, and how much of an optimization are you actually going to get? Let's say you're going for the maximum 10 gigabytes of storage. I did some informal benchmarking on Lambda, and the maximum speed I could get for transferring a gigabyte of data was approaching a gigabit per second, but the average was around 600 megabits, so about two-thirds of a gigabit. If you want to fill your /tmp with 10 gigabytes of data on a cold start, that's going to take you over two minutes at that speed. I guess what that says is that by caching it, you're going to save those two minutes on subsequent invocations. But what it also suggests is that maybe what we want, instead of this cache, is just better network throughput: a faster highway to S3, if you like. I know that with EC2 you can turn on enhanced networking and get up to 100 gigabits of network performance. Now, I haven't benchmarked whether you can get 100 gigabits directly to S3, and your mileage is going to vary, but that's a significant difference from the near-gigabit performance we can observe from Lambda. You can imagine that if we could increase the network performance by a factor of, say, 50, it would make a massive difference, and the need for caching would suddenly dissipate. For some use cases, I would definitely pay a lot more just to get
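That "over two minutes" figure checks out with simple arithmetic, using the roughly 600 megabits per second that Eoin observed on average:

```python
STORAGE_GB = 10          # ephemeral storage to fill on a cold start
THROUGHPUT_GBIT_S = 0.6  # ~600 Mbit/s average observed from Lambda to S3

# Convert gigabytes to gigabits (x8), then divide by throughput.
transfer_seconds = STORAGE_GB * 8 / THROUGHPUT_GBIT_S
# Roughly 133 seconds, a bit over two minutes.
```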

Luciano: that enhanced networking in Lambda. Yeah, that's a very good point. It will be interesting to see what other people think about this, and whether they find particularly interesting use cases that we are not seeing right now. I remember hearing somebody say that you can load a SQLite database into the temp storage and then run more dynamic queries and analytics from that storage, which is maybe an interesting use case, but I would like to see some real applications built on these ideas; then you can actually see whether there are benefits, or whether it's a bit of a stretch just to find a use for this new feature. All in all, it sounds like it's more on the niche side than in the "let's all adopt this feature" category. Would you agree? Is that a fair characterization? Yeah, right now, yes. And again, maybe it's just that we are not seeing some particularly useful use case, but that in itself suggests it's a niche type of feature; it's not something with such general utility that everybody is going to leverage it from tomorrow. Okay, so let's remember that we have some resources we are going to link in the show notes. We'll link the official announcement, which is surprisingly dry in terms of examples; that was another interesting thing we picked up on. It's well detailed on how to use the feature, but it doesn't give you a lot of examples of when it could be useful. We'll also link the original tweet from Will Dady that we discussed, and the blog post from Yan Cui at Lumigo describing more use cases and how to use it.
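The SQLite idea is simple to sketch with the standard library. The table, data, and file path here are invented for illustration; in practice the database file would be shipped in the image or pulled from S3 into /tmp on a cold start:

```python
import sqlite3

DB_PATH = "/tmp/analytics.db"  # hypothetical location for the database file

def build_demo_db(path: str) -> None:
    # Stand-in for downloading a pre-built database file from S3.
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS events (name TEXT, count INTEGER)")
    conn.execute("DELETE FROM events")
    conn.executemany("INSERT INTO events VALUES (?, ?)",
                     [("signup", 3), ("login", 10)])
    conn.commit()
    conn.close()

def total_events(path: str = DB_PATH) -> int:
    # "Dynamic queries and analytics" run against the local file: these are
    # local disk seeks, not network calls.
    conn = sqlite3.connect(path)
    (total,) = conn.execute("SELECT SUM(count) FROM events").fetchone()
    conn.close()
    return total
```

This is the kind of random-access, local-seek workload Eoin describes as the real fit for the larger /tmp.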

Eoin: Excellent. Okay, well, with that, please let us know if you find some really beneficial use cases that we haven't been able to spot. Thanks for listening, and we'll see you in the next episode.