AWS Bites Podcast

60. What is AWS Lambda?

Published 2022-11-25 - Listen on your favourite podcast player

AWS Lambda is one of the most famous AWS services these days. If you are just starting with your cloud journey you might be confused about what Lambda actually is, what are the limitations, and when you should be using it or not.

In this episode, we provide a beginner-friendly introduction to Lambda and summarise everything there is to know about it: when to use it and when not, differences with containers, the pricing model, limitations, and integrations.

By the end of this episode, we will also chime in with some of our opinions and share whether we believe that Lambda is the future of cloud computing or not!

AWS Bites is sponsored by fourTheorem, an AWS Consulting Partner offering training, cloud migration, and modern application architecture.


Let's talk!

Do you agree with our opinions? Do you have interesting AWS questions you'd like us to chat about? Leave a comment on YouTube or connect with us on Twitter: @eoins, @loige.

Help us to make this transcription better! If you find an error, please submit a PR with your corrections.

Luciano: AWS Lambda is one of the most famous AWS services these days. If you're just starting your cloud journey, you might be a little bit confused about what Lambda actually is, what its limitations are, and when you should be using it or not. Today, we want to provide a beginner-friendly introduction to Lambda and summarize everything that there is to know about it. By the end of this episode, we will also chime in with some of our opinions and share whether we believe that Lambda is the future of cloud computing or not.

My name is Luciano and I'm joined by Eoin, and this is AWS Bites podcast. AWS Bites podcast is sponsored by fourTheorem. fourTheorem is an AWS consulting partner offering training, cloud migration and modern application architecture. Find out more at the link in the show notes. So again, as we said in the introduction, this episode is mainly focused on providing an introduction for people who haven't really explored serverless and Lambda yet. So if you already know Lambda, I don't know if this episode is really that interesting for you. Maybe you can just skip to the end, where we talk more about opinions and the future of Lambda. And in any case, let us know what you think about all of that. So without any further ado, what do you say, Eoin? Should we start by defining what Lambda is?

Eoin: Yeah, and I went back to the 2014 announcement from AWS where they announced Lambda for the very first time, and there's a really nice description in there where they say that it's a compute service that runs your code in response to events and automatically manages the compute resources for you. And that's it in a nutshell. I really like that. It's a beautiful description. And there's a lot of features and additions they've made since then, but we probably forget that that's the simple beauty of Lambda at its core.

So they also say that it makes it easy to build applications that respond quickly to new information or new data. So that really covers it, right? It's code that responds to events. So it's really event driven, and it manages the compute resources for you. And that's really the most important part. So it's like an alternative way to do compute, right? We've gone from having to make servers, then virtual machines, then we went to containers, we got cloud computing to manage all of this for you.

So all the time, we have these new generations of compute services that are taking away the complexity and managing a lot more for you on the cloud side so that you have to do less and less. Now, sometimes that results in more complexity. It kind of shifts the complexity to other places. But with Lambda, what we're talking about is a stateless, event-driven compute model. You hear this quite often. So what does stateless mean?

Well, it means, I guess, that in some applications you might have a server that keeps running and is able to store state in memory, or you might have user sessions. Lambda is very ephemeral, right? It's very short-lived. Functions only run for up to 15 minutes. So you don't have the luxury of storing state. And that also brings benefits, because you don't end up accumulating what we call cruft, you know, lots of state that accidentally accumulates in long-lived compute.

So these things only run for a short amount of time. If you have to store any state, then you have to put it explicitly into a state store like DynamoDB or ElastiCache Redis or S3. Because it's event-based as well, it's a very different model to, like, a web server. If you've got a web server with a web framework, you typically are able to handle multiple concurrent requests within your unit of compute.

So within a container or a virtual machine. With Lambda, it's different in the sense that you can only run one event at a time in a container: one invocation. And this is counterintuitive to people, but it also makes it very easy to reason about some cases, because you know that at any given time, one sandbox, as they call it, a container, is only processing one request at a time. They're very quick to scale up and down.

Now, sometimes you have the issue of cold starts, which are often overstated, I would say, but definitely something to be aware of. It supports multiple language runtimes, and you can create your own runtimes. And because it's managed, you don't have to worry about what machine it's running on. The container orchestration is managed. The wiring of the events and the results into the Lambda is managed for you, and the runtime itself, whether it's Node.js or Python or whatever else, is managed for you. And this is why it's called serverless: it's because you don't have to manage servers. So that's as much as I could say about the definition of Lambda. What are some of the use cases, right? If people haven't used it before, or they've only played with it a little bit, where can they start applying Lambda in their day-to-day development jobs?
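The programming model just described, one event in, one result out, with no servers to manage, boils down to a single function. Here is a minimal sketch in Python; the handler name and event shape are illustrative, since Lambda only requires a callable that accepts an event and a context:

```python
import json

def handler(event, context):
    """A minimal AWS Lambda handler: one invocation handles one event.

    Lambda calls this function with the triggering event (a dict for most
    integrations) and a context object carrying metadata such as the
    remaining execution time. Because the sandbox is ephemeral, any state
    must live outside the function, e.g. in DynamoDB or S3.
    """
    name = event.get("name", "world")
    return {"message": f"Hello, {name}!"}

# Local smoke test; in a real invocation Lambda passes a context object.
print(json.dumps(handler({"name": "AWS Bites"}, None)))
```

Deploying this is just a matter of pointing the function's "handler" setting at `handler` in this file; the runtime, event wiring, and scaling are all managed for you, as discussed above.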

Luciano: Yeah, so the most common use case that I've seen with Lambda is probably in the realm of APIs. So creating REST APIs, GraphQL APIs: Lambda is a very, very good backend for implementing that compute layer, the business logic that a specific API request needs to actually execute. And it works really well in combination with other services like API Gateway. So generally, you'll define your API through API Gateway and then have Lambda as a backend for the actual business logic.
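With API Gateway's proxy-style integration, the HTTP request arrives as the Lambda event, and the handler returns a dict with `statusCode` and `body` that API Gateway turns back into an HTTP response. A sketch, with a made-up route and payload for illustration:

```python
import json

def api_handler(event, context):
    """Backend for an API Gateway (REST API) proxy integration.

    API Gateway packs the HTTP method, path, headers and body into the
    event dict; the returned dict is translated into the HTTP response.
    """
    if event.get("httpMethod") != "GET":
        return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}
    params = event.get("queryStringParameters") or {}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"greeting": f"Hi {params.get('name', 'there')}"}),
    }
```

The same handler shape also works behind a Lambda Function URL, which exposes a similar event format without provisioning an API Gateway in front.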

Another good use case for Lambda is webhooks. A webhook is basically when you have a system which generates some kind of event and can notify another system about that particular event. And that generally works through a simple URL. So the system generating the event needs to know a specific URL where to actually forward that particular event. And one example could be: you are using a newsletter system, like, for instance, MailChimp.

MailChimp gives you the opportunity, every time you have a new subscriber, to notify a webhook endpoint and receive the information about the new subscriber. And that way you can implement your own custom integration. I actually use something like that, and I use Lambda as a backend. You can simply create an API using API Gateway, and that way you get a URL, or you can also use Function URLs, and that can be used to trigger a specific Lambda where you implement your own custom business logic.

Another use case could be system integration. So again, a webhook is kind of a system integration already, but you can have more advanced types of system integration using different protocols. And Lambda can be the place where you write your glue logic, for instance, even if you need to convert data from one format to another to make it possible for different systems to communicate with each other.

Other use cases are background processing, very, very common for Lambda. One of the most common tutorials you'll find out there is how to create image thumbnails using Lambda. So that's definitely a use case in the space of background processing: you get pictures somewhere, you can load those pictures and create multiple variations of them, all in the background, while your application is still running somewhere else.
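In the thumbnail tutorials mentioned above, the Lambda is typically triggered by an S3 "object created" event. Here is a sketch of the first step, parsing bucket and key out of the event; the actual download and resize (e.g. with Pillow) is deliberately omitted, and the bucket and file names are made up:

```python
from urllib.parse import unquote_plus

def extract_uploads(event):
    """Pull (bucket, key) pairs out of an S3 ObjectCreated event.

    S3 URL-encodes object keys in the event payload, so a space in a
    file name arrives as '+' and must be decoded before using the key.
    """
    uploads = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        uploads.append((s3["bucket"]["name"], unquote_plus(s3["object"]["key"])))
    return uploads

def thumbnail_handler(event, context):
    for bucket, key in extract_uploads(event):
        # Here you would download the object, resize it, and upload the
        # thumbnail to another bucket or prefix.
        print(f"would create thumbnail for s3://{bucket}/{key}")
```

The key point is that the upload itself is the trigger: no polling loop or queue worker has to stay running while nothing is happening.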

Another interesting use case, this is a little bit more of a new use case, I feel. It's something we've been talking about before. And we also did a joint blog post with Amazon itself and it's in the space of high-performance computing. It is an up-and-coming use case for Lambda, but you can definitely use Lambda also in this particular space. And we will post a link in the show notes with the blog post I just mentioned, where you can find all the details about this particular use case and how we actually use Lambda to be able to fulfill the requirements.

Very similar is the space of ETL. We already mentioned that Lambda can be a very good layer to load data, transform it, and store it somewhere else. So you can also use it to perform ETL kinds of workloads. And there are some more esoteric use cases, I feel, because, I mean, I've seen them in the documentation, but I haven't seen them being actually used in practice. For instance, if you use RDS, which is the SQL database service that AWS gives you, you can actually create custom functions in the database and then use Lambda as the compute layer for those custom functions. So that basically means that you can invent your own Postgres function, and the business logic lives in Lambda. And AWS will take care, when you're using that custom function in your SQL statements, of actually invoking your Lambda to perform specific custom operations. So with that being said, we have a bunch of use cases in mind. But I suppose that there are also a lot of cases where you wouldn't really want to use Lambda, right? It's not a silver bullet for every use case. What do you think, Eoin?

Eoin: For sure, yeah. If you have a long-running job that you just can't split, or it doesn't make sense to split it into something that runs in a few minutes, why bother with Lambda? You would just use a simple container service that can stay up for the duration of the job. If you've got something that's stateful, and that would often be legacy applications, legacy web servers that are using sessions that require in-memory storage and a server that stays up, they're just not a good fit.

So if you find yourself with a legacy stack, and you're trying to shoehorn it into Lambda, maybe you should think: it's probably not worth the effort. Another thing that is really important to note: we talked about this being event-driven, right? Every event comes into Lambda over HTTP, and it comes into the service, then gets processed by your function. There's no kind of open TCP connection support.

There's no streaming support. So you can do things like WebSockets, but you're not really using Lambda to hold the socket; you're using API Gateway to achieve that. So anything where you need a socket or an open connection, or real-time streaming where part of the data gets processed as the socket stays open, that's not a fit for Lambda. So if you can imagine a real-time game server, it's not a good fit.

Speaking of payloads, when your payload, either the request or the response, is bigger than six megabytes, that's also not a fit for Lambda, because that's the limit. And then there's high, constant, predictable load. From a pricing perspective, right, let's think about your traffic. If your traffic is constant and doesn't ever drop, doesn't ever peak beyond a certain level, then you're probably not really getting much of an advantage by going to Lambda.

You might as well stick with what you have, especially from a pricing perspective, but it's worth doing the cost calculation, because maybe it's cheap in both cases, and it's a trade-off between complexity and all the other features of Lambda. Now, when we started and we defined Lambda, we talked about the original simple definition of Lambda. And I think it's worthwhile calling out that in recent years, as Lambda has added more and more features, you could say that it's become more complex, just because there are more knobs to twiddle, more configuration options, and more things you can do with it.

So if you look at some of the recent features, you have provisioned concurrency now, support for destinations, different architectures including ARM, different options in event source mappings, EFS support, and your VPC options. And this is only going to grow and grow as Lambda evolves. So it's worth reassessing Lambda and thinking, well, is this now too complex for my needs? I thought this was something simple that I could just fire up, and I didn't have to worry about, oh, which container or instance should I run my code on?

I could just have a nice ephemeral piece of compute that runs in response to an event. So now that it has become more complex, should I reconsider my options and go for containers? I think this is worth thinking about. It's a topic of conversation these days. And if we talk about containers, you could compare it to deploying with ECS, or to Kubernetes with EKS, or another option. And I wouldn't say that it's inherently simpler to deploy services with container-based solutions.

I guess the difference with Lambda is that you're more constrained: there are very specific ways in which events come in, how you do logging, and how you do tracing. And container environments are less constrained, because they let you do whatever you want and integrate with whatever you want and run whatever framework you want. So in some ways, they seem like an easy path to get started with if you have a comfortable set of frameworks you want to work with. But sometimes that complexity won't reveal itself until you realize you have to manage your container environment at scale, and you realize you have to actually understand how that framework works under the hood and how it behaves when you've got edge cases and performance or security problems. So I think it's always a set of trade-offs, and you have to really look deeply and see what your case requires. But I'm interested in your opinion, Luciano, what do you think? Has Lambda become more complex? Is it more complicated than containers?

Luciano: Yeah, I have been exposed a little bit to both Kubernetes and Lambda, maybe a little bit more to Lambda than to Kubernetes. So what I'm about to say might be a little bit unfair to Kubernetes, but my feeling is that Kubernetes is a great tool and it's very generic. It's agnostic, for the most part, to the cloud provider you're going to be using. So that's amazing. But at the same time, I feel that it requires a little bit more knowledge and understanding before you can be proficient with it.

So definitely the barrier to entry is higher with Kubernetes than it is with Lambda. And it's also an unfair comparison because, of course, as we said, Kubernetes is a more general-purpose kind of runtime, while Lambda is very specific: it tries to solve one problem in a very opinionated way. So, of course, Lambda has a smaller surface and it's easier to get started with. But at the same time, that surface can get very, very big as you get more into the weeds and you start to build more and more complex serverless applications.

At that point, you will need to start understanding about networking and security and IAM and a bunch of other AWS-related topics that might not be something that you have done before. So that surface might just bleed into a bunch of other AWS concepts that you just need to master to be able to actually use serverless well. So I suppose that at the end of the day, what I'm trying to say is that, yes, easier to start with serverless and AWS using Lambda, but then as you start to build more and more complicated applications, there is always a certain degree of complexity that you will need to deal with and you need to start to build a more realistic understanding of the stack that you are working with.

So this is probably equally true in both Kubernetes and AWS Lambda. So just keep that in mind. Don't just say one is easier than the other in absolute terms. And one of the interesting points that I heard many times people complain about when it comes to the complexity of Lambda specifically is pricing, because it's very easy to just say it's cheap and convenient, but that's not always the truth. You need to make an exercise and understand, first of all, what's the model and then given your specific use case, how do you actually apply that model and figure out, okay, this is more or less how much it's going to cost me.

So let's have a quick look at the pricing model of AWS Lambda. The first thing to understand, which we didn't mention so far, is that when you provision a Lambda, you specify the amount of memory that you want that Lambda to have. And one non-obvious thing is that when you provision a certain amount of memory, that comes with a very specific CPU configuration.

And the more memory you configure, the better the CPU. So basically, you don't control the CPU, you just control the memory, but the more memory you allocate, the better the CPU. So sometimes if you just want a better CPU, you'll need to allocate more memory, even if you don't really need that much amount of memory. This is just the model that Lambda gives you, and I suppose it's tightly related to the pricing model and to the allocation model that AWS needs to figure out when they really need to provision your Lambda, somewhere in a cluster.

So once you provision your own Lambda and you select a certain amount of memory, then the cost, there are actually two different pieces that contribute to the final cost. One is the execution time and one is the invocation cost. Execution time is literally given one Lambda, how long does it run? You are going to pay for that. And invocation cost is how many millions of invocations are you doing? And there is a price on that.

The execution cost is literally a function of the time in milliseconds and the memory. And just to give you an example, even though maybe it's not really meaningful, if you go for the lowest, which is 128 megabytes of memory, for every millisecond you pay $0.0000000021. So it looks infinitesimally small. But if you are running your Lambda function for long, and the limit is 15 minutes, you will start to actually see that cost.

So again, this is one of those figures that can be very misleading. It looks like an infinitesimally small number, but it multiplies up if you actually use this feature a lot. So make sure to do the maths to really understand what the cost is going to be for you. And when it comes to the invocation cost, you pay 20 cents per million invocations; this is in Ireland. I think it might be slightly different if you go to different regions.
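The two cost components just described can be combined into a quick back-of-the-envelope calculation. The rates below are roughly the eu-west-1 (Ireland) x86 prices around the time of the episode, about $0.0000166667 per GB-second plus $0.20 per million requests, so treat them as illustrative, not authoritative, and check the current price list before relying on them:

```python
PRICE_PER_GB_SECOND = 0.0000166667    # eu-west-1 x86 rate (illustrative)
PRICE_PER_REQUEST = 0.20 / 1_000_000  # $0.20 per million invocations

def lambda_monthly_cost(memory_mb, avg_duration_ms, invocations):
    """Rough monthly Lambda bill, ignoring the free tier and tiered discounts."""
    gb_seconds = (memory_mb / 1024) * (avg_duration_ms / 1000) * invocations
    execution = gb_seconds * PRICE_PER_GB_SECOND
    requests = invocations * PRICE_PER_REQUEST
    return execution + requests

# At 128 MB this works out to about $0.0000000021 per millisecond, as mentioned:
per_ms = (128 / 1024) / 1000 * PRICE_PER_GB_SECOND
print(f"{per_ms:.10f}")

# Example: 5 million invocations of 100 ms each at 128 MB per month.
print(round(lambda_monthly_cost(128, 100, 5_000_000), 2))
```

Running the numbers like this makes the trade-off discussed below concrete: spiky workloads pay almost nothing at idle, while constant heavy load accumulates GB-seconds quickly.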

But again, seems like a very low cost, but if you actually use Lambda a lot, it's not unlikely that you will be doing multiple millions of invocations during your billing period. So that cost might add up as well. I suppose one kind of observation that we can make is that Lambda can be very, very convenient for when you have very spiky use cases or when it's very, very hard to predict what's going to be the actual consumption of the service.

Classic example, you are building a startup, you're trying to validate an idea. Probably two people are going to be using it the very first few months while you try to validate the idea. But if you end up being very, very successful, you might end up very easily with like thousands of users in a very short amount of time. If you are very lucky and successful, maybe even millions of users, so that might just skyrocket the usage of your platform.

And in that case, you didn't really have to make an upfront investment to support the traffic. So this is probably where the convenience of Lambda and its pricing model is at its best. And the opposite case is actually interesting, where you really can predict in advance the cost and the usage and the load. In that case, Lambda comes out a little bit more expensive. By how much? We are going to link to an article that Eoin wrote some time ago, which has some good numbers in it and compares Lambda, Fargate, and EC2.

But the bottom line of that is that when you can predict the actual usage, and that usage is pretty much constant, you can probably make the effort of using something more traditional like EC2. Of course, assuming that you don't have a lot of TCO in provisioning those EC2 instances, if you just look at the compute cost, that compute cost will be much lower than the equivalent compute cost of Lambda. And if you think about it, that makes sense, because AWS is making you pay a premium on the actual compute, because they take care of all that infrastructure, spinning things up and down for you, while on EC2, all that cost is on you. So it kind of makes sense that if you just compare compute per compute, Lambda is more expensive than something like EC2.

Eoin: It's true. It's interesting also to just note that since that article was written, they have introduced tiered pricing for Lambda as well. So if you are doing a huge number of invocations for batch processing, like one of the use cases you mentioned earlier, and you're starting to get into really big volumes, there is now tiered pricing, so the price actually goes down in tiers. And so I think it's moving in, hopefully, a better direction. It would be nice to see it eventually become more comparable to EC2, so that we don't have to think, okay, it's too expensive, let's completely change our architecture, because you hate to have to do that.

Luciano: Absolutely, especially because it's not going to be a small change to just move from Lambda to something else. So yeah, definitely worth keeping in mind that if you grow really a lot, there will be these kinds of pricing discounts in your billing. Another interesting topic that you briefly mentioned, but I want to give a little bit more detail on, is the limitations that you have with Lambda. Because some use cases you just cannot solve with Lambda, because there are limitations in the architecture model of Lambda.

And you already mentioned the payload, which is six megabytes request-response, but this is true only for synchronous invocations. We'll talk a little bit more about that in a second. For asynchronous invocations, that limit is actually much lower: it's 256 kilobytes. So you need to be careful, for instance, if you are triggering a Lambda from EventBridge, that the payload cannot be in the order of megabytes.

It can be up to 256 kilobytes. Another interesting thing is the amount of source code that you can ship into a Lambda. And this is a little bit tricky, because you can actually ship Lambda code in two different ways. One is through a zip file and one is through a container image. If you go for the zip file, the uncompressed size of that zip file cannot be bigger than 250 megabytes.

If you go for a container, it's actually much higher than that because you can ship as much as 10 gigabytes of source code. Now, why is this important? Because sometimes, especially when you're using languages that will have big native libraries, for instance, to connect to databases or to perform other kinds of operations, you might have very big binaries there. So sometimes you just try to stuff a bunch of different libraries into it.

Maybe some of them will end up with big binaries. It's very easy to just go slightly over this 250 megabytes limit. And in that case, you need to start to think, okay, how do I split my Lambda maybe into multiple Lambdas so that you kind of reduce the size of the source code? Otherwise, you need to think about using containers, which is slightly more complicated in my opinion, but gives you a lot more freedom in terms of source code size.
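Since the 250 MB limit applies to the uncompressed size of the deployment package, you can check it locally before deploying. A small sketch using only the standard library; the in-memory example package and its file name are made up for illustration:

```python
import io
import zipfile

UNZIPPED_LIMIT_BYTES = 250 * 1024 * 1024  # Lambda's unzipped package quota

def unzipped_size(zip_bytes):
    """Sum the uncompressed size of every file inside a deployment zip."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return sum(info.file_size for info in zf.infolist())

def fits_in_lambda(zip_bytes):
    return unzipped_size(zip_bytes) <= UNZIPPED_LIMIT_BYTES

# Build a tiny example package in memory and check it against the quota.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("handler.py", "def handler(event, context):\n    return 'ok'\n")
print(fits_in_lambda(buf.getvalue()))
```

Wiring a check like this into your build step catches the "slightly over 250 megabytes" surprise mentioned above before the deploy fails.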

And finally, there is another interesting thing, speaking about running Lambda at scale. We say that one event at a time gets processed in one Lambda, let's call it container or instance. So what happens if you get, for instance, in an API, two requests simultaneously? Most likely, the Lambda runtime is actually going to spin up two Lambdas, and each Lambda is going to take care of one of the concurrent requests.

So what happens if you get thousands of requests simultaneously? Probably thousands of Lambdas will be spun up in a short amount of time. But of course, there is a limit at some point. And that limit is, by default, 1,000 concurrent Lambda executions. And this is not just for one specific Lambda function. This is, if I remember correctly, across an account and a specific region; it's kind of a cumulative limit. So if you have lots of different APIs and different users are hitting different APIs, it's actually very likely that eventually you will bump into this limit. Now, this limit can be increased. You just open a ticket with AWS, you provide reasonable motivations for needing more concurrency, and most likely you're going to get it. But it's something to keep in mind that it doesn't really scale indefinitely to massive concurrency. That's a good one. Yeah.
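The scaling behaviour just described, one in-flight event per sandbox up to an account-level limit, can be captured in a toy model. This is a deliberate simplification of the real scaling algorithm (which also has burst limits and per-function reserved concurrency), just to make the throttling arithmetic concrete:

```python
def sandboxes_needed(concurrent_requests, account_limit=1000):
    """How many sandboxes serve a burst, and how many requests get throttled.

    Each sandbox processes exactly one event at a time, so a burst of N
    simultaneous requests needs N sandboxes, capped by the account's
    concurrent-execution limit (1,000 by default, raisable via a support
    ticket). Requests beyond the cap are throttled.
    """
    running = min(concurrent_requests, account_limit)
    throttled = max(0, concurrent_requests - account_limit)
    return running, throttled

print(sandboxes_needed(2))     # two simultaneous requests, two sandboxes
print(sandboxes_needed(2500))  # the requests beyond the limit throttle
```

The second call shows why the cumulative, per-region nature of the limit matters: unrelated functions in the same account compete for the same 1,000 slots.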

Eoin: And while we're on that topic, I recently caught up with an episode of the FooBar Serverless podcast, which is really excellent. And I'm going to link it in the show notes here, actually, because we're not going to go deep into Lambda concurrency scaling and throughput. But there was an episode of FooBar Serverless with Julian Wood, which talks about all the fine details of Lambda scaling and throughput. And if that's something that interests you, or if you're thinking about really getting into Lambda, it's a really good primer.

Luciano: Yeah, that's a good one to call out. And one last point that I have is about integrations. One of the very positive things about Lambda is that if you buy into the AWS ecosystem, Lambda integrates with pretty much any other service. So mastering Lambda gives you the ability to actually connect all the different services together. It's truly the most abstract compute layer that you can use to create workflows in AWS where you need to connect different components, different services.

So definitely one more reason to learn Lambda: even if you're maybe not trying to buy into the "let's build everything serverless" approach, you're still going to come across Lambda for very specific use cases. And again, talking about the sync versus async execution model, this is something that becomes important in this context, because you really need to understand what it means for a Lambda to be invoked synchronously, and what it means to be invoked asynchronously.

My mental model is it is synchronous when you invoke the Lambda and you wait for the Lambda to give you a response, while when you don't really care about a response, you just want to fire off something in the background, probably you want to go for an asynchronous invocation. So it's a little bit more like fire and forget. And the reason why these details are important because you also get different behaviors.

For instance, when there are failures: with the asynchronous model you get automatic retries (actually multiple retries), while with synchronous invocations you don't really get retries, and it's up to you to re-invoke the Lambda function when there is a failure. So just something to call out: if you are thinking about different workflows and different kinds of integrations, look into these two models and try to understand which one is more suitable for your use case. Now, I think we are getting close to the end of this episode. So let's try to get into the more visionary part and discuss: what do we believe is the future of cloud computing? Is it going to be more AWS Lambda-like, or is it going to be something else?
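With the AWS SDK, the difference between the two invocation models is literally one parameter on the Invoke call: `InvocationType="RequestResponse"` waits for the function's result, while `"Event"` queues the event and returns immediately (and gets Lambda's automatic retries on failure). This sketch only builds the call arguments, so it runs without AWS credentials; in real code you would pass them to boto3's Lambda client, and the function name here is made up:

```python
import json

def build_invoke_args(function_name, payload, wait_for_response=True):
    """Arguments for boto3's lambda_client.invoke(**args).

    RequestResponse = synchronous: the caller blocks for the result and
    handles failures itself. Event = asynchronous, fire and forget:
    Lambda queues the event and retries automatically on error.
    """
    return {
        "FunctionName": function_name,
        "InvocationType": "RequestResponse" if wait_for_response else "Event",
        "Payload": json.dumps(payload),
    }

sync_args = build_invoke_args("my-function", {"orderId": 42})
async_args = build_invoke_args("my-function", {"orderId": 42}, wait_for_response=False)
print(sync_args["InvocationType"], async_args["InvocationType"])
```

So choosing sync versus async is not an architectural rewrite; it is a per-call decision, which is why understanding the retry behaviour of each mode matters so much.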

Eoin: I think that Lambda has already been a game changer and it's kind of changed how people think about cloud computing and the evolution of cloud computing. And there's no going back from that, but it's not an all or nothing thing. And it's not Lambda or containers or Lambda or anything else argument. It's just the fact that Lambda has shown people how you can build really powerful architectures, really advanced systems without having to provision servers.

Therefore, people are getting used to the idea that they don't have to maintain and patch all this infrastructure themselves and there are easier ways of doing things. So even if it's going to be using containers in the future, the systems that are running them are going to get a lot simpler. And we have this convergence of functions as a service and the container model and blurred lines between what the capabilities are between these two compute models.

We see that with Fargate becoming a little bit more serverless, perhaps, and Lambda adopting container image support. They still have very different execution models, but the feature sets are expanding so that they're kind of impinging on each other's territory. So I don't really worry too much about whether it's all going to be Lambda in the future or not. It's more about the direction of travel and how everything's hopefully just going to get much simpler. But I'm definitely curious. We're just at pre-re:Invent time, so I'm definitely interested to see where Lambda goes next at re:Invent in the next few weeks. So what's in your crystal ball? Is Lambda the future of cloud computing, and that's it?

Luciano: I actually think that Lambda will be the future of cloud computing, but maybe only for the next three to five years, because I expect that something entirely new might come along and there might be innovation. And this is mostly motivated by the fact that I see two big trends that are somewhat against, not necessarily against Lambda itself, but more against the idea that you need to use one massive cloud provider and rely entirely on it.

And of course, you might argue that some of these ideas are kind of politically driven or socially driven, but there is also a cost element to it. And I'm hearing about some interesting solutions where the idea is more that rather than relying on a cloud provider, you should rely more on the computer that is available on the edge. But by edge, we don't mean edge services by cloud provider, but actually the devices that people use to access the services themselves.

Mobile devices, laptops, and so on. There is massive compute available out there. And it is possible, with technology that we already have available today, to offload some of the computation, networking, and data sharing onto the devices of the people actually using the service itself. And there are actually two very interesting companies that are operating in this space and providing a lot of innovation. One is Holepunch, who just launched a service called

Which is more of an example of something that you can build with this model. And they have a bunch of open source libraries that you can use today to actually build something like that yourself. And that's going to be using totally peer-to-peer based compute and resources, rather than relying as much on cloud providers. Another company that is traveling in a very similar direction, and is providing more of a runtime to build these kinds of prototypes or projects, is

So definitely look into the websites of these companies just to understand what kind of new ideas they're trying to propose and what would be possible in the future if more and more people start to buy into this model. Another complaint that I hear a lot about serverless in general is that if you look at different cloud providers, they are kind of offering something similar, but at the end of the day it's not standard.

So it's not really something you can easily abstract, like build once and ship everywhere. And I feel that there needs to be a little bit more standardization, something similar to what happened with containers, to try to standardize more and more the serverless offerings in terms of events and in terms of what the compute interface is going to look like. And maybe that's something that will create more innovation, that might create new products, that might create even new contenders in this space.

And I expect that maybe a technology like Wasm can have a big impact in this space. But again, I'm only hearing kind of very early conversation, so it's very hard to predict what can happen there. So I suppose that's everything we have. And I'm really curious to know what do you think about, well, first of all, if you're starting to look into Lambda, what is your feeling? Is it something you are going to be using?

What kind of projects do you have in mind? And looking more and more into Lambda, do you see this kind of technology fulfilling your needs? Or not, and why? And if you have been using Lambda for a while, what do you think about our visionary predictions? Do you think they're going to be correct? Do you have a totally different perspective? Is Lambda going to be more and more prevalent in our future? Or are we going to see something entirely different? Let us know in the comments, reach out to us on Twitter, and we would love to have a chat with you and explore these topics further. Until then, see you in the next episode.