Help us to make this transcription better! If you find an error, please
submit a PR with your corrections.
Luciano: Hello, today we're going to answer the question: is the serverless developer experience still immature? With that question, we are going to cover topics like what local development looks like, what the best tools are for developing serverless applications, what the serverless ecosystem looks like, and how it can be improved. My name is Luciano and today I'm joined by Eoin, and this is the AWS Bites podcast.
So before we get started, I want to thank you, Project O'Brien, for suggesting this topic to us, and we remind you that we are very open to suggestions, so feel free to send us your questions. But let's start by clarifying the context for today. When we mention serverless, we're talking about building applications not just with Lambda, but with the entire ecosystem: DynamoDB, SQS, Cognito, Kinesis, EventBridge. In general, the large variety of managed services that you get in AWS. So maybe we can start by discussing how the serverless experience compares with more traditional ways of building applications in the cloud. What do you think, Eoin? Yeah, I guess it depends where you're coming from.
Eoin: I remember when we were building monolithic applications, you essentially had one process that would run all of the various capabilities of your application, maybe talking to a database. Sometimes you could run all of the code for the application in an IDE with the click of one button. Then I guess a lot of people moved into microservices development, where things were a little bit more fragmented and you had containers for different pieces of functionality. From a development perspective, things started to change around that time because you needed to run multiple containers, but it was still doable. And when you're looking at developer experience for serverless, a lot of people will compare it to containers and Docker and Kubernetes and what it's like to run containers locally. One of the things containers do really, really well, I think, is making sure that you've got a developer environment that behaves very closely to what your production environment is going to do. You've got an immutable container that's going to run in pretty much the same way locally and in production. With serverless, it's a lot more complex, right? It's not as simple. Yeah.
Luciano: And I know, for instance, that even with things like Kubernetes, which is a very complex piece of machinery when you run it in the cloud, you still have very good tooling to simulate all of that ecosystem on one machine as you are developing. So that's an interesting thing to start with. So yeah, we're trying to compare the serverless experience with monoliths, the more traditional way, and with microservices using containers. I think the first thing that comes to mind when thinking about the main difference between serverless and those kinds of architectures is that the level of granularity is much smaller with serverless, because you are generally thinking in terms of functions rather than services. Even though, yeah, you might argue that a set of functions logically creates a service, when you are developing a specific capability you are writing one particular function at a time. So the level of granularity is immediately smaller, and it forces you to think differently about how to build that particular capability. And it's something you can see, for instance, even in terms of security, right? You are able to write one particular policy for a particular function that is extremely locked down, only to the things that that function is supposed to do, which is very hard to achieve in a bigger, more monolithic application or even a microservice. Absolutely.
Eoin: It's a huge benefit, and we might talk about that in more detail in a while, but it also brings a set of challenges, because suddenly you have the responsibility to make sure that you've got minimal privilege with every IAM policy you write. Yeah.
Luciano: And that's tricky because, of course, we know the pain of writing IAM policies and it's very hard to get them right on the first try. So you generally go by trial and error until the policy does what you want it to do. Exactly. Yeah. Unfortunately that is the case.
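As an illustration of that per-function, least-privilege idea, here is a minimal sketch using the AWS CDK in TypeScript (not code from the episode; the stack, table, function, and handler names are hypothetical). One function gets read access to exactly one DynamoDB table and nothing else:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

// Hypothetical stack: one function, one table, one narrowly scoped policy.
export class OrdersStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const ordersTable = new dynamodb.Table(this, 'OrdersTable', {
      partitionKey: { name: 'orderId', type: dynamodb.AttributeType.STRING },
    });

    const getOrderFn = new lambda.Function(this, 'GetOrderFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'getOrder.handler',
      code: lambda.Code.fromAsset('dist'),
      environment: { TABLE_NAME: ordersTable.tableName },
    });

    // Grants read actions on this one table only, rather than a broad
    // policy shared by a whole service.
    ordersTable.grantReadData(getOrderFn);
  }
}
```

Helpers like grantReadData generate the scoped-down policy for you, which takes some of the trial and error out of writing it by hand.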
Eoin: Yeah. So I like the fact that you've got much more fragmentation, really. When you moved to microservices, you had lots of small pieces; with serverless, generally, you have many, many more pieces. They might be grouped together into services, but you're deploying lots of individual pieces of code, and you're also deploying resources very freely. Before serverless, you deployed infrastructure and then you wrote code and deployed your code onto that infrastructure. But with serverless, that separation doesn't really exist so clearly anymore. You're deploying infrastructure and code together in many cases.
You might have some base foundational resources that are more long-lived and that you don't deploy every time, but you can deploy an SQS queue very easily, very dynamically. You can even create them at runtime. So it's not just about how you get your code running; it's about how you get all of those other little pieces that make up your architecture, and how you develop with that mindset. So I suppose one of the big things, the big elephant in the room sometimes when it comes to serverless, is local development. When you've got a mix of code and AWS resources, and each AWS resource is very complex in its own right, can you simulate that locally? And is it worthwhile simulating that locally? It's the million dollar question. What's your opinion on it?
Luciano: Yeah, I don't know if I have a fully formed opinion yet, meaning that I've been working on some projects where somehow we managed to have a good enough local environment and everyone has been happy with it. But in other cases it has been much trickier and we ended up relying a lot more on the real AWS environment running in the cloud, even for development. So when we change something, even just to test our changes, we end up deploying and checking things in the real AWS environment.
So I'm still a little bit conflicted on whether one way is better than the other. I think it probably depends on the complexity of the application: are you using just a few simple AWS services, or are you using more advanced services and a mixture of many, many different services? And I think it also depends on the tooling that is currently available. For instance, I have used LocalStack, which is actually a very good tool that allows you to simulate many different AWS services locally. It runs in Docker, so you can easily run one or more containers and use them to simulate, I don't know, S3 and SQS are probably the most common ones, but you can simulate a range of different services. But it gets tricky, of course, when you are doing more advanced things: EventBridge comes to mind, Step Functions comes to mind. In those cases you get something very basic, but you don't get the level of accuracy that you might need, or that would make you feel confident that what you're doing is actually going to work in production. Yeah, I think that's a really good point.
Eoin: I think LocalStack is great, as long as you know where the limits are and where you need to actually rely on the cloud. So don't over-rely on it and assume that it's a drop-in replacement for AWS. Of course it isn't, and it never will be, but it can help you in certain cases if you want to have really fast feedback in a local developer environment. There are some places where it does really well.
If you need a local S3, then yeah, it'll work, with limitations. So it can help to optimize your developer workflow, but it's not going to replace the cloud, ultimately. You need to test your code in the cloud as quickly as possible. The question is, how satisfied are you with deploying to the cloud every time you make a code change? So what is your developer flow? How fast do you want your developer feedback loop to be? And when do you go from working in local mode to working in cloud mode? There are a lot of different factors at play there. It depends on a lot of things, including your internet connection. I've had the experience of developing on a slow internet connection, where deploying a CloudFormation stack, or even just uploading it, was the bottleneck. If you're working with a gigabit connection all the time and that's not a problem, then that's good for you, but it's not like that for everybody. So the tooling has to be broadly applicable if it's going to really be successful. Absolutely. That's a great point.
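For context, the way code typically talks to LocalStack is by overriding the SDK endpoint. Here is a minimal sketch with the AWS SDK for JavaScript v3 in TypeScript (not from the episode; the region and dummy credentials are placeholders), assuming LocalStack is running in Docker on its default edge port 4566:

```typescript
import { S3Client, ListBucketsCommand } from '@aws-sdk/client-s3';

// Point the SDK at LocalStack instead of the real AWS endpoints.
// 4566 is LocalStack's default edge port; credentials can be dummy values.
const s3 = new S3Client({
  region: 'eu-west-1',
  endpoint: 'http://localhost:4566',
  forcePathStyle: true, // avoid bucket-name DNS lookups against localhost
  credentials: { accessKeyId: 'test', secretAccessKey: 'test' },
});

async function main() {
  const { Buckets } = await s3.send(new ListBucketsCommand({}));
  console.log(Buckets?.map((b) => b.Name));
}

main().catch(console.error);
```

The same code runs against real AWS just by dropping the endpoint override, which is part of what makes this workflow attractive for the simpler services.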
Luciano: Is there any other tool that you know of, aside from LocalStack, for trying to get something running locally, or maybe to simplify the developer experience in general? Yeah, there are a lot of options, and there's a growing set of options.
Eoin: So for a long time I've been using serverless-offline successfully when using the Serverless Framework. And if you've got a traditional API stack with Lambda behind it, maybe DynamoDB behind it, you can run all of that locally with reasonable results, pretty good results. It can start to break down a little bit when you've got other triggers. If you're triggering from EventBridge or SQS or SNS, this is where it gets a little bit more complicated. You've also got SAM local, which is also pretty good. Probably even more robust than serverless-offline because it's, I guess, supported by AWS, and it also uses Docker by default to give you more of an isolated runtime. So that's good. And then you've got some of these new tools that are coming out, like SST, which we were discussing earlier, right? There are some third parties which are really pushing the boundaries and trying to make local development, and the bridge between local and cloud development, a little bit more seamless, letting you do local troubleshooting. So what do you think? Yeah.
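To make the serverless-offline setup concrete, here is a minimal sketch of a Serverless Framework service, assuming a TypeScript config file (serverless.ts); the service name, handler, and route are hypothetical. With the plugin installed, running `sls offline` serves the API locally:

```typescript
// serverless.ts: the Serverless Framework also accepts a TypeScript config
import type { AWS } from '@serverless/typescript';

const serverlessConfiguration: AWS = {
  service: 'orders-api',           // hypothetical service name
  frameworkVersion: '3',
  plugins: ['serverless-offline'], // emulates API Gateway + Lambda locally
  provider: {
    name: 'aws',
    runtime: 'nodejs18.x',
    region: 'eu-west-1',
  },
  functions: {
    getOrder: {
      handler: 'src/getOrder.handler',
      events: [{ http: { method: 'get', path: 'orders/{id}' } }],
    },
  },
};

module.exports = serverlessConfiguration;
```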
Luciano: It's interesting. I haven't tried SST yet, but it seems that they are promoting this idea that if you can get your code to the cloud faster, then maybe the cloud becomes your development environment. And I think this is the big question: is this going to become a reality, or is it just that, because we cannot simulate everything locally, this is the only approach we can reasonably take? So I don't know, again, what the real answer is, but it's interesting to see that there is a push, even from AWS itself, to say: yeah, you can do things locally, but the real environment is the cloud, so try to do as much as possible in the cloud straight away. And that hybrid approach, I think, is still the reality in most cases, where maybe you are still running your code locally, but your code reaches out to services that are already available in AWS, like, I don't know, reading a table in DynamoDB, or writing a message to a queue, or sending an SNS notification. You can definitely do all these things from code running locally. So at the end of the day you get something that is trustworthy enough as you change your code. So, yeah, I don't know. Do we want to make some final remarks on what we think is good and what we think is bad right now? Yeah, that's a good idea. Maybe it sounds like we've taken a kind of a negative view.
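As a sketch of that hybrid approach (again, not code from the episode; the table name, topic ARN, and region are hypothetical), local code simply uses your normal AWS credentials and talks to real resources in a development account, with no local emulation at all:

```typescript
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand } from '@aws-sdk/lib-dynamodb';
import { SNSClient, PublishCommand } from '@aws-sdk/client-sns';

// No endpoint override: the code runs on your laptop, but the table and topic
// are real resources in a development AWS account, reached through your local
// credentials/profile.
const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({ region: 'eu-west-1' }));
const sns = new SNSClient({ region: 'eu-west-1' });

async function main() {
  const { Item } = await ddb.send(
    new GetCommand({ TableName: 'orders-dev', Key: { orderId: '1234' } })
  );

  await sns.send(
    new PublishCommand({
      TopicArn: 'arn:aws:sns:eu-west-1:123456789012:orders-dev-events',
      Message: JSON.stringify({ event: 'order-checked', order: Item }),
    })
  );
}

main().catch(console.error);
```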
Eoin: I mean, let's face facts: we're serverless advocates, but the tooling is immature compared to container-based development. That's my view on it. And I think that's fine as long as you understand that, understand the limitations and the workarounds, and keep an eye out for improvements, because the benefits outweigh these disadvantages in my view. If you've got a good approach to constantly improving your development environment, you can live with these limitations. But I think there's a lot of scope for improvement as well. So what do you think could be better?
Luciano: Yeah, I think what could be better, as we said, is the ability to push code faster. And we have seen, before and during re:Invent, a number of promising initiatives from AWS. For instance, I remember we discussed SAM Accelerate, or "sam sync" as it's sometimes referred to, which should allow you to synchronize the code in your Lambda, if you're using SAM, straight away. As you make a change, there is probably a watch mechanism that will automatically publish your code without you having to invoke a separate command that triggers CloudFormation and so on. So that should give you a better way of just changing your code and seeing the changes straight away. And similarly, I think there is something called CDK hotswap. If I understand correctly, it works with CDK and it allows you to update Lambdas, but also code inside ECS containers, and also Step Functions state machine definitions. And as far as I understand, it doesn't use CloudFormation, so that's probably why you get much faster feedback. You probably don't get rollbacks and all the other things that CloudFormation gives you, but for development that's probably fine. Yeah. I mean, that's what it's about.
Eoin: It's not about making your production deployment faster, because that needs to be safe and predictable using CloudFormation, but about getting code up to AWS faster and running it faster. Of course, we said at the start that serverless applications aren't just about the code; they're about all those other resources that comprise your architecture. So that's one of the things that can slow you down and could be improved. Say you want to deploy a new queue: we create and destroy AWS resources all the time when we're in development mode on AWS with serverless applications, so how can you make that faster? If CloudFormation were orders of magnitude faster, that would instantly make the developer experience way, way better. It can be a bottleneck. You also mentioned security, Luciano, getting your permissions right. When you're in that phase of development where you're constantly tweaking IAM policies, what's the best way to do that? Are you updating the policy programmatically? Are you using the console to update your policy? If you're using CloudFormation every time, you have to wait for the stack to update before you can test every version of the policy. So improved tooling around that, and maybe better predictive generation of policies and validation of policies, some intelligence in there, would go a long way to improving the developer experience. I really think so. I absolutely agree. I think that this is all we have for this episode.
Luciano: And I'm really curious to know what you all think about this and what you are doing today to build your serverless applications. If you have found any sweet spot that works well for you, we are really curious to hear about that. So definitely share your experience with us. And please remember to follow us and subscribe so you can be notified the next time we publish an episode, which is generally every Friday. So we'll see you next time. Bye.