Help us to make this transcription better! If you find an error, please
submit a PR with your corrections.
Luciano: Let's imagine this. You have built a sleek little web application. It's kind of small, but it's solid. It's a Rust backend. And maybe on the front end, you're using SolidJS. You see what I did there? And it runs well enough. And maybe initially you deployed it on premise. Maybe you have a client that gave you a box where you just put all the code and it's running fine. But now, for whatever reason, it's your turn to host it. And of course, you want to host it in the cloud on AWS. So what do you do? You could roll up your sleeves and dive deep into VPCs, load balancers, ECS task definitions, and all the other delightful complexity that comes with running containers on AWS.
Or maybe there is a simpler way. What if there was a service that just took your code or your container, whatever that is, packaged it, built it, deployed it, scaled it, and just gave you a URL and even an HTTPS connection. And at that point, you are just done. If you make any change in your repo and commit on main, the process will just start again and give you a new version ready to use. That's actually the promise of a service called AWS App Runner. And in this episode, we are putting that promise to the test. So we'll share the real story of a project that we recently migrated, why we chose App Runner for it, and everything we discovered along the way. The good, the bad, and the downright confusing. And spoiler alert, there is a lot of the confusing. So if you ever wished for a Heroku-like experience on AWS, or if you are just trying to figure out when to use App Runner compared to something like Fargate, hopefully this episode will answer all your questions. My name is Luciano, and today I'm joined by Conor for another episode of AWS Bites podcast. AWS Bites is sponsored by fourTheorem, so thanks fourTheorem for making this possible. We'll tell you more about fourTheorem later. Conor, do we start by trying to describe what the heck is App Runner?
Conor: Yeah, what is App Runner? So it's relatively new, a mid-2021 service, and it is yet another way to run containers on AWS. So you're probably familiar with Corey Quinn's rantings about the 17 ways to run containers on AWS. I'm not actually sure if this is the 18th way or the 17th way, but there are a lot of ways. So the tagline for the service is deploy web apps and APIs at scale, and it's definitely more oriented towards web servers.
So, you know, applications that are fronted by some sort of load balancer and that accept, you know, HTTP traffic. So who is it for? I guess, you know, if you're a developer, they're trying to simplify that process. You know, I have an app, I have a Dockerfile, I want to run this container in the cloud, please. And, you know, App Runner says that it removes a lot of complexity and moving parts. And then for your DevOps teams or your ops teams or your cloud wranglers, they also promise to remove a lot of the operational pain: deployment, builds, container registries, VPCs, network topology. You know, apparently we can throw a container over the wall and make it AWS's problem. That's the dream. So focus on building your app. App Runner has a build process: it'll build your container, push it to an ECR repo, provision whatever infrastructure is needed, deploy the container, and make it available on a magic HTTPS URL. And then it has support for auto scaling and all the things you'd expect for a solution like this. So that's App Runner. What was the exact use case, Luciano? What was your test bed for the project, I guess?
Luciano: Yeah, so it was a monolithic container with a web application with both backend and frontend packaged together into one container. Not really relevant, but probably fun for people to know. The backend is written in Rust using the Axum framework. So in that sense, it's a bit of a monolithic framework. You put all your business logic, all your HTTP routes into one binary, effectively. And then there is a frontend, which is an SPA written in SolidJS.
So all the assets are pre-built and then they are served by the same Axum server on a public path, basically. So the story for this particular project is that this was an application that was built a few years ago for a very small project. It's also probably not relevant, but still fun to know, that it's effectively a quiz-style mini game. So it's not a very complicated or business-critical type of application.
So that was actually a good test bed for trying new things without worrying too much that, if we break something, it's going to cause massive damage for a customer. So the idea is that originally we were only asked to build the application, something we did very quickly. And, for whatever reason, that application was actually hosted on-premise. So our deliverable was like, here is the container, go figure it out yourself.
Now we are asked, OK, can you actually host it and manage it yourself in the cloud? And of course, because we live and breathe AWS, we started to think, OK, how do we move this container to AWS? And can we use this as an excuse to experiment with new services? To be honest, App Runner has been on our minds for a while. We never really had a good excuse to try it out. So we felt like this was the right opportunity to give it a spin, because it seemed like a really relevant use case, especially because we didn't want to spend a huge amount of time on this migration. But maybe the question that people have at this point is: we talked about App Runner and more or less what the idea is, but people are probably much more familiar with something like Fargate when it comes to containers. So what is really the difference between the two?
Conor: Yeah, so I guess Fargate for me is kind of my meat-and-potatoes service. It is my go-to service for yeeting a container into AWS, usually. You know, whether you have a greenfield app or you're trying to launder technical debt into AWS, it seems to always be the best choice, or the least worst choice, for getting your container up and running. It's hugely flexible and the operational burden tends to be very small.
You know, once you have a Fargate service running, AWS does a really good job of taking care of the tasks that are, you know, fulfilling that service. So it is our go-to a lot of the time. But, you know, there's obviously a lot of Terraform and CDK abstractions on top of it. And it's very easy to get up and running with Fargate. But I think you forget as a practitioner that's been using it a lot that there's actually a lot of moving parts involved in Fargate.
So, you know, you need a VPC, private subnets, route tables, a NAT gateway, an application load balancer, security groups, target groups, an ACM certificate and its validation records, Route 53 records, an IAM task execution role and task role, a CloudWatch log group, an ECS cluster, an ECR repo, a task definition, an ECS service. And then if you want to deploy it, you probably need GitHub Actions, an OIDC identity provider, a granular role, and you need to get your hands dirty and write some GitHub Actions workflows.
So it's very easy to forget the level of moving parts involved in just getting a pretty straightforward web app up and running in Fargate, right? So I guess App Runner promises to hide a lot of this complexity or at least automate it for you. So it's much more abstracted. We don't have the concept of ALBs or autoscaling groups or networking on the happy path. And then you don't really have to worry about scaling too much, right?
It has a scale-to-zero-ish concept. It'll shut down containers and it'll relaunch them after some period of, you know, inactivity. It'll put them to sleep for you, I guess. So the pricing model on Fargate is well understood, I guess. It's, you know, your kind of vCPU slash gigabyte hours. And then as mentioned, you have a lot of ancillary costs with a simple Fargate app, whether it's the ALB itself, public IP addresses, or a NAT gateway, which you'll probably have if you want to run Fargate in a private subnet. So how does the pricing model compare in App Runner, Luciano? What are we looking at to get up and running?
Luciano: Yeah, I think you described it really well. I think the idea of App Runner is that it is a much more managed service, in a way. And in that sense, probably similarly to Lambda, the pricing model is more geared towards paying for the resources that you're actually using as your application is running. And we mentioned that the service scales down. It doesn't really scale down to zero.
And actually, there is an interesting open conversation on GitHub. We'll post the link in the show notes if you're curious, where people are asking, well, this is not really competitive with something like the Google Cloud equivalent. Sorry, is it called App Runner? The Google Cloud one? No idea. There is a service to run containers. I think it's called Cloud Run. Oh, it might be called App Run.
Is it? Yeah, something like that. There is a very similar name as well. But effectively, there is a similar service in Google Cloud. And lots of people are saying, well, that one scales to zero. Can you, AWS, please do the same? Let's leave that aside for a moment. If you're curious, you can check out the link we'll put in the show notes. So going back to the pricing, the idea is that you have two dimensions, again, very similar to Lambda.
So the amount of memory that you are using and the CPU that you are using. In that sense, it's similar to Lambda, but there are some fundamental differences, because we mentioned this concept that containers can be frozen. So you will need to have at least one container. That's the minimum. Of course, you can set your own minimum. We'll talk more about scalability later. But imagine that there is always a container there allocated for you.
That container might not be consuming CPU. So in that sense, it's kind of frozen. But it's not totally destroyed. A Lambda instance, for instance, eventually gets destroyed and even all the memory is released. With App Runner, it doesn't get released. There is always at least one container sleeping there. So your memory cost has a fixed baseline, in a way. Whatever the minimum number of containers you set in the autoscaling configuration, you are going to be paying for at least that amount of memory.
Then, of course, the cluster, if we can call it that, is very elastic: if you have lots of traffic, there can be more containers at some point in time, so you will be allocating more memory. And in that sense, you pay overall the memory cost, which I think is $0.007 per gigabyte per hour, which, if you have one container with one gigabyte, is about $5 per month.
So that's... I think that's kind of the baseline for one application you have in... Even if it's not getting any traffic, I don't think App Runner is going to get any cheaper than this. Then you have the concept of CPU cost. So the memory is only one dimension. The other dimension is CPU. And CPU, effectively, when your container is frozen, you are not consuming any CPU. Even if you have any background task, effectively, your container is put to sleep in a way.
So the memory is still allocated, but it's not executing any CPU. And when you are actually handling web requests, so your CPU is running, that time is actually calculated, and you have a cost of $0.064 per vCPU per hour. And you can also configure the number of vCPUs that you need per container if you're doing something, for instance, that requires multithreading. So if your app is active all the time and you have one container, the CPU cost is $46 a month.
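For reference, those two dimensions boil down to a simple back-of-the-envelope formula. A rough sketch in TypeScript, using the per-hour rates quoted in the episode (verify them against the official pricing page for your region before relying on this):

```typescript
// Back-of-the-envelope App Runner cost model.
const MEMORY_USD_PER_GB_HOUR = 0.007;
const CPU_USD_PER_VCPU_HOUR = 0.064;
const HOURS_PER_MONTH = 730;

function monthlyCostUsd(memoryGb: number, vCpus: number, activeFraction: number): number {
  // Memory is billed for every provisioned instance, even while it sleeps.
  const memory = memoryGb * MEMORY_USD_PER_GB_HOUR * HOURS_PER_MONTH;
  // vCPU is billed only while the instance is actively handling requests.
  const cpu = vCpus * CPU_USD_PER_VCPU_HOUR * HOURS_PER_MONTH * activeFraction;
  return memory + cpu;
}

console.log(monthlyCostUsd(1, 1, 0).toFixed(2)); // idle baseline: ~5.11
console.log(monthlyCostUsd(1, 1, 1).toFixed(2)); // busy 24/7: ~51.83
```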
So that, I think, gives you the idea that even if you have only one container and it's working 24/7 all the time, you will get to a cost of about $50 per month. So that kind of gives you, I guess, the ballpark of what the cost could be. The good thing is that there is no load balancer cost, which, if I remember correctly, is about $30 per month. Correct me if I'm wrong. So you are not paying for that.
So that's kind of absorbed in the rest of the cost, which is a good thing. Actually, you don't even see the load balancers in your own account. AWS is totally managing that. Then there are some extra costs. For instance, if you enable the feature of automatic deployments, the one that you described, Conor, where you just connect your repo and let AWS do everything else, I think there is a cost of about $1 per month.
But then there is also a build fee because, of course, your build might be very simple and very quick. But maybe you're doing something extremely complex that might take, I don't know, half an hour to build. So that time is billed with an extra cost. And it's $0.005 per minute. So keep it in mind. Like, if you have a very long build, probably you might want to invest in kind of building it yourself.
Because there is also another mode where you can build it yourself and then just publish the container and tell App Runner, please release this new version of the container. So you are not forced to go with the totally managed approach. You can still handle the build yourself if you want. If you let AWS do it, just keep in mind that there is an extra cost. Now, let's talk a little bit about networking, because I think that's an interesting point. We already mentioned that it's all kind of abstracted for you. You don't even know in which network your App Runner containers are running. You don't really define a VPC and put the containers there. It's more magic that AWS does for you.
Conor: So for people that miss EC2 Classic, this is a throwback maybe. They can get back to that experience of not worrying about a VPC.
Luciano: Pretty much, yeah. Or even, I don't know, sometimes when you just use Lambda and you don't put it in a VPC, it's running somewhere magically. You don't have to worry too much about that. Which I think is good. I think it removes lots of complexity for many use cases. Except that for almost every application, at some point you need to connect to a database, right? To do anything useful. So that creates a problem.
Because if you have a database, I don't know, an RDS instance or an Aurora cluster running in a VPC, then how do you connect your application, which is running somewhere you don't even know, to a VPC that you are well aware of and actually control in your own account? You need to bridge whatever networking AWS is provisioning for you to the actual networking where you have your own database. And at that point, you can decide to do different things.
Effectively, you can say that all the traffic that my application is generating, whether it's to connect to a database, or to pull, I don't know, a file from some HTTP endpoint, or to connect to S3, or to send an event through EventBridge, so whatever internal or external traffic there is, is going to go through that VPC that you control. And at that point, you need to take care of creating all the necessary things like route tables and NAT gateways.
Yeah, everything from a networking perspective to make sure that traffic can be routed correctly to your services. So I think that's the interesting part about networking: it's generally very easy to get started with, but eventually you still need to take on a little bit of complexity. Unless you don't need a database at all, or you are using a database that is not even on AWS and you can just use the public network to connect to it, which I don't know if I would necessarily recommend. For most serious use cases, I think you still need to understand a little bit of networking and make sure you understand how to connect App Runner to your own VPC.
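To make that concrete, here is a minimal CDK sketch of a VPC connector, the mechanism App Runner uses to route egress traffic through a VPC you control. The `vpc` and `dbSecurityGroup` references are assumptions, standing in for resources defined elsewhere in your stack:

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as apprunner from 'aws-cdk-lib/aws-apprunner';

// A VPC connector places network interfaces into your subnets so the
// App Runner service can reach private resources such as RDS/Aurora.
const connector = new apprunner.CfnVpcConnector(this, 'VpcConnector', {
  subnets: vpc.selectSubnets({
    subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
  }).subnetIds,
  securityGroups: [dbSecurityGroup.securityGroupId],
});

// On the CfnService, the connector is referenced like this (fragment):
// networkConfiguration: {
//   egressConfiguration: {
//     egressType: 'VPC',
//     vpcConnectorArn: connector.attrVpcConnectorArn,
//   },
// },
```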
Conor: Gotcha. It's great to have the escape hatch, I guess, right? If you're on some very happy path, it's great to not have to worry about this stuff. But they didn't make it a Mickey Mouse service, I guess: it can still integrate with the well-known VPC fundamentals. Okay, that's good.
Luciano: Yeah, pretty much. Yeah, the next topic I guess we can explore is maybe to deep dive a little bit on the concept of autoscaling. We've mentioned it briefly, but I think it's interesting to understand it a little bit more. It's probably built on the existing load balancer and autoscaling machinery under the hood. But again, it's something that has been abstracted for you; you don't have to provision autoscaling groups.
It's more that you have some configuration dials that you can play with to effectively define the rules: maybe I have one container to start with, and if there is more traffic, I want more containers, and when the traffic goes down, I want to reduce the number of containers, basically elastically, based on traffic. Actually, the first thing worth mentioning, as we said, is that to configure a single container, so a single instance of your application, you can specify virtual CPU and virtual memory. For both, you have very limited choices. For CPU, you have 0.25, 0.5, 1, 2, and 4 vCPUs. For memory, you have 2 gigabytes, 3 gigabytes, 4 gigabytes.
Conor: Wow. Okay, it is quite limited in memory footprint then. And I guess that's probably good enough for most web applications.
Luciano: So again, it's another symptom that this is a service built with web applications in mind, and web applications only. I don't think you are expected to do, I don't know, data crunching or massive processing. It's more that you are going to be handling HTTP traffic. So probably these characteristics will cover 99% of the use cases. Then in terms of autoscaling, you have three parameters you can play with.
One is max concurrency, then you have max size and min size. Now, what is max concurrency? Basically, all the requests that you are receiving through these invisible load balancers are being monitored, and with max concurrency you are effectively saying how many concurrent requests one single instance of your application can handle. So you can set this number; I think the maximum is 200, which I thought was very disappointing.
So I believe it's something between 1 and 200. But basically what it means, let's say you set it to 200, is that as soon as you have 201 concurrent requests, AWS is going to spin up a new instance of the container. Of course, you can have limits. So you have this concept of a minimum number of containers, min size, where one is the minimum. And max size. I don't know if there is a maximum.
I didn't check, but effectively, let's say you put 20 there: you are never going to have more than 20 containers. So if you really have a huge amount of traffic, eventually your containers are going to start to struggle a little bit, because you are not going to spin up more instances. And this is just a cost control measure, so you can put reasonable boundaries in place and your containers are not going to scale indefinitely. I guess that covers more or less what you need to know about instance configuration and autoscaling. What about security?
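For reference, before the security discussion: the three dials just described map directly onto the AWS::AppRunner::AutoScalingConfiguration resource, which recent versions of aws-cdk-lib expose as an L1 construct. A minimal sketch, with illustrative names and numbers:

```typescript
import * as apprunner from 'aws-cdk-lib/aws-apprunner';

const scaling = new apprunner.CfnAutoScalingConfiguration(this, 'Scaling', {
  autoScalingConfigurationName: 'web-default', // illustrative name
  maxConcurrency: 100, // concurrent requests per instance before scaling out (max 200)
  minSize: 1,          // you always pay the memory baseline for this many instances
  maxSize: 5,          // hard cap; mainly a cost-control measure
});

// Attached to the service via (fragment):
// autoScalingConfigurationArn: scaling.attrAutoScalingConfigurationArn,
```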
Conor: Yeah, it's always good to kick the tires on a new service and see how you or your team might shoot themselves in the foot, I guess. So I had a quick play with App Runner myself. I guess one of the interesting things is if you are integrating it with GitHub, or your VCS of choice, which is probably the way most teams will go to take advantage of the automation. In the GitHub case, anyway, it uses the AWS Connector for GitHub, a GitHub App, which is used by a variety of services.
You can let that have access to your entire GitHub org, or you can actually limit it to specific repos as well, which is nice. So the team that is responsible for your GitHub organization might be happy with that. It does have the kind of load-a-secret-into-an-environment-variable functionality, like you'd get with Fargate and ECS, which is fantastic. So you can specify a Secrets Manager secret. And thankfully, SSM Parameter Store parameters can be loaded dynamically as well.
So there's a couple of services now where they'll try and strong-arm you into using Secrets Manager to get that $0.40 per month. So it's great to see Parameter Store as a first-class citizen there. And it is just a great pattern for loading secrets at runtime into a containerized app. It has a concept called an instance role, which is exactly like an EC2 instance profile or an ECS task role: temporary credentials that the container will assume at runtime.
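As a concrete illustration of that pattern, here is a hedged fragment of a CfnService's `sourceConfiguration`; `repo`, `dbSecret`, `flagsParam`, and the rest of the service definition are assumptions, standing in for resources defined elsewhere:

```typescript
// Fragment of an apprunner.CfnServiceProps value.
sourceConfiguration: {
  imageRepository: {
    imageIdentifier: `${repo.repositoryUri}:latest`,
    imageRepositoryType: 'ECR',
    imageConfiguration: {
      port: '8080',
      runtimeEnvironmentSecrets: [
        // The value can be a Secrets Manager secret ARN...
        { name: 'DATABASE_URL', value: dbSecret.secretArn },
        // ...or an SSM Parameter Store parameter ARN.
        { name: 'FEATURE_FLAGS', value: flagsParam.parameterArn },
      ],
    },
  },
},
// The instance role referenced by instanceConfiguration.instanceRoleArn
// needs read access to both, e.g. dbSecret.grantRead(instanceRole).
```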
So again, you can give it granular IAM policies, which is fantastic. It has WAF integration, which is great. And then from an operational point of view, it does seem to have really good Terraform coverage as well. So, you know, it lets you put some governance and opinionated tooling around the pattern. It's not just a click-ops, click-buttons-in-the-console-to-get-easy-containers solution. If it fits your use case and you want to lean into App Runner, it does look like there are ways to build it into an existing, robust software development lifecycle around infrastructure. You don't have to use the click-ops escape hatch, which is great. One question I wanted to ask you, Luciano, is about background jobs. You know, it's very typical if you have a web app that you might have some sort of utils instance, or you want to have a runner or something that consumes messages from a queue. In this kind of App Runner web app paradigm, how do you run background tasks or kind of ad hoc tasks that are not HTTP requests, I guess?
Luciano: Yeah, that's a great question, because I think it's very common. For instance, if you use something like Laravel or Ruby on Rails, all these kinds of monolithic MVC web frameworks, at least all the ones I've seen, have a concept of background tasks. They make it easy for you to keep the responses to the users very fast, but then whenever you need to do something that is not necessarily correlated to the response you want to give to the user, like, I don't know, signing a user up to a newsletter, sending an email, that kind of stuff...
You probably want to schedule a background task, reply to the user with an OK as soon as possible, and then process that particular request in the background. And as I said, many frameworks have all of that machinery built in, so I think it's common for people to just use this kind of functionality. When you go to App Runner, there is a little bit of a caveat to keep in mind: considering that your instance can be frozen at some point, if you have an application with very low, sparse traffic, you might end up in a situation where a user comes in, makes a request, something goes in the queue, and then nothing happens for a while.
So effectively, everything is frozen, and that background task never has a chance to run until, maybe a few days later, another request comes in and everything is woken up. And then the CPU finally has time to deal with the background task too. So this is something to be aware of: it's probably not ideal to use these web framework features when you're using App Runner, especially if you have very low traffic.
If you have very frequent traffic, it's probably not something you're going to notice. But if you have very sparse traffic, you might end up in a situation where your background tasks just get frozen and delayed indefinitely, or at least correlated to your web traffic patterns. So I would say that an easier approach, since you are on AWS, is to just leverage event services like EventBridge and Lambda.
This is actually how we solved it for this specific application. We had a concept of background jobs, specifically for newsletters and sending emails. And then there were some monthly reports: collect some statistics, create reports, and send them by email. All of that stuff now just goes through EventBridge. Whenever the app needs to schedule something, it creates an event, and a Lambda captures that event and processes it, totally decoupled from the main application. And whenever the job is done, of course, there are mechanisms to notify the application, if the application needs to know about that job being completed. This is something I think was worth mentioning because it wasn't very obvious at first. It's something that we realized in due course, and we didn't expect to have to do some rework on the application to adjust for this particular use case.
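A minimal sketch of that publish side, shown in TypeScript purely for illustration (the actual app in this story is Rust); the event source and detail-type names are made up:

```typescript
import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

const eventBridge = new EventBridgeClient({});

// Called from a request handler: respond to the user right away and let a
// Lambda subscribed to this detail-type do the slow work, decoupled from
// the web container (which might be frozen between requests).
export async function scheduleNewsletterSignup(email: string): Promise<void> {
  await eventBridge.send(new PutEventsCommand({
    Entries: [{
      Source: 'webapp',                // illustrative
      DetailType: 'newsletter.signup', // illustrative
      Detail: JSON.stringify({ email }),
    }],
  }));
}
```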
Conor: Something similar, I guess, just to put you on the spot. You mentioned Laravel or Rails. It's very common to run database migrations or something on a single host during deployment. There is apprunner.yaml, which people might be familiar with, similar to CodeBuild's buildspec and similar files, and there are different phases and stuff. Is there some sort of lifecycle hook or something where you can run code that you want to happen once, I guess, as part of an App Runner deploy?
Luciano: That's a very good question as well. Not that I'm aware of; I couldn't see anything like that. We do use migrations for this particular application. I think the advantage there is that this particular migration system will put a lock on the database. So effectively, whichever instance starts the migration first will have precedence, and then for all the other ones it will be like, okay, this is a no-op.
So it kind of works, no problem. But I remember an old Laravel application where this locking mechanism wasn't there, so I don't know if this is something that they have solved by now in Laravel itself. This approach wouldn't have worked there, because effectively you could have two containers starting at the same time, and they would both crash because they are conflicting on running the migrations.
So I guess it's something to be aware of. In the past, this is also something we solved with Lambda, by creating custom resources that were run before a new deployment, with all the migration logic in the custom resource. Which can be a bit annoying when you're using a monolithic framework, because you are suddenly extracting all that code, which is generally very nicely abstracted in those frameworks, and you have to put it in a Lambda. And it's not always straightforward to do all of that. So yeah, this might be another point where you might find a little bit of friction, just because of the running model. But I think in general, when you take frameworks like Laravel or Ruby on Rails, they are rarely built with that level of scalability in mind. It's more like, it's going to run on a big VPS and then everyone is happy. At least that's my experience with those kinds of frameworks.
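If your migration tool doesn't do the locking for you, the idea Luciano describes can be implemented by hand. A hedged sketch using a Postgres advisory lock; `runMigrations` is an assumed helper standing in for your framework's migration runner:

```typescript
import { Client } from 'pg';

// Ensure only one container runs migrations at startup: whoever grabs the
// advisory lock does the work, everyone else treats it as a no-op.
export async function migrateOnce(client: Client): Promise<void> {
  const LOCK_KEY = 42; // arbitrary app-wide lock id
  const { rows } = await client.query(
    'SELECT pg_try_advisory_lock($1) AS locked',
    [LOCK_KEY],
  );
  if (!rows[0].locked) {
    return; // another instance is already migrating
  }
  try {
    await runMigrations(client); // assumed: applies pending migrations
  } finally {
    await client.query('SELECT pg_advisory_unlock($1)', [LOCK_KEY]);
  }
}
```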
Conor: The serverful approach. Okay, so should we try to do a final analysis?
Luciano: I'm going to try to say what I liked and what I didn't like. Let's do it.
Conor: What's the good, the bad, and the ugly?
Luciano: Exactly. So my opinion, and feel free to disagree with me, is that it's really nice that you don't have to worry about managing lots of stuff. You made a very comprehensive list: networking, load balancers, I'm not going to repeat all of it. You have a path that is almost like the good old times of Heroku. The modern equivalents are probably Fly.io or Railway. I think it's a very similar experience.
It's like I don't want to know anything or almost anything about infrastructure. I want to focus on an app, building an app, and then just throw it over the wall. AWS, this is my repo. Go figure it out. I think this is a very appealing proposition for most people. And I think this is a common struggle that a lot of people that are starting with AWS would somehow describe as, yes, I just wanted to deploy this one app, and then I spent the next two years learning AWS.
I felt that myself a few years ago. And I know that a lot of people are feeling this kind of stress of, I thought this was simpler, and then suddenly I need to get a PhD in AWS to do the most basic thing. I think with services like App Runner, this is going to be less and less the case. So I think this is very welcome, in my opinion. And also, if you know Fargate, this is another simplification over Fargate.
I think knowing Fargate is absolutely a great skill, but sometimes you just want something simpler. So this can be another option. Another thing that I noticed, and I'm not really sure what the main reason for this is, and I don't have hard evidence or benchmarks, but I had the impression that doing a deployment is generally much faster than my experience deploying web applications on Fargate.
So again, it might just be due to my bad configuration in Fargate, but I had a feeling that with App Runner you can literally deploy a new version in seconds, or maybe a minute, rather than having to wait 10 minutes for all the health checks and everything to stabilize. And also the idea of autoscaling configurations is pretty cool. We mentioned the parameters, but I don't know if we mentioned this.
What you can do, you can actually create multiple configurations. So you could have, I don't know, a Christmas event configuration for when you expect a lot of traffic, and then you can just turn it on and off for specific deployments. So you are not limited to one configuration. You can create sets of different scalability properties, if we want to call it like that, or configurations, and then you can assign them to every deployment. So I think that's a nice thing for those kind of applications that can be very seasonal and maybe you want to be ready for when the season starts. You're just going to flip the switch, change the configuration group, and that's already prepared. You don't have to do the maths again or think about, okay, how many containers do we need now? You prepare all of this configuration up front, and then you can just use it.
Conor: Yeah, it's interesting too that they went with that requests model, right? Instead of the classic, you know, aggregate CPU usage across a fleet of instances, which is kind of hard to reason about. With this, it seems like you can be like, oh, no, we expect this many requests per second on Black Friday because that's what we had last year, and it's a bit easier to reason about the level of load maybe and your scaling. Yeah, pretty much.
Luciano: And I also think that, in general, you could easily benchmark one instance of your container and see how many requests it can handle with a given vCPU and memory configuration. Maybe you just run it as a container locally, with constrained access to the actual resources of the host machine, and you see, okay, it can handle, I don't know, 100 requests per second reasonably well. That can be your number.
I think it's much easier to think about scalability that way, rather than trying to predict what a certain amount of CPU means and when you need another instance. Especially because, for instance, this Rust application uses an async framework, so it's very efficient at dealing with requests. So I was actually a bit disappointed, and this is, I think, my first bad note, that the maximum is 200, because my benchmarks show that it can easily handle much more than that with one container, but AWS is forcing me to have that as a maximum bound. So if suddenly I have, I don't know, 400 concurrent requests, it's going to scale out, even though it doesn't really need to, because one container could deal with that just fine.
Conor: Okay, so the incentive is the opposite: you should rewrite it in Python instead of Rust so that you're using more CPU cycles per container. Exactly. Yeah.
Luciano: Yeah. One reason not to use Rust. Okay. The other thing that I was a little bit disappointed by, probably significantly disappointed actually, is that I was using CDK for all the infrastructure as code, and the support is pretty bad, and that is probably an understatement. You only have the Cfn resources, the most basic level of resources, which map exactly to what you have in CloudFormation.
There is no simplification whatsoever. You need to be extremely explicit, which is a bit annoying, but it's not just that: some features are not even exposed in CloudFormation itself, so of course they are not in CDK either. This is the case, for instance, if you want to associate a custom domain with your application. All of that stuff is not exposed in CloudFormation.
It exists in the web UI and in the CLI, but not in CloudFormation. So the usual solution is that you create your own custom resource and solve the problem yourself. There is actually a very nice article that shows how you can do this with a custom resource using CloudFormation and Python, written by Mark van Holsteijn, the CTO of Xebia. We'll have a link in the show notes if you want to do something like that.
I kind of copied that and adapted it myself, and it was still, I think, more than 200 lines of code that I would have loved to avoid. So please, AWS, fix this, because anyone with a public web application is reasonably going to need a custom domain. And hopefully everyone is doing infrastructure as code. So if you are in those two buckets, you have a problem that requires a lot more code than it should.
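The gist of that workaround, heavily condensed: call the AssociateCustomDomain API from a custom resource, since no CloudFormation resource type exists for it. A deliberately minimal sketch using CDK's AwsCustomResource; a production version, like the article linked above, also creates the certificate-validation DNS records and disassociates the domain on stack deletion:

```typescript
import * as cr from 'aws-cdk-lib/custom-resources';

new cr.AwsCustomResource(this, 'CustomDomain', {
  onCreate: {
    service: 'AppRunner',
    action: 'associateCustomDomain',
    parameters: {
      ServiceArn: service.attrServiceArn, // the CfnService defined elsewhere
      DomainName: 'app.example.com',
      EnableWWWSubdomain: false,
    },
    physicalResourceId: cr.PhysicalResourceId.of('app.example.com'),
  },
  // onDelete should call disassociateCustomDomain; omitted for brevity.
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
    resources: [service.attrServiceArn],
  }),
});
```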
Then the other problem is that if your custom domain is actually an apex domain, so let's say you have example.com and you want to expose the application on example.com, that means you need to create an alias record that points to the specific App Runner domain. I don't know if that's the right terminology, but you need to create an alias record. And if you have ever used alias records, there is this concept of predefined targets.
So there are AWS services that can be the target of an alias record, and App Runner is one of those, but yet again, this is not exposed at the CDK and CloudFormation level. There is a workaround as well, and we'll have a link in the show notes. I'm not going to explain exactly how this Route 53 alias mechanism works; maybe that's a topic for an entire other episode, if people are curious. But effectively, if you know certain configuration parameters, and we'll point you to the documentation, you can recreate this functionality yourself.
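The shape of that workaround looks roughly like this; the hosted zone ID below is a placeholder, and the real per-region values for App Runner are listed in the AWS documentation:

```typescript
import * as route53 from 'aws-cdk-lib/aws-route53';

// Apex alias record pointing at the App Runner domain. 'hostedZone' is
// your example.com public hosted zone, assumed to exist elsewhere.
new route53.ARecord(this, 'ApexAlias', {
  zone: hostedZone,
  target: route53.RecordTarget.fromAlias({
    bind: () => ({
      dnsName: service.attrServiceUrl, // e.g. xyz123.awsapprunner.com
      hostedZoneId: 'ZXXXXXXXXXXXXX',  // App Runner's zone ID for your region
    }),
  }),
});
```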
But again, I found myself writing probably another 30 lines of code that I didn't want to write. It was interesting to learn a little bit more about how this alias mechanism works in Route 53. It's not magic at the end of the day, but it is just painful that you have to go through all of this extra research and code just to be able to use an apex domain for your application. And in general, I felt that pain everywhere in the docs.
The docs are not bad, but they are only geared towards people who want to click-ops the whole application. That experience is actually quite well done. But if you are a cloud practitioner, again, don't do that. Please use infrastructure as code. And AWS, please do more to support people who want to use infrastructure as code. And yeah, then there are other nitpicks that could easily be improved by AWS.
For instance, the log groups are created automatically, but they never expire, and there is one log group for every deployment. So if you are iterating very quickly on your application, suddenly you'll have hundreds and hundreds of log groups and you'll end up paying a significant amount of money. To be fair, this is similar to what happens by default in Lambda, but at least in Lambda with CDK, they made it somewhat easier to configure the retention.
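One possible workaround in the meantime, sketched under the assumption that the log group naming pattern stays stable (verify it for your own service): CDK's LogRetention construct can set a retention policy on an existing log group by name.

```typescript
import * as logs from 'aws-cdk-lib/aws-logs';

// serviceName and serviceId are assumed to be known; the name pattern is
// an assumption based on what App Runner appeared to create.
new logs.LogRetention(this, 'AppLogsRetention', {
  logGroupName: `/aws/apprunner/${serviceName}/${serviceId}/application`,
  retention: logs.RetentionDays.ONE_MONTH,
});
```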
So hopefully they can do something similar for App Runner as well. And yeah, I think in general, all these problems are not showstoppers. It's just, I think the whole experience can be much nicer if AWS improved it. My general worry is that while I was bumping into all these issues, I found posts that were like three years old where people already complained about these issues and they are still open.
So I'm just a little bit concerned, questioning what the level of investment from AWS is on this service. Is it just something that they tried, but they are not necessarily fully committed to? Or maybe eventually they will decide to invest more and improve all these rough edges. I hope it's the latter, because in general, I like the service. I just hope it doesn't become another AWS abandonware service, where yes, eventually you figure out all the rough edges and you have your copy-paste solutions for all of them, but it's just not the nice experience that I wish we could have as cloud practitioners and users of AWS.
Conor: I look forward to the blog post, migrating your AWS App Runner apps to ECS Fargate that will be published in 2029. Yeah.
Luciano: Hopefully that's not the case. Maybe we'll see more of the opposite migration. Yeah. And finally, it would be nice if it did scale to zero, because Google Cloud Run, that's the name I was not remembering before, everyone is saying (I didn't use it myself, so I cannot speak to it) that it's a very similar type of user experience, plus generally much cheaper and simpler, because it automatically scales to zero. You don't even have to configure it. So it would be nice to have something like that. Now, should we try to summarize when we would prefer App Runner over Fargate, or the other way around?
Conor: Um, it depends, right? Are we allowed to say that on the podcast? As consultants, we are, I think.
Luciano: It depends.
Conor: Yeah, I guess, yeah, if you're looking for that batteries included option, App Runner really does seem like a no-brainer. I took it for a quick spin earlier today and you can get from GitHub repo with Dockerfile to running web app in minutes, literally. So it's fantastic for that. I think it'll be great for teams that maybe have a lot of prototypes or they want to deploy feature branches to hosted containers.
The fact that you can get that kind of $5 per month price point for mostly stopped apps that you interact with infrequently, I think it's going to be great for that kind of use case. And yeah, maybe it will jump to the top of the list in terms of what tool you should consider for hosting a container. And then, if you start hitting a lot of rough edges, it's time to look at Fargate.
So maybe just slot it into that existing hierarchy that every practitioner has, you know, where you're trying to minimize the amount of effort or infrastructure. You might be like, can I do it in a Step Function? No. Okay. Can I do it in Lambda? No. Okay. Can I do it in App Runner? Can I do it in Fargate? So I feel like App Runner is just going to slot into that decision tree that we all have. But yeah, for me, it's definitely a service that is worth looking into. And it's definitely something I'll be playing around with, looking at the Terraform coverage, and hoping to use a little more. What about you?
Luciano: Yeah, I'm curious to hear your feedback on the Terraform coverage. I hope it's better than the CDK and CloudFormation one. So yeah, please let me know about that. The only thing I'd like to add is on pricing. It's an interesting one, because I think it could go either way when you compare it with Fargate. The fact that you're not paying for a load balancer, for example, could make it much cheaper, just because you don't have those 30 bucks fixed per month.
But at the same time, I think there is a premium on the cost of memory and CPU. So if you really have an intensive use case, like you're running dozens of instances of your containers, I think there is a point where Fargate becomes cheaper than App Runner. I don't really have hard data, so this is just an assumption for now. Feel free to experiment with the cost calculator or your own spreadsheet and figure out exactly whether the pricing will work for you or not, depending on your metrics.
So I think that brings us to the end of this episode. Sorry, this was a little bit of a longer one, but hopefully you found value in it. And hopefully you are now curious to give App Runner a spin. Maybe you'll find out that it's easier and better than Fargate for your own use cases, or maybe not. Definitely let us know. We are really curious. As always, we want to know what you are using for your own little tests, but more importantly, what you are using in production and why.
So please share your stories, because this is how we all get better: by sharing our stories with each other, lessons learned, and so on. Now, before we go, a big thanks to our sponsor. Thank you, fourTheorem, for powering yet another episode of AWS Bites. At fourTheorem, we believe that serverless should be simple, scalable, and cost-effective, and we help teams do just that. So whether you are diving into containers, stepping into event-driven architecture, or scaling a global SaaS platform on AWS, our team is there to help you. Visit fourTheorem.com to see how we can help you build faster, better, and with more confidence on AWS. Thank you very much, and we'll see you in the next one.