AWS Bites Podcast

76. Unboxing AWS Copilot!

Published 2023-04-14 - Listen on your favourite podcast player

In this episode, we're doing something different! Join us for a special screen-sharing edition of our podcast series, as we take a deep dive into AWS Copilot, a service designed to simplify container application deployment on AWS.

During this video, we'll be sharing our screens as we walk through the AWS Copilot landing page and documentation, and demonstrate how to use the service to deploy a container application. We highly recommend watching the video version of this episode, as we'll be providing a lot of visual guidance and examples.

Starting with the basics, we'll learn about the differences between copilot init and copilot app init, and how to prepare our environment using a custom domain. We'll then walk through the deployment process step-by-step, examining the generated configuration file, manifest.yml, and testing our deployed application.

Next, we'll explore the networking resources created by AWS Copilot, including a VPC, subnets, and a load balancer, and review the automation capabilities of CodePipeline. We'll also discuss the options available for rolling out new changes, and demonstrate how to make changes and re-deploy through the pipeline.

Throughout the video, we share our thoughts and opinions on AWS Copilot, including a failed attempt with AppRunner and a review of the pipeline execution and timing.

AWS Bites is sponsored by fourTheorem, an AWS Consulting Partner offering training, cloud migration, and modern application architecture.

Let's talk!

Do you agree with our opinions? Do you have interesting AWS questions you'd like us to chat about? Leave a comment on YouTube or connect with us on Twitter: @eoins, @loige.

Help us to make this transcription better! If you find an error, please submit a PR with your corrections.

Luciano: Hello everyone and welcome back to another episode of AWS Bites. Today we have an incredibly exciting episode in store for you. We are here to do something a little bit different than usual. We want to unbox an AWS product and specifically we want to unbox AWS Copilot. This is going to be a very visual episode and we will be screen sharing. So if you are listening to the audio only version, we will be doing our best to describe what's happening but you might be better off watching the video version on YouTube or Spotify.

Now I mentioned that in this episode we will be exploring Copilot. Copilot is not to be confused with GitHub Copilot. We are talking about AWS Copilot, which is a CLI helper that helps you to create container-based applications and deploy them on AWS. We will walk you through the entire process. We will show you how to install it, how to configure it, and we will create a container application and deploy it on AWS.

So I hope you are as excited as we are to explore AWS Copilot. Hopefully we will make the best of it. So sit back, relax and let's have fun together. AWS Bites is sponsored by fourTheorem. fourTheorem is a cloud consulting firm that helps businesses to migrate to AWS and optimize their cloud infrastructure. With a team of experienced cloud architects and engineers, fourTheorem provides end-to-end solutions for cloud migrations, application development and infrastructure management. If you're interested in finding out more, check out fourtheorem.com. The link is in the show notes. So today we are here to explore AWS Copilot. Eoin, what do you have in store for us?

Eoin: Well, when you take it out of the box, the first place you might look is the Copilot landing page on the AWS site. So there's a quick overview here. You might be wondering, okay, where does this fit into the 17 different ways to run containers on AWS? So I suppose it's not really a separate way to run containers. It's just like a tool that you can use to run them on the existing services. So we've already talked on the podcast about ECS and we've talked about Fargate.

We haven't talked a lot about AppRunner yet, but AppRunner is a relatively new way to get up and running with running containers on AWS for fairly simple applications. Now those could be like background processing or APIs or front-end applications, but it hides a lot of the stuff you normally have to deal with, like task definitions and services and load balancers. AppRunner makes that a lot simpler, but it also is a bit limited.

You can only run so many parallel containers of certain sizes, but for a lot of use cases, it would be perfectly okay. So what Copilot is doing is it's allowing you to run stuff on ECS and Fargate, but you avoid having to write loads of CloudFormation yourself, or Terraform or CDK code. You don't have to worry about all of the well-architected pieces because Copilot, I think, is trying to sort that out for you: trying to make it observable, make it easy to set up deployment pipelines and safe deployments. It's a command line interface only, so there's no AWS service here or anything like that. So we're going to give it a try.

Luciano: So is it fair to say that Copilot is basically a guided experience into trying to deploy containers in AWS the right way, hopefully?

Eoin: Yeah, I think that's a good way to put it. If you go to the Copilot CLI documentation, you've got this GitHub page. So it's hosted on GitHub pages, and it's got a good getting started guide, which we're going to have a look at to take you through. If we go to getting started for the first time, you install the Copilot CLI. I'm on Mac OS here, so I've already brew installed it, so I've already followed this part.
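For anyone following along, the install commands look roughly like this (a sketch; the Linux download URL follows the pattern in the getting started guide, so double-check it against the current docs):

```shell
# macOS, via the official Homebrew tap
brew install aws/tap/copilot-cli

# Linux x86_64, grabbing the latest release binary
curl -Lo copilot https://github.com/aws/copilot-cli/releases/latest/download/copilot-linux
chmod +x copilot && sudo mv copilot /usr/local/bin/copilot

# Sanity check
copilot --version
```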

And then you can see the commands you get. There's a couple of concepts here in Copilot, which we might cover quickly. When you start initially, you do this `copilot init`. If I go `copilot init`, it's going to ask me if I want to create an application, and then it's going to ask me for my Dockerfile, and then it's going to go ahead and deploy everything. And it's going to ask me, give me a few options here.

So let me just give an example here. So I'm just going to say test, application name test. It asks me what kind of workload represents my architecture. So it's showing me here that I can have like a web service backed by AppRunner or a web service backed by ECS on Fargate with a load balancer in front of it. Or I can have a back end service, which is ECS on Fargate. So it doesn't have an internet facing connectivity.

And then it can have like background services, like something pulling off a queue or a scheduled job. So if I run one of these things, it'll ask me for a name. By the way, this wizard is also, it has this question mark for help, which is pretty good. At every step, it'll give you more details about all these configuration options and what they do. Let's say I call this service one. And then it asks me which Dockerfile I want.

So there's a couple of concepts here. It's got application. Within the application, there's a service and you can have multiple services within your application. And those could be microservices. So it could be a front end and then you might have an API gateway. And then you might have other services that talk to each other and it will set up all that for us. So an application is almost like a workspace, right?

Yeah, exactly. That's a good way to look at it. Cancel out of that. Because we did a quick dry run for this earlier on, actually, it was more than a dry run. It was a bit of a failed attempt where we explored it, a few different AppRunner concepts. And one of the things we realized is that if you want to use custom domains for an internet facing application, the default setup doesn't work out of the box.

So if you just do `copilot init`, it'll create that application for you, but it doesn't give you the option of using a custom domain, and it's difficult to apply it after the fact. So the better way to set up a Copilot application for the first time seems to be to use `copilot app init` first. And once you do that, it will allow you to specify your domain. When you deploy an application to it, you can use your custom domain.

So it's more of a step by step because when you do `copilot app init`, it doesn't create any services, it just creates the workspace as you call it. And then you later have to go and add services in it. So you can add services into that workspace. So it takes a little bit of time to get those concepts in your head, but you've got your application, your services in your application. And then the third thing is you've got environments.

So you have your test environment, your QA environment, production environment. We can start with our `copilot app init` and let's put a domain in. We're using an AWS account that has a domain registered with Route 53 and the hosted zone for that is in the same account. So that makes it easier when it comes to creating certificates and we'll see how that works. So let's get cracking then. So we've got an awsbites.click domain, which we can use. So let's give that a go. It's going to check that we own it first. Let's just call our application copilot-app. And now it says it's proposing infrastructure changes, which is not a term that you would understand just from normal AWS usage. What it's basically doing is creating CloudFormation templates. It says it's creating an admin role for CloudFormation, which is good. And then it's adding name server records for our domain.
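For reference, that whole interactive exchange maps onto a single command (a sketch; the `--domain` flag is documented in the CLI reference):

```shell
# Create the application "workspace" and associate the custom domain.
# Assumes a Route 53 public hosted zone for awsbites.click already
# exists in the same account.
copilot app init copilot-app --domain awsbites.click
```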

Luciano: So basically it requires us to have a hosted zone on Route 53 with that particular domain, right?

Eoin: If we go into AWS, we can actually look into CloudFormation and see what is happening under the hood. So here's one stack being created by copilot app. And in here, we've got the admin role it mentioned. It's creating a hosted zone and it's creating an execution role. So we can look, click into that hosted zone even. Okay. This hosted zone is basically a subdomain of our main domain. Yeah. So we've also got our main domain's hosted zone in here, awsbites.click.

It hasn't created a delegate NS record in here yet, but I expect that it will. These name servers are publicly available and anyone who's doing DNS queries against this domain, the records will come from these name servers. But this new one, our copilot-app.awsbites.click, that's in a different hosted zone. So that uses a different set of name servers. So we need to delegate from this one to the subdomain's name servers.

Okay. So this create is still in progress. It's weird the way it says stack set. I saw that it's creating stack sets and I'm wondering why does it need stack sets because stack sets are normally when you need to deploy a stack across multiple regions in AWS. And it seems like it has support for creating some resources across region, like your ECR repository, where you're going to put container images. But I couldn't find anywhere in the documentation that explains why it was doing multi-region configuration by default or where the set of supported regions would be defined or how you would actually proceed and do a multi-region deployment with copilot. So it's a little bit strange. I wonder if it's just like future proofing in some way so that eventually they can support multi-region disaster recovery type deployments. Yeah, that's an interesting detail.

Luciano: Okay.

Eoin: It says it's complete and it also says it has added the NS records to delegate responsibility. What I like about this is that it gives you much more human readable output than just giving you the CloudFormation events raw. And I also like that it told you what to do next.

Luciano: Yeah, I think it's the usability and the developer experience.

Eoin: Somebody has put a lot of thought in here. So well done and thank you. So this is done. Let's have a quick look in our hosted zone. So the hosted zone, which was there before, it has been updated to add the delegate NS records. And that's it. So we don't have any application yet. All we've got is some foundational stuff set up. But now, as you said, Luciano, it's told us that the next step is to run copilot init. So let's try that. Oh, by the way, this application we have, I just based it on the one that comes with the getting started documentation. That's what it looks like. Okay.

Luciano: Should we have a look at the Dockerfile as well very quickly? Yeah, let's do that.
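For readers of the transcript, the Dockerfile from the getting started sample is roughly this (a sketch of the sample app, not the exact file):

```dockerfile
# Static site served by the stock nginx image on port 80
FROM nginx
EXPOSE 80
COPY index.html /usr/share/nginx/html/index.html
```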

Eoin: Yeah. So this is the Dockerfile. So it's almost as simple as you can get. It's using an Nginx web server. It's exposing port 80 and it's using all the default config in Nginx. And it's just overriding the index.html and adding an image. So it's literally just a static website backed by Nginx. Yep. So I think we can try an init. Okay. So we're going to go for a load balanced web service. So this is one that will use a load balancer rather than using AppRunner. I think this is a kind of a more interesting example and probably more common, given that ours is a static application. Let's call this frontend. And it's asking us whether we want to use a Dockerfile or an existing container image. We want to use our Dockerfile because we haven't built the image yet. So let's try that. Aha. It's interesting that it says that it's detected that I am on ARM 64, but it's going to set the platform to Linux x86 64 instead. Yeah, that's a very common problem.

Luciano: Yeah, that's a very common mistake that it's always very hard to troubleshoot because of course you need to build the container for the target architecture, but by default it's going to be the container for your system architecture, the one you are working on. So if there is a mismatch, you might end up with a container that is going to fail in very weird ways in production.

Eoin: We're going to do x86, but it says here that we can update a manifest if we want to change that. Okay. Please change the platform field in your workload manifest. So maybe we can have a look at that while it's doing the next step because it says, would you like to deploy a test environment? And I would say yes.
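The platform override it points at is a single field in the service manifest (field name per the manifest spec; the rest of the file is untouched):

```yaml
# copilot/frontend/manifest.yml
platform: linux/x86_64   # or linux/arm64 to match an ARM image build
```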

Luciano: By the way, that's another interesting concept that it supports this idea of environments out of the box. By environment, we mean a development environment, QA staging, whatever you want to call it, and production or more custom environments if you want to. Yeah. And my understanding is that it will give you a set of best practices there as well, like isolate them in different domains. Yeah. I think it creates maybe different VPCs as well. Yeah, that's a good point actually.
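For the record, the wizard answers above can also be passed as flags, so the same service could be created non-interactively (a sketch; flag names per the CLI reference):

```shell
copilot init \
  --app copilot-app \
  --name frontend \
  --type "Load Balanced Web Service" \
  --dockerfile ./Dockerfile \
  --deploy   # also deploys to a test environment, as in the wizard
```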

Eoin: We can switch back to the docs actually, because we didn't really focus enough on what's in the docs. So the docs will show you what the different concepts are, the things we talked about: applications, environments, services. It mentions that it has application-wide resources that are not specific to environments, like your ECR repositories and buckets. And by the way, you also have these handy utilities like `copilot app show` or `copilot svc show`, which would just give you a text summary of an application or an environment, I think, or a service. So if we look at developing, we've got the ability to add additional resources to the application. So I was kind of surprised to see this, but you can add a bucket or a DynamoDB table or an Aurora serverless cluster using the `copilot storage init` command. I haven't tried this. But you could also add custom resources, right?
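We didn't try it on air, but based on the docs the storage command looks like this (a sketch; the resource names are hypothetical, and the DynamoDB variant will prompt for a key schema if you don't pass it as flags):

```shell
# Attach an S3 bucket to the frontend service
copilot storage init -n my-bucket -t S3 -w frontend

# Or a DynamoDB table
copilot storage init -n my-table -t DynamoDB -w frontend
```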

Luciano: If you want to do your own raw CloudFormation, just add whatever you need, right?

Eoin: Yeah. So we've got here the concept of add-on templates, which allow you to put in essentially raw CloudFormation. So it supports then properties, resources, outputs, parameters. Interestingly, there is another section in here on overrides. It talks here about CloudFormation overrides. There's also the concept of CDK overrides. So CDK overrides actually allow you to put CDK components into your application as well.
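As a concrete example of an add-on template, the documented convention is a CloudFormation file under the service's addons directory, with App, Env and Name parameters that Copilot fills in (a minimal sketch; the table and file names are hypothetical):

```yaml
# copilot/frontend/addons/my-table.yml
Parameters:
  App:
    Type: String
  Env:
    Type: String
  Name:
    Type: String

Resources:
  MyTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH

Outputs:
  MyTableName:
    # Outputs are injected into the service as environment variables
    Value: !Ref MyTable
```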

And then there's task definition overrides, which I guess allow you, if you discover that the generated ECS task definition isn't exactly what you want, to go in and customize it at a more granular level. So this manifest was mentioned, so let's have a quick look at it. So the manifest, it looks like this is just the declaration for our whole application. And if you were doing ECS from scratch or Fargate from scratch, you'd have to do a lot of CloudFormation or Terraform.

But this is like a very concise, higher level declaration of an application that avoids you having to do that. So the pieces in here are the HTTP setup. So you can set up the path and health checks. You specify your image config and then the container config. So this one is using a quarter of a CPU, half a gig of RAM, it's x86, and it's going to run one task by default, it looks like. It's going to support ECS exec, which is nice because then we can shell into containers if we need to.
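Reconstructed from what's on screen, the generated manifest.yml is roughly this shape (a sketch; field names per the Load Balanced Web Service manifest spec):

```yaml
# copilot/frontend/manifest.yml
name: frontend
type: Load Balanced Web Service

http:
  path: '/'              # route traffic on this path to the service
  healthcheck: '/'       # load balancer health check path

image:
  build: Dockerfile      # built locally and pushed to ECR for you
  port: 80

cpu: 256                 # a quarter of a vCPU
memory: 512              # half a gig of RAM
platform: linux/x86_64
count: 1                 # one task by default
exec: true               # ECS exec, so we can shell into containers
```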

Oh yeah, and for microservice fans, the way it supports communication between microservices is using ECS Service Connect. So you turn that on, it allows you to address other microservices and it will use ECS Service Connect, which uses Cloud Map, which is like internal DNS and a load balancer. It also supports HTTPS, right? Across services. Is this with self-signed certificates or something like that?

I don't think that's provided by Service Connect out of the box, but yeah, there is something in here. Yeah, it may be under the load balanced web service. Here we go. Yeah, if you look under load balanced web service, there's some things that are quite nice about the documentation. So when you go in here, you can see sample configurations for lots of different workload types. So you've got basic configuration and then a manifest with a domain and then examples of configuring larger containers, auto scaling. And if we go further to the right, we've got end to end encryption. So I think this is what you were getting at. Right. Okay. So you still need to provide a little bit more configuration.

Luciano: It seems to be using Envoy as a sidecar. Okay. Interesting.

Eoin: Our deployment, meanwhile, is still chugging away, but we have this very nice summary of what it's doing.

Luciano: Yeah, I don't know how I feel about this YAML because on one side is another type of, I don't know, infrastructure as code. Let's call it this way, another specification that you will need to learn and get familiar with, but at the same time, it's much more friendly than it would be if you just go with CloudFormation, especially if it's the first time that you're trying to do something like this. I wonder where this applies.

Eoin: Yeah. Is it for people who just don't want to get bogged down in CloudFormation all the time? I can certainly understand why people wouldn't want to, but there's also a lot of options for containers. Like I've deployed lots of different ECS clusters and services, and it's still not trivial to configure and deploy. There's always something that's subtly different between setups. So I can understand the appeal of making that a bit simpler.

Also, I can imagine if you were migrating a load of on-premises workloads to the cloud with AWS, you don't want to have to configure all that syntax yourself. If you could use something like this, it could make the job a little bit easier. And maybe a good question, maybe somebody out there knows, but there is another tool that AWS has called App2Container. I don't know if you've seen that one. I haven't. But it's partly designed for if you've got on-premises workloads like Java applications or .NET applications, it allows you to containerize those in order to move them to AWS. Okay. So I think it's doing it a little bit like Copilot, but it's more aimed at detecting your Spring Boot application configuration, packaging it, and launching it into ECS.

Luciano: Okay, interesting. So it's probably more of a migration tool than a more general-purpose tool.

Eoin: What has it actually done? It talks about Fargate. It's creating load balancer resources. It's created HTTP and HTTPS listener rules, CloudWatch logs, and it's created the ECS service. So the ECS service, it's in the middle of creating that and waiting for the desired task to be running. So it has a target group. Yeah, so it's almost there, I would say. What about... DNS? Didn't see anything relating to DNS. I'm curious to look in the console and see what it looks like from the raw CloudFormation view. If I understand correctly, what we expect to have at the end is something like test.copilot.

Luciano: awsbites, what did we call it? .clicks? .click, yeah. Right, so it needs to create that domain as well and map it to the AWS. It needs to create that domain as well and map it to this particular environment, right?

Eoin: Yeah, I think so. Yeah, well, I would expect an alias pointing to that load balancer. So this is our stack. Maybe we can have a quick look at the template. So does it have Route 53 resources? Yes, it has a record set. And does it have a certificate? No, it doesn't.

Luciano: Okay, this is going to be interesting then to see what we get in the end. Maybe because it did create a star certificate.

Eoin: Did it create a star certificate? Yes. Well, no, it created test. It has one for the test environment created. Okay, but it's not part of that stack that we are developing.

Luciano: Must be.

Eoin: Let's see what the CLI says. Okay, it says it's done. Oh yeah, and it's giving us the address. So our address is pretty much what you said. It's a bit of a mouthful, but front end. So that's the service name. .test, which is the environment name. .copilot-app, which is the application name. And then the domain, awsbites.click. Let's click on it.

Luciano: It's working. Nice.

Eoin: Why are we surprised?

Luciano: I am surprised that it is probably creating some kind of star certificate because this is called frontend.test, right? So it didn't create a TLS certificate for frontend.test, but just test. So I wonder if this certificate also contains asterisk.test. You're right. It does. Okay, perfect. Yeah. I guess that's, I'm sure that certificate must have been in this template.

Eoin: If I look in resources and search for cert. So was it in the previous stack?

Luciano: So probably there is a stack for the environment and then a stack for the specific. Yeah, you're completely correct.

Eoin: There's a stack for the environment and a stack for the service. And it was created in the previous stack five minutes ago. We didn't even notice. Okay, good. I'm curious to have a quick look at the load balancer just to see what it looks like.

Luciano: So it did create a significant number of resources for us, which is probably worth highlighting, right? That we got a VPC fully created from scratch with all the subnets and everything else. And maybe there are questions we might check in a second, like whether it did create a NAT gateway or not. Let's have a look at that. Then it did create a load balancer. It did create DNS records, certificates, Fargate cluster, Fargate tasks, built our container, deployed it into a registry and kind of connected all the dots, even supporting a multi-environment setup. Right now we just deployed test, as we called it, but we could deploy QA and production, right? Yeah, I guess so.

Eoin: And then we could change the number of containers in production so that it's more than just one, so it runs across multiple availability zones and handles more load. We could even put in auto scaling if we want to. What I'm slightly confused by here is that it adds another listener. So it has an HTTP port 80 listener as well, which also seems to forward to the same target group as the HTTPS listener. All right. It doesn't do a redirect. It doesn't do a redirect, but it gave me a 503. Our other one seems still to be working.

Luciano: And if you use this one, but on HTTP, what happens if you use that domain? But HTTP.

Eoin: Well, good question. Yep. Is it using like host header? Okay.

Luciano: Does it redirect it somehow?

Eoin: Is NGINX doing the redirect in that case?

Luciano: I don't think so because I think NGINX is just on port 80 for what I could tell from that Dockerfile. Yes, it is.

Eoin: But I'm just wondering if it has, if it can check the X headers to see where it came from and redirect, like the forwarded-by headers. I don't know. I suppose one thing I'd like to do is turn off port 80 on the load balancer, or else ensure that it's just a redirect. Should we try creating a pipeline? We should check the NAT gateway first. Oh, the NAT. But anyway, I want to remark that we did a little bit of preparation.

Luciano: So that's of course to be mentioned. But other than that, we spent slightly more than 30 minutes to have all these things configured. And we have a container up and running in AWS with a fairly decent setup that we could probably take into like production with confidence. So that's fairly impressive. It's not bad.

Eoin: Not bad at all. In the VPC resources, we can see the internet gateway that it created. We can also see if there's a NAT gateway. And I'm happy to report that there's no NAT gateway. So you don't have to worry about mortgaging the house to pay for it.

Luciano: But that also means that our container is not going to be able to reach out to the internet if it needs to, I don't know, download anything or call an API, right? Yeah, which is a nice default.

Eoin: But I guess if we need to, then yeah, that would involve extra configuration. But you can provide if you do your VPC configuration separately, and lots of people will, or they'll have existing VPCs, you can provide those inputs. You don't have to get Copilot to create it. So we can see here that we've got this new resource map is pretty handy. It's showing us that we've got a public subnet and a private subnet, two availability zones. And yeah, it's from the public, so it must be the internet gateway route. Yeah, that's what it is. Okay, anything else we want to check? Or we try to automate this with a beautiful pipeline? Let's see. Let's have a look at the pipeline. Okay. So one thing I guess to understand is that it allows you to use Copilot to set up a code pipeline. And it supports GitHub, Bitbucket, or AWS CodeCommit. And it will deploy to your environments and runs automated testing. So it's code pipeline only. If you're using GitHub actions, you're out of luck. You got to do that yourself.
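On that point about bringing your own VPC: the environment command has import flags for exactly this case (a sketch; the VPC and subnet IDs are placeholders):

```shell
copilot env init --name test \
  --import-vpc-id vpc-0123456789abcdef0 \
  --import-public-subnets subnet-aaaa1111,subnet-bbbb2222 \
  --import-private-subnets subnet-cccc3333,subnet-dddd4444
```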

Luciano: But you can still use GitHub for repository, basically.

Eoin: Copilot pipeline init is the first step. So let's try this. Let's call this container front end main. That seems like a decent default. Okay, so what kind of continuous delivery pipeline is this? Workloads or environments? Deploys the service or jobs in your workspace or deploys the environments in your workspace? That's a bit obscure to me what it means.

Luciano: What's the difference between the two? It says we can do question mark for help.

Eoin: The help says a pipeline can be set up to deploy either your workloads or your environments. Okay. It's not really telling us anything new. Which would you pick? I don't know what the difference is. Do we go back to the docs? Yeah, let's have a quick look at the docs. We must be desperate if we're checking the docs. Okay. Pipeline configurations are created at a workspace level. If your workspace has a single service, then your pipeline will be triggered only for that service. If you've got multiple services, then the pipeline will build all the services in your workspace. That makes sense. But what's the thing about pipelines for workloads versus environments? It doesn't tell us. It doesn't say anything about this question in the documentation. I have a feeling that you could probably have different pipelines for different environments.

Luciano: Yeah. It's not too obvious which one is which. Well, as opposed to have like one global pipeline that maybe allows you to promote things across environments. I wonder if maybe this is a new feature that isn't in the latest documentation yet or something.

Eoin: Okay. So let's try this again. Go for workloads. And which environments would we like to add to the pipeline? Oh, we've only got one at the moment. So we'll just use that one. Okay. That seems to be all good. And now we can deploy. Oh, commit and push the copilot directory to your repository first. All the copilot manifests gets pushed and has to be in your Git repo. Okay. So now we can do copilot pipeline deploy. Okay. And it says action required. I have to go to here to update the status of connection from pending to available. Okay. So this is for connecting to GitHub. So this is probably a one-off operation, right?

Luciano: For this particular pipeline. Yeah. So this is like basically the OAuth flow to connect to your GitHub repo.
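For reference, the pipeline flow up to this point boils down to three commands (a sketch; the generated paths and available flags vary a bit between CLI versions):

```shell
# One-off setup: generates the pipeline manifest and buildspec
copilot pipeline init --name frontend-main

# The generated copilot/ files must be in the repo before deploying
git add copilot && git commit -m "Add copilot pipeline" && git push

# Creates the CodePipeline; the first run pauses until the GitHub
# connection is approved in the console
copilot pipeline deploy
```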

Eoin: It's interesting that I pasted in this URL. It goes to the CodeSuite service and this lands me in, redirects me to the Stockholm region. For some reason. Very random. Okay. Okay. Let's go back to home. And this one is pending. Okay. Connection. So I've got to update this pending connection, which means I have to do an OAuth dance with GitHub, I imagine. Okay. So let's say install a new app, pick awsbites and do my multi-factor dance as well.

Okay. So now that I've got the GitHub app configured, I can just hit connect. And then it knows how to talk to GitHub and set up web hooks to trigger the pipeline and all of that good stuff. Let's see what our... Oh, okay. So it says it has created the pipeline. So let's go into code pipelines, see what we've got there. Okay. So it is in progress. And it's the latest one is my latest commit, which adds the copilot directory. So this should be deploying. So what does it actually do? So it takes the source from GitHub. It's got the commit there and it's running a CodeBuild job. What does the CodeBuild job do? Probably builds the container, I imagine, right?

Luciano: So we'll look. Okay. Build details. We should be able to see the build spec here.

Eoin: I'm guessing. Okay. So that's in the source code. So we can go back to VS Code to see that. Pipelines build spec. Okay. So what's this build spec? It downloads Copilot itself. Specific version. Runs the tests, but that's commented out right now. Post build. Okay. Ooh. It's converting YAML to JSON, right?

Luciano: YAML to JSON using Ruby.

Eoin: That's why it needs Ruby. Okay. It's quite verbose. Okay. This is going through all the environments. So it's reading the environments from here, but we asked it to only run the test environment, didn't we? But okay, let's go have a look at that. It comes from the pipeline. The pipeline manifest. Okay. So I guess that's what dictates the environments, its stages, and then the name. So it's only going to pick out test because there is only test. But we can add more there to this array, I guess.

Luciano: Yeah. Okay. So here there is a similar concept where build spec is fairly complicated. So they kind of provide this higher level, simpler interface, which is the manifest YAML. Yeah.

Eoin: It's interesting, though, that this build spec is part of your source code then instead of all of this, all of these steps being somehow folded into the Copilot CLI itself. That's true. Good point.

Luciano: Probably because they want to allow you to change things in there, like enable the test or I don't know.

Eoin: So from what I can see, it's basically doing copilot package, which is generating the CloudFormation and not doing any deployment. I don't see it doing Docker build, though, but maybe that is happening as part of the package. Let's have a look, see how our pipeline is doing. Build logs. It's succeeded anyway. So what did it say? Oh, yeah, it does build the container image. And pushes it to ECR and it has also generated the stack, the CloudFormation stack, and the parameters. Cool. But it did not deploy, did it?

Luciano: This was just the build step.

Eoin: So I think there's an additional step for deployment. So here is the deployment step.

Luciano: Which is in progress, though. We don't have to manually approve.

Eoin: We don't have to manually approve, but from what I can see in the manifest, you have the option for each stage to say whether it requires manual approval. That's pretty good. Yeah, nice. I still prefer GitHub Actions, but it's pretty good.
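For reference, the pipeline manifest being discussed is roughly shaped like this (a hedged sketch; the pipeline name, branch, and repository are placeholders, but `stages` and `requires_approval` are the fields in question):

```yaml
# copilot/pipelines/my-pipeline/manifest.yml (sketch, names are placeholders)
name: my-pipeline
source:
  provider: GitHub
  properties:
    branch: main
    repository: https://github.com/example/repo
stages:
  - name: test             # only "test" exists, so only it gets deployed
  # More environments can be added to this array, e.g.:
  # - name: prod
  #   requires_approval: true   # gate the stage behind a manual approval
```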

Luciano: One thing we could test is to change that index HTML and I don't know, add a title or whatever. Commit again and then we should see another run of the pipeline and eventually we should land that change into the test environment. Aha.

Eoin: OK, so this is updating the CloudFormation stack. So this, OK, the deploy action is actually an AWS CloudFormation integration. It's not a CodeBuild job. That's why it took us directly to it. So it's taking the CloudFormation template that was generated in the previous step and the parameters, and it's using the CloudFormation integration directly. OK. While that's happening, should we talk about our failed attempt, or disappointing attempt, with AppRunner? Yes. We deliberately decided to use the load balancer here because we tried this earlier with AppRunner, while we were recording and everything, and it all went a little bit sour. To say the least. It went well to start with because it created AppRunner. We were keen to see AppRunner because we haven't used it very much. We were keen to see how it would work. But it has a couple of flaws, right? At least the Copilot experience with AppRunner. How would you describe it? Yeah, I think it wasn't too bad.

Luciano: Initially, I think it got a little bit tricky when we said, what do we do about domains? Yeah. And I think because AppRunner almost looks like a PaaS, like a Heroku, that tries to remove all the details. Like you don't get to see where the load balancer is, where ACM is, how it generates certificates, how it manages DNS and all that kind of stuff, because they are happening in some kind of global AWS account that you don't get to control.

Yeah. So a lot of the automation that we were expecting was not actually happening. It was more, for instance, OK, if you want to use this custom domain, you definitely can, but you need to basically create all these DNS records yourself. Yeah. Which, first of all, we didn't expect. So it took us a while to figure out that it was literally blocked waiting for us to create these DNS records manually. And then it did take a while to actually validate the DNS records themselves. I think it took like 15 minutes. So we were wondering, are we doing something wrong or is it normal that it takes so long? Plus, we made a few mistakes with DNS ourselves because, of course, with DNS, there is always something that goes wrong. Right. So I don't know if I'm missing anything, but this is my recollection of what went wrong.

Eoin: Yeah, exactly. I think we just made an assumption. Like at the start of this exercise, we used this `--domain` when creating the application to associate the Route 53 domain. And that worked so well for this one. But when we tried it with AppRunner, you'll notice in the documentation, this is all specific to the load balancer, well, the Load Balanced Web Service application type. And this is all nicely documented, and it tells you how to use different domains for different environments, et cetera, and how it works under the hood.

That's really nicely documented. You can also import your own existing certificate. But if you go down to this section, admittedly, we didn't read this before we tried it. The request-driven web service, that's the AppRunner version. It says that you can also add a custom domain for this. But the way it works is that you specify a subdomain in your alias here, but it's unrelated to your `--domain` in your app.

This is just per service. And you get one domain name per service, and you can't use different domain names for different environments. So there's this info here, which should maybe be a warning, but it says for now they only support one-level subdomains, and environment-level subdomains, application-level domains, or root domains are not supported yet. So there are a few caveats there. It's a pity because AppRunner, I think, is really nice. I think our summary at the end of that, Luciano, was more like AppRunner is already pretty simple to set up, even with CloudFormation or with the console or whatever you're using. So maybe Copilot doesn't really add that much, especially given that you've got these restrictions. Yeah, that seems like a fair conclusion.
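Based on that part of the documentation, the AppRunner-side configuration is per service and looks roughly like this (a hedged sketch; the service name and domain are placeholders):

```yaml
# copilot/api/manifest.yml (sketch) - Request-Driven Web Service on AppRunner
name: api
type: Request-Driven Web Service
http:
  # One custom domain per service, and, per the docs, only a one-level
  # subdomain; it is unrelated to the application's --domain, and root or
  # environment-level domains are not supported yet.
  alias: web.example.com
```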

Luciano: I don't know why they are trying to support it. Maybe there are more advanced use cases that they want to support. But in general, it feels that for ECS and Fargate there is a lot of value there, just because doing things manually, writing your own CloudFormation or Terraform, even if you are experienced, is still going to take you probably an entire day just to set up everything. If you are not experienced, probably we're talking about weeks. So this tool is probably going to bring the time down by a lot, like probably to the order of hours or even less. Should we make an aesthetic change to our application?

Eoin: Yeah, let's just add an H1 tag or let's change the background color.

Luciano: That sound good? Sounds good.

Eoin: It's not going to be very pretty, but it's a very visible change.

Luciano: Lovely. It's attention grabbing.

Eoin: Okay, so let's add this. You should add like a CSS animation and make it flash.

Luciano: All right, let's push this and see if CodePipeline does its thing. Also, this can be an interesting test to see how long it takes from commit to actual production. Well, it's CodePipeline, so "not fast" is the default answer in that case.

Eoin: But I'm curious to see, are we talking about one minute or like 10 minutes?

Luciano: The first execution was eight minutes, 49 seconds end to end.

Eoin: Okay, let's see if it has some kind of cache or whatever.

Luciano: If it's faster than that, no.

Eoin: Probably a good time to promote, or refer people to, our CodePipeline versus GitHub Actions episode, where the performance of CodePipeline is a hot topic. Indeed. Yeah, because here what we are effectively doing is not a lot, right?

Luciano: It is building a new version of the container, but hopefully it uses the Docker cache effectively. Also, it's a relatively small container, like we literally have two files, one of which is the Dockerfile starting from NGINX. And then it created the CloudFormation using the Copilot CLI, and then it is using the CloudFormation integrations to actually deploy that. Yeah, and in the last pipeline execution, that was the part that took the time, five minutes and 12 seconds. Okay.
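For context, the whole application here really is just those two files, and the Dockerfile is essentially this (a sketch of the setup; the NGINX image tag is an assumption):

```dockerfile
# Serve the static page with stock NGINX; the base image tag is illustrative.
FROM nginx:alpine
# index.html is the second of the two files in the repository.
COPY index.html /usr/share/nginx/html/index.html
```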

Eoin: So I'm guessing that the reason for that is not CloudFormation, because I don't think CloudFormation itself is at fault for that performance. I think it's just ECS, load balancer, target group updates, all of that, all those shenanigans. Waiting for the health checks and so on.

Luciano: That's actually maybe something worth looking at while the deployment is happening. What happens to the service? It's not going to go down, right? I imagine it's going to kind of softly roll out to the new version one way or another. This is a good question, actually.

Eoin: I think we have options there.

Luciano: Because this is also one of the things that when you have to decide for yourself, it's a difficult choice and it's difficult to configure it correctly, so I'm curious to see what's the default here.

Eoin: I'm guessing it's not doing the blue-green.

Luciano: It's probably just waiting for the new containers to be up and receiving traffic and being healthy, and then starting to drain the older ones. Will it add it as a target to our load balancer before it removes the old target? Maybe an interesting thing to do is to keep doing requests on the website and see if it flashes back and forth for a while in the background. Oh, I got that. So you see that you have both containers running right now for a while, until I suppose it's going to start to drain the old one. Which I mean is not necessarily too bad. But I do want to go back to the docs because...

Eoin: Okay, here we go. So in the deployment section, you can specify a rolling deployment strategy. Which is probably what we have right now, right?

Luciano: Yeah.

Eoin: The valid values are default, which creates new tasks before stopping the old ones. So you have a moment in time where you have both versions running.

Luciano: Yeah.

Eoin: Or you can recreate. But there's no blue-green deployment or canary deployment. Anything like that. So what is recreate doing? Recreate... Sorry, where did it go? Stops all running tasks and then spins up the new tasks. So you have some kind of downtime, I imagine.

Luciano: Yeah.

Eoin: Minimum healthy percent is zero, which is... I guess this is faster to get your new container up and running completely, but more dangerous if you've got a service that people rely on.

Luciano: Okay. Interesting to see that there is no other strategy there available.
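So, per those docs, the only knob in the service manifest is the rolling strategy, roughly like this (a sketch of the deployment section being discussed):

```yaml
# In the service's copilot manifest (sketch)
deployment:
  rolling: default    # new tasks start before old ones stop (brief overlap)
  # rolling: recreate # stop all old tasks first: faster, but with downtime
```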

Eoin: Yeah. I think for people's benefit, there is a good blog post on the fourTheorem blog by Gurarpit, which is about blue-green deployments with AWS CodeDeploy on ECS. And this is using Terraform, but I think the same kind of strategies still apply because it's using the CloudFormation ECS integration for the blue-green part anyway. But I think if you read this, it's a very thorough article and you'll realize how much complexity there is and how much goes into thinking about safe deployments on ECS. And it's probably understandable then why Copilot doesn't support it out of the box. Yeah. We will have the link in the show notes for you too. It looks like we're all yellow. Okay. And I'm curious to see if the pipeline says that everything is completed.

Luciano: It looks like it took at least five minutes, right? Yeah. Not as long as I thought.

Eoin: Let's have a look. Six minutes. It was six minutes, 50 seconds. And the breakdown was also five minutes for CloudFormation. It's just that the... What is the difference with the previous one? Probably building the container was faster because...

Luciano: Five, 1.5.

Eoin: Previous one is 5, 1.53. It's the exact same. I think the difference is probably just it could be the amount of time it was waiting for CodeBuild to provide a container. Right.

Luciano: So transitioning between the steps.

Eoin: Yeah. It was pretty successful, this unboxing. I think we saw that there are some gotchas with this, but hopefully it's fairly obvious to people that... Like the documentation, I think we've all seen mixed-quality AWS documentation. I think this is a good example, one that we should probably call out as a nice document. The tool itself, I think, provides good onboarding. There's a couple of concepts you have to get used to, like what's an app, what's a service, what do you do first, how do you add things? In general, it provides a nice guided experience. It is developer friendly. So hats off.

Luciano: Indeed. Yeah, I agree with that. I'm just surprised that this tool seems to have been around for a while. I think you mentioned like 4 years. Yeah. And it's not something that I see often being mentioned.

Eoin: I mean, it's definitely worth using. I would use it again. That's my impression from having used it today. If we look at it, it has been committed to since, I think, 2019, which is when the repo starts, and it's continually maintained. Issues are being opened and closed all the time. It's been around long enough that we can say this is definitely being maintained. So yeah, it's probably not going to go away anytime soon.

Luciano: I guess one of the things is that it provides an abstraction over ECS, Fargate, and AppRunner.

Eoin: I think it's probably more valuable for ECS and Fargate, but I'm interested to know if other people have tried it with AppRunner or have opinions on it. Maybe people who've got experience of running Copilot-based workloads in production, and how it compares to the other options.

Luciano: Yeah, or also if you use it for other kinds of workloads, like, I don't know, workers processing tasks from a queue or something like that, which seems to be an option that it supports. So if people have tried that, I'd be curious what your feedback is.

Eoin: Yeah, and also if people have tried it for microservices applications with Service Connect for inter-process communication, maybe some of the more advanced stuff like auto-scaling. And also there are some concepts we didn't even touch on here altogether. Maybe it's under development, but you've got the ability to put a CloudFront CDN in front of it as well. Add additional sidecars for whatever you want. So I'm interested to know what's the most extreme Copilot example out there in the wild.

Luciano: Okay, so I suppose that's all we have for today. This has been a little bit of a long episode. I hope you will like this new format that we're trying to explore. So definitely give us your feedback if there is something you particularly liked or you didn't like, or maybe another product that you would like us to unbox. Let us know in the comments and we'll take your feedback very seriously and hopefully we'll be able to give you value in the next episode. So thank you very much for being with us today and we look forward to seeing you in the next episode.