Help us to make this transcription better! If you find an error, please
submit a PR with your corrections.
Eoin: One of the things that's very common for web applications running in the cloud is that you will need to handle configuration. You're probably running your application in different environments, dev, staging, production, etc. And most likely you'll need to provide simple things like database connection details, various secrets for things like API keys, session storage, or simply referencing different S3 buckets or DynamoDB tables.
Most likely these values will be different for every environment. In this episode, we'll discuss which AWS services you can leverage to store configuration for your web apps. We will discuss simple strategies such as just using environment variables, or services such as SSM, Secrets Manager, App Config, and even how you can roll your own configuration storage. We'll discuss the pros and cons of every approach. And if you stick till the end of the episode, we'll also give you our recommendation on what's the best strategy for different kinds of applications. My name is Eoin. I'm joined by Luciano for another episode of the AWS Bites podcast. fourTheorem is the company that makes AWS Bites possible. If you're looking for a partner to accompany you on your cloud journey, check them out at fourtheorem.com. Now Luciano, before we start, as usual, we should probably begin by clarifying the use case a little bit more.
Luciano: Almost every application needs some degree of configuration. As we mentioned in the intro, what is really configuration? It's generally something environment specific that your software needs as an input to be able to perform whatever task it needs to perform. And just to give you some examples that can be different kinds of configuration, maybe your application needs to call a specific third party API.
So you need to have an API key for that that is injected somehow at runtime. It can be database credentials if you need your application to connect to a database, or maybe you need your application to do some kind of client-side TLS handshake, so you need to have client TLS certificates. So you need to have a way to also provide those as parameters. Or in AWS, it's very common that you build, I don't know, a Lambda or a container running on Fargate, and they often need to use other services like S3 or DynamoDB.
So you might create everything together in a stack, and then you need to have a way to tell the application, OK, which DynamoDB table do you need to use or which S3 bucket do you need to use, and somehow be able to provide that reference. But it can be also something more like application configuration level, like what kind of logging level do you want? You might want to provide that as a parameter because maybe in a development environment you want to be very, very verbose.
But in production, you don't need to be as verbose because otherwise you might collect too many logs that you don't really need all the time. And other more functional parameters could be, I don't know, timeouts when doing HTTP requests or different kinds of connection. Or if you really buy into this mental model, you can start to do things like feature flags to enable or disable specific features or maintain allow list or deny list to expose certain capabilities only to specific users or customers that maybe have different tiers.
So really, there is no limit to imagination. You can use different kinds of parameters for all sorts of different things. So traditionally, configuration was stored mostly in code. So you would have one configuration file that will contain all this information, maybe multiple configuration files, one for a different environment. And this is a simple and effective practice, but it comes with a problem.
And the problem is that you are effectively maintaining all your configuration as code. And therefore, every time you need to change even one single configuration value, that means you need to go through a code change and through the full lifecycle of deploying that code change. And this is still not necessarily too bad, but it becomes really bad when you need to store secrets because maintaining secrets in plain text in your Git or whatever other source control tool you use is not always easy to do securely. Most likely you are going to end up disclosing stuff that should be sensitive and should be managed more properly. So definitely, there needs to be a better way to manage configuration. And today, this is what we want to talk about. So what would be the first option Eoin?
Eoin: Well, there's an old document at this stage called the 12-Factor App, which is very popular, I think, still. And it's all about best practices for designing and running applications. One of the things in there is that they say you should store your configuration as environment variables. So maybe we can talk about that one first. So what are environment variables? You've probably used them. But when you start a process on any system, Windows, Linux, any Unix system, you're provided with access to a set of key-value pairs that are in the environment of the running process.
So you might have seen AWS credentials, for example, like AWS_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, standard Unix ones, like PATH, USER, PWD for the current directory, hostname, etc. And then different runtimes have their own ones as well. Like in Java, you'll have CLASS_PATH and JAVA_HOME. In Python, you'll have PYTHONPATH. In AWS, you can use environment variables with Lambda, with Fargate, EC2, any process really.
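To make that concrete, here is a minimal sketch in Python of how a process typically reads its configuration from environment variables; the variable names (APP_LOG_LEVEL, APP_HTTP_TIMEOUT, APP_BUCKET_NAME) are invented for illustration, not from any real application:

```python
import os

# Minimal 12-factor-style configuration loading from environment variables.

def load_config() -> dict:
    return {
        # Optional value with a default
        "log_level": os.environ.get("APP_LOG_LEVEL", "INFO"),
        # Environment variables are always strings, so numbers
        # need explicit conversion
        "http_timeout": float(os.environ.get("APP_HTTP_TIMEOUT", "5.0")),
        # Required value: raises KeyError and fails fast if missing
        "bucket_name": os.environ["APP_BUCKET_NAME"],
    }
```

Failing fast on required values at startup is usually preferable to discovering a missing setting deep inside a request handler.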
AWS generally provides mechanisms for you to configure the environment variables when you deploy your runtime. Now, it's convenient with infrastructure as code as well: when you're creating resources in the same stack, you can define them in your infrastructure as code, so you can reference them when you need to. So it's quite typical that you'd have an environment variable to point to an S3 bucket, so the process will know which bucket to write to and read from, or an SQS queue or a DynamoDB table.
This allows you to use auto-generated names, names that are generated by your infrastructure as code, and your infrastructure as code tools will then track dependencies and make sure to create all the necessary resources before the compute one, so that you have the right environment variables. So what are the pros and cons then? Well, they're very simple to use and often very effective. They're free, right?
They're a built-in concept for most operating systems, so you don't need to buy into a particular service and pay for it. On the other hand, they're not great for secrets, right? So environment variables are clear text. You can obviously put an encrypted version in your environment variables, but then you need to have a key somewhere. You can generally see these values from the web console or querying resources from the CLI.
It's also risky in that environment variables may be logged to a log file, or anyone with access to the host can inspect the process and find out what the environment variables are. So in general, it's not a good practice to use environment variables for secrets. They can also only be strings, which can be a bit tricky if your configuration is complex with some sort of structure. Different runtimes will provide their own set of environment variables as well, so there might be a risk of collision if you're not careful with naming. Even though this 12-Factor App, which I mentioned at the start of this bit, is recommending environment variables, I find that to be a little bit dated and also not very effective for secrets. And I think we've moved on a little bit and we've got a lot more options now. So in AWS, we've got a few different options for storing configuration, all with their pros and cons, so let's get started on those. What's the first one?
Luciano: The first one that comes to mind is Systems Manager Parameter Store, or SSM parameters for short. And it's a managed service that is a little bit like a key-value pair storage, where basically you can store as many parameters as you want and it gives you a very simple mental model. You can store one parameter at a time, you decide the key, you decide the value, and that's pretty much it. It's up to you to organize the keys in a manageable way, maybe by application, maybe by environment.
Maybe you find some kind of convention where you say, OK, I'm going to try to stick to a tree of different things where I always start with slash an environment, slash an application name, slash maybe database, slash maybe different parameters that are relevant to your database. So you can build a structure that way, but it's totally up to you to define that convention and actually implement it correctly and consistently.
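As a sketch of such a convention, a small hypothetical helper that builds hierarchical parameter names like /environment/application/... could look like this:

```python
def ssm_name(environment: str, application: str, *path: str) -> str:
    """Build a hierarchical SSM parameter name like
    /production/my-app/database/password from its components."""
    parts = [environment, application, *path]
    # Reject empty segments or stray slashes that would corrupt the tree
    for part in parts:
        if not part or "/" in part:
            raise ValueError(f"invalid name segment: {part!r}")
    return "/" + "/".join(parts)
```

Centralizing the convention in one function makes it much easier to apply it consistently across stacks and applications.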
It can give you values in different formats, so you can store strings, of course, but you can also store string lists. So if you have an array, that could be a more ideal way of doing that. And if you do that, there are certain conventions that you can use when you do infrastructure as code with CloudFormation or with SAM so that you can easily iterate through all the values in an array list. And you can also store secure strings, which are encrypted values, which give you some degree of extra security and control in case you are storing sensitive information, because that value is not going to be visible in clear text unless you have access to the key that allows you to decrypt it.
And also, that gives you a bunch of tools and automation that can make integration more seamless, so you don't really need to manually encrypt and decrypt that information. One of the downsides is that there is no validation built in. So again, this is something else that is up to you to make sure that every time you insert the values for the first time or change them over time, you are actually doing that, respecting whatever is the correct format for that particular key value pair.
On the good side, you also get an audit trail of changes. So every time you change something, you can see that the value is changing and you can keep track of values being changed, which is something that can be very important, especially again when dealing with security sensitive information, like maybe you are changing an API key, it's important to know that that API key is changing. In terms of service, it comes in two tiers.
The first one is called standard tier, and we also have advanced tier, and you just get slightly different features and different pricing. The standard tier is free and allows you to store a maximum of 4 kilobytes in size. So I think that's generally more than enough, but if you really need to store more information in a key, you need to use the advanced tier, which can go as high as 8 kilobytes per key value pair.
The advanced tier is also more interesting because you can use policies, so you can add additional rules. For instance, you can say this particular parameter is going to expire after a certain amount of days or months, whatever, but it comes with an extra cost, because when you switch to the advanced tier, you have to pay $0.05 per parameter per month. And there is also an interesting caveat: you can easily upgrade from the standard tier to the advanced tier, but you cannot go back.
So of course, when you decide to buy into the advanced tier, you need to consider it's not as easy to go back again. And if you think about that, it makes sense because you might start to store an 8 kilobyte value. So how would you transition back at that point? You will lose some information. So AWS doesn't really give you a way to do that as a, I guess, a preventative mechanism to avoid you losing data in your parameters.
Now, how do you use it? It is actually really easy. You can do an API call, GetParameter, where you provide the key for the specific value you want to read and you get back the value. And of course, you can do that from the CLI, you can do that from the different SDKs, or you can even see the values from the web console. You need to have the right permissions. This is actually a really good thing.
That you can define fine-grained permissions with IAM to effectively say which principals can have access to which keys. And if you have created that structure, like a tree using prefixes, you can use the asterisk just to say, okay, I'm going to give you access to this specific sub-tree of configuration. So maybe just by environment and application and not every single parameter in your account.
If you're using it with Lambda, there is also a bit of extra code that you need to write at bootstrap time. So when the Lambda is doing the first cold start, you probably want to fetch some of these parameters, and you also need to have some piece of logic, maybe to try to refresh them every once in a while. So it might be a bit complex to do it in a Lambda, because the Lambda is generally more focused on the business logic.
You don't want to pollute it too much with all this extra management code just to fetch configuration. So one idea there is, if you don't want to do all of that yourself, you can use an AWS-provided Lambda extension, which, once you install it, is going to do all of this work in the background, and in your Lambda you already have immediate access to the values coming from SSM parameters. If you do Node.js and you're using Middy as a middleware system for your Lambda, there is actually a middleware that you can just enable and it does exactly the same thing as the extension.
I am of course biased because, being involved in the Middy project, I tend to prefer this option, but I think it's a little bit easier because you don't need to install an extra extension, it's just dependencies in your Lambda. So if you already have dependencies, you can easily just do an npm install and everything is then packaged together without needing additional extensions. If you use Python, there is something similar, I think by Alex Casalboni, which is called ssm-cache, and it's pretty much a library that, again, you install and it can do all of this stuff seamlessly for you.
So with very minimal configuration, it takes care of all the life cycle of fetching the parameters and making them available to your Lambda handler. And there is also something similar in Lambda Powertools. I think there is definitely support for TypeScript. I imagine there is also for Python, but worth double-checking that. And then if you're using tools for infrastructure as code, such as SAM or the Serverless Framework, there are often very interesting pieces of syntax that can facilitate fetching SSM parameters at different stages.
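The fetch-cache-refresh life cycle that these extensions and libraries handle for you can be sketched roughly like this; here `fetch` stands in for whatever actually calls SSM, for example a wrapper around the SDK's GetParameter call (not shown, to keep the sketch self-contained):

```python
import time

class CachedParameter:
    """Sketch of the caching/refreshing a Lambda needs around SSM:
    fetch once at cold start, then re-fetch after a TTL expires.
    `fetch` is any zero-argument callable returning the value."""

    def __init__(self, fetch, ttl_seconds: float = 60.0):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._value = None
        self._expires_at = 0.0  # forces a fetch on first access

    def get(self):
        now = time.monotonic()
        if now >= self._expires_at:
            # Cache miss or expired: refresh from the source
            self._value = self._fetch()
            self._expires_at = now + self._ttl
        return self._value
```

A module-level instance survives across warm Lambda invocations, so repeated invocations within the TTL never hit the SSM API.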
Of course you can reference SSM parameters at deploy time, but the more interesting thing is that sometimes you can also prefetch this parameter before the deployment time. So you can effectively interpolate the actual values into your templates, which sometimes allows you to do more dynamic things like, I don't know, conditionals, where you could say if the value of this SSM parameter is something that maybe you change the template slightly to deploy something rather than referencing something that exists already in another stack.
So it can be actually a very powerful model. And actually I believe Eoin that you have an article on that so we'll make sure to link that in the show notes for people that want to deep dive into this approach. So in summary, let's try to say what are the pros and cons of this approach. I think it's generally a good approach because it's relatively simple and cheap and you also get a quite good throughput.
So if you have lots of services, lots of applications, reading parameters all the time, you should have still significant throughput to be able to support all of that. But of course there are some problems. It's not great for very structured use cases because you need to come up with your own structure and make sure to be consistent. You don't get any validation. So you are always at risk that somebody is gonna mistype something and then the application breaks because you cannot really parse that information at runtime.
It doesn't deal too well with sensitive data. You can definitely do encryption using the secure string approach, but it's not very structured again. So for sensitive data like API keys, you also don't get a concept of rotation built in. So it's up to you to create some kind of automation, maybe a Lambda that runs on a schedule, just to make sure that you remember to rotate a key that might expire after a while. And speaking again of throughput limits, you have 40 reads per second, unless you buy into the higher throughput mode, which is 10,000, I think, reads per second. But there is an extra cost for that. So I say that it's good because you generally get good throughput, but if that throughput is not enough, it comes with extra costs. So you have options there, but you need to account for all the features that you need to build yourself and all the extra costs that you get when you need the more advanced features. So should we talk about the other approach?
Eoin: Yeah, I think we've got a few other approaches. And the last one is going to be less familiar to people, but Secrets Manager, which I think we'll talk about next, is probably more familiar. And this is a specific managed distributed service dedicated to storing secrets, right? So this is about passwords, API tokens, things you want to really protect. Again, you can create key-value pairs, but unlike Parameter Store, you've got more options.
You can have structured JSON. So if you've got hierarchical document-oriented values, that's possible too. The difference between the secure string value in Parameter Store and a secret in Secrets Manager is that Secrets Manager would use KMS to do the encryption rather than that being all hidden from you. So you need to understand how KMS works a little bit for the key management, and also provide access to the key as well as to Secrets Manager for principals who are trying to retrieve and update secrets.
So to read a secret, you need to use the API, GetSecretValue, with the IAM permission for that. And you can be very granular then, as you would want to be, on who gets access to a secret. You can keep data versioned also for auditing, which is important. You can monitor access to secrets thanks to CloudTrail, which is very important for governance and compliance. And then the really outstanding feature for Secrets Manager, I think, is the ability to automate secret rotation.
So it can rotate secrets automatically on a schedule for certain types of credentials, like access to Redshift, RDS or DocumentDB. And if you want to customize the nature of that rotation, you can use Lambda as well. So it's more of a complete managed service for secrets. And one of the advantages also, when it comes to things like databases, is that it will integrate into RDS, DocumentDB and lots of other AWS services, so that you don't have to go through the dance of retrieving a secret, making sure it's stored securely in memory, and then passing it on to another service.
AWS will glue those things together for you. An example of that is if you're using CloudFormation to create an RDS cluster: you can set the master password to be a secret that's also created in that template, you can configure the rotation for it, and you never even have to see that password. It's all just wired together automatically. So that's pretty nice. On the cons, I guess, for Secrets Manager, it can be more expensive, especially compared to the Parameter Store free tier.
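When application code does retrieve a secret itself, the value returned by GetSecretValue is just a string, commonly holding a JSON object for database credentials. A hedged sketch of parsing and checking it; the field names here follow the common RDS secret convention, but verify them against your own secret's shape:

```python
import json

def parse_db_secret(secret_string: str) -> dict:
    """Parse a JSON database secret and check the expected fields
    are present, failing loudly rather than at connection time."""
    secret = json.loads(secret_string)
    missing = {"host", "port", "username", "password"} - secret.keys()
    if missing:
        raise ValueError(f"secret missing fields: {sorted(missing)}")
    return secret
```

Validating the shape immediately after retrieval gives a much clearer error than a mysterious connection failure later on.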
You might have heard a few people suggesting, somewhat skeptically, that AWS realized Parameter Store was a little bit too cheap, especially with the free tier, and that's why they invented Secrets Manager. But Secrets Manager allows you, I suppose, more throughput. I think you get 30 days free per secret, and then it costs 5 cents per 10,000 API calls. So with all these things, you really have to think about your throughput, right?
Parameter Store, you've got those throughput limits. You need to make sure you're caching. You can't be reading too aggressively. I've seen lots of teams run into limits with Parameter Store. With Secrets Manager, it might be just a question of cost. So you need to think about, okay, how many processes do I have running? How often are they reading these values? And what's that gonna cost me? And will I stay within the throughput quotas? So Secrets Manager has that throughput cost, but it also has a 50 cents per secret per month cost as well. So think about that. And maybe think about some of the alternatives. So where are we when it comes to alternatives? I mentioned one that's less familiar for people and I'm definitely interested to hear about App Config. Can you walk us through that one Luciano?
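As a back-of-the-envelope check on that kind of maths, here is a small estimator using the figures mentioned in the episode (50 cents per secret per month, 5 cents per 10,000 API calls); always verify against current AWS pricing before relying on it:

```python
def secrets_manager_monthly_cost(num_secrets: int,
                                 reads_per_second: float,
                                 per_secret: float = 0.50,
                                 per_10k_calls: float = 0.05) -> float:
    """Rough monthly Secrets Manager cost estimate in dollars.
    The default prices are the figures quoted in the episode,
    not authoritative pricing."""
    seconds_per_month = 60 * 60 * 24 * 30  # approximate 30-day month
    api_calls = reads_per_second * seconds_per_month
    return num_secrets * per_secret + (api_calls / 10_000) * per_10k_calls
```

For example, 10 secrets read once per second around the clock works out to roughly $18 per month with these figures, which shows how the per-call charge, not the per-secret charge, dominates at higher read rates.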
Luciano: Yes, App Config is really interesting because I think it tries to give you a more structured experience, trying to fill all the missing gaps of the other approaches we mentioned before. So let's try to describe everything in order. What it really is, is another managed service from AWS, and it allows you to store configuration. But this time, rather than thinking in terms of key-value pairs, you are storing an entire configuration object that makes sense in a specific context.
And this configuration object is of course replicated across different AZs, so it's always highly available. So you don't really have to worry about the storage piece. It's there, and AWS makes sure that it's always available for you when you need to reference it in your application or your infrastructure as code. And one of the new features compared to the other options is that it uses a concept called validators, which is actually something you can configure very, very granularly.
And you can define exactly what are the rules that basically say that the values you are inserting in this configuration object actually conform with what your application is going to look for. So basically that is gonna save you from somebody making a typo, because maybe they forgot a quote or a semicolon or a curly brace, whatever. And that is something you will see when you try to change the value.
So when you try to deploy the value itself, not when your application starts and then your application is going to crash. So basically this measure allows you to prevent accidental crashes of your application by seeing the issues when you try to change the configuration rather than when you deploy the new configuration and the application crashes, which I think is really, really cool because it can prevent also downtimes, accidental downtimes just due to human error.
And in that light of trying to make deployment safer, there is an entire mechanism that allows you to roll out deployments of configuration changes in different ways. We'll talk a little bit more about that. But again, the idea is try not just to manage configuration in a more structured way, but also to make sure that every time you change that configuration, deploys are actually managed more carefully and you try to spot as soon as possible if that configuration is gonna break your application and take preventive measures or roll back as soon as possible.
Again, the service keeps an audit trail of all the configuration changes. So this is not necessarily new, but of course you also have that feature. So let's try to talk more about what is the experience of using it. And I think that will describe a little bit more all the different concepts and how this tool is a little bit more feature complete than the other ones. So when you start, you need to define an application configuration.
And this is already the first big change, because right now we have been talking about key-value pairs, not necessarily tied to one environment or one application. App Config immediately makes you think about, no, this configuration is not something very generic. It's not one parameter that exists on its own. You need to think about an application, and we are defining the entire configuration for that application.
So you start by defining this concept of a container that represents your application. Once you have the application configuration stored in App Config, of course you need to do something at the application level to make sure that you can fetch that information. And this is interesting because it's again a pull model. So it's your application that needs to know exactly when to fetch that information.
And it needs to do that by explicitly calling the GetLatestConfiguration API. You can do that from the CLI, you can do it from the SDK, or with a bunch of other helpers that we will describe later. One of these helpers is an extension for AWS Lambda, very similar to the one we described for SSM, that can fetch the configuration automatically for you and can try to re-fetch it after a while to make sure it's always kept in sync with the latest configuration.
If you use Middy, again, there is a middleware for it, very similar to the SSM parameters one: it does auto-fetching, caching and refreshing for you. And I think from a configuration perspective, there are some interesting concepts that are worth expanding on a little bit more. So when you define an application configuration, you also need to define environments.
So again, the approach is very methodical and structured. You don't have to invent anything. You just need to follow the process. So an environment is something like dev, staging, production, beta, preview, whatever you want to call it, that makes sense for the different stages of your application life cycle. Then you can pick different configuration profiles. You can pick between freeform and feature flag, and they give you a very different experience on how to define your entire configuration.
So the feature flag profile is probably a little bit simpler, but it's probably more specialized for the cases where you are actually really thinking about enabling or disabling specific features for specific classes of users. While freeform is a lot more: you have a big structured configuration file, and it gives you all the tools that you need to manage that configuration file. And it's not really a file, it's just something you are storing in AWS and you load it when you need that information.
So when you use that freeform configuration profile, you have a choice of how you are going to define the object structure, and you can use plain text, JSON or YAML as the three available options. I think JSON is of course the most common. And if you use JSON, you can even use JSON Schema to create your own validators; if you build APIs, you're probably used to JSON Schema. So it can be a very convenient way of defining all the validation rules for a piece of JSON.
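To make that concrete, here is a sketch of what a JSON Schema validator for a small freeform profile might look like; the property names are invented for illustration, and it's worth checking which JSON Schema draft App Config currently supports for validators:

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "required": ["logLevel", "httpTimeoutMs"],
  "properties": {
    "logLevel": {
      "type": "string",
      "enum": ["DEBUG", "INFO", "WARN", "ERROR"]
    },
    "httpTimeoutMs": {
      "type": "integer",
      "minimum": 1,
      "maximum": 60000
    },
    "featureAllowList": {
      "type": "array",
      "items": { "type": "string" }
    }
  },
  "additionalProperties": false
}
```

With a schema like this attached, a typo such as `"logLevel": "VERBOSE"` or a missing timeout is rejected at deployment time instead of crashing the application at startup.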
But if you use something else, like if you want to use plain text and use your own format, because, I don't know, maybe you like TOML, let's say, which is not really supported out of the box, then you can even create a Lambda that can do the custom validation for you. So it's really an extensible model, where if you really have bespoke use cases, let's say maybe you are migrating an application, I don't know, from Java, using INI files for configuration, you can still use this approach.
You just need to do a little bit of extra work if you want to have validators making sure that everything is configured correctly. When it comes to deploying a change, as we mentioned before, we have different deployment strategies. And just to give you an example, you can go for an immediate rollout, where you say every fetch that happens after I click okay needs to get the latest version of the application configuration.
This is of course the simplest rollout model, where it's like, I'm sure everything is gonna work fine, no worries, just push it to everyone. But if you want to be a little bit more careful, you can use different strategies for gradual rollouts. And just to give you an idea, for instance, you can say, okay, I want to linearly increase the number of clients that see the latest configuration. For instance, you might start with 10%, then after a minute an additional 10% is gonna get the new configuration, until you reach 100%.
Or you can even define that by time. So you can say, okay, I want to gradually roll out everything in the next three minutes. And of course you can monitor the rollout, and if something goes wrong, you can basically tell App Config to watch for a specific CloudWatch alarm. If that CloudWatch alarm fires while you are doing a rollout, then it's gonna assume that something went wrong, and it's gonna roll back to the previous configuration.
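The linear strategy described above is easy to reason about with a tiny sketch; App Config implements this for you, so this is purely to illustrate the idea (growth factor and interval are the 10%-per-minute example from the episode):

```python
def linear_rollout_percent(elapsed_minutes: float,
                           growth_factor: float = 10.0,
                           interval_minutes: float = 1.0) -> float:
    """Percentage of clients (0-100) that should receive the new
    configuration under a linear deployment strategy: grow by
    `growth_factor` percent every `interval_minutes`, capped at 100."""
    steps = int(elapsed_minutes // interval_minutes) + 1
    return min(100.0, steps * growth_factor)
```

So with the defaults, the first minute serves 10% of clients, the second minute 20%, and the full fleet is reached after ten minutes, giving the alarm-based rollback plenty of time to catch a bad change early.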
So this is actually a very powerful mechanism that allows you to safely rollout with the least damage possible, because you have validation upfront, you can still break things, because even if your configuration is syntactically valid, maybe the content is not correct, maybe you have the wrong API keys, so your application, when it starts, is gonna start to fail, because it cannot call a third party service.
So you can roll back as soon as possible while you maybe have impacted only a small fraction of your users. And I think this is the most powerful feature, and if you would have to replicate that yourself, it is really a lot of work, and it's hard to get it right. So this is definitely, I think, the power feature that you get by using App Config. One last note that I have is that it can be integrated with Secret Manager, so if you are worried about storing secrets, there is a nice integration there where you don't really have to manage secrets yourself in plain text or encrypt them yourself, because you can rely on Secrets Managerr doing all of that for you.
And another interesting feature, which I was not really sure when it could be useful, but if you want to store the actual content of the object in things like S3 or SSM parameter, or even SSM documents, you can even do that. So the backend doesn't have to be App Config itself, but you can even rely on using other services as the backend. Finally, let's talk about pricing. It seems very appealing just from the outside.
I haven't used it really at scale, so I don't know if there is any hidden surprise with pricing, but basically it's a usual pay-as-you-use model, only unfortunately there is no free tier, but then the price seems relatively low, so I don't think it's really a problem. So you basically pay for configuration updates, which is a very low charge. Like you will need to do 10,000 changes in a document to get charged $8, and I don't really see most applications doing 10,000 changes even in like 10 years, probably.
So yeah, I think it's a very reasonable charge. So most of the time that cost should be negligible, unless you really have huge applications that are changing all the time, because maybe they are extremely highly dynamic, integrating, I don't know, configuration from multiple sources. And then of course you pay per API call, so every time you fetch the configuration, there is a cost. It is relatively low, but again, worth doing some maths there, making sure that if you have thousands and thousands of services trying to read the same configuration, and you have multiple environments, which multiplies even more, that low cost doesn't compound and get to a point where it's not sustainable anymore. So always take our recommendation when it comes to prices with a pinch of salt, because every use case is very different, and you always need to do your own maths to make sure that the service and the pricing model make sense for your use case. Okay, so let's talk now about the crazy idea where you don't like all the other services, and you are just feeling confident that you can build your own service, managing all this configuration.
Eoin: I think the main reason anybody would be motivated to roll their own, based on everything we've said, is if the pricing or throughput constraints of the services we've mentioned don't really fit their access patterns. It seems like it would be simple to implement, but it's not necessarily so. You could do it, I've seen and built systems like this in the past, using services like S3, DynamoDB, or ElastiCache with Redis or Memcached for this kind of thing.
It all depends on what kind of performance you need, ultimately. Then it's up to you to define all the necessary conventions to manage the data consistently per app and per environment, and to ensure the consistency guarantees you need as you replicate across multiple availability zones. You might need to add validation. You'd need to think about how to manage sensitive data in a secure way, maybe provide rotation support, and define an API that makes it easy to fetch specific versions or a subset of the configuration.
Then you need to think about controlling access to the configuration layer and keeping a change log, a history of changes for auditing. And if you operate in a regulated environment, you then need to think about achieving compliance for all of that, which AWS has already taken care of. So for simple cases it might work. You can imagine looking at the SSM Parameter Store API and, if you don't like the cost, saying: well, I can implement this with DynamoDB.
I can easily do a begins-with match on the sort key to retrieve the values, and I can pretty easily build an API that's compatible with the SSM Parameter Store API. But you have to think about all those other pieces, keeping it up to date, et cetera. Even though you might end up getting very good performance, throughput, and cost with your custom-built solution, you also end up with another chunk of code that you probably don't really want to maintain once the novelty of building such a system has died down.
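As a rough sketch of that idea (the table name and key layout here are purely illustrative, not from the episode), emulating SSM's "get parameters by path" on DynamoDB boils down to a query with a `begins_with` condition on the sort key:

```javascript
// Hypothetical table layout: partition key `env` (e.g. "prod"),
// sort key `path` (e.g. "/myapp/db/host"), plus a `value` attribute.
// This builds the input for a DynamoDB Query that returns every
// configuration item under a given path prefix, like GetParametersByPath.
function buildConfigQuery(env, pathPrefix) {
  return {
    TableName: 'app-config', // illustrative name
    KeyConditionExpression: '#env = :env AND begins_with(#path, :prefix)',
    ExpressionAttributeNames: { '#env': 'env', '#path': 'path' },
    ExpressionAttributeValues: {
      ':env': { S: env },
      ':prefix': { S: pathPrefix },
    },
  };
}

// In real code you would pass this to the AWS SDK v3 QueryCommand, e.g.:
//   const { DynamoDBClient, QueryCommand } = require('@aws-sdk/client-dynamodb');
//   const res = await new DynamoDBClient({})
//     .send(new QueryCommand(buildConfigQuery('prod', '/myapp/')));
```

That covers the read path; as discussed above, versioning, validation, encryption of secrets, and auditing would all still be on you.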
For simple cases, if you have config files and you just want to store them in S3 or a DynamoDB table, there are solutions that help you there. We've mentioned Middy a few times, and Middy offers a collection of middlewares that make it easy to prefetch and cache configuration stored in places like that. You can do the same with Powertools as well. It's one for the people who like to build everything themselves, I guess, but best avoided otherwise. And I think that's the final one in this collection of options for storing configuration. Luciano, do you want to give people what we promised at the start, our recommendation on what approach to take?
Luciano: Yeah, I'll try to give a recommendation that is definitely opinionated, but hopefully sensible enough, on how to approach choosing between all these different options. I think starting simple is always a good recommendation. If you're building something small with infrastructure as code, and you don't really need to reference anything sensitive, but maybe just, I don't know, DynamoDB table names or S3 bucket names from the same stack you are building, going with environment variables is going to be super simple, with no problems in terms of security.
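To make the environment-variable approach concrete (the variable names here are illustrative, e.g. values your IaC tool might set on a Lambda function), a small loader with a sanity check is usually all you need:

```javascript
// Sketch: reading configuration injected as environment variables,
// e.g. TABLE_NAME / BUCKET_NAME set by CDK or CloudFormation on a Lambda.
// Failing fast on missing values catches deployment mistakes early.
function loadConfig(env = process.env) {
  const required = ['TABLE_NAME', 'BUCKET_NAME'];
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required configuration: ${missing.join(', ')}`);
  }
  return { tableName: env.TABLE_NAME, bucketName: env.BUCKET_NAME };
}
```

Calling `loadConfig()` once at startup keeps the rest of the code working with a plain object instead of reaching into `process.env` everywhere.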
Everything is populated by infrastructure as code, so even the risk of making mistakes is very low. So why not do that? It's something you see a lot in every tutorial: when you see, I don't know, how to get started with Lambda, API Gateway, and DynamoDB, you will see something like that. So it's also a very common approach, and worth using for the simplest cases. You can switch to SSM and Secrets Manager as soon as you start to have more advanced use cases, where maybe you need to manage a bit more configuration and you want to define your own structure, or when you need to start managing secrets and handle the whole life cycle of those secrets correctly: making sure they are stored correctly, rotated correctly, and that you have control over who gets to access them.
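When you do move to SSM, you typically don't want to call the API on every invocation. A minimal sketch of the caching idea (the fetch function is injected here so the cache logic stands alone; the SDK usage in the comment is the assumed shape of AWS SDK v3):

```javascript
// Sketch: a tiny in-process cache around a parameter lookup, so repeated
// invocations reuse the value instead of hitting SSM every time.
// `fetchParam` is any async function name -> value.
function makeCachedFetcher(fetchParam, ttlMs = 60000) {
  const cache = new Map();
  return async (name) => {
    const hit = cache.get(name);
    if (hit && Date.now() - hit.at < ttlMs) return hit.value;
    const value = await fetchParam(name);
    cache.set(name, { value, at: Date.now() });
    return value;
  };
}

// With the real SDK you would wire it up roughly like this:
//   const { SSMClient, GetParameterCommand } = require('@aws-sdk/client-ssm');
//   const ssm = new SSMClient({});
//   const fetchParam = async (name) =>
//     (await ssm.send(new GetParameterCommand({ Name: name, WithDecryption: true })))
//       .Parameter.Value;
//   const getConfig = makeCachedFetcher(fetchParam);
```

This is essentially what the Middy and Powertools utilities mentioned earlier do for you out of the box, so reach for those before hand-rolling it.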
But of course, when you get to work on more complex applications and you really want a fine-tuned life cycle for all the configuration, I think App Config is the service you want to use. It is relatively new, so it's no surprise that it tries to fill all the gaps of the previous services, but it seems to give you an entire life cycle, so you get, I guess, a better experience and you don't need to build anything to fill the gaps yourself.
Now, finally, we would of course discourage the custom-built solution, unless you really have good reasons, and good reasons, as we said, might be cost or performance. You might have really bespoke use cases where going with the other services would be either too expensive or unable to give you the throughput you need, whether because of service quotas or because, again, it gets too expensive, right?
So, considering cost and throughput together, you might want to build something yourself, which might be, I guess, reasonable at that point. Another use case is when you are doing a migration and you already have a very bespoke mechanism to manage all the configuration for your application, and you don't really want to change all of that as the first step of your migration; maybe that's a place where it makes sense to keep something a little bit custom as you continue through your migration. But I suppose you still eventually want to move to something more structured like App Config, just so that you can clean up all this custom code and keep your application a little more focused on the business value it's providing for your company. And I think that's our general recommendation. Let us know if you agree with it, which of these services you have used already, and whether you have, I guess, a similar experience and perspective on all of them.
Eoin: Well, to wrap up, I'm going to point people to a few resources that we've found. Be a Better Dev has some great videos with deep dives and demos on all of the options here; those links will be in the show notes. Everything we mentioned about Middy and Powertools, and all the other articles we mentioned, are also in the show notes. So once again, thank you very much for joining us and listening, we really appreciate it, and we'll see you in the next episode of AWS Bites.