Help us to make this transcription better! If you find an error, please submit a PR
with your corrections.
Luciano: AWS is all about removing undifferentiated heavy lifting. As it evolves, we get more services that are meant to take away complexity and maintenance. Now that we have become used to building with serverless and AWS, we are beginning to take a step back and notice that there is still plenty of complexity left. While we wait for AWS to evolve further and handle more of this for us, other companies are innovating and trying to get there first.
Today, we're going to take a peek at Ampt, a recently launched solution that builds on AWS but aims to take the pain away and deliver the utopia of only ever focusing on the business value. My name is Luciano and today I'm joined by Eoin and this is AWS Bites podcast. And if you're wondering why I'm wearing a shirt, this is to celebrate our episode number 100 💯 🎉! So let's get to it. fourTheorem is the company that makes AWS Bites possible. If you're looking for a partner to accompany you on your cloud journey, check them out at fourtheorem.com. Okay, Eoin, you spent a little bit of time playing with Ampt and I think you have a fairly good idea at this point of what it is and what the value proposition is. So maybe we can start by describing, at a high level, what kind of problem it tries to solve and how it works.
Eoin: And it's the "without" part that's probably the most appealing, right? For people who've been building serverless applications, you can understand that sometimes it can get pretty complex, especially when you have to manage configurations of resources, events, functions, and permissions. Ampt has been, I think, kind of spun out of the Serverless Framework and it's got some big names behind it. Jeremy Daly is the CEO.
So it's about writing code, not infrastructure. So what does that mean? Well, the promise of Ampt seems to be that you deploy your code. It auto-optimizes and creates the infrastructure for you. I've seen this term, self-provisioning infrastructure. So you don't have to worry about creating YAML and configuring loads of resources. You're just really writing the code for the logic and the code that glues pieces of logic and your data together.
And that's really it. So it's really boiling everything down to the essential fundamentals and getting rid of all that mess that we typically have to wrangle with on a daily basis. The other interesting thing about it is that it provides isolation for every environment, and that includes developer sandboxes. That's just something you get out of the box from day one when you sign up. And that's something that, as we've seen in AWS environments, can take you days, weeks, or sometimes months to figure out how to do correctly. So that's already a pretty big win.
Luciano: Absolutely. Sounds really interesting. But let's try maybe to understand a little bit better. What kind of applications can you really build? You mentioned APIs and frontends, but can you use any framework or are you only forced to use specific things that Ampt gives you?
Eoin: I guess the only major difference there is that you're not doing a create server or a listen. You're not actually opening a port and listening. You basically just have one line of code that wires it into the Ampt ecosystem, and that will connect all of Ampt's magic into the routes that you write in your code. So you might then wonder, how does this work if you don't have to create any infrastructure in advance?
So you only write one type of code, and this is really just your logic and setting up the SDKs. All of the infrastructure is kind of generated from that, or generated in advance. The term "infrastructure from code", as opposed to "infrastructure as code", is being used to describe this. So you don't need any CDK-type code, you don't need any YAML, and it doesn't really compare to things like CDK or Pulumi or even Winglang, that other new project that gives you a new language for generating your infrastructure. You don't really think too much about the infrastructure that's being generated with Ampt. So if you take the example of a database, right now Ampt supports a key-value data store, and it's there and ready for you to use right away. So in your code, you can start doing set, get, and remove operations.
You don't have to create any tables in advance. And the same goes for object storage and events. Everything's just there out of the box. So if you take the example of creating an API, you just write the API implementation in the framework of your choice and, with one line of code, wire it in. Then Ampt will automatically handle the API infrastructure and routing. You don't have to think about load balancers or API gateways or any of that stuff. You're just writing routes like you would back in the old days in, let's say, more monolithic applications.
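To make the "routes without a server" idea concrete, here is a small, purely local sketch. Note that this is not Ampt's actual SDK: the createApi and dispatch names are invented for this illustration. The point is just that you register route handlers and never call listen(); the platform, not your code, decides how requests reach them.

```javascript
// Illustrative stand-in for a "routes only, no server" programming model.
// NOTE: createApi and dispatch are hypothetical names invented for this
// sketch; they are not part of Ampt's real SDK.

function createApi() {
  const routes = new Map();
  return {
    // You only register route handlers; there is no listen() call anywhere.
    get(path, handler) { routes.set(`GET ${path}`, handler); },
    post(path, handler) { routes.set(`POST ${path}`, handler); },
    // The platform (not your code) would invoke something like this
    // when an HTTP request arrives, wherever the code happens to run.
    dispatch(method, path, body) {
      const handler = routes.get(`${method} ${path}`);
      if (!handler) return { status: 404 };
      return { status: 200, body: handler(body) };
    },
  };
}

const api = createApi();
api.get('/hello', () => 'world');

console.log(JSON.stringify(api.dispatch('GET', '/hello'))); // {"status":200,"body":"world"}
```

The same shape carries over to real frameworks: with Express or Fastify you would write the routes exactly as usual, and the one wiring line hands the whole app to the platform instead of binding a port.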
Luciano: That's a good point, and it makes me think about the next question: where does the code run, then? There must be some kind of wrapping that happens when you want to deploy to Ampt, in order to take all of your code and package it in such a way that it can be effectively executed, in a scalable way, on AWS infrastructure, right?
Eoin: I'd love to know more about exactly how some of this magic works under the hood. What we do know is that it has this concept of smart compute. And I think this is one of the most interesting and exciting parts of Ampt because it allows your code to, by default, run in Lambda. And I think from day one, that's where your code will run. But if they detect that your traffic is consistently high, they can move that into AppRunner.
And you don't have to do anything. Or if your tasks start to run for longer, I think it's longer than five minutes at the moment, they'll start running your code in Fargate. I think you have to be on a certain pricing tier for that to happen. But the idea there is really nice, right? That you don't have to think about monitoring your function, optimizing memory, all of this stuff, timeouts, scalability, quotas.
The idea that they can take your code and move it around behind the scenes, and you don't notice anything, but it's kind of cost and performance optimized for you, that's something that's really exciting. And it's something I can imagine them doing a hell of a lot more with when you think about maybe even automatically optimizing your data storage as well. You asked the question, where does it run?
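As a thought experiment only, the kind of placement decision just described (Lambda by default, AppRunner for consistently high traffic, Fargate past roughly the five-minute mark) could be sketched like this. Every function name and threshold here is invented for illustration; this is not how Ampt actually decides under the hood.

```javascript
// Speculative sketch of a "smart compute" placement heuristic, as described
// in the conversation: Lambda by default, AppRunner for sustained traffic,
// Fargate for long-running tasks. All names and thresholds are invented.

const FIVE_MINUTES_MS = 5 * 60 * 1000;
const HIGH_TRAFFIC_RPS = 50; // invented threshold for "consistently high"

function choosePlatform({ avgDurationMs, sustainedRps }) {
  if (avgDurationMs > FIVE_MINUTES_MS) return 'fargate';   // too long-running for Lambda
  if (sustainedRps > HIGH_TRAFFIC_RPS) return 'apprunner'; // cheaper at steady load
  return 'lambda';                                         // default, scales to zero
}

console.log(choosePlatform({ avgDurationMs: 200, sustainedRps: 2 }));    // lambda
console.log(choosePlatform({ avgDurationMs: 400000, sustainedRps: 1 })); // fargate
```

The interesting part, as discussed, is that such a decision would be taken continuously from observed metrics, with the move happening behind the scenes rather than through anything you configure.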
And we're talking about Lambda and Fargate and AppRunner. But that means it's running on AWS. So all of Ampt's infrastructure is built on top of AWS. And every developer gets their own sandbox, which is really cool out of the box. When you deploy as a developer on the team, you automatically have an isolated environment that you can share with other team members. You can kind of share a snapshot of it.
But it's automatically synced as well. So as long as you're running the Ampt CLI, it's automatically updating your infrastructure and code. And I think it's doing a lot of smart stuff in the background there, because the deployments I've seen at times are pretty fast. The feedback time is really good. You can also then deploy to any stage, just by running "ampt deploy" followed by the stage name.
And you can create different stages like QA, pre-production, and production. These isolated environments are a big USP for Ampt, because they're focused on eliminating resource contention problems and noisy neighbors. From what I understand of the launch party announcement, it seems that each environment runs in its own AWS account under the hood. So you don't have these noisy neighbor problems with quotas and rate limits and everything like that. I'd love to know how that's done. But assuming they've handled all of that, it's really nice from a user perspective, because you don't have to worry about setting up those accounts and managing environments. It just happens automatically for you.
Luciano: Yeah, I can definitely see lots of edge cases when trying to think about how they might implement all of that. But that doesn't mean it's not possible. I'm sure that, considering all the smart people working on this project, they have figured out a bunch of interesting solutions, and it would be nice at some point to discover some of them. But that's maybe a topic for another episode. OK, let's talk about how you get started. Assuming we have made you excited as well and you want to know how to get started, what is the first step?
Eoin: I'd suggest doing what I did, which is just to start with the very simple instructions on the website. You just npm install the CLI and then run the "ampt" command. From that, it'll ask you to pick from a number of starter templates, like an API backend with Express or Fastify, for example, or a frontend application built on, say, Astro. And then it automatically gets generated for you and deployed.
So you immediately get a link to a dashboard, where they've got a really nice UI for monitoring your applications, and you can see the metrics and logs right away. So I've been pretty happy with the usability and the aesthetics around Ampt. It looks much nicer and feels like a much better developer experience than we're used to. There's obviously plenty still to be done there, I think, in terms of making the logs accessible in different ways. But I like that it's pretty simple. Automatically, with one command, you're up and running with that dashboard. And then you get a generated link for your API endpoint or a static site, if that's what you've deployed.
Luciano: In terms of features, you mentioned already HTTP APIs. You also mentioned key-value store. Is there anything else worth mentioning?
Eoin: On the API side, I guess it's notable that you've got support for API keys, and also WebSockets. And even though I haven't tried it, I've seen that HTTP response streaming is supported. I'm interested in how that works across the different compute platforms, but I guess that's another curiosity. When it comes to the data side, what I've seen from looking at it is that it's basically a much nicer API for DynamoDB.
That's what it feels like. Because with DynamoDB, the API is a little bit strange and takes a bit of getting used to. This is a more developer-friendly API. So you don't have to create any tables or worry about creating additional indexes. You basically get set, remove, and add operations, so you can do pretty much everything that you could with DynamoDB through those operations. And then you have nice things like automatically generated metadata, like created and modified timestamps for your objects.
And then you have namespaces to separate what would be your DynamoDB hash keys and range keys. So it's just a little easier to get used to, and you don't have to worry about the different types of operations. You can just use wildcards, for example, for "starts with" searches. When you need secondary keys, you just provide attributes called labels, and Ampt seems to automatically generate these secondary indexes for you under the hood.
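An in-memory stand-in for the data semantics described above (set, get, and remove, namespaced keys, wildcard "starts with" queries) might look like the sketch below. This mimics the idea only; it is not Ampt's real data API, and the store object is invented for the example.

```javascript
// In-memory stand-in for the key-value semantics described in the episode:
// set/get/remove, "namespace:key" style keys, and wildcard gets that behave
// like "starts with" queries. NOT Ampt's real data API.

const store = new Map();

const data = {
  set(key, value) { store.set(key, value); },
  remove(key) { store.delete(key); },
  get(key) {
    if (key.endsWith('*')) {
      // Wildcard: return every item whose key starts with the prefix.
      const prefix = key.slice(0, -1);
      const items = [];
      for (const [k, v] of store) {
        if (k.startsWith(prefix)) items.push({ key: k, value: v });
      }
      return items;
    }
    return store.get(key);
  },
};

data.set('orders:1001', { total: 49 });
data.set('orders:1002', { total: 15 });
console.log(data.get('orders:1001').total); // 49
console.log(data.get('orders:*').length);   // 2
```

In the real service the same calls would be backed by DynamoDB items and indexes created for you, which is exactly the part that disappears from your code.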
So that's pretty nice. So it's really just a key-value store. If you need anything else, like a relational database, they recommend that you use partners like PlanetScale, MongoDB, or Momento, and you just use their SDK. They also have nice support for parameters, which can be automatically injected into your application as environment variables. You can have organization-wide parameters or application-specific parameters.
And then you have object storage, which is a simple abstraction over S3, so you don't have to worry about creating and managing buckets. And then you have events, and events and tasks are quite a nice feature as well. You can have your cron events, like running events on a schedule every hour. But you can also get events based on storage and data. So if you have your data stored in the data store, you can just say data.on.
And then you can put a filter which says, if an object is written with a certain key, then call my function. And this is where the infrastructure just starts to disappear, because you're basically writing what looks like a Node.js event emitter handler. It's just: on this event, call this function. And Ampt handles all of the wiring and the EventBridge rules or whatever other magic is happening under the hood. I have seen that they create a queue in the account; I saw that just from snooping a little into the environment variables in the code. So I don't know if that's used for this or if it's part of a future feature.
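The event-emitter feel described here can be sketched locally. Again, the names below (on, matches, emit) are invented stand-ins for this illustration; in the real platform the emit side would presumably be driven by queues or EventBridge rules rather than in-process code.

```javascript
// Local sketch of "data.on('created:orders:*', handler)" style wiring.
// A tiny pattern matcher plus an emit() that the platform would call for us.
// Names are invented for illustration; this is not Ampt's API.

const listeners = [];

function on(pattern, handler) {
  listeners.push({ pattern, handler });
}

function matches(pattern, eventName) {
  // '*' matches exactly one segment; segments are separated by ':'
  const p = pattern.split(':');
  const e = eventName.split(':');
  if (p.length !== e.length) return false;
  return p.every((seg, i) => seg === '*' || seg === e[i]);
}

// The platform (via queues, EventBridge, etc.) would invoke emit() for us.
function emit(eventName, payload) {
  for (const { pattern, handler } of listeners) {
    if (matches(pattern, eventName)) handler(payload);
  }
}

const seen = [];
on('created:orders:*', (item) => seen.push(item.key));

emit('created:orders:1001', { key: 'orders:1001' });
emit('created:users:42', { key: 'users:42' });

console.log(seen); // [ 'orders:1001' ]
```

The distributed version of this is exactly what the hosts go on to speculate about: the handler cannot simply live in one process, so something has to route the event back to your code wherever it runs.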
Luciano: That's pretty interesting. I am a bit curious to know how they deal with things like, I don't know, if you're passing a callback to this event interface, how do they actually serialize the callback in a way that it can respond to events in a distributed way? It's not just running in one Node.js process. But yeah, I don't think we have that information; we can just speculate. So again, maybe in future episodes we might be able to figure out this magic and give you more details.
Eoin: Yeah. Maybe we can get talking to a member of the Ampt team at some point and find out all of the great details.
Luciano: Absolutely. That would be fun. But meanwhile, should we talk about pricing? Is the pricing going to be huge, or is it still reasonable? Because I guess that's where the trade-offs are. You get a much nicer experience, a simpler way to get started, and you complete your project quickly. But if it's going to be too expensive, is it going to be worth it?
Eoin: Right now, it looks not too bad, but I guess it depends on your usage. There are three tiers, and none of them are ridiculously expensive. The pricing is essentially per team member per month. You have a preview tier, which gives you three apps, 10 environments, and 500 invocations per hour. Then you have a $7 tier and a $29 tier, and as you go up it gets more capable: more team members, more apps, long-running tasks, all of that stuff. I don't know exactly how this works yet. If you've got pricing that dictates the number of invocations per hour, what happens if you're doing a massive number of DynamoDB calls in that one invocation, and that causes a lot of spend on their side? How is that managed? They also say, in the pricing details or in the FAQ, that they're working on implementing spending limits. I think that would be a big differentiator. Obviously, we've talked a lot about how that feature is missing from AWS. If they can pull that one off, that would be really cool.
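Just to illustrate the speculation about spending limits, the simplest possible version is a running cost meter that refuses new work once a cap is reached. This is pure guesswork for illustration; the names and numbers are invented and reflect nothing about Ampt's implementation.

```javascript
// Purely speculative sketch of a spending limit: meter an estimated cost
// per operation and refuse work once the cap is hit. Invented for
// illustration only; not Ampt's implementation.

function createSpendGuard(limitUsd) {
  let spentUsd = 0;
  return {
    charge(estimatedCostUsd) {
      if (spentUsd + estimatedCostUsd > limitUsd) {
        return false; // block the operation instead of overspending
      }
      spentUsd += estimatedCostUsd;
      return true;
    },
    spent() { return spentUsd; },
  };
}

const guard = createSpendGuard(10);
console.log(guard.charge(4)); // true
console.log(guard.charge(4)); // true
console.log(guard.charge(4)); // false (would exceed the $10 limit)
console.log(guard.spent());   // 8
```

Because an abstraction layer sits between your code and AWS, it can estimate a cost for every operation it performs on your behalf, which is what would make a check like this feasible at all.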
Luciano: Yeah, that actually makes sense. Considering they provide you with that kind of abstraction layer, they can probably see what kind of operations your code is trying to do. And if they keep track of all of that and the cost it might incur, I think they might be able to implement some kind of smart blocker to limit the expenditure there. But again, I'm just speculating, because, of course, my mind is curious to figure out how I would build that kind of feature myself. But speaking of which, another thing that I have in mind is how easy it is to do CI/CD, which I think these days is something that everyone is doing, possibly through things like GitHub Actions. So is that something mentioned in the documentation, or something you tried yourself?
Eoin: I've tried a couple of different things. One is using their GitHub app, which is a bit like using Netlify or a lot of other services like that: pretty seamless. You just connect the GitHub app to Ampt, and then it can automatically deploy branches to the environments you set up. So you can say, from this branch, deploy to the staging environment; from another branch, deploy to the production environment. But you also get feature branch deployments out of the box with that, which is really cool.
Apart from that, they're basically just providing examples for you to write your own GitHub Actions workflows, and then you're just running the ampt deploy command. So it's pretty straightforward. It seems like every application is just an isolated piece. There's no, I guess you would say, microservice approach, where you're deploying lots of things from one repo. I guess you could do monorepo deployments, but with separate deployment pipelines for each single service, if you'd like. So it seems like, if you've got an API backend and then another Ampt app for your frontend, you would just have two deployment pipelines, or deploy them separately in one pipeline, if that makes sense.
Luciano: It does make sense, even though I'm not sure how they manage the fact that you still have one environment per user. I guess the question is, when you bring in CI/CD, is it going to be just one user, or can you still retain some control over which user space is going to be used?
Eoin: Oh, yeah. I think I have the answer to that one, because when you're deploying from your local environment, you're just running the ampt command, and that picks up your GitHub credentials, or whatever way you've logged in to Ampt, and generates an environment name for you from your name. But then if you deploy to production, it's deploying to the production environment, which is shared by multiple developers.
Luciano: So you will use CI/CD only to deploy to production, which is effectively a shared account.
Eoin: That seems like the pattern, yeah.
Luciano: Yeah, that actually makes a lot of sense. OK, you mentioned that you were able to see some of the underlying AWS stuff. First of all, how did you do that? Was it really a feature, or did you figure it out in some kind of indirect way? And how much did you get to see?
Eoin: Yeah, it wasn't any advanced hacking or anything here. I mean, they make it clear that it's running on AWS; that's completely open. I just ran some code that printed the environment variables of the process, so I could see that there was a table name, a bucket name, and a queue name in there. Then I tried a few AWS SDK actions, and I could see that they seem to have implemented pretty good least-privilege IAM policies, because I wasn't able to do much snooping. But I could list the objects in the bucket, and I could do a scan on the DynamoDB table. So it's not giving you the power to do anything that you can't already do from the Ampt SDK. But like I said, I think they're going to add support for deploying to your own AWS account in the future, and then you would have full visibility, I guess. You'd also get the benefit of being able to monitor resources, connect into other AWS applications, and even achieve compliance, because it's not all abstracted away from you.
Luciano: Yeah, that makes sense. I guess you also get the risk that you might mess things up in different ways, because then you might have more control than they would want you to have, and end up changing the table schema or things like that. So let's maybe try to wrap up this episode by listing what we believe the trade-offs are, because, of course, this is not going to be the ultimate silver bullet for developing in the cloud. I think it's just another way, with a different set of trade-offs, so it's probably worth remarking on what those are.
Eoin: The idea of Ampt, I don't know if it's fair to call it a "serverless Platform-as-a-Service" offering. I think if it's very clear what applications it's geared for, and it optimizes for those, then it looks really, really promising. Of course, if you compare it to the options in AWS, there's a big trade-off, because AWS has hundreds of options and thousands of permutations for building applications like this.
In Ampt, you have a very limited set of options, but that can be a really good thing. In AWS, you have all the databases, all the different types, and then you have services for data analytics, machine learning, chat, video, sending email. With Ampt, it's more like a PaaS, where you write the code for your business logic and then integrate it with AWS services or other SaaS products to achieve all of that.
And it just depends on how much control you want over that infrastructure. There are other trade-offs there too. It doesn't seem to have very fine-grained access control right now; I don't know how that's going to work in the future. The only way of protecting what you deploy right now is with an API key, but I'm sure that will change. They do mention that they will be providing support for private VPCs if you have network connectivity needs as well. I'm also interested in whether there is a potential trade-off with this ability to have your code moving from Lambda to Fargate to AppRunner. Are there cases where your code runs well in one environment, and then suddenly it starts running in another environment where it won't execute correctly? I know that if you've got container code, sometimes it won't run in Lambda.
Luciano: Yeah, I definitely had that question in mind as well. And I was wondering, in the timeout example that you gave before, what happens? Do you get the timeout first, and then they start to move your compute to something else, so you have a failure in the transition from one system to the other? Or is there some amazingly smart system that figures it out before you get there, and moves you in time to avoid any failure? I don't know if that's even possible.
But I think that would be really cool if it worked that way. So I'm definitely curious to find out more, and hopefully we will have the opportunity to do that. But overall, I think it's fair to say that if you're building web applications, APIs, or full-stack applications, Ampt seems like a very interesting contender, definitely innovating in this space. It can also make a lot of things easier, which is great to see.
So if you're building something that sticks to the fundamentals of event-driven applications or event-driven logic, with APIs, compute, and data storage, you can probably build a lot of complex things. You probably don't need all the potential of AWS and all the millions of permutations and configuration options. So, in that sense, it might really be a good platform to use for many different kinds of projects.
So we'll see if we have the opportunity in the future to try it on a more serious project, something bigger in scale or production-ready, just to see what the experience looks like when you try to do something a little more ambitious. But there are actually some customer case studies that you can see on the Ampt website. So if you're curious to see some real cases that exist today in the market, you can check out that page and inform yourself about the capabilities and the different solutions that people have built with this.
Now, again, I think it's worth remarking that it's really admirable to see somebody trying to innovate in the cloud space. It changes every day, but we haven't seen this kind of rethinking of the fundamentals in a while. Ampt is trying to push a different approach, and that's something that is always really welcome, because I think it's from these kinds of ideas that the best innovations in the tech industry come. So let's keep an eye out for the future of Ampt, or maybe other similar alternatives. If there are other tools like this that you have been using and are happy with, please share them with us. We look forward to checking out your suggestions, and maybe thinking about other episodes where we explore alternatives like Ampt. That's all for today, and we look forward to seeing you in the next episode.