Help us to make this transcription better! If you find an error, please
submit a PR with your corrections.
Luciano: Today, we have a very special episode. For over 100 episodes, it has been just the two of us discussing AWS topics. We have talked about other community contributors and mentioned their projects, articles, podcasts, and videos. Someone we have mentioned often, who has created a massive amount of this content, is Jeremy Daly. We are very excited to have Jeremy with us today for the very first AWS Bites interview. Jeremy is the CEO of Ampt.
He is also a fellow AWS Serverless Hero, speaker, podcaster, and writer. He is one of the first names that comes to mind when you think of leaders in the topics of AWS and Serverless. Today, we are going to talk about Ampt, hear Jeremy's view on the state of things in AWS and Serverless, and get his predictions for the future. My name is Luciano, and I'm here with Eoin. And today, we are thrilled to be joined by Jeremy Daly for the AWS Bites podcast. AWS Bites is brought to you by fourTheorem, an advanced AWS partner. If you're moving to AWS or need a partner to help you go faster, check us out at fourtheorem.com. Okay, Jeremy, you are very welcome to AWS Bites. We are really excited to have you. I'm sure you need very little introduction and people may know you from your Twitter, the Serverless Chats podcast, your blog, the Off-by-none newsletter, and whatever else you have been doing in the open-source space. So we'd like to start by asking you, where did your journey into AWS and Serverless begin?
Jeremy: Well, first of all, thank you both for having me. I've been a huge fan of AWS Bites for quite some time. I share it all the time in my newsletter. Really appreciated that episode you did on Ampt. So I am just as honored to be here, and again I appreciate the fact that you're having me on. So in terms of where I got started though, it was kind of funny. I had a web development company that I was running for, I don't know, 12 years, something like that.
Started it out of my dorm room in college and did that for quite some time and got much deeper into it. We weren't just doing web development, we were doing back-end applications, things like that. Building pretty complex stuff, integrating with UPS tracking systems and e-commerce and going through the whole PCI compliance thing. And so a lot of fun stuff, but I was actually racking and stacking servers.
We had our own data center that we used, a co-location facility. So I was very, very old school back then, but I kind of got sick of building forms for somebody else's website. And I was just like, I want to do something different. Like I'm building things for other people. I want to kind of build something for myself. And my second daughter had just been born at that point and I had built this thing, I called it the Live Baby Blog.
It was like this interactive, almost like a chat. It was before WebSocket, so we were using a thing called APE Server, whatever. But essentially it was a way that I could send updates to all my friends and family about what was happening with the birth of my daughter, like while we were actually in the delivery room and stuff. And so people loved it. I had like 100 people that were on it and commenting and whatever.
And I said, I wonder if there's a business here, like just out of curiosity. So I started building this thing. We ended up calling it SproutShout, changed the name to Lifeables eventually. But I said, I'm going to get rid of this web development company and I'm going to go into the startup world and actually build a startup myself. So I got together with a couple people that I knew. I ended up hiring a very, very amazing person to be my CEO and she did a wonderful job.
But essentially when we started building it, we were like, well, do we want to host this ourselves or do we want to get into the world of AWS? So this was 2009 and we started looking at AWS. And of course, this was before serverless existed. This was EC2 instances. This was, I think load balancers were like ELBs, weren't even a thing yet. Like, I mean, this was very, very early. No RDS, none of that stuff, right?
So we were building like, this was true lift and shift type stuff, like building exactly what we would be building if we were running these servers ourselves. So we started doing that and I got into AWS and it was just amazing. Like actually one of the things I did was I moved all of the stuff from our co-location facility from the hosting that we were doing for web development clients. I moved that over to AWS and went from spending like six, seven thousand dollars a month in electricity and bandwidth and server rental costs and all this kind of stuff to about $700 a month in AWS.
And I'm like, you know, if I would have switched things over sooner to AWS, I might have actually been able to build a profitable business around that and kept it going for longer. But so anyway, so long story short, I spent a lot of time building this startup and got into AWS and then had just been building everything on there, you know, from that point forward. And so what happened is we ran the startup for a couple of years, hit up against the Facebook timeline launch, the Instagram acquisition.
So, you know, perfect timing to be building sort of a site for parents and sharing photos, right? And so we ended up selling off some of that tech to another company. I went to work for another company. They were all in on AWS. They were using DynamoDB. That was my first experience with Dynamo and got into that and like just actually absolutely fell in love with that. And then what ended up happening is we had this massive outage when we were featured on the Good Morning America or the Today Show or something like that for the app that we were building.
It was a massive, massive outage, all caused by a single point of failure in a relational database. And so I started looking around. I was the VP of product at the time and I'm like, I wonder if there's a better way to scale and how we can make this work better. And I came across AWS Lambda. So this was right in the beginning of 2015. Started playing around with it, fell in love with it. And I said, this is the future.
This is how things are going to work. Like this idea of setting up servers and trying to parallelize them or, you know, trying to scale them horizontally makes no sense in terms of what you can do with Lambda. And again, this was before VPC support for Lambda. This was before API Gateway. So I started playing around with a couple of these things and then the next thing you know, all these new services started coming out, and as soon as you could do API requests or HTTP requests with it, I knew, I'm like, this is it. This is going to be the thing. And so I've literally dedicated the last, what has it been, eight, nine years or something like that of my career to promoting serverless and trying to get this to be the default paradigm that people build with.
Eoin: And I think we're probably going to get back to, you know, what the promise of serverless was like back then versus, you know, what the reality of it is today, all these years later. Luciano also mentioned the content and all the open source work you create. I think you're regarded as pretty prolific in the content creation space, and you also became an AWS Serverless Hero. How did this all happen? Was this a concerted effort on your part to put all this time and energy into content? And I suppose then, what has that given back to you? How has it influenced your career?
Jeremy: Yeah, yeah, no. So I mean content creation, I am not quite as prolific as I once was. I used to write a lot and produce a lot of episodes of the podcast and so forth. I've been very, very busy with my startup for the last year plus, almost two, well, yeah, a year and a half now, something like that. So it's been, I have not been producing as much content as I would like to. But to sort of go back to the beginning, so when I discovered Lambda and I started playing around with it, there was just nothing out there.
There was no content. Nobody was talking about it. Serverless wasn't even really a word. I actually kind of came across the JAWS framework, which is now the Serverless Framework. I came across that very early, right? So that was re:Invent 2015 (not 2005, sorry, my brain is not working), when Austen presented, I think he presented the JAWS framework at re:Invent.
So I started kind of trying to figure out how some of the stuff works, and it was all experimentation. And it was like, well, can you do this? And, you know, okay, well, when VPCs came out, connections to VPCs, and we could connect to ElastiCache, I was like, okay, now this is getting even more interesting. So I started playing around with these different things, started creating stuff with it. We built a whole bunch of things internally at that company I was talking about.
And then I actually left that company to go work for another company and did everything as serverlessly as I possibly could there. So I was learning a lot and figuring out a bunch of stuff, and still the content wasn't really there. Ben Kehoe was posting a lot of stuff, which was helpful. The Burning Monk, Yan Cui, was posting a bunch of stuff back then. This was like maybe early 2018. And so I started, I said, look, I've got all this stuff.
I said I'm going to put some stuff out there. I had been blogging in the past, and I was like, I'll just start putting some stuff out there. So I started writing and just sharing some of the things that I was learning, some things about security. I wrote a big post about security. That actually connected me with Ory Segal over at PureSec. And so he and I started talking. We became friends, right? And then the next thing you know, I get introduced to Tom McLaughlin.
And I met him at an AWS Startup Day. And he's like, hey, we're thinking about doing a Serverless Days thing. And I'm like, what's that? So anyways, we get together. I met Eric Peterson. I met all these other great people that were in the space. And so one of the things that I did was I published a post, I think it was in 2018. I think this is one of the first posts I wrote that got a lot of traction, was my Serverless, AWS Serverless Microservice Patterns, or Serverless Microservice Patterns, or AWS, whatever it was.
And I think I put 16, 17, 18 patterns, something like that, of things that I had seen other people doing, things that I had used. And I didn't put them out there like, hey, this is how you do it. I put them out there like, hey, this is how I'm doing it. This is what I'm seeing people do. Like, is this right? Are you doing this? Are we, is there a better way to do it? And I actually think that started a really interesting patterns movement, like people started really talking about patterns after that, which again, I don't take credit for, but I think it was just the start of the conversation at least.
And that's, I think more people started thinking about it in terms of those patterns. So, I did that. I started my newsletter in September of 2018. So again, that's been over five years now. And I went to Serverless Days, New York, and saw Kelsey Hightower was the keynote speaker. Ben Kehoe was there. I think I met Taavi Rehemägi from Dashbird. Like, so I met like, again, just connected to people.
And it was great. And then it just kind of took off from there. And then Heitor Lessa invited me onto his show. He kind of let slip that apparently I was going to be an AWS Hero. But, so anyways, I was made an AWS Hero in early 2019. And then since then, I kind of put my foot on the gas and I started the podcast. I was lucky enough to speak at Serverless Days Milano, or Serverless Days Milan.
I just spoke at Serverless Days Cape Town. I keynoted Serverless Days ANZ in Melbourne. Like I was, I've spoken in Belfast. So I've been able to do all these crazy things and meet all these amazing people. Spoke at re:Invent last year, which was absolutely amazing. So I think I forgot what your original question was, but essentially I just, it had a massive impact on my career. Like this idea of sharing what I did and figuring these things out.
And I think because I hit it a little bit early, like when it was kind of coming up, I became a recognizable voice in the space. But it's only because I learned so much from talking to other people and was willing to put it out there. But there are so many people writing about serverless now, so many amazing serverless, you know, deployment frameworks or, you know, NPM projects and things like that, that are just amazing. And so much great work going in there. It's like, with continuing to write content and putting more content out there, I'm almost like, I don't know, there are so many new voices that I kind of want to hear from them and, you know, see where this goes.
Eoin: Well, I guess that's the value of the Off-by-none newsletter, because for me, it kind of helps me to short-circuit having to trawl through everything. But I guess that makes the job more difficult for you as more and more people join the community and you've got more and more content out there. Is it becoming a bit of an effort for you to do all that?
Jeremy: Yeah, I mean to give you some perspective, I have a couple of systems, which are all serverless, by the way, that I wrote that scan some different things. I grab some stuff from automated Google searches. I have a whole bunch of RSS feeds that get aggregated, you know, and a bunch of other ways that I collect content. So every week I have about 400 to 500 pieces of content that end up in this system.
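To make the aggregation step concrete, here is a minimal sketch assuming the rss-parser npm package; the feed URLs and the one-week window are placeholders, not a description of Jeremy's actual system.

```typescript
// Toy aggregation pipeline: pull a list of RSS feeds and collect this week's
// items into one list for manual curation. The rss-parser package and the
// feed URLs are assumptions for illustration only.
import Parser from "rss-parser";

const parser = new Parser();
const FEEDS = [
  "https://example.com/serverless/feed.xml", // placeholder feeds
  "https://example.org/aws/rss",
];

async function collectThisWeek(): Promise<{ title: string; link: string }[]> {
  const cutoff = Date.now() - 7 * 24 * 60 * 60 * 1000;
  const items: { title: string; link: string }[] = [];
  for (const url of FEEDS) {
    const feed = await parser.parseURL(url);
    for (const item of feed.items) {
      const published = item.isoDate ? Date.parse(item.isoDate) : 0;
      if (published >= cutoff && item.title && item.link) {
        items.push({ title: item.title, link: item.link });
      }
    }
  }
  return items; // hundreds of these land in the curation queue each week
}

collectThisWeek().then((list) => console.log(`collected ${list.length} items`));
```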
And I can filter out a fair amount of them. But I usually, you know, I usually start somewhere around a hundred or so at the top level and try to get that down to like maybe like 50 if I can, which still seems like a lot. But the, you know, the interesting thing is, and if anybody wants to know, because people have asked me this in the past, like how do I make it into your newsletter? So there are certain articles that I read that I open up and I can immediately dismiss them and say, this just isn't something that's interesting.
And usually it's because, you know, especially if it's something that is, if you're writing a tutorial and it shows all screenshots from the console, most likely I'm not going to include that unless it's something really, really interesting. Sometimes new things that you can only do from the console, you know, I would include something like that, like Bedrock and some of those new things. But the other thing is whether it's well formatted. Like if you just have giant chunks of code, I can't understand it.
It's not highlighted, whatever. Like that's sometimes frustrating. Gated content I almost never share. So if you do Medium, and I get it, I know a lot of people like to, you know, sort of get that revenue. Some people take that revenue and donate it to other places. I think that's really noble and I appreciate people doing that. But for me, my readers get very frustrated when they click on a link and they can't read it, because not everybody wants to pay, you know, to be a Medium member.
But yeah, so I mean, just some basic tips there. But like something interesting, right? And so much has changed in the last year with ChatGPT. Like I think I've become a human ChatGPT detector now because I just read so much content and I'm like, you know, that's definitely ChatGPT. And so yeah, I mean it is a challenge. But for me, I look at it and I say, I know when I started, I think that I had the benefit of being, like I said, one of few voices in the space.
And that made my content more discoverable because somebody would search for it and they would find me. And again, I got great SEO and a boost from that. And I think that I read a lot of people's stuff, people that are doing really, really good work and answering interesting questions too and challenging things, which is what I always like to see. And again, I see they get like two claps on Medium and I'm like, how does this article not have more? How does it not have more reads? How does it not have more interactions and comments? And so that's what I try to do with my newsletter. And try to feature the ones that I think are consistent or they're interesting. And they don't have to be right. You don't have to be right. I don't always comment on whether or not I think it's a good thing that you're doing it this way or a bad thing, but I just like to get the information out there and let people think for themselves.
Luciano: Yeah, I can definitely resonate, because I also have, I guess, a much smaller newsletter in the full stack space. And yeah, definitely there is lots of work on curation. Automation can help a little bit, but ultimately you need to read and check every single thing you publish and make sure it is actually something good you are giving to your audience. Otherwise, the whole thing doesn't make sense anymore.
So I definitely resonate with that. Another thing I want to connect to is that you mentioned you started very early with serverless. I think I also started around 2015 and I definitely remember that the feature set of Lambda, for instance, was so much smaller. And at the same time, the adoption of Lambda was so much smaller. And in the last few years, we have seen growth both in terms of features and possible integrations, but also in the way that people started to use Lambda, the use cases. So I guess the question that I want to ask is whether you are seeing the perception of serverless changing over the years, and whether there are things that maybe today we can consider myths that we need to debunk when we talk about serverless. And in general, when we talk about the benefits of serverless, what are those benefits? Are we overselling them, or is there some kind of genuine value that we need to communicate more to get a larger adoption? Lots of questions, but hopefully the context makes sense.
Jeremy: Yeah, no, a lot of questions in there. But maybe I'll start with the first part, in terms of the feature set of where it was versus the feature set of where it is today. So I think that in some regards, serverless has become extremely mature and to some degree boring. I think we can look at the Datadog state of serverless report and see that 70-plus percent of companies are using some sort of serverless system, whether that's Lambda or Fargate or App Runner or something like that.
I think that goes to show that you just can't get away from serverless almost, right? Like it's just there. It's embedded in the cloud. You know, if you're using Dynamo or you're using SQS now, I mean, you're using serverless in some degree. And so I think most companies, I mean, if you think about it, that company I talked about that was using DynamoDB, I mean, technically we were using serverless before serverless was a thing.
So I think it's really hard to define it now. You know, again, mindset, ladder, whatever you want to call it, right? Serverless first. I think the idea is just that it's the way to build cloud applications now. And the feature sets have grown to a point where it's become incredibly complex. I mean, I go back to the days where I'm like, you know, I was installing, you know, Nginx or Apache or something like that on a Linux box.
It was, you know, it was running as a virtual machine. And I'm like, yeah, that was complicated. But I don't know if it's as complicated as figuring out tumbling windows in Lambda and making sure that we have the right, you know, extensions installed or the right layers installed or I've got layers that interact with the extensions that then, you know, give me these things. It just gets very, very complicated when you think about how much it can do.
So from a feature set standpoint, we're nowhere near feature complete. It can't do everything; I'm sure there are things you could make some other system do if you needed to, if you really wanted to run bare metal. But I do think that it's gotten to a point where there's not much you can't do with it, right? So if you're building an application today, and I know everybody says this, so this is probably just redundant advice, but start serverless first, right?
It makes no sense to say, I'm going to spin up an EC2 instance and set up auto scaling groups and that kind of stuff. There are just so many ways to do it. If you're a PHP developer, check out Bref. Matthieu Napoli has done such an amazing job with that, you know what I mean? And I know that there's not really official PHP support on Lambda, but there's the Laravel one too, I'm trying to think what it's called, another one for Laravel that is all serverless based.
Like there's just so many things you can do now. Like just do it that way, start that way. And I think that, you know, the common things we hear from a myths perspective are vendor lock-in, cold starts, right? High costs, some of those sorts of things. Serverless can get very, very expensive if you use it wrong, right? If you don't set it up the right way, if you're doing what the Prime Video team did and trying to run Step Functions for every single frame of millions and millions of videos, then yeah, it's going to get stupidly expensive.
And that's just a poor architectural choice, but it probably wasn't when they did it the first time, when they set it up the way they did to do sampling. Like it probably made perfect sense, and it probably took them a fraction of the time it would have taken had they built some other system to do it. So I think the cost aspect of it, you know, depending on what your workload is, depending on what you're doing, that's certainly something where it can get expensive, but I mean, everything gets expensive if it's misconfigured or not being used efficiently.
I think the other thing around vendor lock-in, excuse me, too, is I don't know any system that exists that you're not locked into a vendor. I mean, data is the biggest thing. I mean, even if you say, well, we're using Postgres, and we're running it in RDS, so we can move that wherever we want to. Yeah, good luck. I want to see you transfer terabytes of data over to PlanetScale or over to some other provider.
I mean, the data gravity there is huge. And that's one of the reasons why I hate ORMs, and no offense to anybody who's building ORMs or things like that, but I've never seen an ORM that allows you to go from Postgres to MySQL to some other, you know what I mean? It just never, that doesn't happen, right? So you're locked in no matter what you do. And the question is, is that who do you lock yourself into?
I mean, even Next.js now is something we've been talking about. There's been a whole bunch of buzz about this. It's not easy to run Next.js not on Vercel, right? So either you're running it on Vercel or you're jumping through hoops in order to make it run somewhere else. And so you're locked into Vercel pretty heavily if you choose to run your Next.js app there and take advantage of the benefits. So this is true of everything.
But the question is, is where do you lock yourself into and what are the trade-offs of choosing a particular thing? Like I would rather be locked into Lambda and serverless on AWS than I would be locked into running a Kubernetes cluster on GCP, for example, right? Like, I mean, so to me, it just makes sense. It's faster, it's easier to do. And then the last one I mentioned, I think, was the cold starts thing.
This is something that you really got to think about what your workloads are. If you're running a webhook or an API or something like that and you're running that on a Lambda function, like, yeah, you're gonna get cold starts if you don't have sort of high velocity or you don't have that ongoing stuff. And they're working on that too. There's ways that it makes it better. If you're using Node or using Rust or some of these other ones, like it's very, very low cold starts anyways.
But this is just one of those things where you have to make a decision: do you want that scalability of scaling down to zero, or do you want the availability and the minimal cold starts? Because if you do, then just deploy to App Runner, right? And if you deploy to App Runner, then you pay a couple of dollars a month, maybe it costs you $30, $40 a month to run that API.
But you can run all those. You don't have the cold starts. You get good performance. I mean, there are different ways to do it, but it's about architectural trade-offs. And I think that's the last point that I'll make. And I'm sorry, I know I'm rambling a little bit. But the big thing here is that serverless has introduced, I guess, thousands of trade-offs, right? There are so many different ways to think about how to make a particular workload run: whether you're using choreography through EventBridge or orchestration with Step Functions or a combination of those, or you're still running certain things within Lambda functions, or you're trying to hand stuff off to Fargate, or you're choosing SQS over Kinesis. There are just so many decisions that you have to make. And again, there's a very big difference to me between a developer, a sort of ops slash cloud architect, and then somewhere in between, which I call serverless developers maybe, but there are different knowledge sets you need to have to be on either end of those spectrums. And in order to find yourself in the middle, and this is one of the reasons why I think serverless is still really hard to adopt for a lot of people, there's still a lot of learning to do, and you have to bring knowledge from both sides in order to be really effective, I think, at building serverless applications.
Eoin: Serverless was simple to begin with, you know, when it was a little bit naive and maybe less capable. But I guess as more and more features have been added and now everything is possible, have we arrived at the point where the cognitive load is just very intense for developers? Have AWS kind of missed an opportunity there to continue to remove that undifferentiated heavy lifting? Could there have been a different path? And maybe this is leading into the Ampt story then: is Ampt's mission to rectify that? Maybe you can go through your thoughts on that and how Ampt all began.
Jeremy: Yeah, so I mean, I think you know my answer to the question of whether it's made it too difficult. I think AWS has done an amazing job building these primitives. I think they've done a terrible job trying to find a way to make the developer experience, you know, smooth. And so I don't think you solve the problem with just developer experience. I don't think that's the ultimate solution.
But what I will say about the developer experience, and you mentioned the cognitive load: if you think of the sort of triangle of DevEx, right, you've got one thing which is fast feedback loops. Like as a developer, you can't wait two, three minutes to figure out whether the code change you just made works, right? Like that's just incredibly frustrating, and it just fries your brain, right?
The next thing is the cognitive load piece and this was something interesting. And Stanley was just talking about this actually at Serverless Days Cape Town, which is, you know, the new number is four. That's how many things you can hold in your head at one time. It used to be seven. Now it's like, you know, between three and five. So if you have to be thinking about more than four things while you're writing code or trying to build an application, you can't.
You have to stop, you have to task switch, you have to go look something up. It's just really, really frustrating. And then the third piece of that is what we call flow, right? This idea of the flow state, where you are at a point where you're just cranking, right? Every time you have to stop and go look up something in documentation, or any of those things that break your flow, it's really hard.
And unless you can put those three things together and somebody can just write code or do whatever, whatever task they're working on, they have that feedback, the fast feedback loop. They have limited amount of things that they have to keep, you know, sort of at the top of their mind in order to make those things work. And they can get into a state where they can be uninterrupted. That is where you get rid of things like developer burnout, where people are just, you know, happier.
I think the developer burnout isn't talked about enough and it really should be because in today's day and age, it is very, very real. But anyway, so to me, I look at the developer experience that AWS put together. And I think that they missed the mark, but they didn't miss the mark because they didn't necessarily care about it or they didn't try. I think they missed the mark because of the complexity of the underlying technology that they put together, right?
I mean, things can only be so simple. You can only abstract so many things. And so, when we originally started, the predecessor to Ampt was Serverless Cloud. So this is something we started actually working on at the end of 2020. So it's been a while that we've been, you know, playing around with this idea. And the goal with that actually was to make it easier for people to deploy with the Serverless Framework.
But Doug Moscrop and I, we were working on this thing and we're kind of like, you know what, though? It just seems crazy to us that you actually need to define that you need a Lambda function or an API gateway or that you need to say, I need this route to point to this file when you're already defining that in your code. And we didn't know where this would go. We didn't know if it was even possible.
And so what we started doing is playing around with some ideas. We built out an early version with API, like with an API router and with a task or a schedule type thing. And the data component as well. That was something early we were working on. And we just kind of found out, we're like, okay, right now, out of the box, this solves a lot of problems without you even thinking about needing to set up Lambda functions or configuring anything.
And the developer experience for it was very, very simple. And one of the things we knew we wanted to do was we said, you can't emulate this stuff locally. And I just saw a bunch of stuff, by the way, recently on Twitter where people are talking about, well, if I have a Mac M3, I can run all of this stuff locally and do all my tests locally and it's super fast and whatever. I think that might be fine.
But we looked at it back then and I still look at it now and say, I think you need to run everything against real cloud infrastructure to get the fidelity that, you know, to know that all these interconnected pieces are actually working. So we wanted to do that. We wanted to create these high fidelity sandboxes. So in order to make that work, we had to create a syncing technology that allowed us to save stuff, like a watcher that would upload code and whatever.
Now, we were doing this originally by zipping stuff and then deploying it to the Lambda function. It would take like seven, eight seconds, something like that. It was okay. But seven, eight seconds is a long time when you just flip over from, you know, you make a change and then you wait for it to deploy and then you flip over to, you know, Postman, you run a thing and you're like, okay, that worked.
It's still too long, and we found it was too long. So we worked on that for a long time. We've actually got that down to about 400 milliseconds right now. So it's pretty fast how quickly you can make code changes and do that. But for us, we looked at that and we were like, we want to fix the developer experience, first of all. That was the big piece of it. We knew that that was part of it. But then the big thing was we wanted to reduce cognitive load.
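As a rough illustration of the watch-and-sync idea described above, here is a minimal sketch, assuming the chokidar and archiver packages and the AWS SDK v3 Lambda client; the function name and project layout are placeholders, and this is not Ampt's actual syncing implementation. A real version would debounce changes and upload only deltas to get anywhere near 400 milliseconds.

```typescript
// Hypothetical "watch and sync" loop: watch local files, zip the project,
// and push it to a sandbox Lambda function running in a real AWS account.
import chokidar from "chokidar";
import archiver from "archiver";
import { LambdaClient, UpdateFunctionCodeCommand } from "@aws-sdk/client-lambda";
import { PassThrough } from "node:stream";

const lambda = new LambdaClient({});
const FUNCTION_NAME = "my-sandbox-function"; // illustrative sandbox function

async function zipProject(dir: string): Promise<Buffer> {
  // Zip the project directory into an in-memory buffer.
  const archive = archiver("zip");
  const sink = new PassThrough();
  const chunks: Buffer[] = [];
  sink.on("data", (c) => chunks.push(c));
  const done = new Promise<void>((resolve, reject) => {
    sink.on("finish", () => resolve());
    archive.on("error", reject);
  });
  archive.pipe(sink);
  archive.directory(dir, false);
  await archive.finalize();
  await done;
  return Buffer.concat(chunks);
}

async function sync(dir: string): Promise<void> {
  // Upload the zipped code straight to the sandbox function in the cloud.
  const ZipFile = await zipProject(dir);
  await lambda.send(new UpdateFunctionCodeCommand({ FunctionName: FUNCTION_NAME, ZipFile }));
  console.log("synced", new Date().toISOString());
}

// Re-sync on every local change; a production-grade tool would debounce and
// diff instead of re-zipping everything.
chokidar.watch("./src", { ignoreInitial: true }).on("all", () => {
  sync("./src").catch(console.error);
});
```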
We felt like, as serverless developers, that cognitive load was killing us. And look, I don't want to diminish what AWS has done. They have amazing people there. They work really, really hard. They do some really amazing things. But their documentation, especially around CloudFormation, is just one of those things where you have to keep diving deeper and deeper and deeper.
It's like, okay, you click on this and then it gives you the possible options, and then, oh, here are the settings for that. You click on that and you go deeper and deeper and deeper. I mean, I remember times where it's like, what are even the sane defaults? Like, what just happens by default? How much do I actually have to configure and change? And all of that load on my brain, and again, maybe I'm just not a great developer, but honestly, it just gives me a headache and I get frustrated, and I stare at these things and I'm like, why does this do what it needs to do?
So we wanted to take as much of that cognitive load off of people. So again, nailing the developer experience, giving people those fast feedback loops, reducing that cognitive load, that we were hoping would then produce this, you know, sort of flow state where people could just get into actually building the apps they were building. This has evolved tremendously. We added support for full stack stuff.
We saw most people were writing stuff with, you know, Next and Express and, you know, React or Vue or whatever. So we wanted to support all those things. And we added in support for all kinds of different things, like long-running tasks. We built this thing called Smart Compute. I mean, I don't want to sell Ampt here, but if you want to go check it out, go to getampt.com and the documentation.
I mean, it does quite a bit of these things for you. But I think back to the original premise is, we built this and we started on this journey because we wanted to solve this, this larger developer experience thing and just make it easier for developers, like developers capital D, I don't know, lowercase D, whatever, developers who were building applications, who are experts in writing, you know, Express and interacting with databases and those people, right?
The ones who weren't experts in setting up CloudFormation templates or writing CDK and figuring out what the cloud architecture was. We wanted to see if there was a way to go from code to cloud with as little friction as possible. And I think, you know, it's not perfect yet, but I think we've gotten pretty close. So, yeah, that was really the genesis of the idea and where we kind of are now. And I mean, the other thing too is CI/CD and AWS cloud account management, right?
Like that, I mean, honestly, we have customers using Ampt that I think the biggest benefit they get is just from the account management. So we automatically spin up and I was watching your episode the other day about Ampt, I think you asked this question, every single environment that we spin up is a separate AWS account, completely isolated, you know, you've got that blast radius there, all the quotas are tied to that individually.
So, you know, we have one customer that does e-commerce sites and they deploy these e-commerce sites for their customers. And what they do is they use Ampt stages, you know, to create these stage environments; they use that to create environments for each one of their customers. And then what's cool about it is they have, you know, their code that they update, which they can deploy to staging and check to make sure it works.
And every one of these environments they have running for their customers, they get different data from our parameter system, they have different data obviously in their data tables. And then if they want to push out an update to them, they can just update them individually and say, okay, we're going to move you to v1.2 or whatever it is and update each one of those different services or those different environments.
And it works really, really well. We've got another one, BlockSec, that is using it to do tenants. Their tenancy is based off of individual AWS accounts, for security and stuff. So there are just so many things that are taken off your plate. And then again, the last thing, the CI/CD portion of it, is to basically say, you know, CI/CD in my opinion is dumb and it's broken and I don't see anyone who's ever done CI/CD well. So we were like, let's just remove CI/CD from the equation. And so we take your code and it's all built in your environment. So your environment actually builds your code and processes your code and reconfigures itself based off of what your code has specified. So it eliminates that process, you get CI/CD out of the box. It just takes so many headaches away from developers and just lets them write code, which, again, was our original tagline: just write code. Like that was, you know, that's where we wanted to be and that's where we are right now.
Luciano: You mentioned the Smart Compute feature and this is actually something that got me very curious, because as I understand it, you have different constraints when you run on Lambda, App Runner or Fargate, but all this complexity is kind of abstracted from you as a developer; the system somehow is just going to figure out which one is the best environment for the kind of workload you are trying to deploy. And that feels a little bit like magic if you ask me. So I'd like to ask you if you can disclose some of that magic. And I guess my question is, does it just work? Like, is it able to transition from Lambda to App Runner to Fargate or something like that automatically, without any interruption, or maybe there are certain trade-offs that developers still need to be aware of and somehow adapt to that specific model when they write their code?
Jeremy: Yeah, so we tried to make it so that the trade-offs were handled by the system and not by the developer. I mean, you're still building on a distributed system, right? So we try to make sure people know that, right? So it's not like if you run the same code multiple times that it's always going to have access to global variables that you've set and things like that. So you should assume that every single time your application runs it is stateless and would need to hydrate itself with any information that was there.
But what we tried to do from a different approach is like, you know, look containers are great, like Docker, you know, revolutionized a lot of different things in terms of how people were able to encapsulate code and slim down applications so that they only had what they needed and give people more control over sort of not the operating system but certainly the runtime that was baked into those things.
And we don't want to take that away from people, right? But Lambda kind of did, at least initially. And it said just write code, just put some application code in there. We'll manage the runtime, we'll manage the operating system underneath. So what we said is let's start there. Let's start with the just give us the code thing. And when you just give us the code, because again, we build it, right?
We're not running a builder in your CI/CD process that packages your code. That runs in your Ampt environment. Because we can do that, we can actually take that code and turn it into or deploy it into a container, and we containerize it for you. Or we can, you know, put it into Lambda directly and so forth. So because we have that underlying code, the actual code you've written, we can sort of massage it and change it and deploy it in multiple different ways.
So the trade-offs are, obviously with Lambda, that you can only run it for 15 minutes. The problem is, if you run Lambda for 15 minutes, it gets very, very expensive. So we put in some, you know, some basic heuristics that say, look, if you're going to run something for more than a few minutes, we'll just launch it into a Fargate container. And again, these are scheduled tasks, right?
So if you schedule a task, it's pretty easy for us to trigger Fargate based off of a scheduled timer. You know, we've got some timer things in there with Lambdas that trigger some things and do some of that stuff. But essentially, that's a pretty easy switch to just take something that would run in Lambda and run it in Fargate because we also, we do all the permission stuff for you as well, right?
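As a toy illustration of that kind of heuristic (not Ampt's actual rules), here is a sketch that sends anything longer than an assumed few-minute cutoff to Fargate and keeps short tasks on Lambda:

```typescript
// Route a scheduled task to Lambda when it fits comfortably inside the
// 15-minute limit and is short enough to be cheap, otherwise run it on Fargate.
// The cutoff value and type names are illustrative assumptions.
type ComputeTarget = "lambda" | "fargate";

const LAMBDA_MAX_SECONDS = 15 * 60;          // hard AWS limit for Lambda
const LAMBDA_COST_CUTOFF_SECONDS = 3 * 60;   // assumed "more than a few minutes" cutoff

function chooseCompute(estimatedDurationSeconds: number): ComputeTarget {
  if (estimatedDurationSeconds >= LAMBDA_MAX_SECONDS) {
    return "fargate"; // cannot run in Lambda at all
  }
  if (estimatedDurationSeconds > LAMBDA_COST_CUTOFF_SECONDS) {
    return "fargate"; // would fit in Lambda, but long Lambda runs get expensive
  }
  return "lambda";
}

console.log(chooseCompute(30));   // "lambda"
console.log(chooseCompute(600));  // "fargate"
```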
So there is a lot of, I don't want to call it magic, it's just good practices or best practices behind the scenes in order to make these things work. We're just kind of handling the deployment things. The real magic, I think, comes with the App Runner piece. So we had to do some really interesting things for App Runner. It does have to be deployed as containers. We do have to run a supervisor on there with multiple versions of Node, because if the Node process gets blocked, then it will start throwing different errors for you.
And so we have to set sort of aggressive timeouts and things like that. And so we had to do some magic there. But again, it's just something that you'd have to do yourself, honestly. I mean, that's the crazy thing. But what we do for that is there are thresholds at which it makes more sense to switch things to, you know, a different service. And so with App Runner, for example, just as an example, I forget the number here, but let's say you're doing about 50 million invocations a day.
To scale that and have some flexibility in App Runner, that might cost you $600 a month, something like that, somewhere in that range. If you do that on Lambda, it's going to cost you over $3,000 a month, right? So there's a huge cost saving in switching to something like that. The problem is that if you're building this and you're trying to use high-fidelity sandboxes or you want to preview it, you want staging accounts, obviously you pay more for the throughput because you need more resources running.
But do you want to run App Runner in 30 developers' AWS accounts just so that it's there so they can test against it or whatever? I mean, because that starts to add up, right? And so this is why we like the idea of Smart Compute, to say, even if it's running in Lambda functions in your preview environments and your developer sandboxes, because we can guarantee the fidelity between these different compute services, we can switch that over to App Runner when you're actually running it in production.
And then the other thing we do too is we've eliminated API Gateway for most of what we do, and most of the stuff now runs through CloudFront. CloudFront is very good. You can still use WAF if you need to do something like that. And honestly, API Gateway, most of the services in there, it's just overhead unless you're using it for quotas or some of those other things. So we actually use Lambda@Edge functions.
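For readers who haven't used Lambda@Edge, here is a hedged sketch of that style of CloudFront origin-request routing; the origin domain and path rules are placeholders, not a description of Ampt's internals.

```typescript
// CloudFront + Lambda@Edge origin-request router: static assets fall through
// to the default (e.g. S3) origin, everything else is pointed at a compute
// origin. Domain names and path prefixes are purely illustrative.
import type { CloudFrontRequestHandler } from "aws-lambda";

export const handler: CloudFrontRequestHandler = async (event) => {
  const request = event.Records[0].cf.request;

  // Serve static routes directly from the behavior's default origin,
  // so no compute is touched for these paths.
  if (request.uri.startsWith("/static/") || request.uri === "/favicon.ico") {
    return request;
  }

  // Everything else gets routed to a compute origin (for example a Function
  // URL or an App Runner service); the domain here is a placeholder.
  request.origin = {
    custom: {
      domainName: "api.example.internal",
      protocol: "https",
      port: 443,
      path: "",
      sslProtocols: ["TLSv1.2"],
      readTimeout: 30,
      keepaliveTimeout: 5,
      customHeaders: {},
    },
  };
  request.headers["host"] = [{ key: "Host", value: "api.example.internal" }];
  return request;
};
```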
We do routing based off of different things. We have some very cool stuff we do with static routes so that you never have to touch Lambda functions in order to load static routes. So there are all these things that are just complex. And I guess maybe that can lead me into the idea of patterns. So obviously, you know, I love patterns, big fan of these serverless patterns. What I found is that most of the complex serverless patterns are very, very hard to manage, right?
Like how do you tell the system, okay, now we're switching over, you know, these routes are going to, you know, to app runner, these routes are going to Lambda functions or to function URLs because we have to do some streaming with them now. And then these ones actually trigger based off of, you know, these different SQS queues or whatever, right? So it just gets really complicated and you got to write that all in CDK or CloudFormation or however you're doing it or Terraform.
It just gets really, really hard to do. And the cognitive load there is huge. So it's sometimes easier to maintain simpler patterns because they're easier to grok and easier to put into these IaC documents. Whereas we look at it and we're like, you know what, the patterns themselves are actually much more complex and really hard to manage. So if we can manage these really complex patterns for you, then not only are we doing, in a sense, exactly what you would be doing if you were writing these patterns yourself, but we're doing them better, because our system can manage and automate much more of it for you.
And not only that, but we can learn from every single person using our system, right? So we see how does this pattern actually work when it gets 60 million requests per day, right? How does this pattern work when it only gets, you know, 10 requests per hour? Like what are the cold starts here? Like would it make more sense if we did this versus that? And we can try those things. We can experiment, we can change memory settings.
There's all kinds of stuff that we can do to optimize those workloads. And you benefit from it as a user of this. And that's why we kind of joke a little bit, and it's kind of a joke but not really a joke, that some people have called us like an autonomous platform engineer. Like essentially, what we do is we are a serverless expert, right? Or the platform's a serverless expert, and you say, hey, here's my code, make it run as efficiently as possible in the cloud.
And somebody goes and writes all that CloudFormation and all the, you know, whatever for you and deploys it. Like that's essentially what our service does, and then it optimizes it over time, which I think is really interesting. But yeah, and you know what, the other thing I wanted to mention too, because I want to give credit to Yan Cui for this: he pointed out a long time ago some of the cost changes for other services.
So it's not just compute, right? So if you're running, I forget what his numbers were, but it's something like a thousand requests per second with SQS costs you like $1,800 a month, right? So it gets really expensive to run SQS when you're doing that kind of throughput. But if you switch to Kinesis, well, then it's like $30 a month, because you only need six shards or whatever it is, right? So the question is, do I write my application to use Kinesis assuming that I'm going to have this type of throughput, or do I write my application using SQS because it's going to cost me nothing in the beginning?
And so smart compute is just one aspect of this switching piece for us. We look at it and we say, there's no reason why you should have to choose Kinesis over SQS or maybe EventBridge for certain things. Like why not just write your use case, express your use case and we'll run it as SQS or whatever, you know, when it's in your developer environments and these or you're not getting a lot of traffic.
But as soon as you start getting traffic and there's a breaking point where it makes sense to switch to a different service, we can automatically set up Kinesis, start routing anything from that Kinesis to the same place that is feeding off your SQS queue, then start changing the producer so it's sending it to Kinesis and then once the SQS queue is drained, go ahead and remove that SQS queue. We can do that for you all in one without you even doing anything. It just happens and it works as opposed to you having to, you know, do six CloudFormation deployments in order to make that work.
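A minimal sketch of that idea, assuming AWS SDK v3 clients: the application only calls a generic publish function, and a configuration flag (here an environment variable, purely illustrative) decides whether events land on SQS or Kinesis, so the transport can be swapped without touching application code.

```typescript
// The application expresses "publish an event"; the platform decides whether
// that lands on SQS or Kinesis. Queue/stream names and the switching flag
// are assumptions for illustration, not Ampt's real mechanism.
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
import { KinesisClient, PutRecordCommand } from "@aws-sdk/client-kinesis";

const sqs = new SQSClient({});
const kinesis = new KinesisClient({});

// In a platform like the one described, this flag would be flipped by the
// platform itself once throughput crosses the break-even point.
const transport: "sqs" | "kinesis" =
  (process.env.EVENT_TRANSPORT as "sqs" | "kinesis" | undefined) ?? "sqs";

export async function publishEvent(key: string, payload: unknown): Promise<void> {
  const body = JSON.stringify(payload);
  if (transport === "kinesis") {
    await kinesis.send(new PutRecordCommand({
      StreamName: "app-events",                 // hypothetical stream name
      PartitionKey: key,
      Data: Buffer.from(body),
    }));
  } else {
    await sqs.send(new SendMessageCommand({
      QueueUrl: process.env.EVENT_QUEUE_URL!,   // hypothetical queue URL
      MessageBody: body,
    }));
  }
}
```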
Eoin: Yeah, I can certainly see the benefit of that and I think it's one of the challenges we have all the time is, you know, you're aware of all of these different services, but to really understand all of their characteristics and their delivery method and the latency and is it at least once or whatever and then also understand, okay, what are the cost trade-offs and mix all these into the one thing, it's pretty difficult.
So I can really see where Ampt is going to help with that. I guess as well, you know, you've got a very smart team, clearly very capable of delivering all this stuff. I guess one of the challenges then is you've got all these different challenges out there in the cloud, different perspectives on what needs to be done, probably a long list of feature requests and a roadmap. How do you get a focus for Ampt? What's your kind of North Star? Do you have like a specific target market, the kind of application that you're trying to target or specific challenges you're trying to solve or is that something that kind of evolves as you see customers and understand their pain points?
Jeremy: Yeah, I mean, we're very big on listening to customers and what their needs are. In terms of a sort of North Star, you know, the goal here is focusing on web applications. I mean, I think that's the big thing. If you're building some sort of IoT system in the background, I mean, we can technically support that. But, you know, if you're trying to build some massive machine learning thing, we're just not competing with those right now.
I think what we see is that there are a lot of people that are doing interesting things out there. Now with AI, we just launched our Gen AI integration with Bedrock. It's early, it's beta, but it's interesting where it's like, how do you just give people the ability to build this stuff very, very quickly? And when I say people, I mean, I do mean developers. I mean that our focus is on developers. I think long term, we see a vision where we can say, you know, Ampt is this thing that you can just buy off the shelf almost, as a platform engineering team or as a larger enterprise, and say, hey, I want to give these developers the ability to do this stuff.
That's SOC 2 compliant and PCI compliant and follows all these rules and, you know, is secure and stuff like that, and I don't have to worry about that. All of that work is done for us and we just kind of monitor it to make sure that, you know, people are doing what they're supposed to be doing. But essentially that out of the box solution is the longer term vision. So, you know, we are looking at it now and saying, you know, we could build a million different things.
We could go a million different ways. You know, obviously, there's a lot of hype around AI and we felt that it was important to get something like that in. We're working on a whole bunch of partnership stuff so that you can easily connect, you know, Momento or Mongo or those things. You can do that now. Yeah, I mean, you just do it through our parameters, but we're trying to make those a little bit more official and a little bit easier to do, and also manage the authentication for you.
So that's something you don't have to think about. But yeah, I mean, in terms of the feature sets that we're trying to do, we focus on the ones that we think move the needle for customers. Like what are the biggest frustrations they have and so forth. And, you know, with the abstraction layer that we've built, we've tried to take the approach that we're not trying to necessarily mask AWS here.
We want you to know AWS exists and that you're using AWS. And I think actually that's one thing you might have mentioned in your episode 100, that we're like a serverless PaaS. So the funny thing is, we've been trying really hard to tell people we're not a PaaS, right? Like we don't want to host your application. We want to manage it for you and give you all the tools you need to interact with AWS and manage it on your behalf.
But we don't want to ultimately host it. We don't want to own your application. That's your stuff. Like we're just trying to make it easier for you to get your application into the cloud in the way that, you know, people who have giant platform engineering teams can do. So, you know, we're trying to focus on the use cases that our customers have. It's mostly around full stack Node.js applications.
You know, we're not focused on enterprises right now at all. We're trying to focus on, you know, the startups and the agencies that are trying to build things for their customers. And I think really the big thing is, you know, we live in a time right now that I thought serverless was going to bring about. I thought this was the sort of revolution that serverless was going to bring.
It was going to democratize, you know, application development. And I think it has to some degree in that people who, you know, have a couple hundred bucks can go ahead and build something that, that scales and has all kinds of amazing features, things that took us months and months and months to engineer, you know, even in 2009, 2010, things like that. You had massive engineering teams to build these things.
Now you can have one or two people that can build something pretty amazing, you know, in just a few days. But it does require a fair amount of skills and a fair amount of knowledge in order to be able to do that. And I think we live now, especially with AI and some of these other things that are happening where it's like, I don't care if you're, you know, in your college dorm room like I was or, you know, you're a multi-billion, you know, multinational billion-dollar corporation, you know, the ability for somebody to express their idea and see if it changes something for the good, hopefully for the good.
You know, that's something that I think we need to continue to lean into. And that's why I love platforms like Vercel and, you know, Fly.io and some of these other ones that make that experience very, very easy for people to just kind of get started. I think the problem is when it goes beyond that, when you get to that graduation problem, and people start thinking, I actually kind of want to be on AWS or GCP or one of these other ones. And that's one of the things that we're trying to do with Ampt as well, to say, look, we want you to be able to get started easily, build anything you want to build, but then also not worry about graduation, right? Like if we catch the next Uber really, really early and it's just, you know, five people in a garage, I would like to think that the way we deploy their app to AWS will not only scale as the throughput scales, right, and the patterns will adapt and evolve, but we'll be able to support them, you know, through the lifetime of that business. Because if they were to go and rebuild it themselves on AWS, they'd just do exactly what we were doing, but probably not as well, because of all the experience and the benefit we have of seeing all the other customers use the platform.
Eoin: Is it the plan then, given that Ampt is not a PaaS per se, to let people kind of run Ampt in their own AWS organization, with full visibility over the high-level abstraction, but also the low-level bits?
Jeremy: Yeah, yeah. So actually, this is something that we are trying to figure out. And again, we just launched, right, just over a month ago. So, you know, we've been experimenting with trying to make the platform do what people need it to do. How we deliver the platform to people, that's something we've been experimenting with and trying to figure out. So we do have some customers where we deploy directly to their AWS accounts.
That's exactly what we would love to do. We have our control plane that allows us to do all the AWS account creation and setup and so forth. We have a way that we can plug these things in, we call them providers, but essentially we can plug in your AWS organization and we can spin up and tear down AWS accounts for you in your organization. Of course, orgs you have to apply for quotas and there's some other things like that that you've got to deal with.
So that's one of the things we would absolutely love to offer as an option, especially for larger customers that want to manage all of that themselves. We'll just spin up and tear down these accounts for you and do all the deployments. And like I said, everything runs within each environment, so even the build step actually runs in the environment. We did this for isolation purposes as well, so that we weren't building on some central system and then sending things off to your environment.
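To make the account provisioning Jeremy describes a bit more concrete, here is a minimal sketch of what creating a member account inside a customer's AWS organization can look like with the AWS SDK for JavaScript v3. This is only an illustration of the general Organizations API flow, not Ampt's actual provider implementation; the email, account name, and role name are placeholder values.

```typescript
import {
  OrganizationsClient,
  CreateAccountCommand,
  DescribeCreateAccountStatusCommand,
} from "@aws-sdk/client-organizations";

const org = new OrganizationsClient({});

// Creates a new member account in the organization and waits for the
// asynchronous creation process to finish. All inputs are placeholders.
async function provisionMemberAccount(): Promise<void> {
  const { CreateAccountStatus } = await org.send(
    new CreateAccountCommand({
      Email: "dev-env-1@example.com",
      AccountName: "my-app-dev-env-1",
      RoleName: "OrganizationAccountAccessRole",
    })
  );

  // Account creation is asynchronous, so poll the status until it settles.
  let status = CreateAccountStatus;
  while (status?.State === "IN_PROGRESS") {
    await new Promise((resolve) => setTimeout(resolve, 5000));
    const res = await org.send(
      new DescribeCreateAccountStatusCommand({
        CreateAccountRequestId: status.Id,
      })
    );
    status = res.CreateAccountStatus;
  }

  console.log(`Account creation finished with state: ${status?.State}`);
}

provisionMemberAccount().catch(console.error);
```

As Jeremy notes, the account quotas on the organization still have to be raised separately; the API call itself only covers the creation step.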
So there's a bunch of security built into that. For other accounts, though, one of the things we didn't want to do is burden people with having to set up AWS accounts. So if you just want to deploy your application quickly and test it, we're more than happy to manage and own those accounts for you, and then just bill through so you pay for whatever your usage is.
But we would love to get to a point where, I think, a lot of people will fall into the category of saying, we want to own our production account, we want that to be in our own AWS. And again, if you have 10 apps in Ampt, we might be managing 40, 50, 60 AWS accounts for you right now, so you might want to set up multiple production accounts. It depends on how you want to do it.
But essentially we do want to pass this through; we want to be a deployment platform, right? Maybe to answer the question of who we compete with: we're not competing with Vercel, and we're not competing with Netlify or Fly.io. We think we compete with Pulumi and Terraform. We're a different way to get your code into the cloud, right? Pulumi and Terraform and CDK are all ways in which you define infrastructure.
And then of course you have to set up CI/CD and some of those other things. We're trying to capture all of that and say we're an alternative to Pulumi and Terraform that really reduces that cognitive load for you. So ultimately where we want to get to is letting you deploy to your own AWS accounts. We also don't want to burden somebody with having to do that if they don't need to.
So we'll find the right way to balance that, I think. And then the whole point is that we have some internal connectors that we've developed that we haven't made available to our customers yet. But the goal is to say: if you spin up a bunch of AWS accounts with Ampt and you've got some Ampt applications running, but maybe you have another account running some bespoke machine learning thing or whatever, and you want to connect to its database or interact with it, we would just use OIDC or something similar to generate temporary credentials from all your environments as you interact with those, right? So we don't think we're going to own 100% of your workloads necessarily. We can for certain companies, but for ones that want to expand, we really just want to be a partner in your AWS journey and make sure you can do the things you need to do. As much of that undifferentiated heavy lifting as we can take off your plate, we want to fill that gap for you and get you where you need to be as fast as possible with the best developer experience possible.
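The OIDC idea Jeremy mentions maps roughly onto standard STS web identity federation. Here is a hedged sketch of that pattern, assuming a hypothetical IAM role in the target account that trusts the platform's OIDC provider, and a token already issued to the environment; it illustrates the general mechanism, not Ampt's connectors.

```typescript
import {
  STSClient,
  AssumeRoleWithWebIdentityCommand,
} from "@aws-sdk/client-sts";

const sts = new STSClient({});

// Exchanges an OIDC token issued to this environment for short-lived
// credentials in another AWS account. The role ARN, session name, and
// token source are hypothetical placeholders.
async function getCrossAccountCredentials(oidcToken: string) {
  const { Credentials } = await sts.send(
    new AssumeRoleWithWebIdentityCommand({
      RoleArn: "arn:aws:iam::210987654321:role/env-connector-role",
      RoleSessionName: "ampt-environment",
      WebIdentityToken: oidcToken,
      DurationSeconds: 900, // 15 minutes, the STS minimum
    })
  );

  // These temporary credentials can then configure an SDK client that
  // talks to the database or other resources in the target account.
  return Credentials;
}

getCrossAccountCredentials(process.env.ENV_OIDC_TOKEN ?? "").then((creds) =>
  console.log("Temporary credentials expire at:", creds?.Expiration)
);
```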
Luciano: Yeah, let's change topic a little bit before we close off; I think we are getting close to the end. I guess there might be people listening to us who are thinking about using all this technology that we talk about every day, AWS and serverless in general, to build startups, companies, and ambitious projects. And it's something that you have done multiple times, based on what you're telling us today. So the general question is: do you have any piece of advice, wisdom, or encouragement that you want to give to listeners who might fall into this category of people?
Jeremy: Yes, I would say move to the mountains and be a goat farmer and just get away from all of this stuff, because it is complex. But no, look, technology is everything now. Every company you work for is a technology company or a software company now, right? Pretty much everybody. Whether you're working at a startup that's solving some dev problem for other people who are building things, or something else entirely, there are a million different things, but if you look around, you'll see that every organization out there is trying to solve some sort of technological problem.
They're using the cloud, or most of them are using the cloud to do it now. It would very much behoove you in your career to take it seriously and understand the cloud; I think there's no better place to be building applications than in the cloud. On-prem and things like that, I'm sure some of it will still exist, but my advice is you've got to pick a couple of technologies and go deep on those.
A lot of people talk about the T-shaped engineer: you have a broad, high-level knowledge of a bunch of things, but then you go really deep on one. So I would say you don't need to know Rust and Python and JavaScript and Go; you don't need to learn all these different languages, right? There's a bunch of popular languages out there.
If you're doing data stuff, focus on Python and things like that; if you're doing more front-end, obviously JavaScript, and JavaScript is still a great language. We're in a JavaScript renaissance again with Bun and Deno and all of these front-end frameworks, right? So when people count out JavaScript, I'm like, what are you doing? JavaScript is going to be here in a hundred years.
People are still going to be writing JavaScript for some reason. I think even AI is going to code in JavaScript, just because we can't get away from it. People love it for some reason. They love to hate it, but they also love it. But in terms of AWS, you've got to start looking at some of these different services, and obviously it depends on your role at the company you're at, but it would be very helpful to pick something like DynamoDB. That's an interesting one that I think is worth focusing some time on, because there's a lot you can do there.
Or just Lambda in general. Honestly, Step Functions is one of those things where I think you could get a PhD in Step Functions now, because it does so much, right? So my advice would be to pick a couple of these things that make sense and all kind of work together, and really focus on learning the ins and outs of those. And then get a high-level knowledge of all these other things.
Be aware of what these other things do. You don't need to be an expert in everything; you can't be an expert in everything. But I would say focus on a couple of interesting services and go deep on those. And again, if you want to get noticed, start writing about it. I know not everybody is built for that, but write about it, talk about it, post about it on Twitter, or X, or whatever we're calling it now, right?
If you read an interesting article, feel free to share your thoughts. And the last thing I'll say about sharing thoughts: so many people ask me about this. They say, yeah, but I just read an article the other day that literally said exactly what I was going to say about a particular topic. And I always say to them, I guarantee you, you were going to say it a little bit differently. And when you say something a little bit differently, and that's maybe why I talk so much, I try to explain things like eight different ways.
It's just because of the way my brain works. But when you explain something just a little bit differently, that can click with somebody in a way that the other article didn't, right? Maybe you present your demo differently. Maybe your example is different and it connects better with somebody's current situation or whatever. So I would say I'm happy to read an article about whatever it is, say turning off dev machines with a Lambda function every night, scheduling the shutdown.
I've read a hundred of those at least, and I'm willing to keep reading them to see if there's something else in there that sparks something. And again, content goes out of date very, very quickly. We talked earlier about all these different features and how quickly they change; content goes out of date very, very quickly, right? So if somebody wrote an article three months ago, it's very possible that you could write the same article today with new information that would change someone's perception of it or help somebody out in a different way. So again, I know not everybody likes to write and share. It can be scary to put yourself out there, but I would say it is definitely a massive thing that can help with your career.
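As a reference point, here is a minimal sketch of that classic example: a Lambda handler, triggered by an EventBridge schedule, that stops running EC2 instances tagged as dev machines. The tag key and value are assumptions made for illustration.

```typescript
import {
  EC2Client,
  DescribeInstancesCommand,
  StopInstancesCommand,
} from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({});

// Triggered nightly by an EventBridge schedule, e.g. cron(0 20 * * ? *).
export const handler = async (): Promise<void> => {
  // Find running instances tagged as dev machines (tag key/value are assumptions).
  const { Reservations } = await ec2.send(
    new DescribeInstancesCommand({
      Filters: [
        { Name: "tag:Environment", Values: ["dev"] },
        { Name: "instance-state-name", Values: ["running"] },
      ],
    })
  );

  const instanceIds =
    Reservations?.flatMap((r) => r.Instances ?? []).flatMap((i) =>
      i.InstanceId ? [i.InstanceId] : []
    ) ?? [];

  if (instanceIds.length > 0) {
    await ec2.send(new StopInstancesCommand({ InstanceIds: instanceIds }));
    console.log(`Stopped ${instanceIds.length} dev instance(s).`);
  } else {
    console.log("No running dev instances found.");
  }
};
```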
Eoin: Cool. And I guess the other piece of advice, going back to your first point, is don't use ChatGPT too much, because you're now a human GenAI detector and it won't get into Off-by-None.
Jeremy: Don't use it for content creation. Definitely use it for code. I mean, I think that I've seen...
Eoin: Or for reviewing your content, right?
Jeremy: Yep. It's great for checking grammar and some of these other things.
Eoin: So I think the future of Ampt looks pretty promising. I'm really intrigued by this idea of smart compute, an almost self-healing and self-optimizing infrastructure that moves from one service to another based on cost and all these other things. I'm also kind of curious whether somebody will eventually invent some sort of data sink where I can just put data into it, and it doesn't matter what my schema is; it just figures out how to store it based on how I pull the data back out. Where do you think it's going to go in the next three or five years? I mean with Ampt, but also AWS and other players. Are there any crystal ball moments you have where you can see where this is heading?
Jeremy: Yeah, I mean, as much as I've tried to fight it in some ways, I think AI is going to play a huge role here. And I know everybody talks about this, but I think it's going to be a little bit different; it's going to be applied differently than a lot of people are thinking about it now. Everybody's using it for code completion and some of those things, and I think those are all great use cases.
I think this idea of AI somehow figuring out the best way to deploy infrastructure, or even optimizing things like data structures, is going to be part of it. I think there's going to need to be heuristics and human review and some of that. But what we're seeing now is an explosion of competition to AWS in very, very small pieces. The serverless database space race, as I call it, is this idea that between Xata and PlanetScale and the new one that just launched the other day, Nile or Nile.dev, whatever it is, there are just more and more of these different services coming out that are all disparate, right?
And I think it's a good thing that somebody's trying to solve something differently, but at the same time, I see a lot of larger companies and enterprises just focusing their efforts back on the things that AWS provides. So I do think there'll be a consolidation, unless somebody comes up with a really interesting innovation other than just "we can scale your database a little bit better."
I think there's plenty of room here, plenty of space for people to experiment, but I would like to see some consolidation back into a few of the major players, not because I don't like competition or the diversity of it, but because I like the idea of centralizing these sorts of systems. And I think AWS is the platform that most people are going to be building on. Again, I get it, GCP and Azure are out there as well, and Oracle Cloud.
Cloudflare is doing some pretty amazing things. But I do think Cloudflare is still a bit at that surface level, where some of the deeper applications you would be building are just things Cloudflare is not going to support, at least not right now. I hope that it expands in the future. But again, crystal ball: all I can tell you is that I don't think any prediction I have ever made has ever come true, right?
So that's why I don't gamble. That's why I don't bet on sports or things like that, because I have no idea what the outcome is going to be, and I'm not sure I trust myself enough to do it. But I will say where I hope things end up: I really, really like this idea of self-provisioning runtimes. I think it is something that is needed, and I think it's just a matter of time.
I think it's inevitable. And the reason I say that is, how many people now say, hey, I don't like using Rust or one of these other languages because I really like to malloc my own memory, right? I really want to know how much memory is being used where. Or, don't run automatic garbage collection for me; I'll tell you when I want garbage collection to be run.
There are just so many of these things that we've abstracted away. We don't write ones and zeros anymore, right? We're not doing machine code; we're writing an abstraction, and every programming language right now is an abstraction. And if you take something like the CDK, that feels like an abstraction on top of CloudFormation and the idea of IaC. But it still feels very much like you're choosing primitives.
You're still making a lot of decisions. I still feel like it's machine code for the cloud. And so we always get this argument from people who say, well, I need more control. And it's like, well, the people who are yelling about control don't ever seem to change any of their default settings, right? How many of your Lambda functions are still sitting at the default memory setting, 1024 megs, right?
How many people just never change those settings? Or don't even know, and I'll use this example again, that tumbling windows exist in Lambda? They just don't know these things exist. They don't make these changes. They talk about control. And I think we need to get to a point where we say the cloud is smart enough to figure out how to route an HTTP request to some piece of compute that connects to a database, with some sort of guarantees involved, and that doesn't have to be configured manually in a configuration file. So I don't know where self-provisioning runtimes are going to go, but I do think we are going to see a revolution in the near future, because there are just too many people building in the cloud. And if we let AI solve this, I'm very, very nervous about AI deploying stuff to the cloud on our behalf. So I think there's got to be a better way, and I think self-provisioning runtimes are the answer to that.
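As a concrete illustration of the kind of setting most people never touch, here is a minimal sketch of enabling a tumbling window on a Lambda event source mapping with the AWS SDK for JavaScript v3. The stream ARN, function name, batch size, and window length are placeholder values chosen for illustration.

```typescript
import {
  LambdaClient,
  CreateEventSourceMappingCommand,
} from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

// Wires a Kinesis stream to a Lambda function with a 60-second tumbling
// window, so the function can aggregate state across batches in that window.
// Stream ARN, function name, and numeric values are placeholders.
async function createTumblingWindowMapping(): Promise<void> {
  await lambda.send(
    new CreateEventSourceMappingCommand({
      EventSourceArn:
        "arn:aws:kinesis:us-east-1:123456789012:stream/example-stream",
      FunctionName: "example-aggregator",
      StartingPosition: "LATEST",
      BatchSize: 100,
      TumblingWindowInSeconds: 60,
    })
  );
}

createTumblingWindowMapping().catch(console.error);
```

It's exactly this sort of knob, real but easy to never discover, that the self-provisioning runtime argument says the platform should be figuring out for you.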
Luciano: That's really exciting. I definitely don't disagree with that prediction, but yeah, we will probably talk again in three to five years' time and see what the status of things is then. So before we wrap up, is there any link or place you want to share where people can follow you, follow up on everything you just shared with us, and maybe deep dive on some of the topics?
Jeremy: Yeah, actually, I would love it if people check out Ampt and give it a try. It's getampt.com, that's A-M-P-T. You can check out my blog, I don't write there as much as I wish I did, but it's jeremydaly.com, that's D-A-L-Y. And then I'm on X at jeremy_daly, and obviously there's offbynone.io. You can find all my stuff and all my links at those different places. But yeah, I love hearing from new people.
I love meeting new people and hearing new perspectives on stuff. And if you've got articles to share, please send them to me; I'm happy to take a look at them and share them in the newsletter if it makes sense to do that. We know we're changing a paradigm here with Ampt, and we know it's going to be a slog. I don't think it's hard to get people to understand why it's different, but the objections around control and some of these other things are certainly the ones we get. That said, we've got some customers who have told us that we're revolutionizing the way they're building applications in the cloud and saving them a tremendous amount of time, so we're excited about the possibilities. The more feedback we get, the more we can make this make sense for people to use, and that will help this movement and hopefully, like we said, democratize the cloud for even more people.
Luciano: Yeah, we'll make sure that all the links you shared are available in the show notes, so for people watching or listening, don't worry, you'll get all the links in the description. Jeremy, it has been a real pleasure to have you on the show, so thank you so much for joining us today. And thanks everyone for tuning in. We look forward to reading all your comments, so definitely check out the chat and the comment section on YouTube and share your opinions. We always read all of that, and it's always amazing to have conversations following up on every episode and see what people actually think and what resonates with them. So thanks again everyone, and we look forward to catching up with you in the next episode. Bye.
Jeremy: Thank you so much.