Help us to make this transcription better! If you find an error, please
submit a PR with your corrections.
Eoin: Last week, DHH, creator of Ruby on Rails, founder of Basecamp and Hey, and general internet opinionist, released a bit of a controversial blog post titled, "Don't be fooled by serverless". In this article, DHH states that serverless is effectively a trap that only benefits cloud providers. Today, we want to analyze the major points in this opinion and provide an alternative point of view. My name is Eoin, I'm joined by Luciano and this is the AWS Bites podcast.
AWS Bites is sponsored by fourTheorem and fourTheorem is your AWS partner for migration, architecture and training. Find out more at fourtheorem.com and you'll find that link in the show notes. So let's get started by analyzing the main points of this opinion. But before we do, for the sake of transparency and for your own benefit, you might want to read the article for yourself. It's a relatively short one. So if you want a more unfiltered view, pause the episode, check out the link in the show notes, read the article and come back in a few minutes. So welcome back. Luciano, do you want to summarize the salient points in DHH's takedown of serverless? I'll try my best, yeah.
Luciano: So the main thing is that serverless is something that gets promoted a lot by cloud aficionados. There is a word he uses, something like it being a mantra that people keep repeating. And this makes me think that the author immediately wants to distance himself from the serverless world and from people who believe in serverless. The other comment is that serverless is presented as something magic, and users don't question how serverless actually works, while they should.
Then there is another part, which is a little bit of a crude financial example, where DHH shows how cloud providers maximize their income using serverless. And the gist of it is basically that they can sell smaller portions of compute for more money. And the idea is that, of course, they can pack more users onto less hardware. So that kind of shows how this model is convenient for cloud providers. Then the other opinion is that if you are an occasional user, that financial approach doesn't really affect you that much, because if you have a low rate of function invocations, you're going to pay very little or no money, just because there are free tiers and so on.
So this doesn't really affect occasional users of the cloud. But when you start to become a heavy user, so if you start to scale up a specific project, the price gets very high, and sometimes unnecessarily high. So the next point is that there is a massive technology and vendor lock-in, because effectively every cloud provider implemented serverless in a different way. There isn't really a common interface.
So whatever cloud provider you pick, you are stuck with it. And the author at the end kind of suggests that you should own the cloud computer you are operating with. He literally calls it the cloud computer. So, meaning you should have your own data center. And the quote is that you should own the donkey rather than renting hundreds of slices of that donkey, which is a little bit of a weird image in my mind.
But I suppose that the conclusion is that... I think the author suggested that the cloud is only good for two very specific kinds of companies, at two opposite extremes. Either you are a very, very big player, something like Amazon or some other big e-commerce with very high swings in traffic. For instance, when you launch a Black Friday campaign, then the cloud is going to be very convenient for you because you can have the kind of scalability that justifies all that extra complexity and cost.
Or you are a very, very small one, where basically all these economics don't really affect you because you're not going to use the cloud that much to really incur any significant cost. And the last piece is that serverless doesn't change this picture because, and this is my interpretation of what the author is saying, it doesn't open up the cloud for other kinds of companies to take full advantage of it.
So the conclusion is that serverless is just a trap and you should be aware of this trap and you should avoid it. Now for more context, I think it's worth mentioning that the author, DHH, has been transitioning his companies off the cloud. And there is a previous blog post called "Why We Are Leaving the Cloud". We will have the link in the show notes as well. And basically the idea is that the cloud comes with very big disadvantages and it might be more convenient to go back and own your own data center. And that article explains all of that point of view as well. So it's probably a good idea, in this context, to read that article too, to get the bigger picture of the author's opinion. Now that we went through a summary of the opinion, and I hope I did justice to it and didn't butcher it too much. Eoin, should we go through what we believe is actually right in DHH's opinion? What do we think is fair? And then maybe after that we'll discuss what we think is not that fair and where we have different views. Yeah, let's do that.
Eoin: Maybe first I just want to say like we're all biased when we discuss these topics. And it's interesting to kind of analyze the bias on each side a little bit because I've followed the content that DHH and his co-founder Jason Fried have created like the "Remote" book and the "It doesn't have to be crazy at work" book. I really like those. There's a similar style here, which is that they're really compelling, very interesting, thought provoking and good at kind of defying conventional thinking around work practices, but not necessarily backed up by data or facts and figures.
So there's an interesting strategy I think that's working well for them and their companies in that they create this content. It gets a lot of discussion going that increases popularity, which increases the draw to their products and it's a similar thing with their latest message around leaving the cloud. I mean, it's all good publicity at the end of the day. So even if the original logic for leaving the cloud doesn't necessarily stack up in terms of facts and figures, it's still generating clicks and inbound leads for the companies.
So it's kind of win-win in a way. But like, I mean, I think there's a fair point here. We're also biased, right? We are working on the cloud. Most of our work involves cloud consulting. Ultimately, we would say that we're solving business problems for people. We're not just selling cloud, right? That's not what we do. We don't sell the cloud to people. We sell solutions to people to actual problems that they have.
And if the cloud isn't the right solution, we try not to give it to them. But at the same time, this is a part of our toolkit and we have a certain fondness for it and we have biases here. So let's get that disclaimer out of the way. Now, what do we think is correct in this article then from DHH? I think the pricing analogy, even though it's a bit crude, does highlight a significant point with serverless that people should look at seriously, especially if you take the definition of serverless as Functions as a Service only, which seems to be what's being done in this article.
It doesn't really talk about serverless in the general sense as we normally would, where you're just talking about trying to remove lots of infrastructure and use things like SQS, S3, DynamoDB, not just Lambda. But we have talked before in the realm of compute, comparing FaaS, EC2 and Fargate, about how the price of serverless kind of has to change. And we can link back to that article again. Since that article was published, the price of Lambda has changed somewhat, with tiered pricing for bulk usage.
So it may be heading in the right direction. But as of today, if you were to design a workload that perfectly maximizes utilization of an EC2 machine, which is the example that DHH uses in the article, and then compare that price with Fargate, Fargate will cost you about three times more. And if you were to run the same workload on Lambda, it could be two and a half times more expensive than Fargate, or up to seven times more expensive than EC2. So that's if you're just looking at raw compute cost and not factoring in all the other parts of the equation. So there is a point there. And we should seriously look at that, make the calculations and ask ourselves some tough questions if things are going to be significantly pricey for our business. It's not the case for everybody, but you definitely have to do the calculations. What do you think is wrong or missing from this article?
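To give a flavour of the kind of calculation we mean, here is a minimal Python sketch comparing the raw compute cost of keeping roughly equivalent resources (about 2 vCPUs and 8 GB of memory) busy around the clock for a month on EC2, Fargate and Lambda. The prices in it are approximate us-east-1 on-demand list prices and will drift over time, so treat them and the resulting ratios purely as an illustration: check the current AWS pricing pages, and remember this ignores free tiers, savings plans, tiered discounts, per-request charges and everything else that goes into total cost of ownership.

```python
# Back-of-the-envelope comparison of raw compute cost for ~2 vCPU / 8 GB
# kept busy 24/7 for a month. Prices are approximate us-east-1 on-demand
# list prices and change over time -- verify against the AWS pricing pages.

HOURS_PER_MONTH = 730

# EC2: e.g. an m5.large (2 vCPU, 8 GiB) at roughly $0.096 per hour.
ec2_monthly = 0.096 * HOURS_PER_MONTH

# Fargate: billed per vCPU-hour (~$0.04048) plus per GB-hour (~$0.004445).
fargate_monthly = (2 * 0.04048 + 8 * 0.004445) * HOURS_PER_MONTH

# Lambda: billed per GB-second (~$0.0000166667), so 8 GB of memory running
# continuously for a month, ignoring per-request charges.
lambda_monthly = 8 * 0.0000166667 * 3600 * HOURS_PER_MONTH

for name, cost in [("EC2", ec2_monthly),
                   ("Fargate", fargate_monthly),
                   ("Lambda", lambda_monthly)]:
    print(f"{name:8s} ~${cost:7.2f}/month  ({cost / ec2_monthly:.1f}x EC2)")
```

The exact multipliers you get out of a naive sketch like this depend heavily on the instance family, commitment discounts, how well you can actually keep an instance utilized, and how your workload balances CPU against memory, which is why the numbers quoted in the episode won't necessarily match what this prints. The point is simply to do the arithmetic for your own workload rather than assume the answer either way.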
Luciano: Yeah, I think the main thing that I found missing, which maybe gets a little mention in the other article but not in this one about serverless, is total cost of ownership. Because when we talk about cloud total cost of ownership, we mean the costs required to host, run, integrate, secure and manage your workloads. And of course, if you are managing your own data center, those are significant costs that need to be taken into account when trying to compare the cost of a serverless solution with the cost of that same solution running on premises, or maybe even just on EC2.
You'll need to do a lot more maintenance when you're running more VM-style workloads. And that's a cost that needs to be taken into account to have a fairer comparison. Also, generally, when you manage your own VMs or your own data center, you probably need a lot of specialized staff who can operate this infrastructure. You also need to take care of electricity, cooling, physical security of the buildings, racking things up, making sure all the cables are connected correctly, UPS systems.
And also there is an element of opportunity cost, because if you're focusing so much of your business's energy on these kinds of elements, you're probably getting distracted from what your business actually needs to deliver to customers. So that's also something that needs to be kept in mind. And so, yeah, it's fair to say that calculating total cost of ownership is really, really hard to do correctly. I don't even know if there is a definition for what correctly means, but everyone, I think, should make a genuine effort to try to understand this particular choice. If you go with one option or the other, you cannot just look at one dimension and not consider the other. You need to consciously look at both types of cost structure and decide which one might be more convenient for you. So yeah, I don't know if there are other points that you have in mind where you would disagree with the article?
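As a very rough illustration of what that multi-dimensional comparison looks like, here is a minimal Python sketch. Every figure in it is a made-up placeholder and the cost dimensions are far from exhaustive; the only point is that a fair comparison has to sum all the dimensions for each option, not just the raw compute line.

```python
# Hypothetical back-of-the-envelope TCO comparison. All figures are made-up
# placeholders -- replace them with your own monthly estimates.

serverless = {
    "compute_and_managed_services": 4_000,  # Lambda, SQS, DynamoDB, S3, ...
    "ops_engineering_time": 1_500,          # monitoring, IaC, on-call share
}

on_premises = {
    "hardware_amortised": 2_000,            # servers and network gear over their lifetime
    "facilities": 800,                      # power, cooling, rack space, UPS
    "ops_engineering_time": 6_000,          # racking, patching, securing, on-call
}

def monthly_tco(costs: dict[str, float]) -> float:
    """Total cost of ownership is the sum of every dimension, not just compute."""
    return sum(costs.values())

print(f"Serverless   ~${monthly_tco(serverless):,.0f}/month")
print(f"On-premises  ~${monthly_tco(on_premises):,.0f}/month")
```

Which side comes out cheaper depends entirely on the numbers you plug in, which is exactly the exercise each team has to do for itself.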
Eoin: Well, I think total cost of ownership is the big one and I think you're completely right. That's a difficult thing to quantify. So it's something that both sides of the argument can hide behind for sure. But the other one is just around the agility you get with serverless. I would say in our experience in general, serverless helps teams to be more agile and ship products faster. That's not true for everybody.
If you're learning serverless for the first time, I think in one of our very first episodes, "Is serverless good for startups?", we talked about how, if you've got a skill set and it isn't serverless, you might be better off not using it at a certain point. So it's not a one size fits all, but I would say in general, if you can invest in it and you've got the existing expertise somewhere within your organization, you can benefit massively during the innovation phases because of the low cost of experimentation with serverless.
You can try things, swap them out, quickly adapt and move on. So I would say it also makes it easier to reverse a decision and create that two-way door where, if you build a system and it does end up being expensive or suboptimal for any other reason, you can switch things off and immediately stop paying for it. And you don't have to worry about the capital expenditure upfront and the sunk cost fallacy that comes with investing a lot of money in one particular decision.
That's one of the really massive benefits of serverless that isn't spoken about quite enough, I think. A lot of this is because a lot of the responsibilities that traditionally would fall under the umbrella of different development teams are just delegated to the cloud provider itself. So we're talking about security, high availability, reliability, scalability, et cetera. And that should allow teams to be more focused on the business logic and the application. There's always a trade-off there as well. You also have to understand that with new paradigms, there's new learning that has to occur, with new challenges. So you have to bear that in mind too, and keep going with your eyes open. Certainly, with massively distributed, event-driven serverless applications, you need that investment in observability and operations. In response to DHH's comment around lock-in, is there really an insurmountable lock-in with serverless?
Luciano: Yeah, I have mixed feelings about that, because I think it's true that every cloud vendor has different APIs for Functions as a Service, at least. And I remember back in 2018, I think, the Cloud Native Computing Foundation was trying to bring all the cloud vendors together, at least to try to define a common set of events and provide kind of a unified interface. I don't think there has been much progress in that initiative, or at least not up to this day.
I think all the cloud providers are very different in the way you are supposed to write your own Function as a Service, the events that you get, and the different capabilities that you have within your Function as a Service. So definitely, that's something that we cannot deny. It's definitely there and it can be a problem. The effect is that if you want to change provider, for instance, it's not something that you can do as a lift-and-shift kind of process.
You will probably need to rewrite some stuff. And this is true not just for FaaS, but also for serverless compute in the broader sense. Even if you use containers, for instance with Fargate, container-based serverless in the context of AWS is still very specific to AWS: how you configure networking, storage, permissions, for example. And again, even if the container itself is probably something you don't need to change in the event of a migration, you still need to change a lot of stuff around the configuration of how that container is supposed to run in another environment.
So I would say that in general, lock-in is something that maybe cannot really be avoided. It's something that always exists because every technology choice you make in one way or another causes some degree of lock-in. And we can go deeper and start to look at web frameworks, libraries, hardware vendors, like every one of these choices. It's something that is going to lock you in to some extent. And if you need to change, you'll need to incur some kind of cost and redesign some elements of your solution to adapt to a different approach.
So yeah, I suppose that another question we might ask ourselves is: OK, let's admit that there is lock-in, but is the serverless lock-in bigger than the lock-in you would get with another approach, maybe, I don't know, managing things in your own data center? And in a way, I feel this would be a very fair question to ask, but also a very difficult one to answer properly. Because I think it really depends on how you build your own applications.
And that can, of course, go both ways. You can build a very good application on premises and a very bad application in a serverless environment, and therefore one might be easier to move than the other, but the reverse applies just as well. So I would say that, in general, trying to think about how to build applications with less lock-in is an exercise that you should do anyway, regardless of whether you go for a serverless approach or a different kind of approach, maybe your own data center on premises.
And, of course, there are some common suggestions there that you can follow. For instance, you should try to decouple your business logic from vendor-specific APIs, whether they are coming from, I don't know, your own FaaS provider on the cloud, or maybe they are coming from the hardware that you picked on premises. You should try to isolate that code into libraries that are easy to swap so you can keep the core of your application as, let's say, pure as possible so that you don't need to change anything there.
But then all these integration layers, you should be able to write them in a way that they can be swapped easily in case you decide to change that kind of integration. And this is a common practice in software. You've probably seen this in many, many books or talks. It's not something new and I think it's just best practice that you should follow anyway, even if you are not considering the option of switching your environment in the future.
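To give a concrete flavour of what that decoupling can look like, here is a minimal Python sketch of the idea: the business logic depends only on a small storage interface, and the vendor-specific details live in thin adapters that can be swapped. The names are purely illustrative, and the S3 adapter assumes you have boto3 installed and credentials configured.

```python
from typing import Protocol


class ObjectStore(Protocol):
    """The small storage interface the business logic depends on."""

    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...


class S3Store:
    """Adapter for AWS S3 (assumes boto3 is installed and credentials are configured)."""

    def __init__(self, bucket: str):
        import boto3
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


class LocalStore:
    """Adapter backed by the local filesystem, e.g. for on-premises use or tests."""

    def __init__(self, root: str):
        from pathlib import Path
        self._root = Path(root)

    def put(self, key: str, data: bytes) -> None:
        path = self._root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self._root / key).read_bytes()


def archive_invoice(store: ObjectStore, invoice_id: str, pdf: bytes) -> None:
    """Core business logic: it has no idea which vendor is underneath."""
    store.put(f"invoices/{invoice_id}.pdf", pdf)
```

Swapping providers then means writing one more small adapter rather than touching the core of the application.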
The other thing that I think goes a little bit in favor of serverless is that serverless by nature forces you to write very granular units of code, like the function unit itself. So that's something that in the event of a migration can be actually convenient because it doesn't force you to go for a big bang migration where you have to take an entire solution and move it in one go. You can actually have the freedom to move very specific functions.
Maybe you realize one function is very expensive and it's not convenient to run it in the cloud. You can just move that one function. Or maybe you can decide, I want to rewrite this one function in another language because maybe that language can be more efficient, can be cheaper, can be easier to maintain. You can do it with that one function. You don't have to rewrite everything. So actually, in a way, I think serverless can be, at least from this particular point of view, can be easier to migrate incrementally than other solutions.
And then there is another great quote about vendor lock-in that I really like, by Kelsey Hightower. It's actually an old tweet, probably from around 2017 or something like that. And he was saying that instead of trying to avoid vendor lock-in, you should concentrate on switching costs. Basically, you need to try to answer the question: how easy is it to create a new solution and how much does it cost, as opposed to how much it will cost to migrate away from that solution later.
So if for you it's very, very easy to build something, that alone might justify a higher cost of switching in the future. Of course, it's not a hard and fast rule. Everyone needs to try to understand what these costs are, weigh them up, and decide: OK, this is convenient for me, I'm going to go for this. Or: this is probably too worrying, I'm not going to go for it and I'll keep going with maybe a more traditional approach that is very well known within the company. So I think, yeah, it's definitely important to have this conversation with more data points and more dimensions. It's of course easy to say that serverless doesn't work if you only look at one data point. I think there is a lot more that needs to be put on the table to decide whether serverless can be convenient or not for your particular project and your particular company. Now I don't know if you want to add any final take on serverless and what we think about the future of serverless in general?
Eoin: We believe that serverless does bring significant benefits to the table. Hopefully we've managed to convince people. I think everybody needs to embrace it and figure out where it works and where it doesn't work for them. If you take it as your default choice but keep a healthy degree of skepticism, I think that's a good approach. It always comes with its own trade-offs. You just need to understand and evaluate them, but that goes for every choice. And that choice should be made based on looking at the data and the facts a little bit deeper, and not just based on some well-written opinion piece. It's a trend that isn't going to stop with serverless. Businesses need to deliver products fast and have to get going quickly. Serverless can really help there, so don't lose out.
Luciano: Yeah, I think this probably covers what we wanted to share, and I think we did a good job at giving another point of view. And again, it's just another point of view. So at the end of the day, you have to bring all the different opinions together and decide for yourself what you believe. But we are really curious to know what you believe. So definitely let us know in the comments here if you think that serverless is actually going to be more and more prevalent in the future of the cloud, or maybe it's just something that is going to fade away and we will come back to more traditional approaches. Or maybe there is something else that we are not considering that is neither serverless nor on-premises, and maybe that's something we should be focusing on more. And before leaving you, I want to mention that there is actually a very interesting resource, an article by Jeremy Daly, which we will have the link to in the show notes. It's a very good response to the first article by DHH, "Why We Are Leaving the Cloud". And I think that, again, if you're interested in this kind of conversation, it brings yet another opinion to the table. And it can give you more things to think about so that you can form your own opinion. So yeah, thank you very much for being with us and we will see you in the next episode.