Help us to make this transcription better! If you find an error, please
submit a PR with your corrections.
Luciano: What if "Java is too old for serverless" is the biggest myth holding teams back in 2025? Today, we are putting that idea to the test with someone who has seen Java succeed at a very high scale, especially with Lambda. I'm joined by Mark Sailes, former AWS engineer and internationally recognized expert on Java and all things serverless. By the way, if you know me, you probably know that Java isn't necessarily on my list of favorite programming languages, especially when it comes to Lambda. But if there is somebody who can change my mind, that's definitely Mark.
I have compiled a long list of questions, so we'll cover all things Lambda and Java. And just to give you a spoiler, I'm very keen to hear when Java is the right call for Lambda and when it isn't, what are the trade-offs that actually matter, and what are the tools that move the needle, especially when it comes to latency and cost. And you know, we'll probably be talking a lot about SnapStart and provisioned concurrency and things like that. I'm sure that Mark will also share some practical tips, and those tips I'm sure are going to be valuable for both juniors and experts. So I hope you are going to enjoy this episode. My name is Luciano. This is another episode of AWS Bites. And today we are joined by Mark Sailes. So Mark, thank you for joining us. And I just want to start by letting you introduce yourself.
Mark: Thank you very much. And it's great to be here. I've watched a lot of your episodes. They're all really cool. So it's great to be part of this podcast. So yeah, I'm Mark Sailes. I worked at AWS for six and a half years. And in that time, I did a whole heap of different jobs in different industries, all to do with solutions architecture. The majority of my time was as a specialist solutions architect for serverless, so really diving deep into serverless workloads with some of AWS's biggest customers. And that was just a fantastic experience. It really did give me a lot of insight into how big companies are using serverless, and some of the scale that Lambda can handle, which is just fantastic. And then, yeah, that's where I found my niche, which was helping customers who are using Java and just don't have a lot of material to learn from. So I spent a lot of time looking at what their problems were, and then trying to work with the service teams to fix them. And that was something that I really enjoyed and something I hadn't really done before in my kind of developer-led career. So yeah, definitely, you know, looking behind the scenes into the product side of the business and seeing how it works.
Luciano: That's a great intro. Thank you for sharing that. I remember in one of our previous conversations, you mentioned you had been working on something like 100 projects, or close to that, with AWS. And I assume many of them involved Java. So I'm sure you heard a lot of that criticism of Java having been around for a long time, being kind of an outdated language. So I guess my question would be: why? Well, I'm sure that's not true, and that there are ways to kind of dismantle that myth. But in general, why should we bet on Java in 2025?
Mark: You know, I really don't think we should be coming at it in that direction. When I come to a new project, or come back to an existing project that I'm looking at again, I don't start by asking which language I should use. If I'm working as a team lead, or if I've joined a team, or if I'm an architect working with a team, I look at the skills of that team, the application that we're working with, and the wider context. And that's how I'm choosing every part of the stack. Do we have any skills in NoSQL? If we don't have any skills in NoSQL, we probably won't be using DynamoDB. If we haven't got any Go skills, we're probably not going to be writing the code in Go. So I'm thinking much more in that direction. So if I'm speaking with a large bank, and they are wanting to adopt serverless, I am definitely going to help them to adopt serverless, because serverless is really going to help that business to actually gain some momentum and cut some of the overhead that they've traditionally spent a long time dealing with. And Java is one of those things where, you know, it got a bad reputation, and it hasn't been able to shrug that bad reputation off. I talk about it similar to the way that Lambda used to have a bad reputation with VPC. If you've been using Lambda for long enough, you'll remember that if you had a VPC-attached Lambda function, a cold start would take at least 30 seconds. But now, no one even considers that, because it's been fixed. And the same is true of Java. There's a lot of tools and techniques that you can use. Some of them easy, some of them hard, some of them you don't need to invest much time into. It's very easy to get a cold start of less than one second. But can you get the same sort of cold start as a Rust function with 128 megabytes of memory? No. But that's just the different behavior of the languages. And, you know, teams have been using Java for a long time.
They're very familiar with Java. They have all of their tools and libraries written in Java. So I'm much more interested in helping them to adopt serverless, rather than, you know, them having to rip up everything that they know and start from scratch.
Luciano: And to be fair, one of the things that I really like about Lambda and FaaS in general is that you could use one language for most of the code base. And then in those very small pieces of the entire software where you need, I don't know, some different characteristics (it might be performance, it might be something else), you can pick another language that maybe is more suitable for that particular use case.
Mark: Absolutely. This is the very nature of distributed systems and being able to have something that's composable. But also, you know, Java now has the capability to compile to a native binary. This is a technology that Oracle has developed called GraalVM Native Image. So I have seen that very pattern. I was working with a large bank and they produced a Lambda authorizer for API Gateway that had to be low latency. So they spent the time and the effort that is required to use that technology to make sure that it was cold starting in 100, 200 milliseconds with 128 megabytes of memory. But that use case was very specific. It didn't have a lot of dependencies. There were no external network calls. It was able to be done in such a way that it really excelled, and the time and effort spent doing that was worthwhile.
Luciano: Absolutely. And with that in mind, when would you advise teams to use Java on Lambda, and when would you maybe say: actually, for this particular use case, you're better off with something else?
Mark: So in that scenario, the first question I'm asking is: what sort of traffic profile are you seeing? What is the kind of ceiling on cold starts that you need? Because that's probably the biggest blocker to get something below, you know, a one-second cold start with any meaningful application. So when I say that, I guess I mean: does it need to get parameters from Parameter Store or Secrets Manager? Does it need to connect to a database? Does it need to connect to any other AWS services?
Is it some sort of meaningful application like a microservice? It's unlikely that you're going to get a 500-millisecond cold start. So if you need something below that number, you either have to put in additional effort to go to ahead-of-time compilation or use a compiled programming language like Go or Rust or C or C++. But after that, I think the majority of people are building microservices, and the world is built on Java microservices. I mean, the amount of Spring Framework microservices in the world is just phenomenal. And, you know, those are the ones that I see running on EC2 servers, high availability, you know, 10 different environments, and they're just stacking up the costs every month, having to be maintained every month. And those are really the kind of targets that I want to help people migrate to serverless and have a much better life.
Luciano: That makes a lot of sense. I think a slightly related question: what do you think is the unique selling proposition of Java when it comes to serverless and Lambda? Like, what's unique about the language itself that maybe other languages don't have?
Mark: I don't know if it's necessarily to do with serverless, but Java's unique position is that it is the language of integration. I imagine, if there was any sort of data behind this, I'm not sure, but I think it probably has the richest, longest history, the most developed SDKs for a variety of different tools. And I mean, Lambda SnapStart is a really cool technology, being able to have that additional phase where you can affect your Lambda functions at deployment time. So you can do a lot of preparation before your application is ever invoked by a customer. Now, obviously this is now usable by Python and .NET. I'm sure it'll be rolled out to the other languages as well. But yeah, I mean, Java is really the language of integration. And that's where I see the superpower.
Luciano: So I will maybe rephrase that as: it has been more kind of the adoption patterns, rather than the language itself having specific unique characteristics. Maybe what we could focus on is giving people some examples of, I don't know, use cases where you would say proudly: yes, I think Java was the right choice.
Mark: I mean, I've just seen countless microservices where it has run on EC2. It's a microservice that is an office-based application, and it runs from nine to five. I mean, these are the bread-and-butter applications that have been written by large enterprises for 20, 25 years. And there's that sort of maintainability that Java is famous for. There are no breaking changes; it's very easy to upgrade. And I don't see serverless being something that has to be only for small companies, agile companies. You know, I think one of the real benefits is actually in enterprises where they are having to spend a lot of time doing the high-availability, fault-tolerant architectures, where they're just churning out EC2s. And that's where I see the real benefit: bringing that vast volume of applications that has been built over the last 25 years and helping people to gain the benefits of serverless.
Luciano: I will add to that the ability to offload a lot of the security and compliance aspects, just because you give that to the provider, in this case AWS, with the serverless runtime. I think it will be a huge benefit to these kinds of companies, because they have entire departments that generally look after those things, and reducing the work for those departments will be extremely beneficial for those businesses.
Mark: So, you know, for me, that really came to light when I was working at AWS during the Log4Shell vulnerability. This was when Java's largest logging framework had a critical CVE, which actually scored 10 out of 10 for how easily it could be exploited and the effect that it could have. And the Lambda team were actually able to work with the Amazon Corretto team, which is the team that distributes Amazon's version of the Java runtime.
And they were able to mitigate that attack by checking whenever a vulnerable class was loaded into the runtime and stripping away the vulnerable code. And, you know, for me, that was just an amazing capability that that team was able to bring to protect customers within days of the vulnerability being disclosed. And being part of that team while that was happening was a real highlight of my career. You know, the fact that we saved so many customers from being attacked was super cool.
Luciano: That probably goes to show how powerful it is to be able to offload some of that responsibility to a team like the one in AWS that is extremely experienced and has people on call. I think it would be very, very unlikely for even a big company to be able to have a team dedicated to this kind of activity, with that level of efficiency, I'd say.
Mark: And just the fact that, you know, even if it hadn't been so transparent, if it had just been an SDK change to bump to a new version of the runtime, it's just so much easier to do than having to patch runtimes and servers and different OSs. I mean, it was very similar when Spectre and Heartbleed came out. I was actually a customer of AWS when that happened, and we were using Lambda at the time, but all of the other teams on the floor were using EC2, and they were not only having to patch the machines, but there was also a very real risk at that time that the patches would degrade performance on machines. So not only were they having to patch, but they were also having to benchmark to see what capacity they needed to add. Whereas the team that I was leading, on Lambda, very smugly, we didn't have to make any changes. That patch was already applied to Amazon Linux before the announcements were made. So again, you know, these are very, very attractive features to companies that use Java, which are the enterprise companies that have to be compliant.
Luciano: Absolutely. I totally agree on that. Maybe we can revert a little bit back to some of the points we touched on before, because I really like the way you tend to approach new projects and work with different teams, which is what you have been doing for most of your time at AWS. So I will probably ask you a bit more about, what is your, let's say, strategy, for lack of a better word? Like, what kind of common problems and fears do you have to work with? And then how do you effectively end up with the entire team having a great experience and the project being successful?
Mark: I guess typically I get brought in when teams are having a bad time, and they'll probably have spoken to their AWS account team. Maybe they have some experience, but don't have a lot of experience. And basically that kind of request funnels up to somebody in AWS who can handle those sorts of requests. And I was kind of the topmost person in AWS handling a lot of Java Lambda questions. So, you know, typically you'd join a call and people are really enthused about serverless and really wanting to make a good go of it, but just needing some help on how to think about Java, ephemeral environments, and how to use Lambda efficiently.
So, you know, very quickly, you can look at their code and say: all right, okay, you've kind of understood some of it, but maybe not all of it. So, you know, this is an environment that needs to focus on starting up quickly. So do you need all of these external network calls to load in these various things? Can you change your architecture slightly to be more serverless-friendly?
You know, can you get away with changing this library, that doesn't really help you, to this other library that's a lot smaller, a lot leaner, and probably includes all of the features that you actually need? You know, just giving slight advice on cold starts, because that's typically where everyone kind of panics. The developers try something new; they're not entirely familiar with the technology.
They have a bad experience because they're uploading code, pressing test on the console, getting a cold start, changing the code, uploading it, getting a cold start. And they just think that Lambda is really slow. So, you know, typically I talk to them about their use case. What's your traffic profile? What's your existing application latency? And, you know, trying to understand what they're trying to achieve and then working backwards from that.
And often when companies just run a benchmark or some sort of load test, they'll see that actually the performance is completely different to what they've seen in their development cycles. Because their development cycles are always the worst-case scenario. When they do some sort of, you know, nine-to-five traffic pattern, then suddenly the P99 is like 50 milliseconds instead of, you know, two seconds, which is what they see during development. And then everyone kind of calms down, and, you know, then it just becomes an optimization conversation more than a world-ending "how is this so slow" conversation.
Luciano: That actually reminds me of something that I've seen quite a lot myself, which is new people approaching Lambda not realizing the difference between the init phase and the handler phase. And they just ignore the init phase entirely and put all the initialization logic in the handler. And then they don't realize that they are just accumulating all that extra time for every single request, when it could be offloaded to the creation of the Lambda instance. I'm sure I've seen that a million times as well, but I think there are lots of tactical tips like that where, when you have the experience, you can help teams be much more effective just by spotting those things and helping them to understand and fix them.
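The init-versus-handler split described here can be sketched in plain Java. The class below is illustrative and dependency-free (a real function would implement the AWS Lambda `RequestHandler` interface; the class and field names are made up for the example): anything in a static initializer or constructor runs once per execution environment during the init phase, while the handler body runs on every invoke.

```java
// Sketch of init-phase vs. handler-phase work in a Java Lambda function.
// In real code this class would implement the AWS Lambda RequestHandler
// interface; here it is kept dependency-free for illustration.
import java.util.Map;

class ProductHandler {

    // Runs ONCE per execution environment (the init phase): build SDK
    // clients, load configuration, warm caches here, not in the handler.
    private static final Map<String, String> CONFIG = loadConfig();

    private static Map<String, String> loadConfig() {
        // Stand-in for slow startup work: client creation, parameter
        // lookups, connection setup, and so on.
        return Map.of("tableName", "products");
    }

    // Runs on EVERY invoke (the handler phase): keep this path lean.
    String handleRequest(String productId) {
        return "Looked up " + productId + " in " + CONFIG.get("tableName");
    }
}
```

Moving the `loadConfig` call into `handleRequest` would make every single request pay that cost instead of paying it once per environment.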
Mark: And maybe that's a UX problem in Lambda. You know, it's very clear that the handler is called on a customer request, but for different languages, it actually handles the init phase differently. And when you think of all the differences between technologies, between something like an on-demand Lambda function and a provisioned concurrency Lambda function, and the expectations and latency, it's not particularly clear, from a UX or developer experience point of view, how you should be programming those differently for different latency characteristics. So it's easy for us to kind of understand these things after working with the technology for years and years, but I can definitely understand newcomers having a hard time with it. So, very sympathetic.
Luciano: And just to get a little bit more practical, is there, I would say, a specific list of tips that you can give people? Like, I don't know, I'm thinking stuff like which version of Java they should use, and should they always use SnapStart, and how to configure it correctly? And maybe, I don't know, are there specific JVM flags that you would always enable, or maybe consider when to enable, and which values to use?
Mark: Mostly you're not really tuning the JVM anymore. That was definitely something of older versions, but since the newer versions, that's not really a thing. So specifically with Lambda runtimes pre Java 17, there was a JVM flag that AWS recommended, and there's a whole heap of blog articles about it, a feature called tiered compilation. But that was enabled by default in the runtimes from Java 17 onwards.
So Java 17, Java 21, and probably the future Java 25 will all have this flag enabled by default. So at the moment, there are probably no JVM flags that I'd recommend. I think it would have to be a very specific use case where you would go into something in depth, but, you know, for normal microservice architectures, there are no real additional JVM flags. And that's a good thing, because we've worked, or, you know, I worked with the Lambda team to make sure that these were enabled by default.
The sort of thing that I like to tell people straight away is, you know, there's a logging library called Log4j, and this is a logging library that's been around forever. A very, very fantastic piece of software, but it is now one and a half megabytes in size. And this, I think, is just not a tailored solution that you want to be using in Lambda. So now I recommend another library called Penna.
So if you search for "Java logging library Penna", you'll find this fantastic open source library, which is 50 kilobytes. It does structured JSON logging, has zero external dependencies, is super fast, with very low garbage collection overhead. And, you know, these are the sort of things that we need to be looking out for in this community and helping people to understand. You know, you used to have no real consideration about startup time with application servers.
It didn't really matter whether your application started up fast or slow, because your application servers stayed up for a week. But now we do need to care. So what are the best-of-breed application dependencies to help you do these things that you need to do, quickly and effectively? Penna is a drop-in replacement for the most common logging abstraction in Java, which is called SLF4J.
So again, you know, it's not even like you have to refactor your code. You can drop in the dependency and your application code will continue to log, now in a structured-logging way, and you just carry on with your life. But you've gone from a one-and-a-half-megabyte dependency to 50 kilobytes. And, you know, those are the sort of considerations that you need to be looking at through your application code to say: am I really focusing on ephemeral environments? Is there a better way of doing this? And that is a balancing act, because, you know, maybe you don't even need the super, super efficient cold starts, because you have an application profile that doesn't really cause you to have a lot of cold starts. So it's all weighing up how much time you want to invest. You know, technical people always like geeking out on optimization, so it's hard to put the tools down and get on with features.
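To make the drop-in point concrete: code written against the SLF4J API does not change when the backing implementation is swapped from Log4j to Penna; only the build dependency changes (artifact coordinates are omitted here, check the Penna README and your build tool's docs). A minimal sketch of what the unchanged application code looks like:

```java
// Application code depends only on the SLF4J API, so swapping the backing
// implementation (e.g. a Log4j binding for Penna) is a build-file change,
// not a code refactor. Requires the slf4j-api dependency on the classpath.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class OrderHandler {

    private static final Logger LOG = LoggerFactory.getLogger(OrderHandler.class);

    void process(String orderId) {
        // With Penna on the classpath this line is emitted as structured
        // JSON; with a Log4j binding it goes through Log4j. The calling
        // code is identical either way.
        LOG.info("processing order {}", orderId);
    }
}
```

This is why the swap is low risk: the logging calls themselves never change.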
Luciano: That's absolutely fair. I recognize one of my mistakes is always focusing a bit too much on performance, which sometimes makes sense, but not always. Most of the time, as you said, it's more important to ship features and deliver value to the business. And then whenever performance becomes a bottleneck, you can work on it and improve it. So I definitely agree with that point. What about, we mentioned them already a few times, SnapStart and provisioned concurrency? Are those things that you would always use by default, or is there a point where it makes sense to invest in those features, enabling them correctly and learning how to use them correctly?
Mark: I mean, I remember distinctly a conversation with a pretty major bank. There were probably 10 people on the call, and we were discussing provisioned concurrency and the cost of provisioned concurrency. I think they had a concurrency of two at the time. And I was just thinking: man, the time we've spent discussing this problem, and the salaries of everyone in the room. I mean, why are we even, you know, talking about $20, $30 a month?
So provisioned concurrency is definitely a way of, you know, mitigating a lot of the optimizations, but there is a cost involved. I guess what people sometimes forget, though, is that provisioned concurrency can actually be cheaper than on-demand. So if you do have a Lambda function that has significant traffic, then you probably should be using provisioned concurrency, because every invoke with provisioned concurrency is cheaper than on-demand. So if you can use provisioned concurrency at the base utilization, you will actually save money, but it does get a bit of a bad rap as a way of kind of optimizing in exchange for cash. So I'm not against using provisioned concurrency, but again, it's understanding how it works. And a lot of the optimizations that you would do to help provisioned concurrency be even better also help you when you use SnapStart.
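The "cheaper above a certain utilization" point can be made concrete with a little arithmetic. The per-GB-second rates below are illustrative (roughly the published us-east-1 numbers at one point in time; always check the current Lambda pricing page): provisioned concurrency charges a lower duration rate plus a fee for the provisioned time itself, so it wins once utilization is high enough.

```java
// Back-of-the-envelope comparison of on-demand vs. provisioned concurrency.
// Prices are ILLUSTRATIVE per-GB-second rates, not authoritative; check the
// current AWS Lambda pricing page before making decisions.
class LambdaCostSketch {

    static final double ON_DEMAND_DURATION = 0.0000166667; // $/GB-s
    static final double PC_DURATION        = 0.0000097222; // $/GB-s while executing
    static final double PC_PROVISIONED     = 0.0000041667; // $/GB-s just for being provisioned

    // Utilization u above which provisioned concurrency is cheaper overall:
    //   u * ON_DEMAND_DURATION = PC_PROVISIONED + u * PC_DURATION
    // solved for u.
    static double breakEvenUtilization() {
        return PC_PROVISIONED / (ON_DEMAND_DURATION - PC_DURATION);
    }

    public static void main(String[] args) {
        // With these illustrative rates the break-even lands around 60%.
        System.out.printf("Break-even utilization: %.0f%%%n",
                breakEvenUtilization() * 100);
    }
}
```

In other words, a function that keeps its provisioned environments busy more than roughly 60% of the time (with these example rates) pays less than it would on-demand.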
And I guess at the time of recording, SnapStart for Java is, I think, still free, whereas I don't think it is free for the other languages. So, you know, would I use SnapStart? Absolutely. Even just turning it on, without doing any further optimizations, is going to save you latency using Java. So I would definitely use it. It does increase the deployment time, because now you have to use Lambda versions.
And each time you deploy a new version, you have to go through a life cycle where the Lambda function is snapshotted. So when you deploy that new version, that code is put onto a separate fleet of execution environments. It's initialized; you do any work that you need to do; a snapshot is taken; it's encrypted; it's put into storage. That whole process takes time, so your deployments do take longer. So you might not necessarily have SnapStart enabled for all of your development environments, where you want a high change cycle, but you would probably have it on all of your pre-production environments, where you want to have the best performance, performance that's going to be applicable or similar to production.
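The "do any work that you need to do" step in that lifecycle can be hooked into from code: SnapStart supports runtime hooks through the open-source CRaC API (`org.crac` dependency). A sketch of what those hooks look like, where the priming and refresh work in the comments is hypothetical example content, not prescribed by the API:

```java
// Sketch of SnapStart runtime hooks via the CRaC API (org.crac dependency).
// beforeCheckpoint runs before the snapshot is taken at deployment time;
// afterRestore runs when an execution environment is resumed from it.
import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

class PrimedHandler implements Resource {

    PrimedHandler() {
        // Opt this object in to the checkpoint/restore lifecycle.
        Core.getGlobalContext().register(this);
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) {
        // Hypothetical priming: warm caches, pre-load classes, run a dummy
        // request so the loaded state is captured in the snapshot.
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) {
        // Hypothetical refresh: re-establish network connections, reseed
        // randomness, refresh anything that must be unique per environment.
    }
}
```

The restore hook matters because many environments can be resumed from one snapshot, so anything that must be unique (random seeds, connections, temporary credentials) should be recreated there.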
Luciano: That makes a lot of sense. Let's stay a little bit longer on this kind of topic that covers all things optimization, I would say: cold starts and performance, and maybe cost as well. Do you prefer a specific combination of runtime and framework? By runtime, I mean the standard supported Java runtimes versus maybe something like, I don't know, a custom image using GraalVM or something like that. And at the same time, you already mentioned some libraries, like the logging library, but in terms of frameworks: you mentioned Spring is almost ubiquitous in Java, but I know that there has been a lot of advancement with newer frameworks that tend to be, I guess, more optimized for serverless and microservices. So I'm going to phrase the question as: what's your favorite setup when it comes to Java, if you could pick with total freedom?
Mark: So for me, whether I'm building something for myself or whether I'm advising other people, I always start by saying: do you need a framework? Because if you don't need a framework, then you shouldn't use a framework. So especially for event-driven architectures, it's probably unlikely that you need a framework, unless you have an application that really benefits from dependency injection, or, you know, you have an existing set of libraries where you are used to using dependency injection and you want to maintain similarity across the estate, which I can understand.
People get annoyed when, you know, certain teams do things in a special, different way. I think people should have the flexibility to use the best approaches, but I can understand it from both sides. So if you're doing event-driven architectures, where you're receiving an S3 object or processing a message from a queue, I think you should definitely be challenging yourself to not use a framework.
If you're used to using Spring, start with no framework and see how far you get. And then if you have to take on an application framework, then that's fine. Spring is fantastic. Quarkus and Micronaut are probably the next two most popular frameworks. And the big thing about those two frameworks is they were both kind of born at the same time, in reaction to Spring. And they've both been developed in a way that reduces the amount of reflection that is used in the application framework, which is one of the features of Java that tends to lead to more latency.
At the same time, Oracle was developing GraalVM Native Image, and the two things basically accidentally became very well aligned. GraalVM Native Image doesn't really like you using reflection, and these frameworks didn't use reflection. So an application built with Quarkus or Micronaut could very easily become an ahead-of-time compiled binary. Whereas if you were using other frameworks, previously you would have to hint to the compiler that, you know, this is a resource that's dynamically loaded, which became an awkward process.
But since then, Spring has done a lot of work to really make sure that ahead-of-time compilation is supported well in Spring. So with Spring Boot 3 and Spring Framework 6, it's really well supported as well. So, you know, really you're looking at the features of these application frameworks as a whole and picking which suits you and your project best. All of them are very capable, and all of them have a lot of support. So Spring is now owned by Broadcom after the acquisition; there's a large open source team working on Spring. Oracle have been adding more developers to Micronaut and building out the team at Micronaut. And Red Hat have been investing heavily in Quarkus for a number of years. So you've got three really good options that are well invested in and will definitely stand the test of time.
Luciano: That's pretty cool. I think I've only used Spring Boot myself, so I cannot speak for the other frameworks, but I've only heard very good things. So I'm curious. Maybe eventually I will try them and see how they play with Lambda.
Mark: So Micronaut has been developed by a lot of people who spent a lot of time building Spring. So they looked at Spring, took ideas and inspiration from it, and built kind of their version of an improved application framework. And then the Quarkus team have built something from scratch, but it's also very standards-based, whereas Spring and Micronaut don't really follow the same sort of Java enterprise standards. Quarkus is a standards-based framework. So you can imagine migrating to Quarkus from previous, older application frameworks being easier. So those are some considerations that you can make if you're potentially moving or migrating applications, or wanting to move to a more serverless-orientated framework.
Luciano: Great. Let's move to a probably related topic, though it sounds like a little bit of a change of subject. What about testing, and maybe developer experience in general? Like, how do you generally go about testing your Lambda functions in Java?
Mark: So I do a lot of testing locally. I know there are probably kind of like two camps. You know, there's definitely people who love testing in the cloud, and I love testing in the cloud. But I try and focus that on my end-to-end tests; when I'm doing integration and unit tests, I favor doing that locally. So I'm a big fan of LocalStack, and a framework called Testcontainers, which I think supports multiple languages now.
And in Java, it's very easy to, you know, spin up a Postgres database from a Docker container as part of a unit test and integrate my code against that. And in the same way I can boot up an S3 bucket, or, well, actually an S3 service, and then I can create a bucket and, you know, add any items that I need to add to it. And I can do that with other AWS services. So if I want a queue and a Lambda function, it's very easy for me to write an integration test that I can run locally.
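The setup described here might look roughly like the sketch below, using JUnit 5 with the Testcontainers LocalStack module and the AWS SDK v2. The image tag, bucket name, and wiring are illustrative assumptions; check the Testcontainers and LocalStack documentation for current details.

```java
// Sketch: integration-testing S3 code locally with Testcontainers + LocalStack.
// Requires the testcontainers, localstack, junit-jupiter, and AWS SDK v2
// dependencies on the test classpath, plus a local Docker daemon.
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

import static org.testcontainers.containers.localstack.LocalStackContainer.Service.S3;

@Testcontainers
class S3IntegrationTest {

    // Starts a throwaway LocalStack container for the test run.
    @Container
    static LocalStackContainer localstack =
            new LocalStackContainer(DockerImageName.parse("localstack/localstack:3.4"))
                    .withServices(S3);

    @Test
    void createsBucket() {
        // Point a real AWS SDK client at the local S3 endpoint.
        S3Client s3 = S3Client.builder()
                .endpointOverride(localstack.getEndpointOverride(S3))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(
                                localstack.getAccessKey(), localstack.getSecretKey())))
                .region(Region.of(localstack.getRegion()))
                .build();

        s3.createBucket(b -> b.bucket("test-bucket"));
        // Exercise the code under test against this client here, then
        // assert on the resulting bucket contents.
    }
}
```

The attraction of this approach is exactly what Mark describes: the production S3 client code is exercised unmodified, only the endpoint differs.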
Now, obviously there's going to be stuff that I can't cover in those tests, and that's where I move to the cloud for end-to-end tests. Things like capacity, security, permissions: those are the sort of things I'm looking at from an end-to-end point of view, but the rest of the application and the awkward integration stuff I'm doing locally. So I have the best tools: I have my IDE of choice, I have my debugger, I can do the things I'm used to doing and have a lot of experience with. That's my approach. That's what works for me.
Luciano: Actually, I think I do the same, although mostly with different languages, but I generally try to push local testing as far as I can. Maybe it's just out of habit; that's what I've been doing for most of my career, and it's nice to have that fast reload and debugging experience. But at some point you have to start testing in the cloud as well, and you end up with a mix of the two approaches for different kinds of tests. So I would define it as a pretty standard approach. It's sometimes a controversial topic, though, where people will say that with the cloud you only have to test in the cloud, because that's the true, real environment, right?
Mark: I think maybe we can agree that mocking is on the decline, because with distributed systems it's hard to understand what behavior the system should emit. So I'm using mocks less and less, and I probably can't remember the last time I used mocks in a test, because I'm much more likely to favor integration tests with Testcontainers and LocalStack. I feel like that's way more productive and way more useful from a testing perspective than having to work out what the behavior actually is and then mock that behavior. So that would be my hot take: mocking on the decline, integration testing on the up.
Luciano: Which is, I think, something I'm hearing from many people, also with the inverted pyramid of testing model, which goes more or less along the same lines. That said, for simple enough cases I still see lots of value in mocking, just to make it easy for me to unit test complex behavior in reaction to some external event provider or database. But if I have to mock a lot, because the event is very complex and can have so many different states, then there's a point where it's not worth it anymore. You end up spending so much time if you want to be comprehensive, and you also end up with code that is very brittle: every time something changes, you have to rewrite most of your tests. So there's a line where it still makes sense, but once you cross that line it doesn't make much sense anymore.
Mark: I really like having the core journeys as end-to-end tests that run synthetically. When I'm designing systems, I try to design them in a way where I can segment data and make sure that I can send synthetic test data through the system. If I cast my mind back to when I worked in the betting industry, we would have a synthetic football match being played via a test handler.
So that meant there were always events going through the system, whether or not any real football was being played, and that meant we were able to see if any component we'd released had caused a breakage. Because that's the other problem with event-based systems: if there are no events triggering the application, it's hard to know whether it's currently in a working state. So synthetic data going through the system, segmented in some way from production so that it's not displayed on the website, is a really, really good trick as well.
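The segmentation part can be very little code: tag each event as synthetic and filter synthetic ones out of anything customer-facing, while still running them through the whole pipeline. A minimal sketch (the `MatchEvent` shape and the `synthetic` flag are illustrative, not from any real system):

```java
import java.util.List;
import java.util.stream.Collectors;

public class SyntheticTrafficDemo {

    // An event carries a flag marking it as synthetic test traffic.
    // The record shape and field names are illustrative, not a real API.
    record MatchEvent(String matchId, String type, boolean synthetic) {}

    // Every event is processed (so the pipeline is continuously exercised),
    // but only real events are eligible for customer-facing views.
    static List<MatchEvent> publishableEvents(List<MatchEvent> incoming) {
        return incoming.stream()
                .filter(e -> !e.synthetic())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<MatchEvent> events = List.of(
                new MatchEvent("synthetic-match-1", "GOAL", true),
                new MatchEvent("real-match-42", "GOAL", false));

        // All events flow through the system, keeping the health signal alive...
        System.out.println("processed=" + events.size());
        // ...but synthetic ones never reach the website.
        System.out.println("published=" + publishableEvents(events).size());
    }
}
```

In a real system the flag might instead live in an event attribute, a dedicated tenant ID, or a message header, but the principle is the same: synthetic traffic exercises every component while staying invisible to users.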
Luciano: but probably that requires a significant investment as well because you are effectively building like a simulation of, in that case, like a football match, which might not be very trivial to build. but I think these are the sort of things that you end up thinking is a heavy investment.
Mark: Well, maybe it is, but when you are able to, I mean, any non-trivial application, so any enterprise application that's going to have any sort of longevity, these are always going to be cost savings that just come back again and again. And being able to have a, you know, a constant benchmark is a really valuable thing. So I would recommend investing in test tools and something that I always did as a tech leader, being able to, you know, not rely on an external dependency for testing. So if I'm integrating with a, I don't know, a data provider, I would often make a substitute of that data provider so that I could change the behavior. You know, what happens to my application when that provider increases in latency or has a timeout period or some other behavior if it changes the format of that data. So being able to have, you know, really strong testing tools allows you to really test your application. Whereas if you just take the happy path of their integration, often you don't see these side quests that go wrong.
Luciano: Awesome. I totally agree with that. Now I think we are getting close to the end, and I have only one final question: where can people find useful resources? Maybe you can share something more appealing for beginners, and something more interesting for experienced people. I know you have been investing your own time in building material, books, simulations, and lots of other interesting stuff, so feel free to mention all of those things, which I think are super cool and extremely useful.
Mark: So there is a lot of material. A lot of it is maybe not well connected, but I think more and more of it will be connected in the Lambda documentation, so that's a good place to start. The Lambda documentation has a whole Java section with material on how to migrate and how to effectively use Java on Lambda. If you search for that phrase, "effectively using Java on Lambda", you'll probably find quite a lot of material.
And, you know, something that I've been thinking about a lot: I've probably had more conversations about Java and Lambda than pretty much anybody in AWS. That might be a bold statement, but I think the majority of people I've worked with would probably agree with me. So I started to write down all of the common notes that I would discuss with customers.
I'd try and do 500 words a day, and now I think I'm at about 10,000 or 12,000 words, and I've published it as an e-book. So if you go to my website, sailes.co.uk, hopefully linked in the description, you'll find a book you can purchase. It's incomplete, so the price isn't too high, but it has a lot of good material around these sorts of topics: how do I start thinking about optimizations, how do I get to the lowest possible cold start values, what sort of considerations do I need to think about around observability and Lambda. All of my top tips in a very condensed, short e-book, easy to read and easy to get value from.
Luciano: I'm sure that if somebody buys the book now, they will also get future releases, right? Is that the model you have in mind?
Mark: Yes, absolutely.
Luciano: Awesome. Then we'll put all the links in the show notes, so not just the book, but all the other tools and links to libraries and frameworks that we mentioned today. I also know that you have a few simulations about Lambda, SnapStart priming, and in general the lifecycle of Lambda environments: cold starts, reuse, and reclaiming. I think those are also great resources for people starting with serverless and Lambda to really understand the model the platform is giving you. Seeing it visually is much more powerful than reading a piece of documentation and trying to imagine all the different phases in your mind.
Mark: And that's why I spent time creating those things. I think once you understand the execution model of a Lambda function, I think a lot of things click and then you start to understand what is a suitable use case, what is not a suitable use case. Maybe where past decisions on applications don't help future performance. So I think that's the real key thing to kind of understand as a new developer learning serverless or learning Lambda, should I say. Absolutely.
Luciano: And to be fair, I think most serverless environments have similar characteristics. So even if you want to go to something else later on, it's definitely useful to know the ins and outs of Lambda and then transpose them to another provider. Absolutely.
Mark: I mean, you know, a lot of the optimizations that you would do for Lambda, even if you did go to a container environment for whatever reason in the future, you're going to benefit from so much improvement, which means that your horizontal scaling will be faster. And you know, just your costs will be more aligned to the traffic that your application receives. So it's always a win-win.
Luciano: All right. Before we wrap up, I have to give a big shout-out to our usual sponsor, fourTheorem. And I should say that I work for fourTheorem, so of course I'm biased. At fourTheorem, we believe the cloud should be simple, scalable, and cost-effective, and we help teams to succeed with the cloud. So whether you are using containers, trying to build an event-driven architecture, or even just building a SaaS and trying to scale it globally, keep us in mind.
We'd love to work with you. Check out fourTheorem.com, where you can find everything about fourTheorem, including some of our case studies, and of course feel free to reach out and talk to us. That brings us to the end of this episode. Mark, it's been a real pleasure to have you, and I think I learned a lot, so thank you very much from myself first of all. Hopefully everyone listening has learned a lot as well. Feel free to drop your comments and share your experience: we always love to hear from our listeners, learn from them, share everything we learn, and build a better cloud together. So thank you very much, and we'll see you in the next episode.
Mark: Thank you.