Help us to make this transcription better! If you find an error, please submit a PR
with your corrections.
Luciano: At fourTheorem, the cloud consulting company where we work, one of the things that we get asked a lot, both from our potential customers but also from our peers in the industry, is: how do we work? And it seems like a simple question, but in reality it can cover a bunch of different topics. Things like: what's our unique selling proposition and our target customer base? What is the first engagement with a potential customer going to look like? Do we ever say no to potential customers? How do we gather requirements? Why are software projects somewhat unique? How do we make plans, including a desired architecture, estimates and success criteria? What do we do when the work starts and how do we keep iterating over it? And what happens after the delivery? There are so many different ways of working and methodologies to deliver software and cloud projects. We believe we have our own unique way of doing that and today we want to share it with you. Hopefully by the end of this episode, you will know what working with a cloud consulting company like fourTheorem could look like, and you might learn some strategies to make cloud projects more successful. We will also digress a little bit on the history of software practices, common misconceptions and what we believe should be the right way to build software in the cloud. I hope this is going to be a fun ride. My name is Luciano and I'm here with Eoin for another episode of the AWS Bites podcast. AWS Bites is sponsored by fourTheorem, an AWS partner with plenty of experience delivering cloud projects to production. If you want to chat, reach out to us on social media or check out fourtheorem.com. All the links will be in the show notes. So why don't we start by giving a little bit of an introduction to our audience, telling them what fourTheorem is, a little bit of the history, the people and some of our past projects.
Eoin: fourTheorem started in 2017, so that's already a good few years. There are three founders in the company: myself, Peter and Fiona. And I guess the interesting thing about us as the initial team is that we've all been through a lot of startups in the past, like our own startups, working in startups, bootstrapping, funding, building new products, getting them to market, succeeding and failing. That kind of helps shape, I think, our culture and our mission a little bit. One of the founding principles is that we don't want to just become a body shop. In the consulting world, it's pretty common. There's almost an inevitable gravitational pull towards just throwing people at problems. And customers and consultants kind of enable this behavior a little bit instinctively, because when things get tough, people tend to just say, okay, let's add resources. We don't have enough resources to solve our problems, so let's just add more people. And this suits the business model of consultancies very well, because they're just earning money based on time and materials on a day rate. So you just multiply the number of days by the number of people, and you end up with a nice revenue at the end of it. But the problem with that is that it doesn't really solve the core problems in general, the core engineering problems, cultural problems, etc. And we try to do it a little bit differently. So rather than just adding people, we try to focus on almost the minimal number of people to do projects, and focus on the quality of the people and the quality of the work. And so far, we've been able to stick to that after almost seven years, six or seven years, which is good going so far, I think.
And so we focus on a small team. We're deeply technical, I would say, but with a focus on business because of, I guess, our previous startup experience. We optimize for delivering early and often, so continuous improvement where possible. We were founded with a big focus on AI and data science. That was one of the reasons myself and Peter wrote the book on, you know, using Amazon machine learning services in the beginning, to kind of stake a claim to that as a direction of travel. We're seeing that a lot now these days with Gen AI and other machine learning fields: we believed that getting data science and modern data architectures into production in a really optimal way was going to be a big differentiator. And the rest is just focusing on more managed services, so trying to remove a lot of the old school operations and maintenance that legacy systems have seen a lot of in the past, so that people can do more, again, with fewer people. We help people of all different sizes. One of the nice things about working for fourTheorem, I guess, as an engineer of any kind, or no matter who you are, is that we work with startups, but also larger enterprises. There's a virtuous cycle within that, because when you're working with startups, you tend to move very quickly, iterate quickly. And therefore, as an engineer, you learn quickly and course correct quickly. In enterprises, the appetite for speed of innovation is less because there's more risk associated with it. So the pace is naturally different. But when people move from a startup environment into one of our enterprise environments, they bring that innovative culture, lots of new learnings. And then when you move from an enterprise client onto a smaller client, like a startup one, you kind of bring that rigor and process and compliance and governance and all of that stuff that's more common in the enterprise world. So it kind of benefits our clients in a virtuous cycle kind of way, but also our people.
No matter what we're dealing with here, big customer, small customer, I think the first engagement is kind of similar. Do you want to take us through the discovery part? When, okay, we've wooed a new client, or another client has referred a friend of theirs in, and we start talking to them for the first time. How does that work?
Luciano: Yeah, this kind of meeting is what we generally call a discovery session. And there is no commitment from either party to proceed with future work. So it's just an exploration to try to figure out exactly what is the problem that we are discussing and whether we can help solve that problem or not. It generally requires a mix of people. It could be a mix of business people and tech people. It really depends on the company and the type of project. But generally speaking, you need people that understand the business and the problem, both from a technical perspective, but also from a business perspective. And of course, we are going to bring our experience as well into that evaluation of the problem. But before doing that, we need to understand what the company looks like, what the problem space looks like, what the project is in detail and how accurate the understanding of the problem itself is. Because sometimes companies just have a very vague idea and then they haven't really developed a reason for why this idea is important for the business, because they haven't dug deep enough to really understand all the nuances of the project. So by asking very specific questions, we can try to figure out exactly what the level of understanding of that problem is and whether the business really needs to address that problem or not.
Or maybe they need to focus on something else. So I guess focusing on the business challenges is a very important element of this phase of the conversation with the customer, where we are really trying to understand: do you need it for the business? Is this going to be an enabler of some kind? Or maybe something you want to do for innovation? But there needs to be a strong value: if the project is completed, the business is going to have a payback.
So in a way, there needs to be some kind of investment and a return on investment as part of framing the conversation around the project. And of course, there will also be very specific technical challenges that we will need to understand. For instance, if this is something that the business is developing to compete with other competitors in the same industry, what the other competitors are doing is also very interesting. What kind of characteristics does their solution have? Do we need to be better? Do we need to be the same? Do we need to have something maybe similar but slightly different? Maybe it needs to be more performant or simpler. So all these kinds of details will inform the type of architecture and will also frame, from a technical perspective, what the problem and the solution will look like. And it's also very important to understand: is there a tech team? And this is actually something very interesting, because when we work with startups, often there isn't even a tech team, or if there is, it's very small. Sometimes it's just one person.
Sometimes it's even the founders themselves doing a little bit of business and a little bit of tech, and they just need more support. And other times they don't have a tech team at all. So we will become their tech team if we decide to help them. When we work with bigger enterprises, of course, often they have their own tech team. And it's important to understand exactly what that tech team looks like, because our engagement is going to be very different depending on whether we are going to become an extension of an existing team, or whether we are going to become that kind of interim team that is going to develop maybe an MVP to get the company to the point where they can basically keep growing the business and it makes sense to start investing in creating their own internal tech team. And then another interesting question often comes up, because our expertise, and I guess our fame a little bit, comes from being AWS experts.
So sometimes, not all problems are problems that you should solve with AWS. So also evaluating, even if we are brought in as AWS experts, does this problem really require AWS? I think it's a very fair question to ask and assess very honestly. Don't just try to put AWS in there just because you are the expert on it, but make sure that it's something that is really needed, that is going to make the difference for the success of that project. So basically, at this point, we want to come out of this meeting with two possible outcomes. It's generally like half a day, so it's not super intense. And the idea is that at the end, we have quite a good understanding of the problem space, the people involved, and what everyone expects as success criteria. So we might decide that we can help, and that's kind of a positive outcome, or that maybe we are not the right fit for that particular company and problem. And at that point, we just say, okay, this is not going to work. Hopefully, you got some value anyway from all this conversation, trying to describe and nail down the requirements of the project. But if we decide to start helping the company, what we are going to do is create a proposal. And that proposal is effectively a document that says: this is our understanding of the problem and this is how we can help you develop that solution further. And there were also other interesting cases that happened to us in the past where we decided we were not a good fit for that company, because we didn't really believe in the structure, in the framing of that particular project. Sometimes we realized that a founder is really early in the implementation. Maybe they need to do a little bit more market research. So even though we could build a product for them, that product is still likely to fail just because there isn't enough research. So we feel...
We are obliged to say: well, make sure you really understand the problem space before you invest money into this project, and then come back to us later and we can reassess together whether we feel we are going to be more likely to be successful after all of that. So I think it's also important to call out that you need to be a little bit honest that way as a consultant, because we feel that's part of our job. If we have that expertise where we can give a useful piece of advice to our potential customers, even though that means losing business in the short term, I think that can be much more valuable in the longer term. Now, I guess another question that comes up very often when we have this kind of conversation, and maybe when we present a plan to our potential customers, is: this looks very different from what I expected. I was expecting maybe, I don't know, lots of planning and then maybe a very clear roadmap going forward, and then you give me a very precise deadline where everything is done. But when we present our plan, it might look very different from that expectation. So I think it's worth spending a little bit of time discussing our opinion on the history of software development and why we think that there is a large misconception, especially among people that are not really in the software industry, on how software should be built.
Eoin: It's probably worth reminding everybody from time to time about how software projects are fundamentally different, in a few ways, from other projects. And that could be a non-technical founder, but also a business leader in any company. But I think we also forget, and we often have a false sense of security around timelines, deadlines and budget. We kind of forget that software is a relatively new field with only about 70 years of history, and a lot of change has happened in that time.
There's a bit of a bad reputation around software as well because of lots of horror stories around scope creep, bad estimates, delays, significant cost or time overruns, quality expectations being poor, and then, from a team perspective, burnout and death march projects. So those things are quite common, right? It is common, let's face it, for software projects to ultimately fail, and we want to avoid that. So we have a strategy to make sure that we do avoid that. And we just think that it all comes down to the fact that software projects are inherently just more complicated, more complex than people might think, especially when you compare them to a more traditional engineering field like construction engineering or mechanical engineering. So one of the analogies we've been using recently is to compare what it takes to build a mobile website versus what it takes to engineer a coffee grinder. Imagine more of a mechanical engineering example, and let's look at it from a few different angles. In terms of specification, if you're building a device like a coffee grinder, generally you have a clear set of functions. You want to grind coffee to various sizes. The requirements are pretty clear and they don't usually change once the design is finished. Whereas with software, like if you're building a mobile website, even a simple one, the requirements are always dynamic. It needs to adapt to a variety of different user needs and expectations, and those change rapidly.
And I think as well, the constraints are quite different. With your physical device, the design and functionality, ultimately the constraints, come down to the laws of physics, which I think are generally well understood and reasonably stable over time. In software, if you're building a mobile website, the constraints are constantly evolving: technologies are evolving, practices are changing, user behaviors are changing. What's cool and trendy today might be obsolete in just a couple of years. The security context is changing very frequently. So you need to constantly stay up to date, stay secure and adapt. And that's why software projects are always kind of living, breathing things.
They're never really complete. We talked about this, I think, in the last episode or the previous one: you can't just leave software alone, come back to it in two years, and expect everything to work, because the whole environment, the technology landscape, has changed. So it's a living, breathing thing. You need a high performing team of domain experts and software engineers working together to tackle this, right? To address the different nature of software projects, to distill it all into prioritized tasks and deliver on those tasks in a predictable way. So this is, I suppose, worth reminding ourselves of, because when we talk about how we do things, and our process and everything, it's really to address the extraordinary, exponentially growing number of variables that you have to deal with in software projects, that you don't have in other environments where you can use engineering principles, deterministic calculations, and then come up with a Gantt chart and be reasonably confident that you'll meet the delivery. Software has to be much more adaptable and much more agile. How do we then manage software projects? Do you want to share some of our process?
Luciano: Yeah, speaking of the Gantt chart example, another thing that comes to mind is the waterfall approach, which is generally the idea that you have a specification, you do some kind of analysis, from that you do some kind of design, then you move to an implementation phase, then you have some kind of ready product that you just need to test a little bit before you can confidently release, and you're done. So it's step by step from beginning to end; it's kind of a straight process, and it's called waterfall because generally it's represented with this kind of diagram where things are moving down towards the completion phase. And this approach is really interesting because I think there is this misconception that most people believe this is how anything is built, including software, but you could argue that this model is not really something you can use. In reality, it's kind of a conceptual model to understand the different phases of a project, but this is not really how you move things from zero to completion. It's not like one single iteration where the phases are very distinct. And even if you think about building physical products, if you start to compare this model with the agile methodologies that we are going to talk a little bit more about in the second part of this episode, you can see that the agile methodologies actually started in traditional manufacturing. It's not something that was invented in software; it's something that actually came out of Toyota. So even in traditional manufacturing, there is a clear need for a more dynamic process where you can adapt to change, you can adapt to things you didn't expect, you can adapt to new requirements, and try to minimize waste at every step of the process. So it is really interesting to see how, even in traditional manufacturing, this kind of conceptual model is good to understand the basics of the different phases of a project, but it's not realistic.
This is not really how you move a project forward.
So I think it's very important for us to try to demystify, especially with some customers that maybe are not so experienced with building projects, or software projects specifically, that we need to take a different approach, an approach that needs to be more agile. So what do we mean by agile? It's not really easy to define what agile is. There are many different definitions, many different interpretations of different methodologies. And I think ultimately everyone has a slightly different way of doing agile. So it's kind of a set of principles, and then eventually everyone comes up with their own framework, taking some of these principles and adapting them to their way of working. So our idea of agile is building things in small iterations and having frequent chances to reassess the scope and the landscape. And if there is something that changed, you have the time to course correct. It's never too late to make a change, because you are reassessing things frequently enough that even if you took the wrong turn somewhere, you still have time to come back and try a different approach.
And this is one of the main reasons why the waterfall approach doesn't work, because if you spent a huge amount of time building a project and invested tons of money, and then you come to the end phase, and when you are there, everything in the landscape has changed, your understanding of the problem, what the customers would expect, you just wasted tons of money and tons of time. And going back basically means: let's start from scratch and let's redesign again. And then let's hope that next time things don't move as fast as they did. But we know that most likely they're going to move again. So the only way, really, is to find a process that's flexible enough that it allows you to understand what's going on, understand if you are on the right path, and if you need to change something, have the time to do that as soon as possible. The way you can do that is using a process that is divided into small chunks of iterations, which we generally call sprints. A sprint can be a period that varies between one week and four weeks, typically. And again, this is just an arbitrary amount of time; you just need to pick one size. Ideally, it should be small enough that it gives you frequent chances to reassess, but also big enough that it gives you enough time to have a small objective in mind that you can complete during that timeframe. So it doesn't make sense to have a sprint of one hour, because realistically you're not going to have time to do anything meaningful in one hour. But if you start to think, for instance, about two weeks, you can probably achieve something useful for the business in two weeks. I think the other side effect of this approach is that it reduces the risk of experimentation.
Because if you are trying to do something and you're not really sure whether a possible solution is actually going to be the right one, it is less risky to actually try it out. Just say, okay, we are going to spend one sprint maybe trying that solution. It might not work, but the worst outcome that can happen is that it doesn't work and we learned new stuff. We spent two weeks, and we can consider those two weeks as learning, even if the product didn't really evolve. Our understanding of the problem space evolved, and in the next sprint we can actually use our time even better. And probably, ultimately, that's going to allow us to be even more efficient in the long term, because we had the chance to try a number of different things and we understood much better what works and what doesn't work. So I think it's also important to mention that, for this process to work, it needs everyone to be involved and to understand the project.
It's not just something that technical people do in isolation. It's something that requires all the different stakeholders to really understand the process and be fully involved in all the different phases of the process. For instance, it's very important that everyone understands things are flexible and iterative. We don't have all the answers straight away. We will find out the answers as we go and we will figure out what are the most important questions to ask at every single step.
And there is a mindset of continuous improvement in that sense, again, because you acknowledge that you are starting from a certain stage and you are trying to get better and better as you go, with this kind of iterative approach. It requires great communication skills. This is actually one of the common problems that we generally see: if people are not sharing all their learnings and cannot communicate effectively what the challenges are and what the expectations are, that's where you start to see problems and this process breaks down a little bit.
So it's very important to make sure that everyone is confident in discussing how they understand the project, what their progress is, if they have any blockers, and what their expectations are, maybe at the end of a sprint. And this is something you need to keep doing as much as possible. And sometimes over-communicating is probably a little bit better than under-communicating in that sense. And you can have tools that can help you with that.
For instance, you can have tools that will help you to make sure that there is space during a sprint to have these kinds of useful conversations. Sometimes we talk about retrospectives, which is something that you can do at the end of a sprint to try to have an open conversation about what you think went well and what you think didn't go so well and maybe want to try to improve in the next sprint. Or you can have, for instance, tools that try to measure the output, in terms of the quality of what you deliver, or in terms of how many tasks you actually completed compared to the ones that you wanted to complete during the sprint. And all of that stuff doesn't mean that you delivered poorly. It means that maybe your understanding of the problem space wasn't as accurate as you thought. And then you can take everything into consideration to try to plan a little bit better for the next sprint. Now, talking about measuring the output of a project, I think there is a common complaint in the industry that developers are not really productive. And there are a number of studies that often come up, and they all make very interesting claims, like: a developer doesn't write more than 100 lines of code per day, or doesn't spend more than a certain number of hours coding per day. So I guess, what do you think about that? Is it really important to measure the output of developers that way? Or is there more that developers do in their daily job?
Eoin: Well, code is not just a measure of productivity, it's also an asset, right? When you build a product, you want the work that your team is creating to be an asset that's valuable and growing. But at the same time, code is a liability. I think we hear this expression quite a lot, which means that every line of code is something you must maintain into the future. And every line of code is a line that can have a bug. So what you want to do is get the balance right: maximizing the benefit in terms of business outcome while minimizing the amount of code it takes to create that. And there are lots of ways we do that. We talked about one of our principles being leveraging experts like AWS as partners to manage our infrastructure, so that we have to write less code. And this is a constant journey, trying to remove the code you have and offload things that aren't really specific to your business. We talk about this quite a lot. So let's think about software development. We've seen some studies that say an average software developer, in an eight hour working day, is spending anything from one hour to four hours coding. And it might seem like they're unproductive. But when you look at what the function of a software developer is, it doesn't seem so bad at all. Because as a software developer, you don't generally want to be coding all the time. It's a bit of a red flag if you are, because it probably means you're a little bit isolated from the larger mission.
If we look at all of the activities of a developer: they need to understand the customer and the business need; they need to be constantly honing their skills, because the landscape is changing quickly, so they need to be learning the right tools to do the job and ensuring that the approach is optimal. No matter how experienced a developer you are, learning is a daily part of the job; that's never going to change. Then you have things like analyzing bugs and constraints in existing systems, often systems that were implemented by others who aren't available anymore. And this is almost an advanced skill in software development: understanding software that you don't have full control over. It's a lot easier to start from scratch with a blank canvas all the time. There are a lot of companies out there that will do that, but there aren't a lot that will understand the value in your existing systems that are already working pretty well and generating revenue. Then you have things like updating stakeholders, communicating with them so that everybody's on the same page, sharing your learnings and ideas with peers, learning from your peers, collaborating with maybe QA staff, user experience designers, project managers, and lots of others. Then you have other collaborative activities like creating, reviewing, and updating documentation, which is pretty important if you don't want lots of problems in the future, peer reviewing code, and then writing and even rewriting code, tests, and infrastructure. So the coding part almost comes last on the list, once you've got everything else right. That's the way it should be: getting the balance right with the focused implementation time, you know, this kind of stereotypical view of a developer, headphones on, furiously coding away. That's important, right?
But it only makes sense if you have the right context set, the mission is clear, and you've communicated with everybody else, because otherwise you could be in isolation, furiously coding in the completely wrong direction on something that will have to be rewritten or thrown away later. That's not good. The other thing is that too many meetings and interruptions can kill momentum.
So you have to be conscious about these things and try to get the balance right between focus time and collaboration. Too little communication will lead to a lot of misunderstandings, rework, and huge amounts of waste. So overall, I would say the amount of code produced is not a good measure of productivity at all. That's a mistake. And it can lead to over engineering and the creation of large amounts of technical debt. I think, generally, developers understand the concept of technical debt; for other stakeholders, it's a little bit less tangible. And I think it can be thought about as any other kind of debt, really. Imagine that on your journey to developing a system of value, every time you might need to make a trade off between getting something done quickly, like getting it implemented today, versus getting it done right: maybe you have to wait for some external dependency, maybe you have to do some extra research, or you just have to put a lot of rework into the system in order to get it done right in a way that's really stable for the future.
Sometimes you have to make a trade off and say, okay, let's just take some sort of shortcut today. That's essentially borrowing some technical debt: you're borrowing some future time, really, for right now, to get something done. And a degree of technical debt is inevitable. But if you accumulate it without paying it off, just like any other kind of debt, you will eventually become technically bankrupt, essentially. You'll end up with a system that is unmaintainable, that you can't add features to, that you're constantly fighting bugs on, and that will have enormous cost and productivity impacts down the line.
Maybe before we go on to our sprints and all that process, we can summarize all this by saying that effective software development comes from having skilled experts, from a technical perspective and a domain perspective; a whole team approach where everybody is working together; a clear mission and success criteria, and metrics you can get from those criteria; then your requirements, well defined, broken down into small pieces that can be delivered in fast iterations; and then clear communication and feedback mechanisms without any gatekeeping or silos between the customer and those tasked with delivering the product. Lastly, I think permission to fail, learn and improve is really critical. You can't have agile processes with continuous improvement unless you have a very dynamic organization that has permission to fail, learn from those failures and improve. And that's where you really get the best outcomes. So with that in mind, Luciano, what does the first real day one working with the customer, sprint zero, look like?
Luciano: Yeah, we actually call the very first iteration sprint zero, just because it's a little bit different from what we will be doing for the rest of the project. The first phase requires a little bit more preparation than the later phases, where you already have momentum and clearer planning of what's going to happen, and you keep iterating that way. Sprint zero is also something that we do for a shorter period of time. We generally prefer sprints of two weeks, but for sprint zero, we generally go for just one week. And again, that's because it's a preparation for the more regular sprints. It's kind of a workshop, and we prefer to do it in person, where we try to go through all the available documentation and agree what the output of the delivery should be. In terms of activities, there are a few things that we try to do. It will be a full-time series of days, and we have different project stakeholders involved. There could be domain experts, architects, developers, UX/UI experts; it really depends on the type of product that we're building and the type of people involved in that project. But again, whoever has a perspective and an expectation on that project needs to be involved in this phase, because they're going to have to define exactly what their expectations and success criteria are.
And we need to figure out how to align everyone with that particular expectation in mind. So mission and objectives are at some point mutually agreed. Everyone says, okay, this is what we want to achieve, and this is how we think we should be proceeding for the next sprints. That basically helps us to create a product backlog where we try to define all the features that we need to build. And those features are not just in a random pile where anyone can pick one; it's very important to prioritize the ones that you want to do first, because generally they are the ones that are either going to enable other things or deliver the most value straight away. So it's very important that you spend a bit of time prioritizing what is really important for the business, because that's going to drive the next phases. You definitely want to finish first the things with the most value or the things that are going to unblock more value for the future. The other thing that we need to do is, if there is any dependency on existing systems or legacy software that is used in the company, that needs to be thoroughly analyzed and documented, because this is generally where you can find surprises and blockers. And if you need to create some kind of integration, you really need to understand what that integration could look like and how much work is involved in creating it. Sometimes you need access to systems like that, or access to documentation, and it might not be an immediate thing to get that access. Maybe you need to go through a process where you request access to the systems or to the documentation. So it's really important to identify these kinds of dependencies straight away and make sure you try to unblock them as soon as possible, very early on in the process. At this point, we should have a clear enough idea to be able to design a first architecture. And we generally try to do that at two levels.
One level is very high level. It's kind of a logical architecture: what do we expect the systems to do, and how are they integrated with each other? The other one is a lot more detailed, what we call the physical level, where we actually describe what the implementation of the systems could look like. Are we going to use, for instance, DynamoDB or a SQL database? Which other systems are going to connect to that database? Maybe we need multiple databases. So it's really about nailing down the technologies that you're going to be using to implement this architecture.
And that leads to having a good technical vision for the engineering team. You might also have wireframes if you are working, for instance, with designers. It's really important to try to combine not just the architecture, but also what the product should look like. So it's really good when you can work with a team of people that can take care of designing wireframes and really define what the user experience should look like. That's definitely part of the delivery in that sense. We will also agree on some KPIs, metrics that are going to help us assess: are we being successful in this implementation? Is this product really delivering the value that we imagined at the beginning to customers or users in general? We also need to capture any risks, and anything we need to do to reduce those risks, and all of this is logged so we can revisit it in the following sprints as something to double-check. Like, are we really seeing this risk? Are we doing something to mitigate it? Or maybe this risk is not really that worrying after all, because we learned something during the sprints that makes us more confident that we can avoid it. But it's really important to have a list of those risks, because you always need to make sure you are assessing against them.
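To make this a bit more concrete, a risk log like the one described can be as simple as a structured list that each sprint review walks through. Here is a minimal sketch in Python; the fields, the 1-5 likelihood/impact scoring and the example risks are purely illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass

# Illustrative risk log: each risk gets a likelihood and impact score
# (1-5), and the sprint review walks through the open items in order
# of severity. The scoring scheme here is an assumption for the example.
@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (severe)
    mitigation: str = ""
    closed: bool = False

    @property
    def severity(self) -> int:
        return self.likelihood * self.impact

def sprint_review(risks):
    """Return the open risks, highest severity first, for the check-in."""
    open_risks = [r for r in risks if not r.closed]
    return sorted(open_risks, key=lambda r: r.severity, reverse=True)

# Hypothetical example entries.
register = [
    Risk("Legacy CRM API has no documentation", 4, 4,
         mitigation="Request vendor docs in sprint zero"),
    Risk("Key domain expert unavailable in August", 2, 3),
    Risk("Access to staging environment not yet granted", 3, 5,
         mitigation="Escalated to IT, ticket open"),
]

for risk in sprint_review(register):
    print(f"[{risk.severity:>2}] {risk.description}")
```

Each sprint, the team either updates the mitigation, closes the risk, or downgrades its score as new information comes in, which matches the "assess against them" idea above.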
And finally, once we have all of this, so we have a clear understanding, we have an architecture, we have a list of tasks, there is the million-dollar question: how long is this going to take? Everyone wants to know that. And of course, you can never have a precise answer, especially with an agile approach. You try not to promise a super accurate answer, because you are always learning, and as you learn more, you can be more accurate. But you need to have some kind of estimate anyway, because you cannot just tell the customer, we don't know, this could take forever. You need to be able to provide some value in a specific timeframe. So what we do is we generally assign a size to all the tasks. We call this exercise T-shirt sizing, because we give a small, medium, large or extra-large kind of sizing. And then we have a very simplistic model that allows us to see, depending on the size of the cards, the number of people involved, and how easy it is to parallelize certain tasks, roughly how long the project is going to take. It's just a ballpark figure that gives you a feeling for whether we are talking about weeks, months or years. And of course, the longer it is, the harder you need to work to reduce the scope so that you can deliver something valuable in a shorter amount of time. It's not uncommon to see that a project might take, I don't know, two years after you do all of this research. And that's when you have to go back and say, okay, this is not a realistic project that we can be successful in. We need to reduce the scope, trim down, focus maybe on a smaller area, try to deliver that one first, and reassess everything. And then eventually maybe you come up with something like, this might be three months' worth of a project.
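The "very simplistic model" mentioned here can be sketched in a few lines of code. This is a hedged illustration of the general idea, not fourTheorem's actual model: the size-to-days mapping, the parallelism factor and the Amdahl-style split are all assumptions made up for the example:

```python
# Hypothetical T-shirt sizing model: map each backlog item's size to a
# rough effort in ideal person-days, then adjust for team size and how
# much of the work can proceed in parallel. All numbers are illustrative.

# Illustrative size-to-effort mapping (ideal person-days per item).
SIZE_DAYS = {"S": 1, "M": 3, "L": 8, "XL": 20}

def estimate_duration(items, team_size, parallelism=0.7):
    """Return a ballpark calendar duration in working days.

    items: list of T-shirt sizes, e.g. ["S", "M", "L"]
    team_size: number of people delivering
    parallelism: fraction of the work that can proceed in parallel
                 (0 = fully sequential, 1 = perfectly parallel)
    """
    total = sum(SIZE_DAYS[size] for size in items)
    # Amdahl-style split: the sequential part takes its full time,
    # the parallel part is divided across the team.
    sequential = total * (1 - parallelism)
    parallel = total * parallelism / team_size
    return sequential + parallel

backlog = ["S", "S", "M", "M", "M", "L", "L", "XL"]
days = estimate_duration(backlog, team_size=3)
print(f"~{days:.0f} working days, i.e. roughly {days / 5:.0f} weeks")
```

The point is not the exact numbers but the order of magnitude: a model like this quickly tells you whether you are looking at weeks, months or years, which is exactly what drives the decision to trim scope.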
And at that point, you have de-risked the investment a lot, because you are much more likely to come up after those three months with something valuable, rather than investing years into something that might not turn out to be so important after all. So what happens next? At this point, we are ready to roll. How do we continue?
Eoin: When people think about this approach to software development, it sounds fuzzier and maybe less of a commitment. Maybe it seems like a cop-out from the developers, because you're not committing to a rigid timeline. In fact, to do it right, all it means is that you're doing more planning; you're just spreading it over time, and you're constantly planning. There's that old saying: plans are useless, but planning is critical. And that's what this is really about. Rather than saying, okay, here's a plan with a very fixed scope in a fixed period, we time-box our milestones. So we say, okay, here's what we're trying to achieve, but let's make an initial time box. And for a new customer, a first greenfield project, or actually any type of customer, we try to limit the duration of the first engagement to six to eight weeks, so maybe three or four sprints. That gives us and the customer enough time to see a lot of value. And it's generally a good enough amount of time to set the foundation and deliver something to production that's really valuable from a business perspective, that you can iterate on from that point. It seems like a short amount of time to a lot of people, but if you're really focused and you've got a very high-performing lean team, you can do a lot in that amount of time. After that sprint zero, which is the critical foundation, you generally have a good idea of what that value is, so you can kick off your regular sprint cadence. Sometimes agile teams work in sprints; sometimes it's more of a Kanban kind of continuous taking of tasks. More often than not, we work in two-week sprints. And the idea of that is that it's the feedback loop in terms of talking to your stakeholders and end users and understanding how to course correct your plan.
The main activity within these sprints, of course, is coding: development and delivery of prioritized features and tasks, including all of the best practices like continuous deployment to your production environment. That's something we put in from the very start so that it doesn't become a big effort later. It's always good to get all of those production deployment best practices, observability and quality control in from the very start. It's much cheaper to do it at the very start and then just increment them over time. So it involves programming, the creation of virtual infrastructure in a cloud environment, documentation, tests, all of that. Studies have shown that waiting until the end of a sprint, or for monthly or three-monthly release cycles, to deploy software slows down high-performing teams. So we follow the practice of continuous delivery, so that it just becomes a habit that you do without even thinking about it. It's highly automated and tested. Once you've got good planning in place, the items on your backlog, those features, whatever they are, should ideally take between half a day and three days to deliver. The reason for that is that if you've got unexpected hitches along the way, external blockers, sometimes as developers we end up spending a day or two resolving an unanticipated issue. But if your units of work are small enough, then the impact is less overall, and the time to adapt and course correct doesn't have such an impact on the overall delivery schedule. In two-week sprints, there are always meetings, of course. Some people live by these meetings; when they work well, they're great for everybody and they get a lot of satisfaction from them. But they can become a pain for a lot of developer teams when these meetings are not done well, because they think it's just a bunch of managers distracting you from the work you're supposed to be doing just to talk rubbish.
Believe me, I've worked in companies where agile software development has been done really well, and I've been lucky enough to be in that position. And when it works, it really, really works and makes everybody more productive.
So we try to get a good balance: keep the meetings focused and short and not take up too much of people's time. There's a planning meeting at the start of a sprint with the product owner and the delivery team. That's about reviewing the top of the backlog, checking your priorities, adjusting, getting more clarity on specific features, and just having a good sense of what you're going to try to achieve over the next couple of weeks. Then you have a daily check-in, which is about raising and tackling any blockers, reviewing the work done, and basically the people involved in delivering self-organizing to get that day's activity done. It should not take more than 10 or 15 minutes. If it does, there's a red flag there. It's typical at the end of the sprint then to have a review: a demo with your stakeholders where you showcase everything you've done. You get feedback on it; they say whether it meets the needs and fulfills the mission. You look at your KPIs, your checklists, your risks and your actions. Now, it's pretty common in agile methodologies like Scrum to have that end-of-sprint demo. What we like to do is actually a weekly demo at least, and also ad hoc demos to stakeholders, because two weeks is quite a long feedback loop to wait before showing them work you did maybe 10 days ago, and it doesn't give you a chance to course correct quickly enough. It also often causes you to rush the demo and maybe do a polished demo where you're just trying to show all of the nice things you've built. A demo is much more valuable for everybody when you show the stuff that didn't work as well as the stuff that did work. It goes back to that transparency principle we talked about at the start that we try to adhere to. It establishes a lot more trust.
You can show things in a lot more detail, and it also means there's not a lot of pressure for everybody to be there and to be super engaged in the demo at the end of every sprint. You can maybe miss one every once in a while and it's not such a big deal. So I encourage people to think about more regular, more honest, raw and transparent demos as well. It's also about measuring the value of those meetings: checking that you're not wasting time, that people aren't just phoning it in, nodding along to the meeting while they keep coding. That's important, because what's the point if that happens? You have to make sure that they're valuable, and I always think that if you're not either contributing value to a meeting or getting value from a meeting, just don't show up. You're better off doing something else, and you're better off raising it and saying, okay, how can we make this more valuable for everyone who is there? So we've gone through sprint zero. We've gone through the regular sprints. Luciano, should we talk about milestones and releases, things that are less discussed in agile processes?
Luciano: Yeah, I think it's really important to clarify that this process might seem like something that doesn't really have a clear vision, but it's actually quite the opposite. You are only trying to minimize the risk that something changes and you are not ready for it, or that you misunderstood something and realize it too late, but you are still driven by business needs. And the business will probably have needs like getting new customers, getting a specific feature out that customers are going to be paying for, or maybe there is a marketing element to it.
You might need to have certain things ready before a marketing campaign can be kicked off, or maybe, I don't know, you are discussing with investors and you have agreed certain things with them, so you need to meet certain goals, features or numbers of customers to be able to secure another round of funding. All these things somehow need to be taken into account. So even though you are following this approach that is very agile, you are still driven by milestones and releases that are very closely tied to what the business needs to effectively survive and be successful. It's interesting to find a good balance between keeping that business vision and keeping a methodology that is flexible enough, but it's definitely doable as long as you keep both things in mind. And I think this is very important. This is why it's very important to have both business people and technical people involved in the process, and to have them communicate effectively together and understand each other's responsibilities and how they can work together rather than against each other.
So I think it's interesting that there might be different scenarios. For instance, when you are creating a new platform from scratch, what we try to do in those six to eight weeks is generally come up with an MVP that we have in production. So it's something that we can show to people, to customers, to investors, and that clearly communicates: this is why this project makes sense, and this is why, either as a customer, you should buy this product, or maybe, as an investor, you should invest in this business to move it to the next phase. So I think it's very important for us to agree with our customers on what we want to deliver, and to give them a chance to make sure that they can get that value straight away. The sooner we do that, the easier it's going to be for our customers to evaluate this kind of partnership and decide what to do next. And what to do next is always an interesting question. There might be different outcomes. Sometimes we and our customer realize that, even if we did an excellent job, the customer might be better off continuing on their own. Maybe they can build their own technical team. Maybe they can fully take over something that we built together with them and continue developing it with an internal team that they might already have.
So that's totally an option. And when that happens, it doesn't mean that we haven't been successful. It actually means that we've been extremely successful, and we are going to help the customer move on to the next phase and continue building things on their own. In other cases, we can still decide to continue the partnership. Maybe the customer decides that they want to build more with us, maybe expand that project, build more features, maybe build the next phase of that project, or even just move us to other projects, because bigger companies always have a number of projects going on at the same time and we might be helpful in those as well. Or maybe there are other projects that they want to start as more experimental activities, and we might be the partner that helps them try to build something new. So in summary, today we covered what our principles are, how we work as a company, and how we try to be trusted partners for our customers. We don't just want to bill hours of engineering time; we want to make sure our customers will succeed with their own business goals, and we are the enablers, from a technical perspective, making sure that they're going to deliver the best technical solution that can fulfill that specific business need. We also spoke a little bit about the software world and how there are so many misconceptions about how you should be building software, even though it's not a perfect science and everyone has their own incarnation of what good software development looks like. I think it's important to recognize that there are some guiding principles that are universally recognized, and you should be using them. Then, as long as you have a clear process and you can work effectively with your customers, it's fine to have a slightly different take on how you actually organize the day-to-day. We have our own way, which we described today, and hopefully you found it interesting.
But as usual, we are very curious to hear from you. Do you use a similar process when you work on technology projects? If you are also a consultant, does what we are saying make any sense, or do you do something entirely different? I think it's really important to have a healthy conversation in our circle to compare all the different processes and learn from each other. What really works? What doesn't? How can we grow together and get better at this craft? Definitely let us know what you think. Leave us a comment either on YouTube or reach out to us on socials. All the links will be in the show notes as usual. Thank you very much and we'll see you in the next episode.