Help us to make this transcription better! If you find an error, please
submit a PR with your corrections.
spk_0: Did AWS just build an IDE to rival your favorite code editor? Yes, they did. Meet Kiro, an AI-centric take on VS Code that promises to turn an empty repo and a bunch of loose ideas into working code with an approach that they call specification-driven development. It imports your VS Code configuration so you can start fast, and then it guides you through creating requirements, design documents, and a clear task plan before you get your agents to work.
We tried Kiro on a real project, and today we will share what clicked, what tripped us up, and what's unique about Kiro. We will also look at pricing, limits, and where it fits among tools like Cursor or Claude Code. So stick around until the end because we will share what we believe the future of Kiro could be and whether you should switch to Kiro or look for something else instead. My name is Luciano and I'm here with Eoin for another episode of AWS Bites Podcast. This episode is brought to you by fourTheorem. Stay tuned for more details about fourTheorem at the end of this episode. So Eoin, maybe we can get started by giving a little bit more detail about what Kiro is. Yeah, so Kiro is an AI-centric IDE developed by AWS.
spk_1: I don't think a lot of people saw this coming, but a few weeks ago there was a lot of hype about it for a very short period. You'll find all the info on Kiro.dev. It's got a dedicated site and, like many other tools of its kind, it's a fork of VS Code. So similar to Cursor or Windsurf. And that makes it easy to move from VS Code to Kiro because it'll import all of your settings, including themes, plugins, and config. And one of its top features is that it offers an agentic workflow. So unlike the single-prompt iteration you might have done before in Copilot or ChatGPT, you can describe in plain English what you want and it will try to achieve the given outcome. So it can read and edit files, run commands, search information online, and more. And it then basically runs in a loop trying to achieve the given goal. So as an example, you can tell it to run tests to make sure the changes it makes don't break anything. If the tests fail, it can read the output of the tests and then try to iterate and fix the issue until it reaches a stable state. And you can also turn it into a more interactive collaborator. So you can tell it to ask clarifying questions or to request more context if it needs it. And you can also tell it to come up with a plan and request user validation before executing it. So it's a lot more collaborative than previous tools of its kind. So Luciano, what do you think? What can Kiro do for you? Yeah, I think it's fair to try to provide some example use cases.
spk_0: One that I really like is when you are starting on a project that has been created by maybe a team that you just joined. So you don't really have a lot of context. You can just open Kiro, open the project and ask it to explain the structure of the existing code base. And I think that's a really useful feature to just get you started quickly and do all of the onboarding stuff. And similarly, if you need to generate some documentation or improve existing documentation, that's another excellent use case. You can just ask Kiro to, like, maybe read some files and update the docs. And generally, this is the kind of task where these tools really shine. And another use case is when you start totally from scratch, and you literally have a blank slate, and you want to quickly start creating a new project or a prototype. Kiro can actually help you with that. And we'll talk more about an example that we played with. Or you can even work on features or fixes on existing projects. So if you've used Cursor, Windsurf, Copilot Agent Mode, Claude Code, Open Code, Cline for VS Code, or any other of these AI-agentic tools, the experience is exactly the same. It's just that this is the product that AWS is building and promoting. So it's a more AWS-centric take on this kind of product. So one thing where I think Kiro is trying to innovate a little bit more than all of the competitors is what they call the spec-driven approach or specification-driven approach. And we'll talk a little bit more about that later. But I think it's fair at this point to mention what the current status of this project is, since it's very new.
spk_1: Well, it was launched back in early July with an open beta and quickly moved into a closed beta with a waitlist. And we were both lucky enough to get in early. It currently supports Claude Sonnet models 4 and 3.7. And during the beta, the model access was totally free. And this is a big draw for a lot of people because it was completely unlimited. And that's very rare these days to get access to a powerful model like that for free. And possibly one of the reasons why everyone was so excited to try it. Of course, in reality, there are limits. We haven't used it enough to hit those limits. We'll get into that in a sec, but it seems that a lot of people in the AWS community have reported that these limits are very stringent and they make it hard to really evaluate the tool. Of course, these limits are going to change once Kiro is out of beta and becomes a paid product like Cursor and the others. So let's get into all of that pricing discussion later. But one of the big differences with Kiro is the vibe versus spec model. Luciano, do you want to go into that?
spk_0: Yes, so as I said, this is probably the most innovative piece of Kiro, this spec-driven development. And I think to really appreciate what that means, we need to say that AI-driven development is all about context. And we can define context as basically the combination of a specific prompt that you are giving to the agent with somewhat clear instructions and expected outcomes, plus everything that you have in your project. So basically you are giving the agent access to additional information like local files, or maybe other documents that you can provide in some other way. Sometimes it can even be diagrams or pictures, because all these models support not just text, but they can also read images and audio and so on. So all of that together is basically something we can define as context. And context can also be enriched even more if you use tools like MCP servers, but we're not going to deep dive too much into that. Just keep in mind that that's yet another thing that you can use to basically provide even more information to your AI agents. So effectively, the idea is very simple. The more context and the more details you can provide about your projects and the features that you're trying to build, the more likely it is that the agent is going to do a good job, and you end up with a result of an acceptable quality. And if we think about that, to be honest, humans work more or less the same way, right? If you are working with a colleague and you can provide very clear details and requirements, chances are that at the end you end up with a much more fruitful collaboration. If you just give very loose specs, so to speak, then your mileage may vary, right? You don't know what your colleague is going to come up with because they need to make a lot of assumptions and you might not be happy with some of these assumptions. So the main question in general, and this is a question that goes even beyond Kiro, I think, in the industry at this stage with all these new AI agentic tools, is how do you provide good context to an AI model?
spk_1: It's still not a perfect science, and everyone in the industry is still trying to figure out the best practices. I suppose every one of these agentic AI tools gives you ways to enrich your prompts with extra context, but they all tend to be very loose and it's still not ideal. Kiro differentiates itself a bit with a very structured and prescriptive approach, which is what they call spec-driven AI coding. And this is especially useful when you're starting a new project or feature work from scratch and you want to give it a bit more structure. So you start by giving Kiro a loose prompt with a high level idea of your goal. Then Kiro will guide you through the process of creating this specification. And the spec is ultimately a collection of three documents that are useful to describe what you want the agent to achieve. And they can also serve as human-readable documentation, which is no bad thing. So what are these three documents?
spk_0: Yeah, the first file that Kiro generates for you when you just provide a very loose prompt (and we'll give you a full example later on) is called requirements.md. So it's a markdown file and it will contain a high-level introduction and a list of user stories with requirements. And there is actually a very specific format to the way that these stories are defined. And what I really like to see is that there are clear acceptance criteria. So there is this structured way of saying when something happens, then something else needs to be done.
And you can see all these user stories written following this structure. So in a way, I like to think about this document as if you're working with a product manager that doesn't necessarily have lots of technical depth, but they can understand the product and tell you what the product needs to do for different features or things that happen when users interact with the product. So when you get this document from Kiro, of course, this document is sort of a collaboration, if you want. So Kiro gives you a first draft and then you can update it yourself manually, or you can give Kiro extra prompts for things that you want to improve on. And then when you are happy with it, you can effectively move on to the next phase. And Kiro is going to take your requirements.md file as an input and use it to generate a design.md. And the design.md is a more technical document because it contains things like an architecture section and it can even create diagrams in mermaid format. So you can actually visualize those diagrams and, again, iterate on them until you feel that this is really what you want to achieve.
But it goes deeper, because for every component in your diagram, there is a component architecture. And that means that for each one of these components, you'll need to work with Kiro to provide details like: what is the purpose of this component? What are the inputs that are expected for this component? What kind of outputs is it going to produce? And even key methods. So, for instance, Kiro is going to try to infer from your project what kind of programming language you want to use, or you can also tell it explicitly. And at this point it's going to start to generate some sketches for key methods. These are actually signatures of functions or classes and methods, telling you, okay, for this component, this is what I expect to generate in terms of code. It doesn't generate all the code, but just the signatures: the name, the inputs, the outputs, which is generally good enough to really understand what the purpose of that component is. Then there is an entire section dedicated to data models. So very similar to components.
It provides details and also generates some snippets of code, just to tell you: these are the attributes, for example, that I expect to have for each of the different data models. And finally, there are other sections like error handling. So how do you want to handle errors? Do you want to just crash or have a more graceful fallback? How do you want to log these errors? How do you want to display them to the users? Even testing strategy. So do you want to do unit tests, integration tests, even test the data models, and which tools and libraries do you want to use for those? And deployment strategies as well. So of course, if you use Kiro, chances are you're using AWS and serverless. And this was our example as well.
So in that case, Kiro will start to ask you: okay, you are working with Lambdas. What kind of configuration do you expect in terms of memory? What is the trigger for that Lambda? Are there specific code dependencies that I need to keep in mind? And even performance expectations. So this is really a very technical document. And I imagine it like you are sitting down with a CTO or a team lead, or somebody that's very technical on your team, and you have to work together to come up with all these details. And again, this is another step in the process, very interactive. When you are happy with it, you can click to go to the next step. And in the next step, all that you have created so far becomes input for the new step, which produces a file called tasks.md. So this is a new file that effectively is almost, again, like you are bouncing the ball back to the product manager and saying, okay, now we have a very clear idea of what we need to build and how we need to build it. We just need to break it down into manageable tasks. So effectively, Kiro generates a to-do list with some details referencing the other documents. And then each item in this to-do list is almost like a checkbox. So you will have an interactive button there, close to every item, saying: start to work on this task. And this is where you start to hand the individual tasks over to the agent to effectively come up with the implementation for them. Okay.
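As a made-up illustration (not actual Kiro output), a tasks.md for a small feature tends to take roughly this checklist shape, with each task pointing back at the requirements it covers; the numbering and requirement references below are our own example:

```markdown
# Implementation Plan

- [ ] 1. Set up project scaffolding and core interfaces
  - Create the project structure and wire up the entry point
  - _Requirements: 1.1_

- [ ] 2. Implement the data model
  - Define the main types and validation logic described in the design
  - _Requirements: 2.1, 2.2_

- [ ] 3. Implement the request handler and error handling
  - Wire the handler to the data model and return meaningful errors
  - _Requirements: 1.2, 3.1_

- [ ] 4. Add unit and integration tests
  - Cover the happy path and the main failure modes
  - _Requirements: 4.1_
```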
spk_1: So I guess it's worth noting that all of these documents are created in order and you can tweak them manually, or you could use more prompting to get Kiro to do it for you before you move to the next one. So that makes it feel like it is quite a structured, guided process and you have control over it, but you don't have to do all the work yourself. And what we do like is that if you don't have a lot of clarity on what you want to build, this process helps you to kind of brainstorm it and rubber duck it with the agent and helps you to come up with a better picture before you move on to the implementation. This process alone has a lot of value. And I'd say that, you know, all of these AI code tools are great, but the biggest challenge in software is actually understanding what you're trying to build, communicating it well, understanding the needs of the users and really getting that into a specification. Because once you have that clarity, writing software is always easier anyway, whether you're using AI or not. It is interesting to see that Kiro makes this clear difference between vibe and spec, where vibe, which often for a very good reason gets a negative connotation, is intended as, like, just work off a single prompt and figure it out yourself. Spec is more along the lines of: let me give you all the details in the structured format we just described and let's come up with an implementation plan. Now you can still do the vibe mode in Kiro. You can pick the approach that feels most appropriate for what you want to do. If you just want to update a readme file or do something very throwaway, you can probably just vibe it. But if you're working on something like a complex new feature, it probably makes sense to go with a spec-driven approach. Luciano, you were able to put this into practice with some real-world needs, which is what it's supposed to be designed for. So how did it go?
spk_0: Yeah, I'm going to try to describe the use case that I was trying to work with first, because it requires a little bit of context, but it's actually a real need. So it's a realistic idea, which hopefully helps to appreciate Kiro a bit more. So the problem that I was trying to work with is, I don't know if people here have ever tried to share a product URL from an e-commerce site like Amazon. So let's say a book, for example, and you want to have some easy way to give people a URL, and they should just be able to see the book and buy it if they like it. But with Amazon specifically, there is a problem, because Amazon is not just one single e-commerce store globally, but there are 20, probably even more than 20 stores. There is like the United States one, Australia, UK, Italy, Ireland, Turkey, etc.
And each one of them has its own domain, for example, amazon.com, amazon.it, amazon.ie, etc. So if you want to link a product, you suddenly have a problem because you don't necessarily know the country that is going to be best for whoever is going to click the link. So the most complete option realistically that you have is to create 20-plus URLs, one for each existing domain, and just put them somewhere on a page and then hope that the user is going to find and click the right one. So my idea was, okay, why don't we create a service, like a Lambda, for example, an HTTP Lambda, that acts as a single entry point, figures out what the user's location is, and then redirects the user to the correct URL for that particular product. And of course, you can generalize this idea a little bit more. So not just focus on Amazon, which was my use case, but I thought, okay, let's maybe make something that can be useful to more people. But in general, whenever you have the need to create, like, a geo-aware redirect system, you can have this single entry-point URL, and then some kind of routing configuration that tells you, okay, if we detect the user is from this particular country, this is going to be the target URL where we want to redirect the user.
And of course, you can make this configuration a little bit fancier if you want to support multiple URLs, multiple countries, maybe fallback URLs if you cannot detect the country, or if the current country is not in the list that you have provided. So this was kind of the use case that I wanted Kiro to help me with. So let's build this Lambda with all this logic and figure out how to package it in a way that we can ship it to AWS and have it running. So we started the process from scratch in spec mode. And as we described before, the process starts in a very simple way. So you just need to give it an initial prompt, which doesn't have to be detailed at all. It just needs to be a very high-level description of what you want to achieve. And what I gave it was effectively not more than two lines, something that went like: a Lambda function written in Rust with an HTTP trigger (could be API Gateway or a function URL) that allows you to redirect the user to different URLs depending on their geolocation. That was just the prompt that I gave Kiro. And it started by generating the first file, which is requirements.md. Now I'm not going to read the entire document because it's quite long, but the document that was generated had an introduction, more or less similar to the prompt that I gave at the beginning. But then it started to generate requirements and user stories. So for instance, there was a first user story that said: when a user makes an HTTP request to the Lambda, then the system shall determine the user's geographic location. When the geographic location is determined, then the system shall match it against a configured set of rules. And then you keep going: when you identify the rule, then you need to create a redirect response, and so on. So a very structured format. And these were the first acceptance criteria. And there were more acceptance criteria going even into performance and security.
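To give a feel for the format, here is a rough reconstruction of how one of those user stories could look in requirements.md. This is paraphrased from the description above, not the actual generated file, and the persona and numbering are our own:

```markdown
### Requirement 1

**User Story:** As a link publisher, I want visitors to be redirected to the
right country-specific URL, so that I can share a single link worldwide.

#### Acceptance Criteria

1. WHEN a user makes an HTTP request to the Lambda THEN the system SHALL
   determine the user's geographic location.
2. WHEN the geographic location is determined THEN the system SHALL match it
   against the configured set of routing rules.
3. WHEN a matching rule is found THEN the system SHALL return a redirect
   response pointing to the corresponding target URL.
```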
And of course, it wasn't perfect, as I imagined at the beginning. So this is where the collaboration is a very important step. I kept tweaking it a little bit, either by manual edits or by asking Kiro to fix certain things by itself. And at some point, when I was happy with it, I clicked the button to move to the next phase. And that's where Kiro generated the design.md document, so the more technical document containing more implementation details. So here, one thing that I really appreciated is that it started with a high-level architecture in mermaid format. It wasn't 100% correct as I had imagined it, but it wasn't too bad either. So what I did is I basically copy-pasted that mermaid specification into a mermaid editor, so I could see the preview in real time, and I did some tweaks myself, then copy-pasted it back into the markdown and then also changed a few other details. And at that point, I think I was happy enough to move to the next phase, which is the generation of the tasks.md file. So this is where Kiro takes your design document and your requirements, and it comes up with a list of tasks to implement: basically concrete steps. So it's not going to create very big steps in one go, but very manageable, small steps that incrementally will lead you to the final solution.
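As a side note on the design.md step just mentioned: the mermaid block is plain text you can preview in any mermaid editor. A simplified reconstruction of the architecture Luciano described (our own sketch, not the diagram Kiro generated) would look something like this:

```mermaid
flowchart LR
    U[User clicks shared link] --> E[API Gateway or Lambda Function URL]
    E --> L[Rust Lambda: geo-redirect handler]
    L --> R{Country matches a routing rule?}
    R -->|yes| T[302 redirect to country-specific URL]
    R -->|no| F[302 redirect to fallback URL]
```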
Now, this is where things started to go a little bit wrong. So, so far, I was very happy with the process. Like, I really enjoyed this kind of collaboration and actually realized that I had a few gaps in my understanding of the project. So there were a few things where Kiro actually threw me questions that I hadn't thought about. And I had to actually think really hard: okay, what do we need to do in this case? Let's define a clear expectation. Once you put all of that into writing, I think it's a really good exercise anyway for yourself, for the agent, and even for anyone else that is going to join the project in the future. So all these documents, I think, become persistent knowledge that everyone can access, even in the future. But then, yes, moving to the implementation phase, I actually hit a very nasty bug. So the first task that Kiro generated was about scaffolding the whole project using a tool called Cargo Lambda, which is what people generally use when they want to build a Lambda in Rust. We have an episode about that, and we'll have the link in the show notes.
And Cargo Lambda basically allows you to go from zero to a structured approach where you can say, I want an HTTP Lambda, and it generates an entire example for you that you can build on. So when I say we, I mean myself and Kiro, we kind of decided, okay, we want to use Cargo Lambda, so the first step is going to be to scaffold the project with Cargo Lambda. And because it's going to be an HTTP function, the first thing that Cargo Lambda generally asks you is, like, what kind of Lambda are you building? So it generates a template that is closer to what you are actually trying to do. And the problem is that when Kiro tried to run Cargo Lambda in the terminal, it used an argument that didn't actually exist. So the command failed. And at that point, Kiro somehow didn't realize that the command had failed and what the error was. So it got stuck in a loop where it was working, waiting for the command to finish, although the command wasn't running anymore. And I was waiting for a while. And then eventually I had to ask, like, what's going on? Didn't you see that the command finished? And at that point Kiro kind of pretended that it realized, oh, yes, the command finished, but it looked like it wasn't able to read the output of that command. So it didn't really know what the next step to take was. So I started to assume that there was something wrong, for instance, with my Rust configuration and tried to fix the Rust environment, rather than really understanding that it wasn't able to read the output of the command and that it had used a wrong command in the first place. So I had to do a lot of manual steering to get it to the point where it was able to run the command correctly. But again, even when the command was running correctly, it was stuck in this working mode because it didn't really understand that the command had finished successfully. So that seemed like a bug. And then I started to look online and many other people seem to have very similar bugs. So I reported it on GitHub myself, and many other people are reporting it. So hopefully that gets fixed, and it's going to make the experience much nicer, because at this point this is pretty frustrating from a user experience perspective. This is like the main thing you expect an agent to do: to be autonomous to some degree once you have defined the tasks and the acceptance criteria. And if you have to continuously interact with it to fix this kind of misunderstanding on every single action, then you are just better off doing it yourself. Which, by the way, funny enough, was one of the proposed solutions on GitHub: copy-paste the command and run it yourself. But then you're not using an agent anymore. You're just using a readme that tells you which commands to run. So this was our experience, and although it was a bit frustrating, I think it's fair to say that this product is still in beta and bugs are to be expected. This can be a reasonable bug in the sense that everything else worked fine. The spec-driven approach worked seamlessly. So I just expect all these things will be cleaned up and fixed as we go out of that beta phase. And at that point, I think everyone will have a nicer experience with Kiro. So don't take this as negative feedback or as a "don't use Kiro". Of course, at the beginning bugs are expected, but we believe it's normal and it's only going to get better.
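For the curious, here is a minimal sketch of what the final handler can end up looking like. This is our own illustration rather than the code Kiro produced: it assumes the lambda_http crate and that the function sits behind CloudFront (or another layer that forwards the CloudFront-Viewer-Country header), and the routing table and URLs are placeholders.

```rust
// Minimal sketch of a geo-aware redirect handler (illustrative only).
// Assumes the Lambda is fronted by CloudFront, which adds the
// CloudFront-Viewer-Country header to incoming requests.
use std::collections::HashMap;

use lambda_http::{run, service_fn, Body, Error, Request, Response};

async fn handler(event: Request) -> Result<Response<Body>, Error> {
    // Country code -> target URL. In a real project this would come from
    // configuration (env vars, a config file, etc.), not be hard-coded.
    let routes: HashMap<&str, &str> = HashMap::from([
        ("IT", "https://www.amazon.it/dp/EXAMPLE"),
        ("IE", "https://www.amazon.ie/dp/EXAMPLE"),
        ("US", "https://www.amazon.com/dp/EXAMPLE"),
    ]);
    let fallback = "https://www.amazon.com/dp/EXAMPLE";

    // Read the viewer country, if present, and pick the matching target.
    let country = event
        .headers()
        .get("cloudfront-viewer-country")
        .and_then(|value| value.to_str().ok())
        .unwrap_or_default();
    let target = routes.get(country).copied().unwrap_or(fallback);

    // Reply with a temporary redirect to the chosen URL.
    Ok(Response::builder()
        .status(302)
        .header("Location", target)
        .body(Body::Empty)?)
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    run(service_fn(handler)).await
}
```

In a real version, the routing rules would of course come from configuration rather than being hard-coded, which is exactly the kind of detail the design.md phase pushes you to pin down.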
spk_1: Yeah, I hope so. Let's look at the GitHub issue list and keep an eye on it, because it is great when Amazon tries to do a project like this in open source and on GitHub. But we have seen cases in the past where there's a lot of fanfare when things are open sourced at the beginning, but the repo does not get the attention and resourcing it deserves. So fingers crossed that we'll see a bit of activity there. Maybe we should talk a little bit about a couple of other unique features Kiro has compared to other tools. So those are agent hooks and steering. So agent hooks lets you define a prompt that runs automatically when specific things happen in your IDE, like when a file is created, saved or deleted, as well as on a manual trigger. And this is a bit like custom commands in Claude Code. So examples of that are, like, if you want to review changed files for potential security issues automatically, like check if you've got credentials in your code. Or another one might be, like, an internationalization helper. So when you add a new label in one language, make sure to highlight missing translations in other supported languages and provide an initial template translation. Then there's steering. So steering is basically giving you persistent project knowledge stored in markdown files under .kiro/steering, and Kiro will load that as context on every prompt. So it basically stops you from having to continuously steer it back to the kind of style or patterns you prefer. So it feels very similar to CLAUDE.md with Claude Code or Cursor rules if you use Cursor, but it's a bit more structured. You get default files for product overview, your tech stack and your project structure. And you can also reference live files from the repo inside a steering file, and they'll be loaded as well into the context window of each prompt, which seems like a nice approach. Now, this is all very well and good, but it completely depends on what it costs. So what does it cost?
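To make the steering idea a bit more tangible before moving on to pricing: a steering file is just markdown that gets loaded into every prompt, so a hypothetical .kiro/steering/tech.md (the file name and contents here are our own example) could be as simple as:

```markdown
# Tech stack

- Rust Lambda functions, built and deployed with Cargo Lambda
- Infrastructure defined as code; prefer serverless AWS services
- Run the full test suite before considering a task done
- Handle errors explicitly in handlers: return proper HTTP error responses, never panic
```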
spk_0: Exactly. Let's talk about pricing finally. So this actually is very, very new because only last week AWS published a relevant blog post to disclose what they have in mind after the beta phase is completed. And we'll link in the show notes to that particular blog post so you can see all the details yourself. But let's try to give you a summary as well. So at the moment, there are four different plans.
So there is the free, the pro, the pro plus and the power plan. And each plan has a different price and limits in terms of requests and prompts that you can run, which, by the way, is quite interesting because almost every other competitor is based on tokens. In the case of Kiro, they basically look at the number of prompts, which is, I don't know, I feel a little bit looser as a unit, but might work better because it's probably closer to what a user can understand.
So the free plan gives you, oh, by the way, there is an important difference between vibe mode and spec mode. We already mentioned that Kiro supports both. So vibe mode is just give it a prompt and it's going to try to guess what you really want based on a very loose prompt. While spec mode is that entire process where you work together through a series of documents before the work can actually start.
So of course, you can imagine that vibe mode is much lighter in terms of AI usage, while spec mode is a much more involved process that requires a lot more interaction with AI. So it's more expensive from just a computational perspective. So when you go with the free plan, you get 50 vibe requests per month and zero spec requests. So no spec if you go with the free plan. The pro plan has 225 vibe and 125 spec, and it costs you $20 a month. The pro plus has 450 vibe and 250 spec, and it costs you $40 a month.
So it's exactly double the pro plan in everything. And then there is the power plan, which is like 2250 vibe and 1250 spec per month, and it costs you $200. So there is a steep increase there, but you also get a lot more usage in terms of vibe and spec. There is actually a welcome bonus as well: when you join, for the first 14 days you have, like, a trial run with 100 spec and 100 vibe requests included. But of course, after the 14 days, all of this stuff is removed. So you better use it fast.
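For reference, here are the plans just listed, side by side (figures as quoted from the AWS blog post at the time of recording):

| Plan | Vibe requests/month | Spec requests/month | Price |
| --- | --- | --- | --- |
| Free | 50 | 0 | $0 |
| Pro | 225 | 125 | $20/month |
| Pro+ | 450 | 250 | $40/month |
| Power | 2,250 | 1,250 | $200/month |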
And also you can enable pay-as-you-go. So if you reach one of these limits and you still need to use the tool, you don't necessarily need to upgrade to the next plan, because you can just pay for whatever else you need. And the rates are $0.04 for every single vibe request and $0.20 per spec request. And again, I want to underline that the pricing is not per token. So people might wonder, okay, what does a spec give me? And actually, there is a note somewhere in the article that says that if you try to do complex actions, you might end up consuming multiple requests. So this is something worth noting: I think you cannot just game the system and try to use one prompt to do a million things, because I think there is a mechanism that realizes that, and you will still be billed according to some kind of unit that I don't think is super well described. But yeah, you cannot just let it do everything with just one ask and expect that it's going to cost you only one request. There is, by the way, a very useful usage dashboard that shows exactly what your current consumption is for every kind of request, and whether you are using any of the overages, in case you enabled that feature. So at least you can keep things in check and understand where you're going with your spend. Now, this pricing has been a little bit debated in the AWS community. I've seen some very common complaints, and I think some of them are fair to share here. Like, for instance, when you get zero spec credits at the beginning with the free plan, it makes it really hard to evaluate one of what I believe are the best features of Kiro. So it's a little bit of a shame that users might not even realize how powerful that feature is, just because they are discouraged by the pricing, or maybe because they are on a free plan and don't have any credits anymore, maybe because their 14-day trial expired. So they don't effectively have a way to test if the spec thing is actually really good for them. The other complaint is that cost feels a little bit unpredictable. One task can consume multiple spec or vibe requests. So what happens? How much is this going to cost me? You cannot necessarily tell upfront, when you're starting an action, how much it's going to cost you. And then the split between vibe and spec: it's nice in a way, but it also adds cognitive load, because you always have to think, okay, which mode do I need to use? How much credit do I have left for each mode? And so it kind of makes developers work a little bit more just to really understand how they are going to be billed and whether they actually even have the credits for what they want to do. So I think it's important to say that this pricing decision doesn't really seem final from the AWS perspective. So it's probably important for you to provide feedback. If you have ideas or complaints, make sure you express them to AWS, because I expect this is probably the best phase of this product to try to influence future outcomes in this sense.
spk_1: Yeah, that makes sense. I think it's interesting. I don't know how you can make a better pricing model, to be honest, because I think even when you're using token based pricing, it's still very difficult to anticipate how many you're going to use, especially when it's agentic and the agent is generating prompts for you with all sorts of different contexts from all over the place. So it doesn't seem like a deterministic pricing model is that easy to achieve. Let's see how this one evolves because AWS do, from time to time, move things in a more cost-effective direction. I just wanted to ask some questions since you've spent a lot more time than I have with Kiro. I just wanted to get your overall opinion. So do you think it's valuable as a product in its own right? I'd say yes, more yes than no.
spk_0: And just because of the spec-driven approach: yeah, it made me feel like I'm always working together with a product manager and a very skilled technical person on a team, even when I'm working alone. And I think there's lots of value in that. And that whole process starts from an idea, and even when you think you have a really clear understanding of that idea, you'd be surprised, after going through the process, how many gaps you had in its potential implementation. So definitely there is a huge amount of value in that process alone. Okay. Does it have a USP, like, if we compare it to other tools in the market?
spk_1: Yeah.
spk_0: And I think that's a little bit of the challenge for AWS, because I will go as far as saying yes and no, meaning that the spec-driven approach at the moment is the unique value proposition of this product. But to be realistic, you could recreate a similar experience with any other AI tool. You just need to create the prompts and steps, maybe a little bit more manually yourself. But I think you can easily achieve a similar experience with something like Claude Code or Cursor or any other agentic AI, just by defining a workflow that kind of describes what we just mentioned when we spoke about the spec-driven approach that Kiro has built in. And in a sense, if Cursor, for example, just to mention one, or Claude Code realize, oh, this approach is actually really cool and we can see lots of our users wanting something like that, how long is it going to take them to recreate it and make it native in their own solution? I don't think it's going to be a difficult thing to recreate. So that can be a bit of a problem for AWS, in the sense that they will lose this edge really quickly because it's not really defensible. Okay.
spk_1: Now, biggest question for me is, do you believe Amazon is committed to making this product a success? Yeah, to be honest, I'm unsure.
spk_0: Part of me would like to say yes, because I think I like the product and I see lots of potential value, especially projected into the future. But there were a few things that made me a little bit skeptical. For instance, at the beginning, I saw lots of hype. And this hype felt like it was mostly marketing driven, as if there was a big push organized across the entire community, which is great. I think that's something that should be done by somebody like AWS. But then when I went back to the repo and I saw all the open issues and very little engagement from AWS engineers on all these issues, that made me a little bit worried, thinking, okay, after all that marketing push, there isn't an equivalent push from a development perspective. Maybe it's just that right now, in this phase, AWS hasn't allocated enough people to the project. So that's something that can be easily fixed, in my opinion. But at the moment, there's a little bit of a warning sign that maybe the hype is not well balanced with the actual development of the product. Yeah.
spk_1: And I'd reiterate, I think there are a few projects where we can say we've seen AWS invest in open source on a continued basis. I'm thinking primarily of AWS Lambda Powertools, particularly the Python one, where the open source contributions have been fantastic and sustained over a long period of time. But I think it's an outlier. And a lot of projects we see launched with hype tend to fade and gather dust. We really hope that doesn't happen here. Do you think it's worth migrating at this point from whatever IDE you're using right now to Kiro? That's a difficult one to generalize. I think it's very much up to the individual.
spk_0: I think if people are already using VS Code, there is very minimal difference. Like, even the theme gets imported, so you don't even feel like you are changing IDE. So in that sense, it's not a big deal. Although I think there is an important thing to call out: because Kiro, like Cursor and Windsurf, is a fork of VS Code, I think there is a huge amount of work that needs to happen behind the scenes to keep that fork in sync with the evolution of VS Code, and to keep it always compatible as these multiple forks go ahead with their own histories. So if AWS is not really committed to investing in that, then eventually Kiro is not going to be a VS Code fork anymore. It's just going to be its own thing, and it's going to be impossible to migrate from one to the other without losing features or losing configuration or whatever. So that's probably the risk in the long term. But in the short term, if you use VS Code, you're literally not going to see any difference. So the migration is just, like, the name of the binary that you run, effectively. But I would also say that if you come from other IDEs or editors, I don't think there is a compelling enough story there for you to move, because that will require a lot of mental effort and just embracing a totally different tool. So in those cases, you might be better off with something more CLI driven, like Claude Code, which is something that I really like because you can run it in parallel in another terminal window, and it doesn't affect your main workflow with your IDE. But if you like that experience and you want to use something that is a little bit closer to AWS, you can use the Amazon Q Developer CLI, which is somewhat similar to Claude Code. Yeah. Yeah.
spk_1: I tend to agree with that, because I'm someone who likes to switch between Vim and VS Code, and then specific tools for specific languages. Like, I find VS Code not good enough for Python in general, so I use PyCharm for that. So it would suit me better to use the CLI, which allows me to control the editor separately. But given that AWS is moving into having its own IDE, maybe entering the same space as Microsoft with VS Code, is there a larger opportunity for AWS with this tool beyond just agentic AI?
spk_0: Yeah, I think this is actually the big topic for us where we can make some guesses and maybe give some suggestions to AWS in this episode. I think there is a good opportunity here, not necessarily in the current form. I think the current form is just driven by the AI hype that is still there, it hasn't faded away yet. But I think the larger opportunity for AWS is if they are investing into creating their own IDE, it should be much more tightly integrated with the whole AWS ecosystem.
So AI, of course, should be a prevalent feature, because these days you cannot have an IDE without AI. And we can see that even with very new projects like the Zed editor, which is a really nice project that is trying to disrupt the IDE space in its own way: they are also investing a huge amount into AI. So AI definitely needs to be a cornerstone in any IDE going forward. I think here AWS, as you said, can take a big step forward by trying to imagine Kiro as the equivalent of what Visual Studio (sorry, not VS Code, the full Visual Studio IDE) is for Azure, and try to use Kiro in a way that it becomes that central place where every AWS developer goes to do anything related to AWS: from scaffolding a new project, to writing it, testing it, deploying it, monitoring and operating it in production. When I've seen people using Visual Studio, I see that they only use the IDE to interact with everything in Azure, which is quite impressive. I think Kiro has an opportunity to become that for the AWS world. So maybe, I don't know, maybe they don't share this vision, but I think it would become a pretty powerful product if they invest in that kind of vision. What do you think? Do you agree with this take or is it too wild?
spk_1: I think there's a huge number of users out there who would really like that, especially new users who need more of a guided approach. And especially if you're coming to AWS for the first time these days, the AWS console is so overwhelming, and all the other tools can be too. It would help if your IDE could have much more of a guided experience to support users there. We like to do everything with infrastructure as code and SDKs and CLIs, but we don't necessarily represent the majority of users out there. And if you, like you did, look at users who are embedded in the Microsoft ecosystem, the tendency is more around using the IDE and visual tooling. And those users need to be served as well. Like, it's not necessarily that our approach is the one approach. So maybe there is an opportunity for AWS in terms of greater adoption of AWS compared to Azure.
spk_0: Yeah. And I think this brings us to the end of this episode, but before giving you the closing notes, I want to say thank you to fourTheorem for sponsoring yet another episode of this podcast. At fourTheorem, we believe that the cloud should be simple, scalable, and cost-effective, and we help teams achieve just that. So whether you're diving into containers, stepping into event-driven architecture, scaling global SaaS platforms on AWS, or even just trying to keep cloud spend under control, our team can help you out and we have your back. So visit fourtheorem.com to see how we can help you and to see our customer stories. And hopefully we get to work together.
So don't hesitate to reach out. Now, just to give you the closing notes, we did a very good overview of Kiro. We really enjoyed the spec-driven approach, and we believe that there is a lot of potential for this product in the future if AWS really commits to polishing it and maybe expanding a little bit beyond just the AI focus to integrate it into the bigger AWS ecosystem. Of course, this is just our opinion, so we're really curious to hear what you think. Have you tried it? Do you see yourself using it more in the future, or maybe you just don't like the idea and you prefer other tools? What do you think of the current pricing? And yes, if you have any answers to these questions, we'd really like to know. So reach out or leave us a comment. We'd definitely love to hear your opinion and we'll use it for sure in the following episodes. So stay tuned for more. Thank you and see you in the next one.