Help us to make this transcription better! If you find an error, please submit a PR
with your corrections.
Eoin: If you haven't been living under a rock lately, you have heard a lot of noise and excitement about AI tools. GitHub Copilot, DALL-E and ChatGPT are just some of the latest tools. ChatGPT and other large language models in particular are being heralded as game changers for the future of work across all industries. So how do we escape the noise and make a measured assessment of where this is all going and what it means for software engineering?
Well today we're going to try and do exactly that. We'll talk about what we've been using, what works and what doesn't work, and give some thoughts for the future. Stay tuned to find out how you can use all these tools to your advantage. I'm Eoin, I'm joined by Luciano, and this is the AWS Bites Podcast. AWS Bites is sponsored by fourTheorem. fourTheorem is an AWS partner for migration, architecture and training. Find out more at fourtheorem.com. The link is in the show notes. Luciano, maybe we can just chat through this and talk about what we've been using first. It's not that new in so many ways because for a while we've been using things like autocomplete and autocorrect on our mobile phone keyboards. We've had smart AI-assisted tools in Google Docs and Gmail Smart Reply. Even Google Translate is something probably anybody who's been abroad has used once or twice. What else have you been using recently, or what do you find is particularly cool?
Luciano: Yeah, absolutely. I think now there is just an acceleration of new tools with more overwhelming and unexpected capabilities. The ones that I've been using the most are GitHub Copilot, a bit of ChatGPT, and Notion as well, which has some interesting tools. So maybe we can talk a little bit more in detail about why we're using them and what for. Yeah, it's interesting. With Copilot in particular, I used it when it was part of a developer early access preview.
Eoin: I used it quite extensively and really loved it actually, and I was pretty impressed by it. But I stopped using it after a while, and I'm not sure why exactly, but I think a little bit of it was I was afraid of getting a little bit lazy and not thinking for myself anymore. So I just tried to just turn it off for a while and see how I'd get on. I haven't had the major urge to go back to it, but at the same time, when we're pairing together and pairing with other people who are using Copilot, I do have a bit of FOMO. So what do you think it's good for then, or where does it really shine?
Luciano: Yeah, to be fair, I also started to use it almost as a joke. Like I didn't expect it to be useful. But then I immediately saw value, and I never went back to not using something like Copilot. So maybe I'm biased, because I'm really enjoying using it and I'm seeing lots of good use cases for it. For instance, recently I was writing some unit tests and I needed to create an object for the test that was a little bit convoluted.
It was a complex object with lots of nested substructures, and everything needed to be generated in a way that made some sense for the test. And then, of course, I needed to generate some assertions after doing some operations on this object. As soon as I started writing the first line of that object definition, Copilot immediately completed like 10 lines of code. And when that happens, I'm always a little bit skeptical, because when it writes that much code, I am always afraid that it's going to be missing some important detail, maybe a comma here or a name somewhere else.
So I did check everything, and it was actually quite good. And then it also generated a bunch of assertions which were like 99% close to what I was trying to do. So I still needed to review everything and fix a few small things, but I think it did save me a lot of time. And that's something that's happening all the time: I feel that I'm becoming a bit more productive just because it gives me that suggestion, and gives it in a way that makes it easy for me to decide, do I want the suggestion or not? And if I don't want it, I just keep writing and ignore whatever it's saying. So definitely useful, and I'm seeing value basically using it every day for coding. Yeah, I've seen a few people then use AWS CodeWhisperer. I did try it for a little while, but I found that the responses are just very slow to arrive, which slows down your development flow.
Eoin: And that's a real problem, but I'm sure that's going to improve in the future. AWS does tend to release things early on and see how people get on and then improve them over time. I've also seen that you've got Copilot X, which I think is the future generation of Copilot that has been previewed. I think you can get access to preview. There's a wait list. And from what I see, that's going to allow you to do a lot of other developer tasks like create PRs and documentation easily and maybe more like ChatGPT, use a conversational interface. And you can also use it from your terminal. So if you're trying to find out what's the command to search for a certain string and extract the third column and sort them and find unique values, you could do that from the terminal. Another example I've seen is like generate the FFmpeg command to extract the audio from this and convert it to MP3 or something like that. So that all sounds pretty useful. Again, I wonder, will everybody forget how to do these things for real and what does that actually mean?
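To make those terminal examples concrete, here's roughly what that kind of assistant might hand back. This is a sketch, not actual Copilot X output, and the file names are placeholders:

```shell
# Find lines containing "error", extract the third column, sort, de-duplicate
printf 'a b x error\na b y error\na b x error\nc d z ok\n' > app.log
grep 'error' app.log | awk '{print $3}' | sort | uniq
# → x
#   y

# Extract the audio track from a video and convert it to MP3
# (placeholder file names; only runs if ffmpeg is installed)
command -v ffmpeg >/dev/null && ffmpeg -i input.mp4 -vn -codec:a libmp3lame audio.mp3 || true
```

The text-processing pipeline is exactly the kind of "search, extract a column, sort, unique" request mentioned above, and the `-vn -codec:a libmp3lame` flags drop the video stream and re-encode the audio as MP3.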
Luciano: I should try to see if it helps me with Git rebases.
Eoin: Yeah, yeah. What could go wrong? So let's talk about maybe the headline grabber, ChatGPT. I found it pretty useful, but I've also had very mixed results. What do you think it's good for?
Luciano: Yeah, I've been using it quite a bit, especially for this podcast. I find it extremely useful for coming up with good video descriptions or episode descriptions, starting from our notes or our transcripts. Also for generating, for instance, the YouTube tags for every episode, which is a very daunting task: just trying to think of 20 different keywords that are somewhat related to what we just mentioned today. I think ChatGPT can very quickly give you suggestions that, of course, you can take, not take, or take some of. So I think it's still important to have some degree of human intervention in these things, but it's just making your life easier and making you more productive when you use these tools in a very targeted way.
Eoin: I completely agree with using it for summaries and even also for idea generation, and you mentioned Notion. I think GPT-3 is integrated into Notion now. So if you're a Notion user, you can use it right from within your documents. You can select text and get it to summarize it, rewrite it, change the tone or language of it, or just expand it out with even more detail. But you can also use it for brainstorming or coming up with ideas. One thing I tried was: give me a list of ideas for company activities around Dublin. And it gave me like 10 to 20 very reasonable suggestions, actually. So that stuff is really good for when you're just stuck for inspiration and need to unblock yourself a little bit.
Luciano: Yeah, I have another cool example, where I was actually really impressed with the result. I asked something like: I want to create a course on Node.js Streams, what are the main topics that I should cover? And it literally gave me 10 different chapters. And the order made a lot of sense: it was a progression in the level of complexity of the different topics, and I think it covered all the basics. Now, it's probably not too different from doing a Google search, looking at four or five different pages and then coming up with your own summary of all of that. But I have a feeling that it's just much faster to get a decent result this way. One of the things that ChatGPT is pretty good at is taking text you have and rewriting it or changing its style.
Eoin: It's pretty good. But I've actually also heard from people working at universities as researchers, writing a lot of academic papers. It's quite common that you've got lots of international collaborators, people who are not native English speakers, who are burdened with the task of producing formal academic English in papers. That is obviously a difficult enough challenge on top of doing the research itself, and universities are actually starting to recommend that people run their text through ChatGPT before they submit it for review, because it can get rid of that cycle of reviewers just commenting on language and small wording issues, and accelerate the whole process. So I think that's quite an interesting benefit as well. I've also seen it used for just changing the tone, you know, make this less formal, make it more formal. I like that a lot, and it's something I'll definitely try in the future with my blog posts: just trying to improve the quality of the English. Everybody has their own style that they're accustomed to when they're writing, and sometimes you don't see that you're a little bit verbose or that you have a tendency to repeat yourself. Even just asking ChatGPT to make paragraphs more succinct might help a lot there. Yeah, I think on one side there is the risk that it's going to make creativity fade out a little bit, because you're just going to take whatever the tool gives you and not question whether there is an alternative.
Luciano: But at the same time, it can be an opportunity for actually exploring different styles. For instance, I often take a piece of text that I have already written myself and say, can you try to make this more engaging? Can you try to change the style to be, I don't know, more mysterious? I don't always get good results, but sometimes I get inspiration for changing what I was doing in a way that maybe I wasn't anticipating before. So on one side it's true that there is a risk of losing creativity; on the other side, I think you can still use this tool to foster more of your own creativity by getting some inspiration and then developing from there. Yeah, that's really interesting, because you would worry that it's going to take the personality out of writing, or the individualism. But I suppose whether that happens or not depends on what you do with the response you get back.
Eoin: I think everybody, whether you're a native English speaker or not, or whatever language you're writing in or speaking, everybody's using AI to some degree almost at this point for grammar checking and spelling. Like in Google Docs, it's already built in there. Or if you're using Grammarly, it's using AI to suggest grammar. It should level the playing field a little bit for people.
Luciano: On that topic, I actually recently discovered a VS Code plugin that allows you to use Grammarly for markdown directly in VS Code. And I was recently writing an article for my blog. Basically, this tool works a little bit like ESLint: when there are errors, you see the file turning red in the sidebar, and if you open that file, you see a list of things to fix. So I immediately saw that all my older blog posts were all red everywhere, and I realized that they had a lot of small issues that I wouldn't make today, just because today I have these kinds of tools to help me out and spot the issues straight away. So that's just another, I guess, confirmation of what you've suggested: yeah, we have been using these tools for a while now. But if you compare with a few years ago, when we were not using these tools, people who were not native English speakers were at a bit more of a disadvantage when it came to producing decent written content.
Eoin: Yeah, it's interesting. I mean, with the advent of SMS back in the early 2000s, when people started using short messages, there was this grave concern from a lot of people that it would kill the English language, or any other language, because of all the abbreviations and acronyms and everything. So maybe this is going to go to the other extreme and set a new standard for grammar which is unrealistic for humans, and almost take idiomatic, individual language out of the equation altogether. Just another disadvantage. But on the subject of language, if people have been listening to the podcast for a while, they'll know we have used the other OpenAI model, the Whisper model, for generating transcripts, and we did a whole episode on this. But if you go to awsbites.com you can also see transcripts for our almost 80 episodes so far, all there, and I think this has really been useful for us in a number of different ways. What do you think, Luciano?
Luciano: Yeah, absolutely. I think it's first of all making the content a bit more accessible, because if people want to read through rather than listening to the audio or watching the videos, they can do that now. But also for us, if we were to do that in a more, let's say, manual way, where we would either do the transcripts ourselves or pay somebody else to come up with good transcripts, that would probably have been prohibitive for us, either for cost reasons or time reasons. So I think we found a very good trade-off there that is allowing us to make our content more accessible without really investing too much into it. So that's another, I guess, proof that these tools can be beneficial in a business-related context, or in content creation in general. I haven't seen this yet, but we both use Canva from time to time, and you're a bit of a Canva expert at creating those amazing thumbnails for the podcast. But you mentioned that Canva is now getting a few AI features. What's that like?
Eoin: I haven't used most of them yet. Only the image generation one which looks a little bit like DALL-E so you can just write a prompt and it will generate an image for you.
Luciano: Now, the level of quality I've seen so far wasn't really anything amazing. I wouldn't have used those images for creating anything professional, I suppose. But who knows, maybe that's going to improve over time. Maybe they will start to use other models and get better results. But I also saw that they recently announced a bunch of additional AI-powered features more geared towards editing. So if you already have a picture that you're working on, and there is maybe a part of that picture that you don't like, you can easily highlight a portion of it and say something along the lines of, can you please replace this part with something else? And I mean, in the video I saw, they were getting amazing results. Now, of course, they probably tried to create a video that was somehow amazing and would push people to use the product. So I still need to use it in real life and see what the results are, but it seems very promising. And I saw that Adobe announced something similar, if not this week, then last week. So there is definitely a push, even in the space of graphics and art and design, to use more and more AI features and AI-powered tools.
Eoin: Yeah, I've played around with DALL-E and Stable Diffusion, and I've seen great things with Midjourney examples. It seems that in the image generation space there are a lot of different models and a lot of different tools out there. There's also a lot more controversy so far, I think, around using copyrighted images to train these models, and who is ultimately responsible for the intellectual property, and there are some lawsuits and everything going on.
But there are also pretty interesting use cases, so there's a nice mix of both. There's a serverless application that David Boyne put together. I don't know if you saw that, Luciano, but it's an interesting one where he generates bedtime stories for his kids every night using a serverless application on an automated schedule. It uses ChatGPT to generate a story, then DALL-E to generate some images, and he gets a children's story every day. Yeah, okay, we'll try and throw that link in the show notes for people who want to check it out. Why is this happening now? What's the sudden acceleration? Or is it a sudden acceleration, or is it just that the mass consumer market has suddenly awoken to the fact that all of this has been going on?
Luciano: Probably the latter. I mean, there is definitely an acceleration in the technology itself. Probably this technology now is more, I don't know, fine-tuned, more accessible. I assume also cheaper, because if it was possible to make it so accessible, then training the models, but also running the inference, is, I assume, much cheaper than it used to be a few years ago. And on the other side, there is definitely an education piece: the market is starting to realize that these tools exist, how people can use them, and how they can maybe help people accomplish specific tasks that would have been much more time consuming before. Plus all the news that is catching up and promoting, in one way or another, the fact that these things exist and people can use them. So everything is creating a virtuous circle where this technology is almost promoting itself, right?
Eoin: We've mentioned a few things already about what it's good for in terms of generating readable English, summarizing content, generating documentation. I think we mentioned that in the context of Copilot X, but I've also been using ChatGPT to do that. Generating a readme based on basic information or some code, it's pretty good at starting that off for you. But everything you generate tends to need a bit of scrutiny. What about generating code in general, then? We mentioned how Copilot is useful, but also that you need to check its output, especially when it's long. What about using ChatGPT to generate code? How have you got on with that?
Luciano: I actually tried something recently. I was trying to rewrite something I had in Python in Rust. And just for fun, I basically said to ChatGPT, can you do this for me? And I was actually impressed by one specific detail: in the process I forgot to copy part of the code, and ChatGPT somehow realized that and basically gave me a placeholder saying, you didn't really give me this particular function, so somewhere you need to write the implementation of this function.
So basically in the place where I was calling the function, there was a comment on the side saying, remember to implement this function, or something like that, which I thought was quite impressive. These are the cases where it might seem that ChatGPT is actually understanding and following some kind of logical process, which we know is not really the case.
So I'm not really sure what was happening in the model to figure out that some piece of information was missing and then generate that kind of note telling me, be aware that you are missing some important information. And then, in general, the rest of the code was actually quite good for that particular example. But I also had other examples where it was totally hallucinating. For instance, I said, can you convert this React component into Solid.js?
And because I think Solid.js is a much newer technology, maybe there aren't really a lot of examples out there. It was getting some things right, like importing the right libraries, but then it was importing functions that didn't exist in Solid.js but did exist in React. So it was kind of getting there, but not quite, and it was making a lot of false assumptions. So at the end of the day, the code that was generated was totally rubbish, even though it looked legit. That's probably another risk: you need to be really careful, because if you assume the code is always going to be right, be prepared for disappointment or for nasty surprises. I think it's good to have a starting point, but always validate it yourself and make sure that everything that is generated makes sense to you. Maybe write some tests, or at least test it manually. Don't just trust it blindly.
Eoin: Yeah, I like those two examples, because the first one you mentioned, it recognizing that you'd missed part of the code when you pasted it in, that's the kind of thing that leads people to say, oh, this is getting close to artificial general intelligence. And then when you see the second example, you realize, OK, this is just basically a really advanced search engine. That's what it is. It's a language model. It's not general intelligence. And it's pretty good at spoofing as well. So that's one of the unfortunate drawbacks of GPT: it doesn't really tell you very well when it doesn't know. Instead, it just tries to make something up. Yeah, in many ways, it's like software developers when we're at our worst.
Luciano: That's true. We should probably use it for coding interviews.
Eoin: I'm sure it would do pretty well. GPT-4 has already passed the Google coding interview, I believe. Yeah, I have used it for content creation, and I think that's a really good one. If you want to create a slide deck presentation about something, you could say, give me enough slides for a one-hour presentation, with the topics and the titles and the bullet points. You can even say, suggest some images or visuals I could use. It won't generate the images for you, but it can describe images that you could put in.
When you're faced with a blank slide deck and you have to create a deck for a new talk, or to explain a concept or whatever, there's a bit of friction and inertia when you're just getting going, creating that initial structure and format. And if you have something like this generated to start, then it gets you over that. You can always reshape it and customize it and personalize it. You've been in the process of writing a book, and so have I in the past, and you have that same problem, right? It's very difficult to know how to structure it and what to write. If you were writing another book starting today, would you use GPT to help in the process? Because I think I would really find it difficult to avoid using GPT now that I've seen what it can do.
Luciano: I think I would use it to some extent, but definitely not just saying, ChatGPT, write me this book. I don't think that would give the readers a lot of value. At least at the level of quality we have today, it still needs a lot of checking, it still needs a lot of human input. So I see it more as an assistant, where maybe you have written down a lot of notes and you want somebody to help you break them down, or figure out what the main chapters are, what the progression of topics is, or maybe even just rewrite something that you wrote in a very verbose way into something that is a little bit more digestible. I think in all these cases you can definitely get value. But again, it's nothing more than an assistant, and you still need to put the work in and make sure that everything that is generated makes sense and fits with the rest of the content that you have there.
Eoin: But it is like an assistant at the same time, right? Sometimes when you are in the process of writing, you'd love to have somebody there who you could turn to whenever you wanted and say, what do you think of this? What do you think of this structure? With ChatGPT you've kind of got something approximating that, I think.
Luciano: Another example is that I recently wrote an article on my blog, which was based off of a presentation that I already had. I had the slides. I was actually writing the slides in markdown. So it was even quite easy to just copy paste everything and say, convert this into an article. And the result was very mixed. There were lots of parts of the article that I liked and I kind of took and reshaped them a little bit and other parts that I had to rewrite entirely because I didn't like the generated output at all.
It was either too verbose or it was missing the important bits. But I think overall I saved some time anyway, because I wrote that article, which is a relatively long one that probably takes half an hour to read, in probably four hours. In the past it would have taken me, I don't know, a couple of days to do the same thing. So again, there is definitely value, but don't just take the output and use it. I always encourage everyone to try to make sense of the output and decide for yourself what's good, what's not good, and what needs to be changed. When it comes to the quality of results, one thing I've seen is that there's going to be, I suppose, an interface between a lot of these tools and non-AI services.
Eoin: So ChatGPT in itself, because it has to make up something and always give you a response, it can hallucinate and give you suggestions that are completely off the wall. But they are also working on these plugin ecosystems so you will be able to integrate into proper sources of information and also follow up on actions. So the ability to book a flight or a restaurant through ChatGPT or do your shopping or perform some computation.
Another one is that they're going to integrate with Zapier, so you can run Zapier workflows and then integrate with thousands of different services. So I think that's actually quite good, because it finally opens things up. For a while we had this conversational AI hype bubbling around, and we had Amazon Lex and Alexa and all these voice-enabled systems, but they're kind of limited in what they can do, they have to have very structured menus and options, and they're always a bit hit and miss.
But I think when you combine proper voice recognition with ChatGPT and these other integrations, it will bring all of that to a new level. When it comes to code quality, I mean, I've seen ChatGPT do some amazing things. Since we're talking about this on an AWS podcast, I did try to see what it would be like to build a complete AWS serverless application with ChatGPT. And it started off pretty well. I asked it to build a skeleton CRUD API. Actually, it was a shopping list application, I think. And it generated the serverless.yaml and five Lambda functions for me. And all of the Lambda function code was completely reasonable, almost perfect, I would say. I had to make some minor tweaks so that it would deploy, but it did deploy, and the API worked. So it had deployed a DynamoDB table, Lambda functions, IAM roles, and an API Gateway.
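To give a flavour of what those generated Lambda functions look like, here's a minimal sketch in the same spirit. This is not the actual ChatGPT output: the event shape follows API Gateway's Lambda proxy integration, and the table is passed in as a parameter so the logic can be exercised without AWS (in the real generated code it would be a boto3 DynamoDB Table resource).

```python
import json


def make_handler(table):
    """Build a Lambda-style handler for a shopping-list CRUD API.

    `table` is anything exposing put_item/get_item (a boto3 DynamoDB
    Table resource in real use); injected here for testability.
    """
    def handler(event, context=None):
        method = event.get("httpMethod")

        if method == "POST":
            # Create: store the JSON body as a new item
            item = json.loads(event["body"])
            table.put_item(Item=item)
            return {"statusCode": 201, "body": json.dumps(item)}

        if method == "GET":
            # Read: look up one item by its path parameter id
            key = {"id": event["pathParameters"]["id"]}
            result = table.get_item(Key=key)
            item = result.get("Item")
            if item is None:
                return {"statusCode": 404,
                        "body": json.dumps({"error": "not found"})}
            return {"statusCode": 200, "body": json.dumps(item)}

        return {"statusCode": 405,
                "body": json.dumps({"error": "method not allowed"})}

    return handler
```

In the generated project, each of the five Lambda functions was roughly one branch of this, wired to its route in serverless.yaml.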
Luciano: By the way, did you ask it explicitly to use serverless framework or was?
Eoin: Yeah, I asked it to use the serverless framework. I asked it to generate the serverless.yaml and the Lambda functions. Then I said, okay, well, this is a very, very common serverless 101 example, so I'm sure it's got lots of examples of that in its training data. I asked it to add custom domains and DNS. And at that point it went a bit haywire. The output looked reasonable, but it was like somebody really trying to fake it.
You know, somebody who's really trying to convince you that they know what they're talking about, but they don't. And I could tell by looking at the CloudFormation that it wasn't going to work. So then I said, okay, forget about custom domains. But I thought it would be interesting to try something a little bit more, you know, specialist. So I said, okay, can you reduce the code duplication by introducing Middy?
And first of all, I asked it to do validation as well using Middy. At first it gave me a mix of the Joi library and Middy that didn't really make that much sense. It mustn't be the canonical way to do it. But then I said, okay, just use a Middy approach rather than mixing in Joi. It started to give me really weird syntax, and then started to add in weird serverless framework plugins that I didn't need. And it had like 12 different Middy middlewares in there for no reason. It was also using old versions of the Middy syntax. So I said, please, could you use Middy 1? It upgraded the package.json version, but the code was the same. So I thought that was interesting. This was the GPT-3 engine, by the way, so maybe GPT-4 is going to be better at this. But I think it also points to the fact that as you get into more and more specific technology questions, where you know the training data volume is going to be lower, it will struggle.
Luciano: That's an interesting example. But you also mentioned that Jeremy Daly, I remember we were chatting about this, has written about using AI tools. Is it worth summarizing all of that?
Eoin: Yeah, yeah, I think it's a really insightful article. It's actually in the premium version of the Off By None newsletter, although I think he mentions it in the free version too. Basically, the point that Jeremy was making: he's been using a fair few AI tools lately and is finding them all useful, as we have been as well. But his approach was to think about university education and what the impact will be on it, particularly in the US, where the college fees and everything are really expensive and it's a massive investment.
And yeah, he is suggesting that universities will really need to think about reevaluating the curriculum for all these courses, and that we need to reevaluate our learning and focus on areas where human creativity is really indispensable, instead of what are now becoming easily automatable tasks. He's basically questioning the value of a computer science degree as it currently exists, given that AI can do so much of this for us. So I think it was very interesting. He's saying that, like everyone, we need to think about adapting how we work to focus on areas less likely to be automated. And I thought that was quite interesting. I mean, it does bring up the question, and maybe we can go into some of the drawbacks of all of these tools: is it really automating away these skills? Or, if you've already got the knowledge, are these tools just allowing you to accelerate your workflow? Does it help you if you are at the start of your learning journey, about to embark on a college degree, or will it short-circuit all of that in some way?
Luciano: Honestly, I don't know, because I have a feeling that education will have to change at some point, just because there are a lot of tasks that don't make too much sense anymore, maybe tasks that are too rote, or that are easily automatable with tools. For instance, I remember that ages ago people would have their tables of logarithms. That's something that's been obsolete for quite a while. Maybe we will see something similar in education, where a lot of things that we used to do become less relevant, so we focus on other areas. And definitely it is going to be interesting to see whether we will be able to focus more on areas that require deep engineering or creativity, rather than things that just require sheer knowledge and memorization, which don't really require AI today: computing in general and the web have been solving that problem for a while. AI is just the next step in automating the research and giving you easier access to the results.
Eoin: I think there are probably a couple of ways to look at it. I mean, if you believe that everybody, or most people, will eventually move to cloud computing, then you would expect the roles for, you know, data center engineers to concentrate just within the cloud providers, so they may become more specialist, niche roles. It might also be the same for software engineering in general: if these tools allow people to do a lot of what we currently do in a more automated way, then there will still be a need for specialists, who are rarer, but who understand what this generated code is doing, where it can go wrong, how to troubleshoot it, and the underpinnings.
I mean, this is probably always the case as new generations of technology emerge. If you were learning software development in the 60s, you would have to learn a low-level programming language, and you would need to really understand how CPU and memory work. That hasn't really been the case for a while: CPU and memory have become so cheap, and also so complicated, that people don't really think about them that much anymore. Not in general, I would say.
But at the same time, people who do know those things, who retain that knowledge and dive deep, become increasingly valuable as it becomes a rarer skill. So that's one way to look at it. But yeah, I also think that in general the education system does need to adapt quickly to this sort of thing. Another thing you could say is that understanding how AI models work and how to interact with them, and really getting into that area as a specialty, is also somewhere you could take advantage today, because if this is going to change everything in the future, then it will need a lot more expertise and people who can maintain those systems and work on them. So every challenge brings new opportunities as well.
Luciano: Definitely a growing market so there will be opportunities there.
Eoin: Another area I'm kind of interested in is something we do quite a lot: understanding existing legacy systems. This is a skill that's actually quite difficult to find in the market, you know, finding people who are willing to go into a legacy system and not just look at it and go, oh my god, this is awful, let's just rewrite it from scratch. But the ability to understand what's happening in it, capture the value in it, capture the history and all the tacit knowledge that has been built up, and retain that as you migrate to whatever the target is, be it a serverless architecture or a microservices architecture in the cloud, that's a challenging thing to do. A couple of our colleagues at fourTheorem have been working on this Fission project in collaboration with DCU here, and have had really good results using AI and other techniques to analyze code bases and do refactoring on them. All of these new tools introduce the capability, or at least the possibility, that you could take language models, point them at an existing code base that humans really don't want to look at, and it can tell you, okay, this is what it's doing, these are the domains, these are the bounded contexts and the entities in this application. And here's what a microservices architecture for the system might look like, and it can even generate code templates or part of the code for you.
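As a rough illustration of the kind of workflow Eoin describes (not the Fission project's actual implementation), a script might gather source files and ask a language model to propose domains and bounded contexts. Everything here is an assumption: the `build_analysis_prompt` helper is hypothetical, the codebase is assumed to be Java, and the commented-out API call sketches an OpenAI-style client rather than any specific tool.

```python
# Sketch: build a prompt asking a language model to analyze a legacy codebase.
from pathlib import Path


def build_analysis_prompt(source_dir: str, max_chars: int = 8000) -> str:
    """Concatenate source files (up to a size budget) into one analysis prompt."""
    snippets = []
    total = 0
    for path in sorted(Path(source_dir).rglob("*.java")):
        text = path.read_text(errors="ignore")
        if total + len(text) > max_chars:
            break  # keep the prompt within the model's context budget
        snippets.append(f"// File: {path.name}\n{text}")
        total += len(text)
    code = "\n\n".join(snippets)
    return (
        "You are analysing a legacy codebase.\n"
        "Identify the business domains, bounded contexts and key entities,\n"
        "and suggest how they might map onto microservices.\n\n"
        f"{code}"
    )


# The call itself might look something like this (OpenAI-style API, assumed;
# needs credentials and a real client, so it is left commented out):
# response = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user",
#                "content": build_analysis_prompt("legacy-app/src")}],
# )
```

In practice, as the hosts note below, the model's output is a starting point for human analysis, not a migration plan you could follow blindly.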
Luciano: Yeah, and we've been using this in some case studies with a good degree of success, so it's definitely not rocket science; it is feasible today to do this kind of thing using AI.
Eoin: And I suppose the interesting thing about that is that what we've learned from our experience using these tools is that it really helps you to accelerate a lot of that forensic analysis of legacy code bases, but you still have to use your logic and reasoning to figure out what the target architecture would look like, and then how you get there. And figure out how you do it in a way that doesn't disrupt the business and is incremental and works and doesn't fall over the first time you run it.
Luciano: Yeah, I suppose even just deciding which service you should create first is not trivial at all. You need to have so much business context that it's probably still going to be a human decision for a while. Would you trust ChatGPT to generate code for you in an area where you don't have much expertise? So if you were asked to, I don't know, I'm going to pick a language that I haven't used very much at all, which is Golang.
Eoin: So if somebody asked you in the morning to write a CRUD application in Golang, would you use ChatGPT for it? Would you trust it to generate good output if you didn't have the knowledge to scrutinize it, like you do with the many languages you do understand?
Luciano: That's a good question. I'm always a little bit skeptical of using these tools in general, not just for code, in areas where I don't feel I have enough expertise, just because I am afraid that eventually I am responsible for that content. And if I don't know what I'm doing, like AI is not necessarily going to give me something that I can rely on blindly. I guess I always need to double check. I find it just easier to use these tools to make my life easier when I know something and I just want to speed up the process.
And I had so many instances of the generated code being far off from reality that I learned not to trust it, and not to use it in cases where I wouldn't feel confident doing all the work myself. But maybe that's just me. Maybe that's something that is going to get better over time, like the quality of the generated output, and then it could make sense to use it even in areas where you don't have knowledge, and you can use the generated output as an excuse to learn something: actually trying to validate it and see, okay, it generated this, why? Let me go and figure out what this function means and why it actually does what I'm trying to do. Yeah, it might change the perception. And I think a good question is, if I were a junior engineer, would I use it? Would I have a different perspective? I genuinely don't know the answers. I don't know if you have a particular point of view there, but it's an interesting question.
Eoin: If I was at the start of my learning process and career, I think it would be very appealing, because there are so many frustrating times when you're trying to figure things out and you are lacking knowledge, skills and experience in so many different areas. It's a real shortcut. So what you mentioned just there is actually quite an optimistic point: you can use these tools to get started quickly, but also find out what you need to start being curious about. It can give you a template that helps you get stuff done, but if you have the right mindset, you can take what it gives you and try to understand it yourself. And it possibly reduces the amount of endless trawling of the web, getting distracted, and finding articles that are not really written in a way that suits what you're trying to look for. Maybe these tools can condense that information for you and help you absorb it better. There are pros and cons to how this applies to junior developers, but what you mentioned there gives me a little bit more optimism than I had before.
Luciano: Yeah, the other point of view is: is this better than just copy-pasting from, I don't know, Stack Overflow, or articles that you find by searching on Google? And I have mixed feelings there as well, because from the perspective of getting stuff done quickly, of course this is more appealing, because it does most of the work for you. But on the other side, when you go off and do your own research, you try to read different examples and make sense of them, then copy-paste different parts from different places and glue everything together in some way that should make sense, all of that is a learning journey, and you have many opportunities to learn related things. I don't know if the amount of opportunities for learning is decreasing just because these AIs are doing so much work for you. There are pros and cons, of course: the more manual approach is slower, but I think you have more opportunities to learn, while the automated approach is much faster, but you might just take everything for granted, move on, and lose a lot of important details. Yeah, it throws up so many questions, potential drawbacks and open unanswered questions that might give you a little bit of uncertainty about where the future is going.
Eoin: We mentioned that it's useful for content generation, but if that becomes the norm, then where does the content come from that trains these models in the first place? Is this stuff going to drown out authentic original content generated by humans with creativity and real intelligence? In the code context, even if an AI is developing an application, who runs it? Who maintains it? Who's operating the system? Does it make it easy for fake content to just proliferate across the internet?
Luciano: That's definitely the risk. And I think you're right: when that happens, if it happens, what does content even mean anymore? If everything is auto-generated, you can have endless content, but it's going to be very hard to figure out what's valuable and what's not. So maybe there is an opportunity for people to try to be even more creative, to beat this huge flood of generated content. There is probably an opportunity to stand out even more if you can put in the effort to come up with more original content, with ways of doing things that are, of course, a lot more personal, something that is going to be very hard for an AI to generate automatically.
Eoin: Plagiarism is already a concern for a lot of content creators on the internet, but then if your content gets picked up as training data and gets repurposed in answers or generated content, how do you protect that, if you want to protect it? It's okay, I think, in general for you and me; we want our content to be shared and consumed as widely as possible. But some people are relying on the revenue from the content itself directly, and for them that's a little bit more of a concern.
It's a worry. And also for companies, companies will probably know that lots of people are using this today, maybe they're not being totally transparent about it, and I think companies need to figure out what is their policy for these tools really, really quickly. And there's, again, strengths, weaknesses, opportunities and threats in all of this, right? So that analysis is definitely worth doing.
What happens when people are pasting sensitive information into ChatGPT? If they're pasting in a sensitive document in order to get a summary of it, that can help to communicate a message to the company at large, which is a great benefit, but potentially you're leaking sensitive company information into the public domain at the same time. Then there's a whole question about AI as a game changer, right? So many people are predicting that this is a game changer and will change the way work is done irreversibly.
We first of all have a challenge for us as individuals, which is maybe a bit of an existential threat, or maybe an opportunity: how do I change how I work? But there's also a challenge for our companies, because if you're in a company with a large team of people, it's okay to say, well, how does this change me as an individual? But more importantly, how does it change your company and how you all work together? It's probably much better to tackle this as a group rather than individually thinking, how do I get an edge here? If one person is secretly using ChatGPT to look more productive than their colleagues, I don't really like the sound of that very much, but if you can look at it as a company and say, okay, let's really assess what we do every day, where we spend our time and where we can be more productive, that's probably where you can really get an edge, because the whole organization can potentially benefit.
Luciano: Yeah, and I think that relates to something we mentioned before that in some cases, people are afraid to ask questions to colleagues and therefore it's easier to just ask AI. So probably I would like to take the opportunity to invite people to refrain from doing that as much as possible because I think the opportunities you get by asking a colleague are still much higher than what you can get from AI.
Not just because you can build a relationship and get chances to work with other people, but because I think there is a much more in-depth exchange when you communicate with somebody else. Especially if they work in a different area of the business, you have an opportunity to learn more from their experience and what they do in the company. Or maybe it's somebody with more seniority, and you can use that excuse to learn other things that you didn't anticipate; or even if it's somebody with less seniority, you can still use that opportunity to learn what kind of challenges they are facing, whether you can help them in any way, or whether there's something in the system that should be improved. So yeah, I think if we end up letting AI do everything, we will miss out on so many opportunities and everything will become kind of flat and standard, and instead we should be looking for opportunities not to do that as much as possible. I find it interesting when people say, oh, this is already making me two times more productive, and when you dig into it a little bit more, it's because they have a sounding board and something to ask questions of.
Eoin: But this benefit, this 2x or 10x improvement in productivity, I think you can also achieve pretty well by having a more collaborative spirit with colleagues and friends. The barrier there is ultimately ego and pride, and I recognize it in myself and try to stamp it out wherever possible: when you see yourself thinking, I'm just going to keep working on this and figure it out myself, because then I have achieved something as an individual, when you could equally just ask a colleague, admit that you don't know, and learn from them. The cost is just a little bit of humility. But when you end up achieving something together as a pair or as a team, and I've found this when it works well in organizations and teams I work with, the benefits in terms of work satisfaction, ethos and mood in the team, and then the knock-on effects on productivity, are far greater than some of the benefits people report from just interacting with ChatGPT. So I think there's a lot to learn from it, and it's an interesting parallel.
Luciano: Yeah, I agree 100%.
Eoin: Will AI take the fun out of software development?
Luciano: That's a good question. I don't think there is a risk of that today, in the short term, just because we have seen, with all the many examples we mentioned today and that we have seen online, that it doesn't really understand what software development is about. It's just taking data from the web and reshaping it in a way that sometimes is correct, other times is close to correct, and other times is totally hallucinated and doesn't make any sense. So until there is more of a general intelligence that can do logical reasoning, connect different thoughts, understand the problem and come up with solutions, I think we will still have a lot of fun doing the work ourselves and building solutions ourselves. So again, it's just going to be an enabler, something that can make our work a little bit faster, but I don't think it's going to take away all the fun of building things, thinking about problems, solutions, architectures, trade-offs and so on.
Eoin: I guess the part where I think you might potentially lose out on the fun is when you really have to figure something out yourself, without the crutch of a language model to lean on. You have to go deep and explore it, a little bit like exploring uncharted territory, and you emerge with an answer at the end of it. Whether you do that as an individual or in a group, I worry that some of the challenge will be gone, and while that has obvious productivity gains, it might also hinder the enjoyment as well.
You may have seen the open letter that was published asking these large model developers to halt development on the more advanced versions until we figure out what they can do, because of the risks: the potential existential threat, the risk to jobs, all of that. I think there's a lot to that, and I've seen a lot of analysis saying there are other, more subtle threats out there: tools like this can potentially expand the rich-poor divide by giving the privileged more access to tools that allow them to be more productive, earn more money and be more profitable, and that just makes existing societal issues worse. On the other hand, I'm wondering: AI allows people to come up to the level of those who have the privilege of being native English speakers in the software development world, or who have more experience or more access to education. So can these tools be a leveler for people? Can they have a democratizing effect?
Luciano: Yeah, that's a good question that I don't know if I have the answer to, but in general I've seen that technology has been helping us get to a more democratized society. If you look at the web today, you have access to so much information that 20 or 30 years ago would have been accessible to only a very few people in very specific roles, and even for them it would of course have been much slower to extract that information.
Today anyone has in their pocket a device that can basically give answers to most of the questions a human can ask, right? So I think in general technology can have that kind of positive impact, but of course there are always negatives, and they can be very hard to predict. So we always need to be vigilant and make sure that we try to use technology in the right way. Probably one of the key things is that if these kinds of technologies are accessible to pretty much everyone, maybe they can become a leveler. If they are gatekept, if they are very expensive to use or very complicated to run, and only a few people or organizations can use them, then there is a risk of more divide in society, making the rich even richer and the poor even poorer.
Eoin: And I guess there are so many potential knock-on effects as well. I mean, I've even been trying to find out about the power consumption of training these massive models and running inference on them at scale. If they are to become accessible to everybody, what is the sustainability and environmental impact of these models? And it's quite difficult to get access to data that makes this very clear.
It's all very well to be increasing productivity, even with a democratizing, leveling effect for everybody, but if it's just another mindless growth advancement in technology that hurts the environment we need to sustain ourselves, then it might also be a step towards shooting ourselves in the foot. Maybe we've come to a natural end at this point. I think we've agreed that this is a game changer and has already changed things irreversibly to some degree. There are so many open questions here that we need everybody out there watching and listening to contribute your ideas. Let us know: what do you think about the future of AI? What tools have you been using? What has really blown your mind? And perhaps even more interesting, have you had any AI-generated disasters? Please let us know, and we really appreciate you listening to this special episode on AI tools in the software industry. Until next time, we'll see you.