Exploring the Role of AI in DevOps with Michael Dawson and Alex Kearns - DevOps 232
Show Notes
In this engaging episode of Top End Devs, join hosts Will Button, Warren Parad, Jillian, and Michael Dawson as they delve into a compelling conversation on the pervasive influence of AI across industries. Special guest Alex Kearns from Ubertas Consulting shares his expertise on the real-world applications of AI, navigating through its rapid evolution and discussing both the opportunities and challenges it presents. From the impact of generative AI on business processes to intriguing ethical considerations, this episode provides valuable insights for professionals in the DevOps field. Tune in as the panel explores the dynamic relationship between technology, responsibility, and innovation, offering listeners a thought-provoking exploration of AI's role in shaping the future.
Transcript
Will Button [00:00:01]:
Welcome everyone to another episode of Adventures in DevOps. Joining me in the studio today, Warren Parad. Warren, how are you?
Warren Parad [00:00:09]:
Thanks for having me back. You know, I actually have a good fact for today that I thought was really interesting to share. There's a malware out there called OtterCookie, and I know the economy for engineers is not so great at the moment. However, a lot of advertisements out there for job reqs may be from malicious attackers who are trying to get you to run GitHub repos or download packages from the Internet to pass the interview. And those things will get installed on your machine and either try to steal your local crypto wallets or, worse, be used for attacking whichever company you do get hired by. So I know it's a real struggle that you wanna complete whatever take-home assignment or, you know, get the next job, but you really gotta be careful in today's economy, because this tech is out there just waiting to capitalize on a simple mistake.
Will Button [00:01:01]:
That's just nuts. Like, let's just kick someone while they're down. Right?
Jillian [00:01:06]:
Yeah. Like, all of that is nuts. I think getting homework from an interview is nuts. I think, you know, potentially installing something on your computer that's gonna make your computer go wild is nuts. It's multiple levels of crazy.
Will Button [00:01:23]:
Speaking of which, hi, Jillian. Welcome to the show.
Jillian [00:01:27]:
Thanks for having me back.
Will Button [00:01:29]:
You guys are making me feel guilty. Thanks for having me back. Like, it's conditional at this point. Just show up.
Alex Kearns [00:01:36]:
Oh, I
Jillian [00:01:36]:
don't know. I'm not sure that it is for me, but, you know, thank you.
Michael Dawson [00:01:40]:
I'm still appreciative for being here, I guess, is what I'll say.
Will Button [00:01:43]:
Well, I'm happy to have you both here. You guys make my job a lot more fun and entertaining. And speaking of fun and entertainment, I'm looking forward to this episode. We have Alex Kearns joining us in the studio today, principal solution architect from dude, you just told me how to say this. My mind already went blank.
Michael Dawson [00:02:02]:
Ubertas Consulting. Ubertas Consulting.
Alex Kearns [00:02:04]:
I've heard every possible permutation of how to pronounce it. But you did pronounce my surname correctly, which a lot of people don't do.
Will Button [00:02:13]:
Well, welcome to the show, man. I'm happy to have you here.
Alex Kearns [00:02:16]:
Great to be here. Thank you.
Will Button [00:02:17]:
Cool. So give us a little bit about your background now that we know how to pronounce
Alex Kearns [00:02:22]:
where you work. Yeah. Great. I mean, I come from a software engineering background, if you go way back to the post-university career, and then made the move into cloud, both kind of internal consultancy and sort of platform-type work, and then consultancy in terms of the traditional customer-facing, external consultancy. But I'm still very technically driven. I like to get my hands dirty. That, for me, is the most exciting part: building things, breaking things, learning from it.
Alex Kearns [00:03:04]:
Yeah. Paper architecture is not my idea of fun.
Will Button [00:03:10]:
I hear you. I think, for people who succeed in this industry for a long time, there's a certain amount of entertainment value that you get from your job.
Michael Dawson [00:03:22]:
Oh, I'm definitely doing it wrong then. I mean, my new belief is that when I retire, I'm just gonna go back to drawing boxes and lines. Like, that's just the best part of my job. When I can get on a piece of paper or a whiteboard and draw boxes and lines, you know, not even any words necessarily. Like, that's prime enjoyment right there.
Will Button [00:03:45]:
I'm gonna get my typewriter, and I'm gonna go to my cabin in Montana. Screw all you guys.
Jillian [00:03:51]:
So speaking of, like, I'm gonna do something really simple and turn my brain off: I bought, like, a paint-by-numbers kit, and it just reminded me a lot of what you said, because I just sit there and I just paint in the numbers, and I don't care that I'm nearly 40 and doing, like, an activity for children. It's great. You just turn your brain off. It's great. Anyways
Will Button [00:04:12]:
I've actually heard quite a few people comment on how, like, therapeutic and relaxing that is.
Jillian [00:04:17]:
They really are. They're very relaxing. It's great.
Alex Kearns [00:04:21]:
I think you need that. In the tech industry, brains are so switched on all the time. There has to be a way to switch off. Otherwise, you know, work-life balance is pretty nil.
Michael Dawson [00:04:36]:
Well, it's interesting you bring that up because, actually, very commonly, we're kept in an always-production mode. Like, everything we do is happening at a critical level, and we have to pass that test. Whatever work we're doing, there's no practice involved. It's always run time for us. And there's a bunch of research out there that says we have to go into practice mode, where mistakes can be made, failures can be had, and we can actually learn from it intentionally. Without that, that's actually one of the biggest causes of burnout. So, you know, if it's going home and doing watercolor painting, you know, whatever it takes, if that somehow helps you recharge, realistically, definitely do it.
Will Button [00:05:18]:
There's also a lot of evidence showing that, taking on those kinds of activities, while you're not consciously thinking about the problem, your subconscious is continuing to work on it, and that's when, like, the big insights and big breakthroughs occur for you. Like, I know there's a really common anecdote of when you're in the shower and you have this great idea. That's a really good example of that whole process in action.
Alex Kearns [00:05:44]:
This is so true.
Jillian [00:05:45]:
I don't do important jobs anymore. Like, I'm just not, and it's fine. Like, I mean, they matter, but not, like, on a huge, I'm-always-in-production, everything-has-to-be-perfect level. Like, it's fine. It'll be fine if it's done a couple days later. I used to do important stuff, though, and I don't want to anymore. So I suppose that's your lesson: this is where I'm at, doing my paint by numbers and things that don't really matter.
Will Button [00:06:13]:
Well, one of the things we were talking about before we started recording the episode was leveraging gen AI. And, Alex, you've got some experience with that, specifically some real-world examples where you've done that. And I think that's one of the cool things about AI. You know, it goes through this buzz cycle, but there are people who are actually putting it to real-world use. So I'm interested to hear your take on that.
Alex Kearns [00:06:43]:
Yeah. I think it's a really interesting topic. It's something where, if you go back eighteen months maybe, ChatGPT was just about starting to be established as almost a household name. People weren't necessarily using it actively, but tools like that were becoming more and more common. And I think with any technology, it's when it gets kind of democratized, when it gets put in the hands of people that aren't having to spend millions on hardware and do those kinds of things, that it actually really starts to become an awful lot more prevalent. We saw it with any technology. Think back to the mid-2010s, I suppose, when things like AWS Lambda came out, so kind of serverless technologies. And then in the years after that, every SaaS company that existed was going for a "we now have a serverless offering."
Alex Kearns [00:08:00]:
And it's like, is it serverless? Is it just a managed service? What is your definition of serverless? Right. So you see the buzz around that. You see buzz around even cloud, which is, I mean, public cloud is twenty years old if you go back to AWS's first service. So it's not new. It's not shiny anymore. And AI, I think, is going through the same thing, but just at a much, much faster pace. So
Michael Dawson [00:08:38]:
That's a really interesting comparison, though. I just wanna stop you for a second there, because I feel like there's sort of a weird duality. Serverless made it easier for people to get into building stuff and releasing applications because it didn't require you to purchase or allocate huge data center capacity to make that happen. I feel like where AI is currently at, it actually does require only the most expensive access to hardware or service providers. So I don't know if it's been democratized yet. I mean, there's a lot of services out there that claim to get you access to some facet of AI, and I know there's, like, the ChatGPTs and the LLMs out there where it's questionable how much value they're returning to you. But the real core aspect of being able to provide the underlying resources or technology to people, I think, is still much too far away.
Alex Kearns [00:09:36]:
I think the way you described it is great: it's giving people access to a facet of AI. I mean, if we think about AI as a general topic, artificial intelligence more broadly has been around for decades. It's only really when you start breaking it further down into machine learning and deep learning, and now generative AI as a kind of subset of that. The generative AI and AI terminologies are now almost interchangeable, certainly from an industry perspective. I think it very much depends on what people are wanting to do with AI and how specific their use case is. So if we think about things like ChatGPT, that's obviously a very specific use case. It gives you very generic responses to things.
Alex Kearns [00:10:39]:
It hasn't got access to your specific business data, but it's free. And, obviously, with any free product, you are normally the product. Of course, you can opt out of things, but by default, it's collecting that chat history to improve the service for everyone. You've then got things like Amazon Bedrock. Bedrock is AWS's generative AI offering that was announced at their conference in 2023. Bedrock offers kind of two different modalities, I suppose, in how you can use it. One is on demand, where you pay per thousand tokens. So that's the way where you can go and build something.
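For listeners who want to see what that on-demand mode looks like, here is a minimal sketch of calling a Bedrock-hosted model through boto3's Converse API; the model ID and prompt are illustrative, not something from the episode:

```python
# Minimal sketch: on-demand, pay-per-token inference on Amazon Bedrock.
# The model ID and prompt are examples, not from the episode.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # any Bedrock model ID you have access to
    messages=[{"role": "user", "content": [{"text": "Summarize serverless in one sentence."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
# The token counts that drive per-thousand-token billing come back on every call:
print(response["usage"])  # {'inputTokens': ..., 'outputTokens': ..., 'totalTokens': ...}
```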
Alex Kearns [00:11:37]:
Again, it's similar to ChatGPT in the sense of its generic knowledge. It's whatever the large language model has been trained on. But, as with any kind of cloud-hosted managed service, they can take advantage of economies of scale and give you pay-as-you-go pricing. The moment that you want to fine-tune that model or train a different model with your specific data, you go from paying fractions of a cent per thousand tokens to having to commit to 30,000 for three months, because you are now the one that's bearing the cost of all of that hosting rather than AWS making it available. And, of course, there are ways that you can augment the use of large language models with your own data without going to that extent. So even just including examples of your specific data in a prompt, or kind of retrieval-augmented generation, where you can load your own documents into a vector database and have it retrieve that data from there. There's lots of ways you can get quite a long way without spending huge amounts of money. But, yeah, the moment you want to get to "I have complete control of my model, I train it with my specific data," then you'd hope that's when you start getting to the type of customers who can afford to spend that kind of money.
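As a rough sketch of the retrieval-augmented generation pattern Alex describes, assuming Bedrock's Titan embedding model and an in-memory list standing in for a real vector database (the documents and model IDs are illustrative):

```python
# Rough RAG sketch: embed documents, retrieve the closest one to a question,
# and include it in the prompt. A real system would use a vector database;
# a plain list works for illustration.
import json
import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> np.ndarray:
    # Titan Text Embeddings (any embedding model would do here)
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(resp["body"].read())["embedding"])

docs = ["Q3 sales report: revenue grew 12%...", "Refund policy: customers have 30 days..."]
index = [(doc, embed(doc)) for doc in docs]

def answer(question: str) -> str:
    q = embed(question)
    # cosine similarity against each stored document; keep the best match
    best_doc, _ = max(
        index, key=lambda pair: pair[1] @ q / (np.linalg.norm(pair[1]) * np.linalg.norm(q))
    )
    prompt = f"Using only this context:\n{best_doc}\n\nAnswer this question: {question}"
    resp = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]

print(answer("How did revenue change in Q3?"))
```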
Michael Dawson [00:13:16]:
So in your capacity at your current job, where you're interfacing with clients and whatnot, do you find there is one particular provider or one set of tools that you're constantly going to, or a whole breadth of them for different types of tasks to help assist you?
Alex Kearns [00:13:34]:
Yeah. So we are an AWS consultancy, so everything tends to center around Amazon tools. But, of course, Azure and Google both have AI offerings now as well. Although Microsoft, with their investment into OpenAI, are the only provider that offers the OpenAI models on a public cloud, and I think that will stay the same for a number of years. In terms of the tools that we reach for, there's definitely a combination of vendor-specific but also open source tools. In terms of hosting large language models, the most frictionless way to access them is through Amazon Bedrock. Really, really straightforward API, easy to write scripts to interact with that, either synchronous, asynchronous, chat, however you need to. But then you can start to bring in open source tools.
Alex Kearns [00:14:39]:
So things like LangChain, a really popular open source framework where you can use Bedrock, you can use Microsoft's hosted models, Google, OpenAI, however you need to interact with your large language models, and then bring in those other parts like retrieval-augmented generation, where you can say, I've got a database full of documents. These are my business documents, my sales reports, my financial reports, whatever they need to be. And then when your large language model takes your prompt, it can use that data that you've provided, without having to specifically train the model, to augment the generation of its response. And there's lots of open source tools that can do things like that. I think what'll be really interesting as these, I wanna say years, but I think it's gonna be months with how things are going at the moment. As the next few months go by, so many open source packages are popping up, but it's how these open source packages stay around long term. Unless they are backed by a big business, what makes them commercially sustainable? We've seen frameworks like CrewAI, which is a framework for building AI agents, multiple agents, and kind of orchestrating what agent would get called for a particular type of task.
Alex Kearns [00:16:25]:
They've now introduced a commercial model where they can take on some of the management and observability around those agents, or you can just use the framework open source.
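To make the LangChain point concrete, here is a minimal sketch of pointing the framework at Bedrock, assuming the langchain-aws integration package; the model ID is illustrative:

```python
# Minimal sketch of LangChain over Bedrock via the langchain-aws package.
from langchain_aws import ChatBedrock

llm = ChatBedrock(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    region_name="us-east-1",
    model_kwargs={"temperature": 0.2},
)

# The portability point: the rest of a LangChain app doesn't care which
# provider sits behind this object, so swapping the chat model class
# retargets the same code at another provider.
print(llm.invoke("Explain retrieval augmented generation in two sentences.").content)
```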
Michael Dawson [00:16:40]:
So I feel like I wanna ask about that. You're supporting your customers in utilizing AI within their businesses. Have you seen a significant change, say, over, I mean, I think things are changing very frequently, a month period? So, you know, compared to early 2023 to now, what's the next thing? What are customers now most interested in utilizing? Is it one of the particular providers more so than others? Is it just a smattering of everything, or do you really see something taking off specifically in the businesses that you're working with?
Alex Kearns [00:17:18]:
So I think it's worth making it provider-agnostic and thinking more about use cases and drivers for the use of AI. Thinking back twelve months, or even twenty-four months, I think there was a lot more of people wanting to use AI for the sake of using AI. We've had conversations with customers who have said, my board member has said, as a business, we've got to be using AI because investors wanna see it, the public needs to see it. Can you help us use AI? It's like, well, of course, but let's take a step back. Let's try and work out where there is a genuine use case. If we use the Gartner hype cycle as a framework here, I think we've definitely passed that peak of inflated expectations, that top of the hype cycle where everyone is using AI for the sake of using AI. People wanna do way more with it than is really feasible and ethical, sustainable, everything. And I think we're starting to quite rapidly get into that point of what the hype cycle terms the trough of disillusionment, where people are thinking, I'm seeing AI so much.
Alex Kearns [00:18:59]:
Every service, every tool, every news article. I mean, even in The UK today, our prime minister came out and announced a big plan for rolling out AI and growth programs across the country. So it's gone from just those big tech providers to government, to politics, to everything. It dominates everywhere. And I think we're almost starting to see a little bit of fatigue in companies, where it is a case of every vendor is talking to us about AI or their latest AI-powered offering. And that next stage, which I don't think we're far away from now, is working out, and really making visible, the use cases for AI that are here to stick around.
Michael Dawson [00:19:54]:
Yeah. No. I totally get it. I mean, it is quite in the media. It's everywhere, as you said. And I sort of wanna ask our resident ML expert here, you know, what she's seen comparative to what you brought up. I know she loves to talk about it.
Jillian [00:20:13]:
I love AI. I think AI is very cool. I'm still really seeing people on the upward side of the hype cycle. Like, I have a small AI service that I offer. I had to stop offering it publicly because people were just coming in with these very outsized expectations. And now I'm like, okay, we have to schedule, like, a ten-, fifteen-minute talk first so that I can, you know, adjust some of these expectations and things. But besides that, I think it's great if you're using it kind of for what it's good at, and then it's terrible if you're not.
Jillian [00:20:46]:
Like, you know, I think probably a lot of the public policy stuff might be a little bit, like, outsized in terms of what it can do. But maybe people making these kinds of requests and just being like, well, shouldn't it be doing this? That's probably what's gonna drive innovation forward. So I have kind of, like, mixed feelings about it, I guess.
Will Button [00:21:06]:
Well, I wanna drill in on that for a second: having the right expectations for it. Alex and Jillian, y'all both brought that up. What are some of the good use cases where you've seen AI really make a difference, Alex?
Alex Kearns [00:21:27]:
So I can talk about one kind of specific customer use case where, as part of a migration, we had 250 PHP cron jobs that were running on an on-premises server. Some of these were 15 years old, dating back to early PHP 5 and PHP 4 with some of them. Some of them were little scripts. Some of them were 50 lines. Some of them were four or five hundred lines. And then you have the ones that import from different files, and we kind of took the view of, actually, is there a way we can do something here to speed up the analysis of these scripts? There are a few things that we need to know. We need to know what databases the script talks to. Does it interact with any other services, so APIs? Does it interact with things like, in the case of the on-premises servers, the sendmail binary on a server? Because this particular customer, for the rest of their business, used a managed SMTP service now, not just sending emails from on-premises servers.
Alex Kearns [00:23:00]:
So we did a bit of a proof of concept around it, primarily just as a way to speed up our own analysis, because, yeah, I mean, nobody wants to read through 250 scripts line by line.
Will Button [00:23:16]:
No. Say it's not so.
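A hypothetical sketch of the kind of analysis pass Alex describes, asking a model to extract, per cron job, which databases, APIs, and binaries like sendmail it touches; the prompt wording, file layout, and model ID are assumptions, not the team's actual code:

```python
# Hypothetical sketch: batch-analyze PHP cron jobs with an LLM, asking for
# machine-readable JSON per script. Prompt and paths are illustrative.
import json
import pathlib
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

PROMPT = (
    "Analyze this PHP cron job. Respond with JSON only, shaped as "
    '{"databases": [], "external_apis": [], "uses_sendmail": false}.\n\n'
    "Script:\n"
)

results = {}
for script in sorted(pathlib.Path("cron-jobs").glob("*.php")):
    resp = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": PROMPT + script.read_text()}]}],
        inferenceConfig={"temperature": 0},  # keep extraction as repeatable as possible
    )
    results[script.name] = json.loads(resp["output"]["message"]["content"][0]["text"])

pathlib.Path("analysis.json").write_text(json.dumps(results, indent=2))
```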
Michael Dawson [00:23:18]:
Well, I mean, if you're looking at just as an example, you know, you said accessing a particular database. Right? Like, if you have calls out to some on-prem or third-party data provider and they're migrating to AWS, they may be going to either a NoSQL option or RDS. And so you wanna make sure that those get converted, assuming you just port the scripts, you know, lift and shift into EC2s. So, I mean, a question that I may have is, where do you find yourself on the risk scale? Like, what happens if the LLM, which is not gonna be perfect, I mean, obviously, if it makes up tables and whatnot that aren't actually being used, that's fine. But I think the false negatives would be more of a problem. Like, what happens if it missed a table? Would that have caused an issue during the migration, and how did you think about mitigating those sorts of risks?
Alex Kearns [00:24:08]:
Yes. So what we did was, rather than invest the time upfront to build a fully working solution, it was, let's do a test on a few scripts first. Let's try it out over three or four scripts that we have done the analysis by hand on, so we know we've got a golden answer, a ground truth to what we're comparing against. When we did run it across the full dataset, we did sampling to make sure that a reasonable portion of them were accurate. The other thing that we did was, when migrating, those scripts made use of different environments. These were going to a Kubernetes cluster in AWS. And because we had data in staging and prod environments, we knew that we could run these scripts in a sandbox preproduction environment.
Alex Kearns [00:25:10]:
If they failed, kind of so be it. It's not going to bring the business down. It's not gonna send customers emails, because we can trap emails. We can make sure that any calls outside of a particular network are monitored and blocked. So we could quite easily see that, actually, we haven't got to just cut straight over to production. I think the point you raise, the risk and the trust that people are increasingly putting in large language models, is really interesting, because as you start to see it used in more regulated industries, there's an incredible amount of power, I suppose, that large language models are being given. And when do we get to that point of a model being able to be the only process and the only tool in a loop? I don't know, but I don't think we're there yet. Even when you think about more traditional machine learning use cases, like, one of my favorites is the kind of fraud detection in banking, where you make a purchase on your credit card.
Alex Kearns [00:26:38]:
If it doesn't look like something that you would typically do, or it's in a country that you haven't been to before, an anomalous amount, then models should pick that up and say, yeah, it's not a transaction we're gonna allow to process, because we think it's not you. But that's been honed and refined over probably decades. Are large language models going to be refined quickly enough for us to be able to give them that level of power and that level of trust?
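The golden-answer check Alex describes a few moments back can be as simple as diffing the model's output against the handful of scripts analyzed by hand; an illustrative sketch, with the field names carried over from the hypothetical extraction above:

```python
# Illustrative "golden answer" check: compare model extractions against a
# small hand-analyzed sample before trusting the full run.
import json
import pathlib

golden = json.loads(pathlib.Path("golden.json").read_text())       # hand-done analysis
predicted = json.loads(pathlib.Path("analysis.json").read_text())  # model output

checked = correct = 0
for name, truth in golden.items():
    for field, expected in truth.items():
        checked += 1
        got = predicted.get(name, {}).get(field)
        # compare list-valued fields as sets so ordering differences don't count
        match = set(got or []) == set(expected) if isinstance(expected, list) else got == expected
        correct += match

print(f"{correct}/{checked} fields agree with the hand analysis")
```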
Michael Dawson [00:27:14]:
I mean, that's a good point. These scripts that you were migrating, worst case scenario is they just didn't run, and they didn't necessarily impact user activity. And if they crash, you get the logs, and then you can go investigate. So using an LLM in this area was inherently unrisky. It just helps speed up the initial analysis. But at the end of the day, it didn't really matter. Right? You know, for the whole amount of work, it's like, well, a human could have made a mistake there too, and it wouldn't have had that big of an implication. But how are you starting to see customers utilizing AI in, well, like, not ones that should be risk-averse, but are leaning into it more than they should?
Michael Dawson [00:27:58]:
And how do you even evaluate that, or how do you better avoid that? I think you did sort of lean on doing sampling and verifying the outputs, but maybe there's something holistic, because I feel like, with the adoption of AI growing more and more, companies will do the wrong thing. Right? Engineers will, either accidentally via negligence or just, you know, laziness or whatever it is, get in a state where, like, this is a great way to absolve myself of the challenge of doing all of this work. How do we counteract that?
Alex Kearns [00:28:33]:
So I think with customers, we're quite upfront. I think probably different consultancies, different companies may have an alternative view. But speaking with my, yeah, official employed hat on, if a customer is trying to do something that doesn't make sense and we don't think there's going to be significant business value in it, then we will be open about that and make the customer aware that what they're doing with it isn't likely to be successful. Obviously, if there are ethical or legal concerns around what they're doing, then, we're a technical consultancy. We can raise concerns, make customers aware, but, ultimately, due diligence is on the customer's side.
Michael Dawson [00:29:42]:
It's coming, though. Right? Like, I really would like to see some concrete accountability. I mean, we don't have anything quite to the level of the Knight Capital disaster, where they lost, I wanna say, $460,000,000, and that didn't involve AI at all. That was just pure automation and legacy systems causing a mistake. And we definitely have automated car companies; I believe there was an incident where there was a death in Arizona or New Mexico a number of years ago, and the company didn't get sued or anything like that. So, I mean, it does seem like accountability is something that's going to come up more and more, and I don't really see anyone working on adequate safeguards here. I mean, there's, like, oh, we're afraid of AI, but we're not really talking about the companies that are utilizing it, I feel like.
Alex Kearns [00:30:30]:
So I think there's two parts: there's accountability, and there's explainability as well. Again, thinking more about traditional machine learning, some algorithms are significantly more explainable than others. There are a lot of algorithms, and increasingly so as we get into the generative AI and large language model space, where they're a bit of a black box. They have loads of data that goes in, and it's very hard to explain why a particular result has come out. And again, models aren't deterministic, so you can do as much testing as you want, but you can never be truly 100% certain. You can be very, very confident. But if anybody can give a 100% guarantee, then that probably isn't machine learning that's making the final call.
Michael Dawson [00:31:28]:
Yeah. But you could ask the model if it's correct. Right?
Alex Kearns [00:31:30]:
Yeah. I think it's right.
Jillian [00:31:33]:
I mean, if it says it's right, it's it's obviously right. Haven't you ever helped a kid with their math homework before? Same thing.
Alex Kearns [00:31:40]:
There's some interesting stuff that AWS are working on. For a while now, they've had Bedrock Guardrails, which is there to try and prevent a large language model from responding about certain topics. But, of course, you have to give it a list to start with of topics to not talk about. And if you haven't thought of the topic, then, again, you're relying upon a human to have that extensive list before you can prevent the model from responding. Another one that's come out very recently, within the last month or so: AWS make use of automated reasoning in all places across their cloud. When it comes to cryptographic operations and making sure that encryption is doing what it should be doing, there's mathematical proof behind some of these operations. So that is being used across all of AWS, but only very recently has it now come to Bedrock.
Alex Kearns [00:32:59]:
And I think the feature is called Bedrock automated reasoning, where you can build rules up to say, I want mathematical proof that the response given is in accordance with these rules, which is quite cool, and I'm yet to play with that. But it looks very promising. And AWS are generally fairly good on the research side of things. They're certainly not perfect in terms of some of the decisions, I think, around releasing services that are maybe just the wrong side of MVP. We have seen that a few times, but that is, I think, a side effect of being customer obsessed, where perhaps there's a customer that needs a particular set of features, and the easiest way is to build a service for it. But maybe it's not suitable for every use case and every customer just yet. But, yeah, I think there's some really interesting stuff going on around automated reasoning. For AWS's first-party models, the Amazon Nova family of models, they've got the AI service cards, which go into quite a lot of detail about how the models are built and are fairly transparent around the way they work.
Alex Kearns [00:34:29]:
And that's something that end users of models need to be more aware of. I think at the moment, it is very, very easy to log on to ChatGPT, or chat.com, I think, is the domain that OpenAI spent quite a lot of money on, and, yeah, type something. But you don't know where that data has come from to build that response. You don't know the reasoning behind that response being given. I think as these systems get more and more embedded in people's processes and workflows in business, it's needing to understand the why, to ask the questions and have the ability to give a "because": this has happened because the model is doing this.
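For reference, the Guardrails feature Alex mentioned earlier attaches to a request roughly like this; the guardrail itself (its ID, version, and denied-topic list) is defined separately in your account, and the identifiers below are placeholders:

```python
# Minimal sketch: attaching a pre-built Bedrock Guardrail to a request.
# The guardrail ID and version are placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Give me investment advice."}]}],
    guardrailConfig={
        "guardrailIdentifier": "abc123example",  # placeholder for your guardrail's ID
        "guardrailVersion": "1",
    },
)

# If a denied topic is hit, Bedrock reports the intervention instead of answering.
print(response["stopReason"])  # e.g. "guardrail_intervened"
```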
Will Button [00:35:27]:
I don't know.
Michael Dawson [00:35:27]:
I worry that if we're relying on education to make people able to utilize AI and technology better, we're not going in the right direction. And I say this because, very frequently in the security domain, we have the same sort of mantra, where you're gonna stop security events, prevent attackers, remove vulnerabilities through straight education. I mean, there's only so much that you can achieve there. And if your security strategy is "educate the users," you might as well have given up already. And I worry if that's the direction that we're going, where, oh, in order to use the models effectively, in order to understand what's going on, you have to be an expert still, which I feel like has been the case for a lot of years now. Going back twenty, thirty years in ML development, it's always sort of been the case. And until we can break through that, I feel like, you know, real adoption really isn't going to amount to good things. And we're really close to utilizing it in more and more concerning situations where security is involved, or human safety is involved, or whatever Jillian is doing with automatic creation of, you know, protein folding. You know, I honestly can't remember, but I am curious there.
Jillian [00:36:47]:
Being the human in the loop like that. If we're doom spiraling, it's the greed. Oh, man. Why is it echoing? What happened? What did I do? Okay. I'm sorry. I've been calling for a few minutes here while I figure out what happened to my camera.
Michael Dawson [00:37:03]:
What Jillian's saying is she is worried that she's in an envelope.
Jillian [00:37:08]:
But I'm the greed. I could very well be the greed in this situation. It could happen.
Alex Kearns [00:37:16]:
I think there's
Jillian [00:37:17]:
the lack of some really interest
Alex Kearns [00:37:20]:
Just kidding. I think there's some really interesting ethical bits around it as well. So the topic of self-driving cars has come up a few times as we've been talking. And there are things that, as a human driving your car, you might instinctively make certain decisions on. So if you've got a self-driving car and there is a certainty of a crash, but one scenario means five people, and one scenario means two people, but they are young people, then it's like, what does the car choose? Right? At that point, there's no human emotion.
Alex Kearns [00:38:10]:
There's nothing in it. Someone has to program a model to say this life is worth more than this life. And how do you do that? Right?
Michael Dawson [00:38:22]:
So there was actually an interesting, and so this is the trolley problem, right? It's sort of a moral dilemma more than an ethical one. I think there was a quick study by Stanford where, like, it gauged a random sampling of people on which they would pick. Like, where should the car actually go? And at the end of the study, there was, like, a clear hierarchy of what humans actually preferred. And I feel like it was something like, you should kill cats first, and then old men, and then old women, and then dogs, or something like that. And I think it had to do with the fact that, like, cats will just get out of the way. Like, that was the expectation, that these people will either get out of the way or, you know, in the worst case scenario, this is what they would pick.
Michael Dawson [00:39:07]:
And it was really interesting that they had done this, and there was, of course, some, you know, not nice things said about the fact they had gone through with the study. But, you know, I think humans will sort of adapt to preferential picks for what they are okay with.
Alex Kearns [00:39:24]:
I think that's something the AI industry is going to have to land on one way or another. I often see people saying gen AI is going to have as much impact as cloud did. And if it is having that much of an impact, well, with cloud, there weren't really that many drawbacks that I can think of. There were concerns. There was uncertainty around who owns my data, who controls my data, is my data secure if it's in Amazon's data center versus a server in my own cupboard. But with generative AI, there's almost that kind of rabbit-in-the-headlights approach that a lot of people are taking at the moment, where I think there is a real danger without, as we talked about, the control, without either education or forced guardrails, to be able to use this effectively. I mean, I remember seeing an article.
Alex Kearns [00:40:46]:
I think it was the tail end of last year. It was being talked about in The US, and there were talks of developers of models being legally required to report on things like whether their models could be used for purposes that would have a national security implication, which I think is absolutely right from a moral and security perspective. There's always that kind of fine balance with any emerging technology: how far do you regulate it? If you regulate it too much, does it stifle or prevent innovation? But on the flip side, if something terrible happens because somebody has used a large language model to teach themselves how to carry out an attack, then the argument is, well, there wasn't enough regulation.
Will Button [00:41:59]:
I think going back to the self driving car example, the solution there is just go into the settings of the car, and you get a little order of preference. Like, I'm okay with hitting this. I'm not okay with hitting this, and then problem solved. Right?
Alex Kearns [00:42:15]:
It's so tough. It's
Jillian [00:42:18]:
I mean, we joke, but, like, this is what we unfortunately have to do as humans sometimes. Like, that's how all of health care works. Right? Like, when there's a shortage, like, during COVID, there was the shortage of ventilators. You think the doctors didn't have to, like, make decisions? Like, it's very unpleasant, and nobody wants to think about them or talk about them, but the reality is we do anyways, and it has to happen. And I'd imagine that it also has to be a part of, like, self-driving cars, because cars are terrible, terrible death machines. I hate driving. I don't know if I've mentioned on the show yet just how much I hate driving and having to have a car. It's such a pain.
Alex Kearns [00:42:56]:
I think
Michael Dawson [00:42:56]:
I think it's because we're in this intermediary state. Because once we get to the point where there's automation all around us and we have adapted to that fact, it's no longer as big of a problem. Whose fault is it if you step in front of a train? It's like, oh, well, the train was supposed to stop. It was, you know, supposed to know. And some of them do have safety protections in place. But, realistically, you don't go on the tracks when the train is coming unless you have some reason you really want to be there. And I think, realistically, the same thing will happen if we're in the automated car space where autonomous AI cars are driving around. I mean, realistically, you know, don't step into the street. Why would you go there? And I think with the AI cars, we will want to really redesign egress and flow for traffic, and we will be able to do that effectively once everything has been automated.
Alex Kearns [00:43:53]:
I think there's a really interesting change coming, similar, I guess, to what we would have seen with processes being automated in businesses, where we say it'll be a real step change in efficiency. Is there going to be resistance from within businesses to work on these generative AI projects because people think it's going to put themselves out of a job? I think it's a really challenging space, to try and thread the line between improving efficiency and making redundancies. Going back many, many years, with things like the industrial revolution, changes like that inevitably mean that some jobs are no longer needed, but people adapt; they find different roles. And if this continues at the pace that it's going and disrupts industry at the pace it's predicted to, then people need to change with it, because, yeah, very quickly, you're already seeing job adverts that list experience in AI as essential. It's no longer a bonus; it's a must. You must be able to work with Copilot tools and effectively know how to integrate AI into DevOps or migrations or anything like that, because companies need it and their customers expect it.
Will Button [00:45:53]:
It's really interesting to see how the job landscape has changed just over the last twelve months with the introduction of AI. You know, it felt like twelve months ago, using Copilot and tools like that was seen as cheating. But now it's just seen as, like, a part of the job, and I'm interested to hear how people are testing or qualifying your skills for that in the employment space.
Alex Kearns [00:46:22]:
I take a fairly relaxed view of the use of tools. If you want to use them, then that's fine. It does come with other challenges, though. If someone is overdependent on AI to do the job and that's masking real, true understanding, then that's where the problems come in. Because if you're using AI-generated code but don't understand the code that's been integrated, you might have a good chance of building software or solutions that work, but there's also a pretty good chance that they don't, or you've left a security hole in it, or it's going to cost 10 times as much, because there's one best practice that you've not realized, because these models are trained on open code. So from my perspective, I mean, I've tried a few different Copilot tools. GitHub Copilot came fairly early doors; I've given Amazon Q a go as well. At the moment, I'm trying out the Windsurf editor and Cursor as a sort of more integrated IDE experience. And both of them are fairly good.
Alex Kearns [00:48:01]:
There's some really cool stuff, like, if you wanna just hack about on a project. I do quite a lot with the Streamlit Python package, which is a great way to build some data and AI apps with an acceptable user interface without having to know how to write good front-end code. And being able to just say, like, create a project using Streamlit, it needs to be able to interact with Amazon Bedrock, like, stub out the methods for me, and get something fairly quickly. It is good for that. I think the only way these tools will excel will really be with true understanding and context of what would be in a human brain. It's getting into the flow state of, I understand this whole repository. I see how different pieces kind of connect together. But, also, if you've got microservice architectures and, actually, the context you need is in a different repository, then it also kinda needs that as well.
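A small sketch of the Streamlit-plus-Bedrock combination Alex describes: a chat UI with no hand-written front-end code, saved as app.py and started with `streamlit run app.py` (model ID illustrative):

```python
# Small sketch: a Bedrock-backed chat app in Streamlit.
import boto3
import streamlit as st

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

st.title("Bedrock chat")

# st.chat_input renders the text box; Streamlit reruns the script per message.
if prompt := st.chat_input("Ask something"):
    st.chat_message("user").write(prompt)
    resp = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    st.chat_message("assistant").write(resp["output"]["message"]["content"][0]["text"])
```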
Alex Kearns [00:49:14]:
So, of course, there are limitations. These things will only go so far. But from my stance, if somebody turned up at an interview and they wanted to use an AI tool as part of a tech exercise, for example, then it wouldn't necessarily be a problem. It would be hypocritical of me, right, to say this isn't happening. But I'd certainly be more aware, and, yeah, a bit more careful about asking the right questions about the code that has been generated, because
Michael Dawson [00:49:59]:
Well, I think that's, like, one of the really important parts here: most companies don't spend enough time evaluating their interview process for what the right questions are. And now they're starting to realize that AI is getting in the way of them, quote, unquote, effectively evaluating the candidates. And I think that really goes to the fact that the question didn't make sense and their evaluation strategy didn't make sense, and that there are tools that can easily solve it. And if they can solve it during an interview or during a take-home interview test, then they could potentially, or likely, be using that tool during their job. And I think we're already starting to see some companies intentionally telling candidates to use AI, LLMs specifically, to solve the problem, because it is something that other engineers on their team are utilizing, and they would expect someone who comes into the team to also understand how to utilize those tools, because things like configuration or linting, etcetera, will be changed fundamentally by AI. And in that way, if someone you're hiring into your team doesn't have experience utilizing LLMs to help them sufficiently, or handling the weaknesses, whatever they are, then they're gonna not be as effective a team member when the rest of the team is expecting someone who is able to do
Alex Kearns [00:51:21]:
that. I mean, absolutely. And I think with how much of the industry and how much of businesses generative AI has the potential and promise to reach, there are almost prerequisites that a lot of people aren't really thinking about at the moment, because they kinda get blindsided by "AI is great, AI is gonna solve all the problems." Whereas in reality, there are certain things that are foundational to successful use of AI. Like, okay, AI is software. How do you monitor your software? How do you make sure the responses from your AI-powered applications are what you expect them to be? If you were deploying an API, you'd spend time thinking about, okay, what are the useful metrics? Is CPU a useful metric, or is latency a useful metric? What are the things that actually have an impact on the end use of this? And AI should be no different.
Alex Kearns [00:52:35]:
You should have that kind of production wrapper. You should have monitoring. You should have security concerns that you're proactively protecting against. But then also, it's that foundation of data. If you're an organization that has lots of data in lots of different places and you want to use it in AI, you need to have, like, a good data platform. And you have conversations with people around productionizing some of their AI proof of concepts, or very early-stage experiments they've done. And you'll say, okay, well, you've managed to get these little samples of data from various data stores to prove this as a concept could work and is worth putting into production at a wider scale.
Michael Dawson [00:53:30]:
So one of the problems is that, with the pricing of a lot of the models today, if you're not running it yourself, you're pretty much paying for input and output tokens: the amount of context you're adding, where you need a very large context to even get an adequate answer. And then, for whatever reason, these companies are charging you for garbage nonsense coming out of the models. Their decoding process to get back a readable answer has a lot of nonsense in it, and then you're paying for that. So I think the industry is being driven towards trying to optimize for these two things, which have nothing to do with the quality of the answer in the first place. And I know Will asked the question about, you know, bringing AI into the interview process. And I feel like, you know, Will, you're now in a great position for me to ask you: do you feel like your interview process has been changing to respond to the increased usage, both in the workplace as well as candidates potentially using AI during the interview process itself?
Will Button [00:54:33]:
For me, no. Because my interview process is probably a lot more old school than most people's. Like, if I'm interviewing a candidate, we're gonna have a straight-up bullshit session, because I work largely around infrastructure. And just through a casual conversation, I feel like I can get a lot better feel of whether you know what you're talking about or whether you have heard the terms but don't really understand what they mean. So a very small amount of my interview process has anything to do with, like, hands-on-the-keyboard technical coding.
Michael Dawson [00:55:17]:
Do you have some part of it that's, like, any sort of technical validation or technical systems design or anything like that which could be impacted by AI at all?
Will Button [00:55:32]:
Potentially. Yeah. Because we'll do, like, an exercise of, you know, throw together a couple microservices and explain to me the interaction between them. But then I'll spend a lot of time just talking about that. Like, well, how does this part work? How does that part work? Tell me what happens if this does that. And I dig into a lot more of the operational stuff, I think. And, like, if a candidate could pregame all of that and use AI, good for them. I just think it's unlikely, given the dynamic nature of my interview process.
Michael Dawson [00:56:12]:
So I don't wanna spoil it, but there is a product out there where, in a remote interview, the candidate will run it, and it will listen to the audio that's coming across and watch the chat, and then dynamically generate a text response for the candidate to answer on the call. Now, I do think that there is a challenge here of being able to adequately understand what words are important in a sentence. Like, if you have a thought and you're sharing that thought, you know what the point of that thought is and which nouns are more important than others. But if you're reading a response from something else, you might as well say it all monotone, because there isn't any part of it that makes sense upfront to you. You almost need to read it first and then answer back. But that exists. So unless you're bringing people into the office, and, obviously, we wanna optimize for more remote working environments, you know, our company is 100% remote. I know a part of yours is, Will. I don't wanna swear to that, but you do have different,
Will Button [00:57:09]:
Yeah. We're 100% remote as well.
Michael Dawson [00:57:11]:
Yeah. So, I mean, there's only so much you can do there. I mean, you're not gonna meet every single candidate in person at a coffee shop or something and go through sort of a validation that they're not doing that. I mean, you have to use your other skills to sort of figure out whether or not, you know, they believe what they're saying.
Will Button [00:57:31]:
Yeah. And I think, in that scenario, I just rely on, like, our sixty-day window. Like, every candidate we bring on has, like, a sixty-day trial period. And if the expectations didn't line up, you know, we have sixty days to resolve that, and if not, cut ties.
Michael Dawson [00:57:50]:
At least for us, we're really transparent about that. Like, if you're cheating during the interview process, that hurts you, because we're just gonna fire you a
Will Button [00:57:58]:
you know, couple days
Michael Dawson [00:57:59]:
or months later. Is that what you want? I mean, you're risking it by coming and joining us, just like we're risking it on you, and we both have the capability of ending that relationship. So, you know, if you manage to get through our interview process faking every step of the way, and then you also manage to fake the next couple of years successfully, I mean, I actually think that was a pretty good hire.
Will Button [00:58:20]:
Yeah. Agreed. Like, if you're faking it because this is really where you want to be, and I pick up on that, I'll do everything I can to help you get there, because that's how I got here. I lied through my teeth on job interviews.
Jillian [00:58:34]:
That's how I got here too. I was just like, hey, I had a baby. That baby needed some food, and I was like, alright, I don't know what we're talking about, but I have Google. Let's go figure this out.
Will Button [00:58:44]:
Yeah. I mean, I read all 500 pages of the Microsoft SQL Server six book because that was the job I wanted.
Alex Kearns [00:58:52]:
And I think this is where we can almost flip it on its head a little bit, because AI can then, post-interview, be advantageous. Because if your interview process is cultural, and it is assessing primarily a person's fit to a business and their ability in how they learn, how they interact with other people, then if you can make a great hire that has the ability to very quickly learn, land on their feet, and be a great team player, then AI is a great tool to assist in their upskilling of things they don't know technically once they have got in the door. So there's two sides to it, I suppose.
Will Button [00:59:42]:
Oh, that's a really cool idea. I hadn't thought about that before. So you just kinda flip it around and say, hey, AI, here's this dude. Where should I be helping them?
Michael Dawson [00:59:54]:
I think part of the trouble with that is you would need to have only verbal, for the most part, interaction with the candidate during the interview, and have to go through the process of, like, okay, you know, we wanna record this session so that, you know, later we can feed it back through and make it available. And I know that, you know, there's no reason why this should be a problem. But, you know, every single additional obstacle you add there is another risk for potentially losing out on a candidate. I feel like, you know, hey, can we record this session and have this as a recording so we can share with the rest of the team? It's just another one of those things. So, I mean, if you're getting the value out, you know, great. I think that's where I would love to see the tool that actually helps there.
Alex Kearns [01:00:33]:
I think of it from an individual perspective. So if you are a hiring manager for a role and you're hiring someone that is, like, 90% of the way there, but they are a 20 in terms of their ability to learn, you know that they will learn the stuff given the opportunity, because they've demonstrated that. They've picked up on these technologies in every job they've changed. They've come in relatively fresh. That's where they, as an individual, are able to potentially use AI to augment their learning, whether it is through Copilot-type tools, whether it is ChatGPT, those kinds of things where previously you would reach for Stack Overflow. Which, again, looking back on it, you think, well, yeah, Stack Overflow is great, because you would get loads of answers to things. But you would still have to understand the answer, because you don't know who that person is that's written that answer, much like you have no guarantee of what the model is outputting in its response. You can take an indication from the amount of kind of crowdsourced endorsement, I suppose, of a particular answer. And that's, I guess, where responsibility maybe goes more onto model providers, to actually say, are the answers you are providing accurate? Maybe large language models are just too general for some things.
Alex Kearns [01:02:18]:
Maybe this is where the more niche models that are specifically focused on Python coding, for example, are a better fit, because they have been trained on vetted, best-practice Python code. Yeah. So many angles that you can explore with this. I feel like we could talk for hours.
Will Button [01:02:47]:
Right. One of the things I'm doing: we're doing our annual performance reviews, and it consists of each employee getting peer reviews. They do a self review, and then I write their final review. One of the things I've been doing, and it's taking a huge amount of time, but I feel like it's still worth it, is I'm giving AI the peer reviews and their self review and my review, along with the responsibilities for their current level and the next level, and then asking AI, how can I help this person better meet the expectations of their current role and start growing towards their next level within the company? And it's been insightful, because it's picking up on things that I'd overlooked when reading through the reviews myself.
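A hypothetical sketch of the review-synthesis workflow Will describes: ground the model in the peer reviews, self review, manager review, and role expectations, then ask the coaching question. File names, prompt wording, and model ID are illustrative, not his actual setup:

```python
# Hypothetical sketch: combine review documents and role expectations into
# one grounded prompt and ask for coaching suggestions.
import pathlib
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

sections = {
    "Peer reviews": "peer_reviews.txt",
    "Self review": "self_review.txt",
    "Manager review": "manager_review.txt",
    "Current level responsibilities": "level_current.txt",
    "Next level responsibilities": "level_next.txt",
}

body = "\n\n".join(
    f"## {title}\n{pathlib.Path(path).read_text()}" for title, path in sections.items()
)
question = (
    "How can I help this person better meet the expectations of their current "
    "role and start growing towards the next level?"
)

resp = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": body + "\n\n" + question}]}],
)
print(resp["output"]["message"]["content"][0]["text"])
```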
Alex Kearns [01:03:41]:
I think that's a great use case, but the key bit in that is it's grounded by human effort that somebody has put in to start with. It's grounded by truth and actual knowledge. I mean, if you took away the part where you write your review of that person and said, okay, well, they've written a self review, they've had a peer review from somebody else, now let's use AI to write the employer review, make it this tone of voice, here are some metrics about this person, how many lines of code they've written this year, those are the kinds of things where you could quite easily use AI. But that's where it turns into the, this is a little bit too far.
Will Button [01:04:35]:
Yeah. For sure. Because at that point, personally, I would feel like, well, I'm not really adding any value here. Their review came from AI at that point. I feel like I still gotta put some skin in the game and do my job to help them.
Jillian [01:04:50]:
Well, I think with that said, a lot of industries are gonna be creating verification processes that are specific to the problem. So this whole idea that the AI is gonna be running amok, it's like, well, no, we don't really do that. That's not how things in the real world work. So for example, in biotech, I think there's gonna be a ton of AI-generated drugs, but they still have to go through the same verification process as all the other drugs, which takes years. You still have to be able to actually create the thing. Just because the computer says that it's a valid drug, that doesn't mean that it is. It still has to go through clinical trials, and it still has to go through peer review.
Jillian [01:05:31]:
And I feel like every industry is gonna have something similar. Right? So I don't worry about that one quite as much, except I worry a lot about greed in the loop. That's the one that I worry about. Like, oh, look, now we can make all these biosimilars to, you know, this drug that the patent expired for, and we can just be pumping these out. And then if there's not enough regulation, or if somebody can get a product pushed through any of these verification steps, then I could see that going very, very sideways. So I'm just gonna hope that doesn't happen.
Jillian [01:06:10]:
And if it does, I'm gonna move off to the woods, and there's gonna be no more computers in my life. And that's gonna be better.
Michael Dawson [01:06:17]:
I think you hit on something that's quite ingenious here, actually, Jillian. If you just go through previous patents for drugs and then you ask an LLM to generate a new drug that has the same bonding, you know, activation sites, the same interactions with other molecules, but is fundamentally different enough that it could be classified as a new drug that could be patented, then these companies will start losing a lot of money, because their patents won't mean a lot anymore, and we'll have a lot cheaper medication in the world.
Jillian [01:06:54]:
That's the hope. That's not actually how things have been going. I mean, I really appreciate your optimism there, but I don't know. I'm not sure. Like, if you look at biologics, right? Biologics are probably, I think, one of the biggest medical innovations in decades. And they're so expensive.
Jillian [01:07:16]:
They cost, like, I don't know, I think Humira costs a couple grand a month without insurance or something like that. So then, yeah, hopefully we do get this next wave where we're creating all the biosimilar drugs and so on and so forth. But when the biologics first came out, they had their patents and they had an absolute lock on the market. Legally speaking, you could not create a drug that was slightly similar, they're called biosimilars, because there's legal red tape and stuff. But, yeah, I hope so.
Jillian [01:07:45]:
That would be great if just producing these drugs got cheap enough that the patents were no longer even worth it. That would be a huge disruptor to the medical industry.
Michael Dawson [01:07:55]:
I mean, it could be even worse, because the company that produced the drug initially, if they had used LLMs anywhere in their process, technically they can't patent it in the first place. So I think we're very close to the point where there will not be any patentable artifacts in the world, because the fundamental rules have changed.
Jillian [01:08:19]:
I think we're already there, though, and patents are still around. So I think it's less about the computery stuff, you know. So, like, I work with a lot of companies, and they're like, oh, can I patent this process, like the software process? And it's like, well, no, you shouldn't even really bother with that. Go patent the process that you use in the lab to actually create the drugs. And so that's where everybody's at. The actual data generation is like a throwaway kind of thing, you know.
Jillian [01:08:47]:
It's just throwaway, because a lot of that has to be open anyways, like the data generation that you use to actually get to your drug, because it has to be peer reviewed and all that kind of thing. But everything that goes on in the lab can still have a patent. See? And this is why greed in the loop is such a problem, because there's always ways around these things. And then people wanna be making money, which, I guess.
Michael Dawson [01:09:10]:
I feel like that's its own episode in itself, I think: greed in software and tech companies. And,
Jillian [01:09:19]:
Yeah. It could. A little bit depressing, though. It's not like a fun topic.
Michael Dawson [01:09:22]:
Okay. So Jillian's like, I have tons of optimistic topics that we should talk about. Let's pick one of those, especially regarding any sort of AI or ML. Okay. I'm all for it.
Jillian [01:09:32]:
Yeah. Let's just talk about the cool stuff and not talk about, you know, potentially people flooding the market with crazy patents and then nobody getting their drugs. That's how that works.
Will Button [01:09:43]:
So speaking of cool topics, Alex, you work with a lot of companies implementing AI into their business processes. A lot of our listeners are in the DevOps field or deal with software engineering and infrastructure. What are the key pieces of advice you would have for them to continue their career and be ready for the next evolution?
Alex Kearns [01:10:07]:
That's a great question. I think it's about embracing it, but also being super critical of the solutions and the tools that are available. It's very, very easy to feel overwhelmed, I think, by the number of AI solutions that are available in any space. In DevOps particularly, if we're including the developer side of the tooling in there, we're just going to see more and more come out. I mean, there's a company I came across whose niche is Copilot tools, but they only offer models trained on your company's data. So they haven't got, like, a public offering; they're aiming at enterprise. The idea is they train a model based on your organization's code bases, and that is your private Copilot.
Alex Kearns [01:11:21]:
So I think there's lots and lots to come in this area. Operationally, I think the whole principle of DevOps is trying to break down that metaphorical wall between the two sides and empower developers to do operational tasks. I think we're seeing quite a lot come out around explaining operational events using generative AI, that kind of trace analysis of: all this happened, we've got 10 different data points here, how do we correlate them? How do we say this happened because this happened and this happened and this happened, and the chain reaction was this? Even something as simple as putting those things together in some sort of structured data and then using a large language model to summarize it into a Slack message saying, this has failed, and this is why. I think the one piece of advice I'd give people if they're looking to start experimenting with AI is: solve your own problems. Find the things in your processes and workflows that take the most time, and use AI almost as, like, your shadow, I suppose, would be a good way to describe it. So if you haven't got confidence in it straight away, tell it to do the same things that you would do, but build it so that it does them as a dry run.
Alex Kearns [01:13:11]:
Make sure it is going to execute the same steps.
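Alex sketches two ideas here: structured event data summarized into a Slack alert, and running an automation as a dry run before trusting it. Here is a minimal sketch of the first, again assuming Bedrock's Converse API plus a Slack incoming webhook; the event records, model ID, and webhook URL are all placeholders:

```python
# Hedged sketch: correlate incident data points and post an LLM summary to Slack.
import json
import urllib.request

import boto3

bedrock = boto3.client("bedrock-runtime")

# Placeholder events, as if collected from deploys, alarms, and the cluster.
events = [
    {"ts": "12:01:03", "source": "deploy", "detail": "v2.14 rolled out to prod"},
    {"ts": "12:01:41", "source": "alarms", "detail": "p99 latency breach on /checkout"},
    {"ts": "12:02:05", "source": "k8s", "detail": "payment-svc pods in CrashLoopBackOff"},
]

prompt = (
    "These operational events happened in order. In two or three sentences "
    "suitable for a Slack alert, explain what failed and the likely chain of "
    "cause and effect:\n" + json.dumps(events, indent=2)
)

resp = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # any chat model on Bedrock
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
summary = resp["output"]["message"]["content"][0]["text"]

# Post the summary to a Slack incoming webhook; the URL is a placeholder.
req = urllib.request.Request(
    "https://hooks.slack.com/services/T000/B000/XXXX",
    data=json.dumps({"text": summary}).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```

The shadow, dry-run pattern Alex describes has the same shape: have the model emit the commands it would run and log them for comparison against what you actually did, and only wire it up to execute once its proposals consistently match yours.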
Will Button [01:13:14]:
Right on.
Alex Kearns [01:13:15]:
Yeah.
Will Button [01:13:16]:
Right on. Very cool. And that feels like a good segue into some picks. What do you think?
Alex Kearns [01:13:26]:
Some picks. So I would say, do your picks have to be physical, or can they be software?
Will Button [01:13:36]:
No. Anything goes.
Alex Kearns [01:13:38]:
Anything goes. Okay. So I'm gonna go for some that are related, some that are not. So my first one is: AWS have a free-to-the-public generative AI experimentation website called PartyRock. This was born from an AWS engineer who built it internally as a way to experiment with large language models, and then it got adopted by AWS as an organization. So if you go to, I think the URL is partyrock.aws, there's, yeah, no credit cards or anything required. Go on.
Alex Kearns [01:14:28]:
You get a free amount of usage each month, and you can build these generative AI apps. One caveat: it's free, it's public, so don't go and upload, like, your company's financial records to it, or personal data. Right? If you're gonna use that kind of data, just anonymize it first. Then what else? I'm gonna go for something a lot of developers probably spend some money on, but I don't think you could spend enough money on it, which is keyboard and mouse. Like, kit yourself out with a comfortable mouse and a comfortable keyboard. The one that comes with your Mac or comes with your PC, it's functional.
Alex Kearns [01:15:39]:
Right? But after a while, it's gonna hurt your wrists.
Michael Dawson [01:15:43]:
So what do you got?
Alex Kearns [01:15:45]:
My little Logitech MX Master 3. It has far too many buttons on it to know what to do with, but it's comfortable. And then I have a Keychron K2 mechanical wireless keyboard. Nice. Really slim for a mechanical keyboard, but super nice to type on as well. And you said socks were cool.
Will Button [01:16:16]:
Oh, for sure.
Alex Kearns [01:16:17]:
That was my prep for this: socks. So I have some really cool socks, but they've all come from conferences. I can't give people links to
Michael Dawson [01:16:30]:
So maybe, like, which vendor gives out the best socks?
Will Button [01:16:35]:
Right.
Alex Kearns [01:16:37]:
There were some I got from InfluxDB last year or the year before last, which were really cool. They were, like, every color under the sun, super stripy, but really, really comfortable socks. Or the, like, holy grail of conference swag, which is the Red Hat red hat,
Will Button [01:17:05]:
Yeah.
Alex Kearns [01:17:06]:
Which you might be able to see, like, up there on top of the bookcase. Yep. Yeah. Swag is a bit of a debatable topic, but you can normally go to a conference with significantly fewer clothes than you need on day one.
Will Button [01:17:31]:
For sure. Alright, Jillian. What'd you bring for picks?
Jillian [01:17:37]:
Actually, I have a tech pick this week. I was looking for some type of UI to build out my Terraform code, mostly because of this AI product that I was talking about, where I deploy it on the client site and it has to have, you know, that database and S3 bucket, a couple Lambda functions, and then an EC2 instance. And I was like, wouldn't it be nice if there was just, like, a parameterized UI where I could just type and click a couple buttons, because I'm really in my I-don't-wanna-be-typing era of my life. And I found Resourcely, and it is very, very cool. I would like to point out there is no way that I can afford their plan that's actually very useful. So this is part pick and part me e-begging, you know. If the guys at Resourcely are listening, I could be the voice of your tool on the podcast, and, like, I'm sure that would just be amazing. So there you go.
Jillian [01:18:28]:
But it is really neat, and I like that the back end is all just run by Terraform and Cookiecutter, because those are just my two favorite tools of all time. Like, half my life is run with Terraform, Cookiecutter, and Makefiles. And once you grow into Makefiles, it's, like, 90% of my life.
Michael Dawson [01:18:44]:
We definitely have a full sponsor section on adventuresindevops.com, where if someone wants to be a sponsor of this podcast, they can go there and read what we have and then decide if it's for them. You know, Jillian, what I found that works is to ask your customer how the incidentals work, and whether or not the usage of third-party tools, to help cut down the amount of time that you would have to charge them for, would be included under the contract. A lot of times, in the contracts when I was doing consulting, you would include those in there. And of course, you would charge that to the customer as a way to optimize the value that they're actually getting out of what you're providing.
Jillian [01:19:25]:
It's always really tricky for me, just because the companies that I work for, they're not creating technology as their product. Like, if they could get rid of me, they would. Okay? Like, if they could just be like, you just go away, we just wanna work on our laptops with Excel, they absolutely would. So that one is always a little bit of a tough sell for me. Instead, I just start emailing people and try to get stuff for free, which is probably questionable in terms of ethics, but
Michael Dawson [01:19:54]:
It's not different. I get emailed all the time, people asking for stuff for free. So, I mean, I don't think you're doing anything especially wrong there.
Jillian [01:20:01]:
Alright. Well, thank you for the ethical vote anyways. I do appreciate that. Sometimes I am a little bit like, maybe I'm a little bit too much on the side of the e-begging, but we do like money, so here we are. But anyways, it is a really great tool. It actually does generate you, like, this really nice UI. It has the sort of parameterized, multi-tenancy setup built in that I really like, because I find a lot of tools, they just, I don't know.
Jillian [01:20:30]:
They just don't have that, and that tends to immediately not work for me, because I'm so rarely working on my own AWS account. Right? Like, my AWS account is as bare-bones as it can possibly be, just for dev for whatever it is that I'm working on, and then everything else is deployed on client sites. So it did genuinely look like a really nice tool that has, like, everything that I want. And I think that I can even make the free plan mostly work for me, but we'll see.
Will Button [01:20:57]:
Right on. Warren, what'd you bring for a pick?
Michael Dawson [01:21:00]:
Yeah. So I've got something really interesting. It's actually an old research paper from Yale, from 2010, and the name of the paper is "Comparing genomes to computer operating systems in terms of the topology and evolution of their regulatory control networks." It compares the Linux operating system to the E. coli bacterium. And I find this really interesting from an architecture standpoint: how what we build in technology is so wrong if we look at the evolution of biology over millions of years. You look at the evolution of E. coli and you see what's currently there. It's only a six-page paper. It's very short, and it really gives a lot of insight into the sorts of things we're building and whether or not we're building them effectively.
Michael Dawson [01:21:51]:
And being in the infrastructure, you know, systems design space, having new insights for how to build things or what actually is really important, I always find really interesting.
Will Button [01:22:03]:
Is that from Wade Schultz?
Michael Dawson [01:22:06]:
I don't think so.
Will Button [01:22:07]:
Okay.
Michael Dawson [01:22:08]:
But I could be totally wrong, so I don't wanna swear to it, and I will have to confirm for you after the episode is over.
Will Button [01:22:15]:
Right on. Because Wade's a really good friend of mine, and he's the head of computational health over at Yale, and that sounds exactly like something he would author. Right on. So my pick for the week: I'm picking a book this week, a sci-fi book called Juris Ex Machina. I think that's how it's pronounced. It's from John W. Maly. It's a really cool book that has a lot of tie-ins to the episode we've talked about here today. It's a future Earth where the legal system has been largely replaced by AI, and the main hero of the story has been wrongly convicted and goes to prison.
Will Button [01:23:01]:
But it's a really well-written book. It's got a lot of super cool nerdy tie-ins in it, and the writing is well done. It's fast-paced, so you get sucked into it immediately. And on top of that, in about a month we're going to have John on the show to talk about the book and AI. So I'm looking forward to that episode. And that's my pick for the week.
Michael Dawson [01:23:27]:
Awesome. I'm gonna have to read this in preparation.
Will Button [01:23:30]:
Yeah. It's been a really cool book. Like, I struggle to get into fiction books, but this one just, like, sucked me right in. Right on. Alex, thank you so much for joining us on the episode. It was a pleasure talking with you.
Alex Kearns [01:23:44]:
Thank you for having me. It's been good fun.
Will Button [01:23:46]:
Right on. And to all our listeners, thank you for listening. Appreciate your support. Jillian, Warren, thank you for joining me and co-hosting with me. And we'll see everyone next week.
Michael Dawson [00:02:02]:
UberTask Consulting. UberTask Consulting.
Alex Kearns [00:02:04]:
I've heard every every possible permutation of how to pronounce it. But you did pronounce my surname correctly, which a lot of people don't do.
Will Button [00:02:13]:
Well, welcome to the show, man. I'm happy to have you here.
Alex Kearns [00:02:16]:
Great to be here. Thank you.
Will Button [00:02:17]:
Cool. So give us a little bit about your background now that we know how to pronounce
Alex Kearns [00:02:22]:
where you work. Yeah. Great. I mean, I I come from a a software engineering background, if you cannot go way back to to the post university career, and then made the move into into cloud, both kind of internal consultancy and and sort of platform type work, and then consultancy in terms of of the traditional customer external facing consultancy. But I I'm still very technically driven. I I like to get my hands dirty. That's that for me is is what's the the most exciting part. It's building things, breaking things, learning from it.
Alex Kearns [00:03:04]:
Yeah. Paper architecture is is not my not my fun.
Will Button [00:03:10]:
I hear you. There's there's, like, a certain, I think, for people who succeed in this industry for a long time, there's a certain amount of entertainment value that you get from your job.
Michael Dawson [00:03:22]:
Oh, I'm definitely doing it wrong then. I mean, I I my new my new belief is that when I retire, I'm just gonna go back into drawing boxes and lines. Like, that's just, like, the best part of my job. Like, when I can get on a piece of paper or a whiteboard and boxes and lines, you know, not even any words necessarily. Like, that's prime enjoyment right there.
Will Button [00:03:45]:
I'm gonna get my typewriter, and I'm gonna go to my cabin in Montana. Screw all you guys.
Jillian [00:03:51]:
So so speaking of just, like, I'm gonna do something really simple and turn my brain off. I bought, like, a paint by numbers kit and that has been it just reminded me a lot of what you said because it's I just sit there and I just paint in the numbers and I don't I don't care that I'm nearly 40 and doing an adult, like, an activity for children. It's great. And it's just so you just turn your brain. It's great. It's great. Anyways
Will Button [00:04:12]:
I've actually heard quite a few people comment how, like, therapeutic and relaxing that is.
Jillian [00:04:17]:
They really are. They're very relaxing. It's great.
Alex Kearns [00:04:21]:
I think you need to rethink that it's the tech industry brains are so switched on all the time. There has to be a way to to switch off. Otherwise, you know, work life balance is is pretty nil.
Michael Dawson [00:04:36]:
Well, it's interesting you bring that up because, actually, very commonly, we're keeping in a always production mode. Like, everything we do is happening at a critical level, and we have to pass that test. Like, whatever work we're doing, there's no practice involved. It's always run time for us. And there's a lot that there's there's a bunch of research out there that says, like, we have to go into practice mode where mistakes can be made, failures can be had, and we can actually learn from it intentionally. And without that, we will like, that's actually one of the biggest causes of burnout. So, you know, if it's going home and doing watercolor, water painting, you know, whatever it takes there, if that somehow helps you recharge realistically, definitely do it.
Will Button [00:05:18]:
There's also a lot of evidence showing that, taking on those kind of activities while you're not consciously thinking about the problem, your subconscious is continuing to work on it, and that's when someone like the big insights and big breakthroughs for you occur. Like, I know there's a really common antidote of, like, when you're in the shower and you have this great idea, that's like a a really fixed example that whole process in action.
Alex Kearns [00:05:44]:
This is so true.
Jillian [00:05:45]:
Don't do important jobs anymore. Like, I'm just I'm just not doing a fine. Like, I mean, like, they matter, but not, like, not on, like, a huge I'm always in production. Everything has to be perfect. Like, it's fine. It'll it'll be fine if it's if it's done a couple days later. I used to do important stuff, though, and I don't want to anymore. So that's I suppose that's your lesson as this is where I'm at doing my paint by numbers and things that don't really matter.
Will Button [00:06:13]:
Well, one of the things we were talking about before we started recording the episode was, leveraging Gen AI. And, Alex, you've got some experience with that, specifically some experience of, like, real world examples where you've done that. And I think that's one of the big I think that's one of the cool things about AI. You know? It it goes through this this buzz cycle, but people who are actually putting it to real world use. So I'm interested to hear your take on that.
Alex Kearns [00:06:43]:
Yeah. I think it's a it's a really it's a really interesting topic. It's it's something where you go back eighteen months maybe. ChatGPT was kind of just about starting to be established as almost household name. People aren't necessarily using it actively, but tools like that are are becoming more and more common. And then I think with with any technology, it's it's when it gets kind of democratized, when it gets put in the hands of of people that aren't having to to spend millions on hardware and and do those kind of things, it actually really starts to become an awful lot more prevalent. So, I mean, as as we saw with with any technology, so you think back to to kind of mid twenty tens, I suppose, where things like AWS Lambda came out, so kind of serverless technologies. And then the years after that where every SaaS company that existed was going for a we now have a serverless offering.
Alex Kearns [00:08:00]:
And it's like, what it is it serverless? Is it just managed service? Is it what is your what is your, definition of serverless? Right. So you see the buzz around that. You you see buzz around even cloud, which is I mean, cloud is is public cloud, twenty years old if you if you go back to AWS's first service. So it's not it's not new. It's not shiny anymore. And AI, I think, is going for the same thing, but just at a a much, much faster pace. So
Michael Dawson [00:08:38]:
that's a really interesting comparison, though. Like, I just I wanna stop you for a second there because I I feel like there's a there's sort of a weird duality where serverless made it easier for people to get into building stuff and releasing applications because it didn't require you to purchase or allocate huge data center capacity in order to make that happen. I feel like the where where the point is AI is currently at is it actually does require only the most expensive access to hardware or service providers to be able to get that. So I don't think like, I don't know if it's been democratized yet. I mean, there's a lot of services out there that claim to get you access to some facet of AI, and I know there's, like, chat GPTs and the LLMs out there that questionable how much value they're returning to you. But I think for, like, the real core aspect of being able to provide the underlying resources or technology to people, I think, is still much too far away.
Alex Kearns [00:09:36]:
I think the way you described it is great. It's it's giving people access to a a facet of AI. I mean, if if we think about AI as a general topic, artificial intelligence more broadly has has been around for decades. It's only really that when you sort of start breaking it further down into machine learning and deep learning and and now generative AI is a a kind of subset of that. The the the generative AI and the AI, terminologies are now almost interchangeable, certainly from an industry perspective. I think it very much depends on what what people are wanting to do with AI and and how how specific their use case is. So if we think about things like like chat GPT, that's obviously a a very specific use case. It gives you very generic responses to things.
Alex Kearns [00:10:39]:
It hasn't got access to your, your specific business data, but it's free. And, obviously, with any any free product, you are you are normally the product. Of course, you can you can opt out of things, but by default, it's it's collecting that chat history, to improve the service for for everyone. You've then got things like, Amazon Bedrock. So Bedrock was AWS's generative AI offering that came out at their conference twenty twenty three that was announced. So Bedrock offers kind of two two different modalities, I suppose, in in how you can use it. One is is on demand, where you pay per thousand tokens. So that's the way where you can you can go and build something.
Alex Kearns [00:11:37]:
Again, it's it's similar to chat GPT in its sense of its generic knowledge. It's whatever the large language model has been trained on. But because, as with any any kind of cloud hosted managed service, they can take advantage of economies of scale and give you pay as you go pricing. The moment that you want to fine tune that model or train a different model with your specific data, you go from paying kind of fractions of a cent per thousand tokens to having to commit to 30,000 for for three months because you are now the one that's bearing the cost of all of that hosting rather than than AWS making it available. And, of course, there are there are ways that you can augment the use of large language models with your own data without going to that extent. So even just including examples of your specific data in a prompt, or kind of retrieval augmented generation where you can can load your own documents into a a vector database and have it retrieve that data, from there. There's lots of ways you can kind of get get quite a long way without spending huge amounts of money. But, yeah, the moment you want to get to the I have complete control of my model, I train it with my specific data, then you'd hope that that's when you start getting to the the type of customers who can afford to to spend that kind of money.
Michael Dawson [00:13:16]:
So in your capacity at your your current job where you're interfacing with with clients and whatnot, do you find, there is one particular provider or one set of tools that you're constantly going to, or whole breadth of set, but for different types of tasks to help assist you?
Alex Kearns [00:13:34]:
Yeah. So I think it we we are a an AWS consultancy, so everything tends to to center around Amazon, tools. But, of course, Azure and Google both have have AI offerings now as well. Although Microsoft, with their their investment into OpenAI, are the only provider that that offer the OpenAI models on a public cloud, and I think that will stay the same for for a number of years. In terms of the the tools that we reach to, there's definitely a combination of kind of vendor specific but also open source tools. So in terms of hosting large language models, the the kind of most frictionless way to to access them is is through Amazon Bedrock. Really, really straightforward API, easy to write scripts to to interact with that either synchronous, asynchronous chat, however you however you need to. But then you can start to bring in open source tools.
Alex Kearns [00:14:39]:
So things like, Landchain is a really popular open source framework where you can use Bedrock. You can use Microsoft's, hosted models, Google, OpenAI, however you need to to interact with your your large language models and then bring in those other parts like retrieval augmented generation where you can say, I've got a database full of full of documents. These are my business documents, my sales reports, my financial reports, whatever they need to be. And then when your large language model takes your prompt, it can then use that data that you've provided, without having to specifically train the model to augment the generation of its response. And there's lots of open source tools that that can do things like that. I think what'll be really interesting as as these kind of I wanna say years, but I think it's gonna be months, with with how things are going at the moment. As the next few months go by, so many open source packages are popping up, but it's how these open source packages stay around long term. So it's unless they are backed by by a big business, what makes them commercially sustainable? I think we've seen seen frameworks like CrewAI, which is a a framework for building AI agents and multiple agents and kind of orchestrating, like, what agent would get called for a particular type of task.
Alex Kearns [00:16:25]:
They've now introduced all the commercial model where they can take on some of the management and, observability around those agents, or you can just use the framework open source. So I I I I feel
Michael Dawson [00:16:40]:
like I wanna ask about that. Do you you're supporting your customers in utilizing AI within their businesses. Have you seen a significant change, say, over I mean, I I think chain things are changing very frequently, in a month period. So, you know, compared to, you know, early twenty twenty three Mhmm. To now, what like, what's the next thing? Like, what are customers now most interested in utilizing? Is it one of the particular providers more so than others? Is it just a smattering of everything, or do you really see something, taking off specifically in the businesses that you're working with?
Alex Kearns [00:17:18]:
So I think it's it's worth worth making it kind of provider agnostic and thinking more use case and and drivers for for use of AI. So thinking back twelve months, or even twenty four months, there was a lot I think there was a lot more of the people wanting to use AI for the sake of using AI. So we've we've we've had conversations with customers who have said, my board member has said, as a business, we've got to be using AI because investors wanna see it, public needs to see it. Can you help us use AI? It's like, well, of course, but let's let's take that step back. Let's try and work out where there is a genuine use case. I think we've we've kind of getting through, if we use the the Gartner hype cycle as a a framework, I suppose, here where I think we've we've definitely passed that peak of inflated expectations, that that kind of top of the top of the hype hype cycle where everyone is is using AI for the sake of using AI. People wanna do way more with it than is is really feasible and ethical, sustainable, everything. And we know I think we're we're starting to quite rapidly get into that point of, what the hype cycle terms as the the trough of disillusionment where people are thinking, I'm seeing AI so much.
Alex Kearns [00:18:59]:
Every service, every tool, every news article. I mean, even in the in The UK today, our prime minister came out and announced a big plan for for rolling out AI and growth programs across the country. So it's it's gone from just the those big tech providers to to government, to politics, to everything. It's dominates everywhere. And I think it's almost sort of starting to see a little bit of fatigue in companies where it's a it is a case of every vendor is talking to us about AI or their latest AI powered offering. And the that next stage, which I don't think we're far away from now, is is working out and really being visible the use cases for AI that are here to to stick around.
Michael Dawson [00:19:54]:
So the one Yeah. No. No. I totally get it. I I mean, it is quite in the media. It's everywhere, as you said. And I'm I I sort of wanna ask our resident, ML expert here, you know, what she's seen, you know, comparative to with what you brought up. I I know she loves to talk about it.
Jillian [00:20:13]:
I love AI. I think I think AI is, very cool. I'm still really seeing people on the upward side of the hype cycle. Like, I have a small AI service that I offer. I had to stop offering it, like, publicly because people were just coming in with these very outsized expectations. And now I'm like, okay. We have to we have to schedule, like, a ten, fifteen minute talk first so that I can, like, you know, adjust some of these expectations and things. But besides that, I think it it's very it's great if you're using it kind of for what it's good at, and then it's terrible if you're not.
Jillian [00:20:46]:
Like, you know, I think probably a lot of the public policy stuff might be a little bit, maybe a little bit, like, outsized in terms of what it can do. But maybe, you know, but maybe people making these kind of requests and just being like, well, shouldn't it be doing this? That's probably what's gonna drive innovation forward. So I have kind of I have kind of, like, mixed feelings about it, I guess.
Will Button [00:21:06]:
Well, I think that's there's, like I wanna drill in on that for a second. Like, you having the right expectations for it, Alex and Jillian, y'all both brought that up. What are some of the good use cases where you've seen AI really make a difference, Alex?
Alex Kearns [00:21:27]:
So I can talk about one one kind of specific customer, use case where as part of a migration, there was, we had 250 PHP, cron jobs that were running, on an on premises server. These were, some of these were 15 years old, kind of dating back to a PHP early PHP five and PHP four with with some of them. And some of them were they were little scripts. Some of them were 50 lines. Some of them were four or 500 lines. And then you have the ones that import from different files, and we we kind of took the view of actually, is is there a way we can can do something here to speed up the analysis of of these scripts? There are a few things that we we need to know. We need to know what databases does the script talk to. Does it interact with any any other services, so APIs? Does it interact with things like, in the case of the on premises servers, things like the the send mail binary, on a server because this particular customer, for the rest of their business used a a managed SMTP service now and not just sending emails from from on premises servers.
Alex Kearns [00:23:00]:
So we did a bit of a proof of concept around just well, primarily just as a way to to speed up our own analysis, because yeah, I mean, nobody wants to read through 250 script line by line.
Will Button [00:23:16]:
No. Say it's not so.
Michael Dawson [00:23:18]:
Well, I mean, if you're looking at just as an example, you know, you you said, accessing particular database. Right? Like, if you're have calls out to some on prem or third party provider data provider and you're and they're migrating to AWS, they may be going to either a NoSQL option or or RDS. And so you you wanna make sure that those get converted, assuming you just port the scripts, you know, lift and shift into easy twos. So, I mean, a question that I may have is, where do you find yourself on the risk scale? Like, what happens if the l m, which is not gonna be perfect? I like, I mean, obviously, if it makes up tables and whatnot that aren't being actually used, it's that's fine. But I think the false negatives would be more of a problem. Like, what happens if it missed the table? Would that have caused an issue during the migration, and how did you potentially think about, mitigating those sorts of risks?
Alex Kearns [00:24:08]:
Yes. So what what we did was, rather than kind of invest the time upfront to build a a fully working solution, it was let's let's do a a test on a few scripts first. Let's let's try out over three or four scripts scripts that we have done the analysis by hand on. So we know we've got a a golden, answer that there is a ground truth to to what we're comparing against. When we did run it across the full dataset, then we did the sampling to to make sure that a reasonable portion, of them were accurate. The other thing that we we did with these was when migrating those scripts made use of different environments. So these were going to, Kubernetes cluster in AWS. And because we had a data in a staging and prod environment, we knew that we could run these scripts, in a in a sandbox preproduction environment.
Alex Kearns [00:25:10]:
If they failed, kind of so be it. It's not going to to bring the business down. It's not gonna send customers emails because we can we we can trap emails. We can, make sure that any calls outside of a particular network are are monitored and blocked. So we could quite easily see, actually, we haven't got to just cut over straight production. I think the the point you're the risk and the the the trust that we often put in or people are increasingly putting in large language models is really interesting because as you start to see it used in more in more regulated industries, there's an in incredible amount of power, I suppose, that large language models are being given. And when we get to that point of a model being able to be the only process and the only tool in a loop? I don't know, but I don't think we're there yet. Even when you think about more traditional machine learning use cases, like, one of my favorites is the the kind of fraud detection in banking where you make a purchase on your credit card.
Alex Kearns [00:26:38]:
If it doesn't look like something that you would typically do or it's it's in a country that you haven't been to before, an anomalous amount, then models should pick that up and say, yeah. It's not a not a transaction we're gonna allow to to process because we think it's not you. But that's been honed and refined over probably decades. Our large language model is going to be accelerated quickly enough for us to be able to make use of of that level of power and that level of trust.
Michael Dawson [00:27:14]:
I mean, that's a that's a good point. I mean, these scripts that you were migrating, I mean, worst case scenario is they just didn't run, and they didn't necessarily impact user activity. And if they crash, you get the logs, and then you can go investigate. So using an element in this area was inherently unrisky. It just helps speed up the initial analysis. But at the end of the day, it did didn't really matter. Right? You know, the the whole amount of work, it's like, well, a human could have made a mistake there too, and it wouldn't have had that big of an implication. But, how are you starting to see customers utilizing AI in a well, in, like, not ones that should be risk averse, but are leaning into it more than they should.
Michael Dawson [00:27:58]:
And how do you even evaluate that, or how do you better avoid that? And I think you you did sort of lean on doing sampling and verifying the outputs, but maybe there's something holistic because I I feel like with the adoption of AI coming more and more, the companies do will do the wrong thing. Right? Engineers will either accidentally via negligence or just, you know, laziness or whatever it is, you know, get in a state where, like, this is a great way to absolve myself of the challenge of doing all of this work. How do we counteract that? So
Alex Kearns [00:28:33]:
I think with with customers, we're we're quite upfront. So I think probably diff different consultancies, different, different companies may have a an alternative view. But speaking from a, yeah, employed official hat, if a customer is trying to do something that doesn't make sense and we don't think there's going to be kind of significant business value in it, then we will be aware be be open about that and and make the customer aware that actually what they're doing with it isn't likely to be successful. Obviously, if if there are ethical or or legal concerns around what they're doing, then kind of we're a a technical consultancy. We can raise concerns, make make customers aware, but, ultimately, due diligence is either on a customer side.
Michael Dawson [00:29:42]:
It's it's coming, though. Right? Like, I I I really would like to see some concrete accountability. I mean, we don't have anything quite to the level of the Knight Capital disaster where they lost, like, I wanna say, like, $460,000,000, and that didn't involve AI at all. That was just pure automation and legacy systems causing a mistake. And now we definitely have automated car companies that I believe there was an incident where there was a death in Arizona or New Mexico a lot of years ago, and the company didn't get sued or anything like that. So, I mean, it does seem like accountability is something that's going to come up more and more, and I don't really see anyone working on, adequate safeguards here. I mean, there's like, oh, we're afraid of AI, but it's we're not really talking about the companies that are utilizing it, I feel like.
Alex Kearns [00:30:30]:
So I think there's there's two parts. There's there's accountability and there's explainability, as well. So, again, thinking more about traditional machine learning, some some algorithms are significantly more explainable than others. So there are a lot of algorithms, and increasingly so as we get into generative AI and and large language model space where they're a bit of a black box. They have loads of data that go in, and it's very hard to explain why a particular result has come out. And again, models aren't deterministic, so you can do as much testing as you want, but you can never be truly 100% certain. You can be very, very confident. But if anybody can give a % guarantee, then that probably isn't machine learning that's that's making the final call.
Michael Dawson [00:31:28]:
Yeah. But you could ask the model if it's correct. Right?
Alex Kearns [00:31:30]:
Yeah. I think It's right.
Jillian [00:31:33]:
I mean, if it says it's right, it's it's obviously right. Haven't you ever helped a kid with their math homework before? Same thing.
Alex Kearns [00:31:40]:
There there's some interesting stuff that, the AWS are are working on. So for a while now, they've had Bedrock guardrails, which is there to to try and prevent a large language model from responding about certain topics. But, of course, you have to give it a you have to give it a list to start with of topics to not talk about. And if you haven't thought of the topic, then, yeah, you again, you are you're relying upon a human to to have that extensive list before you can prevent the model. Another one that's come out very, very recently, and it's the last within the last month or so, is, AWS make use of automated reasoning, in all places across across their cloud. So when it comes to kind of cryptographic operations and making sure that encryption is doing what it should be doing and not making sure there's mathematical proof behind some of these operations. That's something that yeah. So that is being used across all of AWS, but only very recently has now come to Bedrock.
Alex Kearns [00:32:59]:
And I I think the feature is called Bedrock automated reasoning where you can build rules up to say, I want mathematical proof the response given is in accordance with these rules, which is quite cool, and I'm I'm yet to play with that. But it looks very promising. And AWS are are generally fairly good on the research and that side of things. They're certainly not not perfect in terms of some of the decisions, I think, around releasing services that are maybe, like, just the wrong side of MVP. We have seen a few times, but that is, I think, a side effect of being customer obsessed in the perhaps as a a customer that needs a particular set of features, and the easiest way is to build a service for it. But maybe it's not suitable for every use case and every customer just yet. But, yeah, I think there's some some really interesting stuff going on around automated reasoning around for AWS's, first party model. So the Amazon Nova family of models, they've got the, AI service cards, which go into quite a lot of detail about how the models are built and are are fairly transparent around the way they work.
Alex Kearns [00:34:29]:
And that's that's something that end users of model need to be needs to be more aware of. I think at the moment, it is very, very easy to log on to to kind of or chat.com, I think, is the domain that OpenAI spent quite a lot of money on. And, yeah, type something, but you don't know where that data has come from to to build that response. You don't know the reasoning behind that response being given. I think as as these systems get more and more embedded in people's processes and and workflows in business, it's needing to understand the why and the ask the question of or ask the questions and have the ability to give a because, this has happened because the model is doing this.
Will Button [00:35:27]:
I don't know.
Michael Dawson [00:35:27]:
I I I worry that if we're relying on education to make people be able to utilize our AI and technology better, I I I feel like we're not going in the right direction. Like, I and I I say this because very frequently in the security domain, we have the same sort of mantra whereas you are gonna stop security events, prevent attackers, remove vulnerabilities through straight education. I mean, there's only so much that you can achieve there. And if your your security strategy is educate the users, I mean, it you might as well have given up already, if that's your and I worry if that's the direction that we're going where, oh, in order to use the models effectively, in order to understand what's going on, you have to be an expert still, which I feel like has been the case for a lot of years now, Going back twenty, thirty years in our ML development, it's always sort of been the case. And until we can break through that, I feel like, you know, real adoption really isn't going to amount to good things. And we're really close to utilizing it in more and more concerning situations where security is involved or human safety is involved or whatever Jillian is doing with automatic creation of, you know, protein folding. You know, I I honestly can't remember, but I I am I am curious there.
Jillian [00:36:47]:
Being the human in the loop like that. If we're we're we're doom spiraling, it's, the greed. Oh, man. Why why is it echoing? What happens? What did I do? Okay. I'm sorry. I've been calling for a few minutes here while I figure out what happened to my my camera.
Michael Dawson [00:37:03]:
What Jillian's saying is she is worried that she's in an envelope.
Jillian [00:37:08]:
But I'm the greed. I could very well be the greed in this situation. It could happen.
Alex Kearns [00:37:16]:
I think there's there's
Jillian [00:37:17]:
the lack of some really interest
Alex Kearns [00:37:20]:
Just kidding. I think there's some really contact. And there's some really interesting ethical bits as well around it. So the topic of kind of self driving cars has come up a few times as we've been talking. And the things that as a as a human driving your car, you might instinctively make certain decisions. So if you've got a self driving car and there is a a certainty of a crash, but one scenario means five people. One scenario means two people, but they are young people. Then it's like, what what does the car choose? Right? At that point, there's no there's no human emotion.
Alex Kearns [00:38:10]:
There's no nothing in it. It's it's a someone has to program a model to say this life is worth more than this life. And how do you do that? Right?
Michael Dawson [00:38:22]:
So there was actually an interesting and so this is the trolley problem. Right? Mhmm. And it's sort of the moral dilemma more than an ethical one. It was actually released I think there was a quick study by Stanford where, like, it gauged random sampling of people of which they would pick. Like, where where should the car actually go to? And I think there there was like, at the end of the study, there was, like, a clear hierarchy of what humans actually preferred. And I feel like it was something like, you should kill cats first and then, old men and then old women and then and then dogs or something like that. And I think it had to do with the fact that, like, cats will just get out of the way. Like, that was the expectation that that these people will either get out of the way or, you know, in worst case scenario, this is what they would pick.
Michael Dawson [00:39:07]:
And it was really interesting that that they had done this, and there was, of course, some, you know, not nice things said about the fact they hadn't gone through with the study. But, you know, I think humans will, sort of adapt to preferential picks for what they are okay with.
Alex Kearns [00:39:24]:
I think it's it's it's, that's an industry. AI is is going to drop to one way or another. I I often see people who are saying JiraCy is going to have as much impact as cloud did. And if it is having that much of an impact, then with cloud, there there weren't really that many drawbacks that I can think of. There were there was there was concerns. There were there was uncertainty around who owns my data, who controls my data, is my data secure if it's in Amazon's data center versus, a server in my own cupboard. But with generative AI, there's there's almost that kind of rabbit in the headlights approach that a lot of people are are taking at the moment where I think there is a real a real danger without, as we talked about, without the control, without either education or forced guardrails, to be able to to use this effectively. I mean, I remember seeing, seeing an article.
Alex Kearns [00:40:46]:
I think it was tail end of last year. It was it was being talked about in the yes, and there were talks of develop models being legally required to report on things like whether their models could be used for purposes that would have a a national security implication, which I think is is absolutely absolutely right from a, again, from a moral and security perspective. There's always that that kind of fine balance of any emerging technology. How how far do you regulate it? If you regulate it too much, does it stifle or prevent innovation? But on the flip side, if something terrible happens because somebody has used a large language model to to teach themselves how to to carry out an attack, then the argument is, well, there wasn't enough regulation.
Will Button [00:41:59]:
I think going back to the self driving car example, the solution there is just go into the settings of the car, and you get a little order of preference. Like, I'm okay with hitting this. I'm not okay with hitting this, and then problem solved. Right?
Alex Kearns [00:42:15]:
It's it's so tough. It's it's
Jillian [00:42:18]:
I mean, we choke, but, like, this is what we unfortunately have to do as humans sometimes. Like, that's how all of health care works. Right? Like, when when there's a shortage, like, during COVID, there was the shortage of the ventilators. You think the doctors didn't have to, like, make decisions? Like like, it's very unpleasant, and nobody nobody wants to think about them or talk about them, but the reality is we do anyways, and it has to happen. And I'd imagine that it also has to be a part of, like, self driving cars because cars are terrible, terrible death machines. I hate driving. I if I mentioned the show yet, just how much I hate driving and having to have a car. It's such a pain.
Alex Kearns [00:42:56]:
I think
Michael Dawson [00:42:56]:
I think it's it I think it's because we're in this intermediary state. Because once we get to the point where there are there's automation all around us and we have adapted to that fact, it's no longer as big of a problem because it's whose fault is it if you step in front of a train? It's so like, oh, well, the train was supposed to stop. It was, you know, supposed to know. And some of them do have safety protections in place. But, realistically, you don't go on the tracks when the train is coming unless you have some reason you really want to be there. And and I think, realistically, the same thing will happen if if we're in the automated car space where the AI self autonomous cars are driving around. I mean, realistically, you know, don't step into the street. I mean, why would you go there? And I think with the the AI cars, we will need want to really redesign, egress and flow for traffic, and we will be able to do that effectively once everything has been automated.
Alex Kearns [00:43:53]:
I think there's a really interesting change coming, similar to what we would have seen with processes being automated in businesses, where generative AI promises a real step change in efficiency. Is there going to be resistance from within businesses to work on these generative AI projects because people think it's going to put themselves out of a job? It's a really challenging space to try and thread the line between improving efficiency and making redundancies. Going back many, many years to things like the industrial revolution, changes like that inevitably mean that some jobs are no longer needed, but people adapt, they find different roles. And if this continues at the pace that it's going and disrupts industry at the pace it's predicted to, then people need to change with it, because, yeah, very quickly, you're already seeing job adverts that list experience with AI as essential. It's no longer a bonus. You must be able to work with Copilot tools and effectively know how to integrate AI into DevOps or migrations or anything like that, because companies need it and their customers expect it.
Will Button [00:45:53]:
It's been really interesting to see how the job landscape has changed just over the last twelve months with the introduction of AI. You know, it felt like twelve months ago using Copilot and tools like that was seen as cheating. But now it's just seen as part of the job, and I'm interested to hear how people are testing or qualifying those skills in the employment space.
Alex Kearns [00:46:22]:
I take a fairly liberal view on the use of tools. If you want to use them, then that's fine. It does come with other challenges, though. If someone is overdependent on AI to do the job and that's masking real, true understanding, then I don't think AI should really come into it. Because if you're using AI-generated code but don't understand the code that's been generated, you might have a good chance of building software or building solutions that are good, but there's also a pretty good chance that they're not, or you've left a security hole in it, or it's going to cost 10 times as much, because there's one best practice that you've not realized, because these models are trained on open code. So from my perspective, I mean, I've tried a few different Copilot tools. I got onto GitHub Copilot fairly early doors, used Amazon Q a while ago as well. At the moment, I'm trying out the Windsurf editor and Cursor as a sort of more integrated IDE experience. And both of them are fairly good.
Alex Kearns [00:48:01]:
There's some really cool stuff if you wanna just hack about on a project. I do quite a lot with the Streamlit Python package, which is a great way to build data and AI apps with an acceptable user interface without having to know how to write good front-end code. And being able to just say, create a project using Streamlit, it needs to be able to interact with Amazon Bedrock, stub out the methods for me, and get something fairly quickly? It is good for that. I think the only way these tools will excel, though, will be with the true understanding and context of what would be in a human brain. It's getting into the flow state of: I understand this whole repository, I see how the different pieces connect together. But also, if you've got microservice architectures and, actually, the context you need is in a different repository, then it kind of needs that as well.
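For anyone who wants to try that workflow, here's a minimal sketch of the kind of Streamlit-plus-Bedrock scaffold being described. It assumes AWS credentials are configured and Bedrock model access is enabled in your account; the model ID and region below are illustrative placeholders, not recommendations.

```python
# Minimal sketch of a Streamlit app talking to Amazon Bedrock.
# Assumes AWS credentials are configured and model access is enabled;
# the model ID and region are illustrative assumptions.
import boto3
import streamlit as st

REGION = "us-east-1"                                 # assumption
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumption

st.title("Bedrock playground")
prompt = st.text_area("Prompt")

if st.button("Send") and prompt:
    client = boto3.client("bedrock-runtime", region_name=REGION)
    # The Converse API gives a model-agnostic request/response shape.
    response = client.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    st.write(response["output"]["message"]["content"][0]["text"])
```

Save it as app.py and launch with `streamlit run app.py`, and you have a rough prompt playground to hack on.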
Alex Kearns [00:49:14]:
So, of course, there are limitations. There's only so far these things will go. But I think, from my stance, if somebody turned up at an interview and they wanted to use an AI tool as part of a tech exercise, for example, then it would be a bit hypocritical of me, right, to say no, given this is happening. But I'd certainly be more aware, and a bit more careful about asking the right questions about the code that has been generated, because
Michael Dawson [00:49:59]:
Well, I think that's sort of one of the really important parts here: most companies don't spend enough time evaluating their interview process for what the right questions are. And now they're starting to realize that AI is getting in the way of them, quote, unquote, effectively evaluating the candidates. And I think that really goes to the fact that the question didn't make sense, their evaluation strategy didn't make sense, and there are tools that can easily solve that. And if a candidate can solve it with a tool during an interview or a take-home test, then they could, and likely would, be using that tool during their job. And I think we're already starting to see some companies intentionally telling candidates to use AI, LLMs specifically, to solve the problem, because it is something the other engineers on their team are utilizing, and they would expect someone who comes onto the team to also understand how to utilize those tools, because things like configuration or linting, etcetera, will be changed fundamentally by AI. And by the same token, if someone you're hiring onto your team doesn't have experience using LLMs to help them, or handling their weaknesses, whatever they are, then they're not gonna be as effective a team member when the rest of the team is expecting someone who is able to do
Alex Kearns [00:51:21]:
that. I mean, absolutely. And I think with how much of industry and how much of business generative AI has the potential and promise to reach, there are almost prerequisites that a lot of people aren't really thinking about at the moment, because they kind of get blindsided by "AI is great, AI is gonna solve all the problems." Whereas in reality, there are certain things that are foundational to successful use of AI. Like, okay, AI is software. How do you monitor your software? How do you make sure the responses from your AI-powered applications are what you expect them to be? If you were deploying an API, you'd spend time thinking about, okay, what are the useful metrics? Is CPU a useful metric, or is latency a useful metric? What are the things that actually have an impact on the end user of this? And AI should be no different.
Alex Kearns [00:52:35]:
You should have that kind of production wrapper. You should have monitoring. You should have security concerns that you're proactively protecting against. But then also, it's that foundation of data. If you're an organization that has lots of data in lots of different places and you want to use it in AI, you need to have a good data platform. And you have conversations with people around productionizing some of their AI proof of concepts, or very early stage experiments they've done, and you'll say, okay, well, you've managed to get these little samples of data from various data stores to prove this as a concept could work and is worth putting into production at a wider scale.
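As a rough illustration of that production wrapper idea, you can instrument an LLM call like any other dependency, with latency and response checks as first-class metrics. In this sketch, emit_metric and call_model are hypothetical stand-ins for your own metrics pipeline and model client.

```python
# Hedged sketch: treat an LLM call like any other monitored dependency.
# emit_metric() and call_model() are hypothetical stand-ins.
import time

def emit_metric(name: str, value: float) -> None:
    print(f"METRIC {name}={value}")  # stand-in: ship to CloudWatch, Prometheus, etc.

def call_model(prompt: str) -> str:
    raise NotImplementedError  # stand-in for your Bedrock/OpenAI/etc. client

def monitored_completion(prompt: str) -> str:
    start = time.perf_counter()
    try:
        answer = call_model(prompt)
    except Exception:
        emit_metric("llm.errors", 1)
        raise
    # Latency and output size are the metrics that affect the end user,
    # unlike, say, CPU on the calling host.
    emit_metric("llm.latency_seconds", time.perf_counter() - start)
    emit_metric("llm.response_chars", len(answer))
    # Application-level check: is the response what we expect it to be?
    if not answer.strip():
        emit_metric("llm.empty_responses", 1)
    return answer
```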
Michael Dawson [00:53:30]:
So one of the problems is that, with the pricing of a lot of the models today, if you're not running it yourself, you're pretty much paying for input and output tokens: the amount of context you're adding, which needs to be very high to get an adequate answer, and then, for whatever reason, these companies are charging you for the garbage nonsense coming out of the models. Their decoding process to get back a readable answer has a lot of nonsense in it, and you're paying for that too. So I think the industry is being driven towards trying to optimize for these two things, which have nothing to do with the quality of the answer in the first place. And I know Will asked the question about bringing AI into the interview process, and I feel like, you know, Will, you're now in a great position for me to ask you: do you feel like your interview process has been changing to respond to the increased usage, both in the workplace as well as candidates potentially using AI during the interview process itself?
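To make the token-pricing point concrete, here's some back-of-the-envelope math. Every rate below is made up for illustration, not any vendor's actual pricing, but the shape of the cost is the same: you pay for all the context you stuff in and for everything the model emits, useful or not.

```python
# Back-of-the-envelope token cost math. All rates are assumptions for
# illustration; real per-token pricing varies by vendor and model.
INPUT_RATE = 3.00 / 1_000_000    # assumed $ per input token
OUTPUT_RATE = 15.00 / 1_000_000  # assumed $ per output token

# A request that stuffs in lots of context to get an adequate answer:
input_tokens = 20_000   # large prompt plus repository context
output_tokens = 1_500   # answer, including any filler the model emits

cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"${cost:.4f} per request")             # $0.0825
print(f"${cost * 1000:.2f} per 1k requests")  # $82.50
```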
Will Button [00:54:33]:
For me, no. Because my interview process is probably a lot more old school than most people's. Like, if I'm interviewing a candidate, we're gonna have a straight-up bullshit session, because I work largely around infrastructure. And just through a casual conversation, I feel like I can get a much better feel for whether you know what you're talking about, or whether you have heard the terms but don't really understand what they mean. So a very small amount of my interview process has anything to do with hands-on-the-keyboard technical coding.
Michael Dawson [00:55:17]:
Do you have some part of it that includes any sort of technical validation or technical systems design, anything like that which could be impacted by AI at all?
Will Button [00:55:32]:
Potentially, yeah. Because we'll do, like, an exercise of, you know, throw together a couple microservices and explain to me the interaction between them. But then I'll spend a lot of time just talking about that: well, how does this part work? How does that part work? Tell me what happens if this does that. And I dig into a lot more of the operational stuff, I think. And if a candidate could pregame all of that and use AI, good for them. I just think it's unlikely given the dynamic nature of my interview process.
Michael Dawson [00:56:12]:
So I don't wanna spoil it, but there is a product out there where, in a remote interview, the candidate will run it, and it will listen to the audio that's coming across, watch the chat, and then dynamically generate a text response for the candidate to read out on the call. Now, I do think there is a challenge there of being able to adequately understand what words are important in a sentence. Like, if you have a thought and you're sharing that thought, you know what the point of that thought is and which nouns are more important than others. But if you're reading a response from something else, you might as well say it all monotone, because there isn't any part of it that makes sense upfront to you. You almost need to read it first and then answer back. But that exists. So unless you're bringing people into the office, and obviously we wanna optimize for more remote working environments, our company is 100% remote. I know a part of yours is, Will. I don't wanna swear to that, but you do have different,
Will Button [00:57:09]:
Yeah. We're 100% remote as well.
Michael Dawson [00:57:11]:
Yeah. So, I mean, there's only so much you can do there. You're not gonna meet every single candidate in person at a coffee shop or something and go through some sort of validation that they're not doing that. You have to use your other skills to figure out whether or not they believe what they're saying.
Will Button [00:57:31]:
Yeah. And in that scenario, I'd just rely on our sixty-day window. Every candidate we bring on has a sixty-day trial period. And if the expectations didn't line up, you know, we have sixty days to resolve that, and if not, cut ties.
Michael Dawson [00:57:50]:
At least for us, we're really transparent about that. Like, if you're cheating during the interview process, that hurts you, because we're just gonna fire you,
Will Button [00:57:58]:
you know, a couple days
Michael Dawson [00:57:59]:
or months later. Is that what you want? I mean, you're risking it by coming and joining us, just like we're risking it on you, and we both have the capability of ending that relationship. So, you know, if you manage to get through our interview process faking every step of the way, and then you also manage to fake the next couple of years successfully, I mean, I actually think that was a pretty good hire.
Will Button [00:58:20]:
Yeah. Agreed. Like, if you're faking it because this is really where you want to be and I pick up on that, I'll do everything I can to help you get there, because that's how I got here. I lied through my teeth on job interviews.
Jillian [00:58:34]:
How I got here too. I was just like, hey. I had a baby. That baby needed some food, and I was like, alright. I don't know what we're talking about, but I have Google. Let's go figure this out.
Will Button [00:58:44]:
Yeah. I mean, I read all 500 pages of the Microsoft SQL Server six book because that was the job I wanted.
Alex Kearns [00:58:52]:
And I think this is where we can almost flip it on its head a little bit, because AI can then be advantageous post-interview. If your interview process is cultural, and it's primarily assessing a person's fit to a business, how they learn, and how they interact with other people, then you can make a great hire who learns very quickly, lands on their feet, and is a great team player. AI is then a great tool to assist in their upskilling on the things they don't know technically once they're in the door. So there's two sides to it, I suppose.
Will Button [00:59:42]:
Oh, that's a really cool idea. I hadn't thought about that before. So instead of yeah. So you just kinda flip it around and say, hey. AI, here's this dude. Where should I be helping them?
Michael Dawson [00:59:54]:
I think part of the trouble with that is you would need to have only verbal interaction with the candidate during the interview, for the most part, and have to go through the process of, like, okay, we wanna record this session so that later we can feed it back through and make it available. And I know there's no reason why this should be a problem, but every single additional obstacle you add there is another risk of potentially losing out on a candidate. I feel like, you know, "hey, can we record this session and have it as a recording so we can share it with the rest of the team?" is just another one of those things. So, I mean, if you're getting the value out, great. I think that's where I would love to see the tool that actually helps.
Alex Kearns [01:00:33]:
I think of it from an individual's perspective. So if you are a hiring manager for a role and you're hiring someone that is, like, 90% of the way there, but they're a 10 out of 10 in terms of their ability to learn, you know that they will learn the stuff given the opportunity, because they've demonstrated that. I don't know, they've picked up on new technologies in every job they've changed, they've come in relatively fresh. That's where they, as an individual, are able to potentially use AI to augment their learning, whether it's through Copilot-type tools or ChatGPT, those kinds of things where previously you would have reached for Stack Overflow. Which, again, looking back on it, you think, well, yeah, Stack Overflow was great because you would get loads of answers to things. But you would still have to understand the answer, because you don't know who that person is that's written it, much like you have no guarantee of what the model is outputting in its response. You can take an indication from the amount of crowdsourced endorsement, I suppose, of a particular answer. And that's, I guess, where responsibility maybe goes more onto model providers, to actually say: are the answers you are providing accurate? Maybe large language models are just too general for some things.
Alex Kearns [01:02:18]:
Maybe this is where the more niche models that are specifically focused on Python coding, for example, are a better fit, because they have been trained on vetted, best-practice Python code. Yeah, there are so many angles you can explore with this. I feel like we could talk for hours.
Will Button [01:02:47]:
Right. One of the things I'm doing: we're doing our annual performance reviews, and it consists of each employee getting peer reviews, doing a self review, and then I write their final review. One of the things I've been doing, and it's taking a huge amount of time, but I feel like it's still worth it, is giving AI the peer reviews, their self review, and my review, along with the responsibilities for their current level and the next level, and then asking AI: how can I help this person better meet the expectations of their current role and start growing towards their next level within the company? And it's been insightful, because it's picking up on things that I'd overlooked when reading through the reviews myself.
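A sketch of what that prompt assembly might look like. Here complete() is a hypothetical stand-in for whatever model client you use; the point is the structure, grounding the question in the human-written reviews and the role expectations, not the specific API.

```python
# Sketch of the review-synthesis prompt described above. complete() is a
# hypothetical stand-in for your LLM client; the human-written reviews
# are what ground the output.
def complete(prompt: str) -> str:
    raise NotImplementedError  # swap in your model client of choice

def growth_suggestions(peer_reviews: list[str], self_review: str,
                       manager_review: str, current_level: str,
                       next_level: str) -> str:
    joined_peers = "\n".join(peer_reviews)
    prompt = (
        "You are helping a manager coach an employee.\n\n"
        f"Peer reviews:\n{joined_peers}\n\n"
        f"Self review:\n{self_review}\n\n"
        f"Manager review:\n{manager_review}\n\n"
        f"Responsibilities at current level:\n{current_level}\n\n"
        f"Responsibilities at next level:\n{next_level}\n\n"
        "How can the manager help this person better meet the expectations "
        "of their current role and start growing toward the next level? "
        "Point out anything in the reviews the manager may have overlooked."
    )
    return complete(prompt)
```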
Alex Kearns [01:03:41]:
I think that's a great use case, but the key bit in that is it's grounded by human effort that somebody has put in to start with. It's grounded by truth and actual knowledge. If you took the part where you write your review of that person away and said, okay, well, they've written a self review, they've had a peer review from somebody else, now let's use AI to write the employer review, make it this tone of voice, here are some metrics about this person, how many lines of code they've written this year: those are the kinds of things where you could quite easily use AI, but that's where it turns into "this is a little bit too far."
Will Button [01:04:35]:
Yeah, for sure. Because at that point, personally, I would feel like, well, I'm not really adding any value here. Their review came from AI at that point. I feel like I still gotta put some skin in the game and do my job to help them.
Jillian [01:04:50]:
Well, I think, with that said, a lot of industries are gonna be creating verification processes that are specific to the problem. So this whole idea that AI is gonna be running amok, it's like, well, no, we don't really do that. That's not how things in the real world work. So, for example, in biotech, I think there are gonna be a ton of AI-generated drugs, but they still have to go through the same verification process as all the other drugs, which takes years. Just because the computer says that it's a valid drug doesn't mean that it is. It still has to go through clinical trials, and it still has to go through peer review.
Jillian [01:05:31]:
And I feel like every industry is gonna have something similar. Right? So I don't worry about that one quite as much, except I worry a lot about greed in the loop. That's the one that I worry about. Like, oh, look, now we can make all these biosimilars to, you know, this drug whose patent expired, and we can just be pumping these out. And then if there's not enough regulation, or if somebody can get things pushed through any of these verification steps, then I could see that going very, very sideways. So I'm just gonna hope that doesn't happen.
Jillian [01:06:10]:
And if it does, I'm gonna move off to the woods, and there's gonna be no more computers in my life. And that's
Alex Kearns [01:06:16]:
gonna be better.
Michael Dawson [01:06:17]:
I think you hit on something quite ingenious here, actually, Jillian. If you just go through previous patents for drugs and then ask an LLM to generate a new drug that has the same bonding, you know, activation sites, the same interactions with other molecules, but is fundamentally different enough that it could be classified as a new drug that could be patented, then these companies will start losing a lot of money, because their patents won't mean a lot anymore, and we'll have a lot cheaper medication in the world.
Jillian [01:06:54]:
That's the hope. That's not actually how things have been going, though. I mean, I really appreciate your optimism there, but I don't know, I'm not sure. Like, look at biologics. Right? Biologics are probably, I think, one of the biggest medical innovations in decades. And they're so expensive.
Jillian [01:07:16]:
They cost, like, I don't know, I think Humira costs a couple grand a month without insurance or something like that. So then, yeah, hopefully we do get this next wave where we're creating all the biosimilar drugs and so on and so forth. But when the biologics first came out, they had their patents and they had an absolute lock on the market. Legally speaking, you could not create a drug that was, you know, slightly similar, they're called biosimilars, you couldn't do that, because there's legal red tape and stuff. But, yeah, I hope so.
Jillian [01:07:45]:
That would be great if producing these drugs got cheap enough that the patents were no longer even worth it. That would be a great, huge disruptor to the medical industry.
Michael Dawson [01:07:55]:
I mean, it could get even worse for them, because the company that produced the drug initially, if they had used LLMs anywhere in their process, technically they can't patent it in the first place. So I think we're very close to the point where there will not be any patentable artifacts that exist in the world, because the fundamentals of the law have fundamentally changed.
Jillian [01:08:19]:
I think we're already there, though, and patents are still around. So I think it's less about the computery stuff, you know. Like, I work with a lot of companies, and they're like, oh, can I patent this process, like the software process? And it's like, well, no, you shouldn't even really bother with that. Go patent the process that you use in the lab to actually create the drugs. And so that's where everybody's at. The actual data generation is like a throwaway kind of thing, you know?
Jillian [01:08:47]:
It's just throwaway, because a lot of that has to be open anyways, like the data generation that you use to actually get to your drug, because it has to be peer reviewed and all that kind of thing. But everything that goes on in the lab can still have a patent. See? And this is why greed in the loop is such a problem, because there are always ways around these things. And then people wanna be making money, which, I guess.
Michael Dawson [01:09:10]:
I feel like that's its own episode in itself, I think. Greed in software and tech companies. And,
Jillian [01:09:19]:
Yeah. It could. A little bit depressing, though. It's not like a fun topic.
Michael Dawson [01:09:22]:
Okay. So Jillian's like, I have tons of optimistic topics that we should talk about. Let's pick one of those, especially regarding any sort of AI or ML. Okay. I'm all for it.
Jillian [01:09:32]:
Yeah. Let's just talk about the cool stuff and not talk about, you know, potentially people flooding the market with crazy patents and then nobody getting their drugs. That's how that works.
Will Button [01:09:43]:
So speaking of cool topics: Alex, you work with a lot of companies implementing AI into their business processes. A lot of our listeners are in the DevOps field or deal with software engineering and infrastructure. What are the key pieces of advice you would have for them to continue their career and be ready for the next evolution?
Alex Kearns [01:10:07]:
That's a great question. I think it's about embracing it, but also being super critical of the solutions and the tools that are available. It's very, very easy to feel overwhelmed, I think, by the number of AI solutions that are available in any space. In DevOps particularly, if we're including the developer side of the tooling in there, we're just going to see more and more come out. I mean, there's a company I came across whose niche is copilot tools, but they only offer models trained on your company's data. So they haven't got, like, a public offering; they're aiming at enterprise. The idea is they train a model based on your organization's code bases, and that is your private copilot.
Alex Kearns [01:11:21]:
So I think there's lots and lots to come in this area. Operationally, I think the whole principle of DevOps is trying to break down that metaphorical wall between the two sides and empower developers to do operational tasks. I think we're seeing quite a lot come out around explaining operational events using generative AI, that kind of trace analysis of: all this happened, we've got 10 different data points here, how do we correlate them? How do we say this happened because this happened and this happened and this happened, and the chain reaction was this? Even something as simple as putting those things together in some sort of structured data and then using a large language model to summarize it into a Slack message saying: this has failed, and this is why. I think the one piece of advice I'd give people, if they're looking to start experimenting with AI, is to solve your own problems. Find the things in your processes, in your workflows, that take the most time, and use AI almost as your shadow, I suppose, would be a good way to describe it. So if you haven't got confidence in it straight away, tell it to do the same things that you would do, but build it so that it does them as a dry run.
Alex Kearns [01:13:11]:
Make sure it is going to execute the same steps.
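As a sketch of that "correlate the data points and summarize to Slack" pattern: gather the structured events, ask a model to explain the chain, and post the result. Here summarize() is a hypothetical stand-in for your model client, and the webhook URL is a placeholder.

```python
# Hedged sketch: gather structured event data, have an LLM summarize the
# chain of events, and post it to Slack. summarize() is a hypothetical
# stand-in for your model client; the webhook URL is a placeholder.
import json
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def summarize(prompt: str) -> str:
    raise NotImplementedError  # swap in Bedrock, OpenAI, etc.

def report_incident(events: list[dict]) -> None:
    # The structured data is assembled first; the model only explains it.
    prompt = (
        "These deployment and alert events happened, in order. "
        "Explain in two sentences what failed and why:\n"
        + json.dumps(events, indent=2)
    )
    summary = summarize(prompt)
    requests.post(SLACK_WEBHOOK, json={"text": f"Deploy failed: {summary}"})
```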
Will Button [01:13:14]:
Right on.
Alex Kearns [01:13:15]:
Yeah.
Will Button [01:13:16]:
Right on. Very cool. And that feels like a good segue into some picks. What do you think?
Alex Kearns [01:13:26]:
Some picks. So I would ask, do your picks have to be physical, or can they be software?
Will Button [01:13:36]:
No. Anything anything goes.
Alex Kearns [01:13:38]:
Anything goes. Okay. So I'm gonna go for some that are related, some that are not. My first one is: AWS have a free-to-the-public generative AI experimentation website called PartyRock. This was born from an AWS engineer who built it internally as a way to experiment with large language models, and then it got adopted by AWS as an organization. So if you go to, I think the URL is partyrock.aws, there's, yeah, no credit cards or anything required. Go on.
Alex Kearns [01:14:28]:
You get a free amount of usage each month, and you can build these generative AI apps. One caveat: it's free, it's public, so don't go and upload, like, your company's financial records to it, or personal data. Right? If you're gonna use that kind of data, just anonymize it first. Then what else? I'm gonna go for something a lot of developers probably spend some money on, but I don't think you can spend enough money on it, which is keyboard and mouse. Like, kit yourself out with a comfortable mouse and a comfortable keyboard. The one that comes with your Mac or comes with your PC, it's functional.
Alex Kearns [01:15:39]:
Right? But after a while, it's gonna hurt your wrists.
Michael Dawson [01:15:43]:
So what do you got?
Alex Kearns [01:15:45]:
My little Logitech MX Master 3. It has far too many buttons on it to know what to do with, but it's comfortable. And then I have a Keychron K2 mechanical wireless keyboard. Nice. Really slim for a mechanical keyboard, but super nice to type on as well. And you said socks were cool.
Will Button [01:16:16]:
Oh, for sure.
Alex Kearns [01:16:17]:
That was my prep for this: socks. So I have some really cool socks, but they've all come from conferences. I can't give people links to
Michael Dawson [01:16:30]:
So maybe, like, which vendor gives out the best socks?
Will Button [01:16:35]:
Right.
Alex Kearns [01:16:37]:
There were some I got from InfluxDB last year, or the year before last, which were really cool. They were, like, every color under the sun. Super stripy, but really, really comfortable socks. Or the, like, holy grail of conference swag, which is the Red Hat red hat,
Will Button [01:17:05]:
Yeah.
Alex Kearns [01:17:06]:
Which you might be able to see, like, up there on top of the bookcase. Yep. Yeah. Like, swag is a bit of a debatable topic, but you can normally go to a conference with significantly fewer clothes than you need. Yeah. On day one.
Will Button [01:17:31]:
For sure. Alright, Jillian. What'd you bring for picks?
Jillian [01:17:37]:
Actually, I have a tech pick this week. I was looking for some type of UI to build out my Terraform code, mostly because of this AI product that I was talking about, where I deploy it on the client site and it has to have, you know, that database and S3 bucket, a couple Lambda functions, and then an EC2 instance. And I was like, wouldn't it be nice if there was just, like, a parameterized UI where I could go type and click a couple buttons, because I'm really in my "I don't wanna be typing" era of my life. And I found Resourcely, and it is very, very cool. I would like to point out there is no way that I can afford their plan that's actually very useful. So this is part pick and part me e-begging, you know. If the guys at Resourcely hear this: I could be the voice of your tool on the podcast, and, like, I'm sure that would just be amazing. So there you go.
Jillian [01:18:28]:
But it is really neat, and I like that the back end is all just run by Terraform and Cookiecutter, because those are just my two favorite tools of all time. It's like half my life is run with Terraform and Cookiecutter and Makefiles. And then once you throw in Makefiles, it's, like, 90% of my life.
Michael Dawson [01:18:44]:
We definitely have a full sponsor section on adventuresindevops.com, where if someone wants to be a sponsor of this podcast, they can go there, read what we have, and decide if it's for them. You know, Jillian, what I found that works is to ask your customer how the incidentals work, and whether the usage of third-party tools, to help cut down the amount of time that you would have to charge them for, would be included under the contract. A lot of times, when I was doing consulting, you would include those in the contracts. And, of course, you would charge that to the customer, as it optimizes what they're actually getting out of the value that you're providing.
Jillian [01:19:25]:
It's always really tricky for me, just because the companies that I work for, they're not creating technology as their product. Like, if they could get rid of me, they would. Okay? Like, if they could just say, you just go away, we just wanna work on our laptops with Excel, they absolutely would. So that one is always a little bit of a tough sell for me. Instead, I just start emailing people and trying to get stuff for free, which is probably questionable in terms of ethics, but
Alex Kearns [01:19:53]:
it's not
Michael Dawson [01:19:54]:
different. I get emailed all the time by people asking for stuff for free. So, I mean, I don't think you're doing anything especially wrong there.
Jillian [01:20:01]:
Alright. Well, thank you for the ethical vote anyways. I do appreciate that. Sometimes I am a little bit like, maybe I'm a little bit too much on the side of the e-begging, but we do like money. So here we are. But anyways, it is a really great tool. It actually does generate you this really nice UI. It has the sort of parameterized, like, multi-tenancy built in that I really like, because I find a lot of tools, they just, I don't know.
Jillian [01:20:30]:
They just don't have that, and that tends to immediately not work for me, because I'm so rarely working on my own AWS account. Right? Like, my AWS account is as bare bones as it can possibly be, just for dev for whatever it is that I'm working on, and then everything else is deployed on client sites. So it did genuinely look like a really nice tool that has, like, everything that I want. And I think that I can even make the free plan mostly work for me, but we'll see.
Alex Kearns [01:20:57]:
Right
Will Button [01:20:57]:
on. Warren, what'd you bring for a pick?
Michael Dawson [01:21:00]:
Yeah. So I've got something really interesting. It's actually an old research paper from Yale in 2010, and the name of the paper is "Comparing genomes to computer operating systems in terms of the topology and evolution of their regulatory control networks." It compares the Linux operating system to the E. coli bacterium. And I find this really interesting from an architecture standpoint: how what we build in technology can look so wrong when we look at the evolution of biology over millions of years. You look at the evolution of E. coli and you see what's currently there. It's only a six-page paper, it's very short, and it really gives a lot of insight into the sorts of things we're building and whether or not we're building them effectively.
Michael Dawson [01:21:51]:
And being in the infrastructure and systems design space, having new insights into how to build things, or into what actually is really important, is something I always find really interesting.
Will Button [01:22:03]:
Is that from Wade Schultz?
Michael Dawson [01:22:06]:
I don't think so.
Will Button [01:22:07]:
Okay.
Michael Dawson [01:22:08]:
But I could be totally wrong, so I don't wanna swear to it, and I will have to confirm for you after the episode is over.
Will Button [01:22:15]:
Right on. Because Wade's a really good friend of mine, and he's the head of computational health over at Yale, and that sounds exactly like something he would author. Right on. So my pick for the week: I'm picking a book this week, a sci-fi book called Juris Ex Machina. I think that's how it's pronounced. It's from John W. Maly. It's a really cool book that has a lot of tie-ins to the episode we've talked about here today. It's a future Earth where the legal system has been largely replaced by AI, and the main hero of the story has been wrongly convicted and goes to prison.
Will Button [01:23:01]:
But it's a really well written book. It's got a lot of super cool nerdy tie ins into it, and the writing is well done. It's fast paced, so you get sucked into it immediately. And on top of that, in about a month, we're going to have John on the show to talk about the book and AI. So I'm looking forward to that episode. And, so that's my pick for the week.
Michael Dawson [01:23:27]:
Awesome. I'm gonna have to read this in preparation.
Will Button [01:23:30]:
Yeah. It's been a really cool book. Like, I struggle to get into fiction books, but this one just, like, slurped me right on in. Right on. Alex, thank you so much for joining us on the episode. It was a pleasure talking with you.
Alex Kearns [01:23:44]:
Thank you for having me. It's been good fun.
Will Button [01:23:46]:
Right on. And to all our listeners, thank you for listening. Appreciate your support. Jillian, Warren, thank you for joining me and co-hosting with me. And we'll see everyone next week.