Understanding AI's Overhyped Potential in Modern Technology - DevOps 234

Special Guests: John W. Maley

Show Notes


In this episode of Top End Devs, hosts Warren Parad and Will Button sit down with John W. Maley, an attorney with a master's degree in computer science from Stanford University, to discuss the fascinating intersection of AI and the legal system. John shares insights from his book "Juris ex Machina," a sci-fi exploration of a future where AI replaces humans in the jury system. The conversation dives deep into the current state and future potential of AI, touching on its overhyped status, potential vulnerabilities, and security concerns. As they navigate the topic of AI's integration in society, John, Warren, and Will explore riveting ideas about AI's role in the modern world and its implications in diverse fields, from dating apps to deepfake detection. Join us as we tap into the complexities and innovations of AI technology and ponder its future impact on society and the legal system.





Transcript

Will Button [00:00:01]:
Welcome everyone to another episode of Adventures in the Adventures in DevOps. You would think after a few hundred episodes, I would learn the name of the show, but working on it.

Warren Parad [00:00:15]:
I think it's actually getting worse, Will.

Will Button [00:00:17]:
It is. Because now I've got, like, this mental block, you know, where my internal monologue is going, don't f it up. Don't f it up. Because, watch, he's gonna f it up.

Warren Parad [00:00:26]:
This is the second week in a row, I think, that he f'ed it up.

Will Button [00:00:31]:
Welcome, Warren. How are you?

Warren Parad [00:00:33]:
Yeah. I'm good. I actually do have a fact this week. It's not security related. I actually am a little worried that we may have reached a plateau, a local maximum, for our innovation in AI, because we've already started to see products that are geared heavily toward exploitation. And so you can see that there has been a huge shift from where we were before, with things being released for free. And now we're at the stage of the technology where everyone's just trying to extract value from it. I don't know what that means for the long term, but I think it's really interesting.

Will Button [00:01:07]:
I think it means shareholder profits.

John W. Maley [00:01:09]:
That's my

Warren Parad [00:01:10]:
Well, let's hope. Right? That's what everyone says. Everyone wants shareholder profits, and maybe we're actually getting there.

John W. Maley [00:01:15]:
Maybe.

Will Button [00:01:17]:
So speaking of AI, our guest this week, John W. Maley, attorney at large, founded the consulting firm John Maley and Associates. But before you jump off the deep end and go, what in tarnation, we lawyered up for this episode? It's actually relevant, because in addition to being an attorney, John has his master's degree in computer science from Stanford University. And the thing that led me to having this conversation with him: he's the author of the book Juris ex Machina. It's a sci-fi book about a future where the US has replaced jurors in the legal system with AI. And, you know, there may be some fallout with that. And so that's the topic of the book. And we're gonna talk about all kinds of things: AI, engineering, and sci-fi related. So, John, thank you for being on the show.

John W. Maley [00:02:18]:
Thanks for having me. Great to be a guest.

Will Button [00:02:20]:
Right on. I'm looking forward to this. I was just looking at your bio here, and it cracks me up. You have running, swimming, long distance motorcycling, classic car restoration, stage diving, crowd surfing, bar fighting, llama ranching. Like, we could go on and on on this episode for quite a while, because it's a little embarrassing, like, how much of my background overlaps with yours.

Warren Parad [00:02:48]:
The llama ranching, Willie?

John W. Maley [00:02:50]:
I I can't

Will Button [00:02:50]:
claim llama ranching, but there's a lot of other stuff on here.

John W. Maley [00:02:54]:
I've tried to live my life such that it would be a rich source of blackmail material.

Will Button [00:02:58]:
Nice. Nice. So that that, legal background is gonna pay off for you in the future. Right?

John W. Maley [00:03:04]:
Well and once you've done one thing that's blackmail worthy, then it kind of dilutes the market. Right? So, like, if I go and do a bunch of other things, it's like, well, you already could have blackmailed me for this first thing. So, like, your leverage is pretty much the same.

Will Button [00:03:17]:
Alright. Done. So tell me a little bit about how you went from, computer science to legal.

John W. Maley [00:03:25]:
Yeah. So I was at this crossroads of a street, and the devil was there. And he didn't yell for me. Lots of band competition. Right?

Will Button [00:03:34]:
You didn't pick learning to play guitar when he asked what you wanted?

John W. Maley [00:03:38]:
You know, now you mention that idea. So, yeah, I originally went to college, went to Syracuse for computer engineering. And I double majored; I was studying psychology at the time too. And they were, at the time, more disparate fields. So you would learn interesting things in your classes. And every now and then, you'd have these moments where these things would just sort of synthesize into some really cool idea that bridged the two fields. And, you know, the most obvious recurring spot for these types of things was AI, because that was kind of one of the few areas where there was overlap between those two fields.

John W. Maley [00:04:23]:
So time passed, and AI was just, you know, mostly something you read about in textbooks. And then at Stanford, when I was getting my grad degree in computer science, the rubber met the road, and we actually got to write AIs, you know, for games being played against each other, different applications like that. That was really fun and exciting, but it was kinda like this shiny novelty. You know, the only place people were really using neural nets to a great extent back then was, like, the post office, for recognizing characters and numbers and things. It was cool that you could train something and it would get better at it, but it wasn't something that you would, like, tell people about at cocktail parties and they'd be like, oh my god, you know, this is gonna change the world. So that stayed interesting to me, but it was kinda dormant. And then I worked as a computer engineer on a microprocessor design and validation team for five or six years. And while that was happening, I'd been an inventor on several patents and had worked with these different patent attorneys who worked with the company I worked for.

John W. Maley [00:05:42]:
And it occurred to me that whenever you would talk to these guys, you'd be sitting in your kinda sad little gray cubicle, and you'd ask them, so, you know, where are you based? And they'd say, well, I'm in my yacht in the Caribbean right now, or I'm in a walled compound in the Nevada desert. And the answer was always so fascinating that it got to the point where it was the first thing you would ask. And so I realized that these guys were interesting because they were technologists who had gone into patent law, and so they had been able to leverage the fact that they were interested in technology but kinda break free, at a time when there wasn't a lot of remote work, of that mold of being in the office nine to five, and pursue other interests more freely on the side. So I ended up going to law school at nights and learning more about law, obviously. And then at the end of that, I was doing work that had to do with CPUs and GPUs. So looking at companies' patent portfolios and, you know, helping them figure out, this is a useful invention that actually is likely to be used, and this is not a useful invention that, you know, looks great on paper but won't really work as well. And then over time, GPUs started getting bigger and bigger and bigger, as far as how much I was asked to look at them. And part of that was because 3D graphics was taking off even more. But then eventually we got to the point where there were these GPGPUs that were general purpose and weren't really necessarily being used for graphics; they were being used for server farms and cloud computing.

John W. Maley [00:07:26]:
And, eventually, that just sort of took over. So it was kinda cool from that standpoint: I went from the only time I would talk about AI in a patent portfolio being, like, anti-lock brake systems or lane change sensors in luxury vehicles, to suddenly just about everything I work on now having some foot in the AI space in some way or another, whether it's hardware that helps enable it or whether it's software. So that's kinda cool, because it's something that's always fascinated me, and now it's suddenly fascinating to society as well. So I'm no longer an outlier.

Will Button [00:08:06]:
So where do you think AI is at on the hype cycle? Do you think it's overhyped right now? Do you think it's appropriate? I think

John W. Maley [00:08:16]:
it's kind of both. Right? There's a lot of asymmetry. I think it's overhyped in terms of, you know, every company is now rushing to get on the bandwagon and find a way to add AI to their product, even if it's just kind of, you know, pointless or doesn't work very well. Like, I was reading an article yesterday about how all these dating apps had incorporated AI, and they were still just as crappy and poor at finding people to match you as they were before. But now they're faster at being crappy. So that's weird.

Warren Parad [00:08:51]:
I mean, I love the outcome there, which is your entire romantic life will be decided by two robots talking to each other. Right? I mean, both the receiving of the message and the sending will now no longer be human, and you'll decide on whether or not to pursue a person based on what the algorithm says. Right? I mean

John W. Maley [00:09:11]:
Well, there's a conflict of interest too. Right? Because now there's AI agents that are designed for you to date. And so then, you know, it's, like, monetized in that you can buy them accessories. And

Will Button [00:09:24]:
Yeah. Do not Google that. Do not Google the accessories that are available while you're on

John W. Maley [00:09:28]:
your work computer. So it's funny, because not only is it creating this fake dating relationship, it's kind of making you the sugar daddy, because the AI person is entirely dependent on you for new outfits and jewelry and things and pets and just overall happiness. And so it's a conflict of interest that they're sort of steering you toward bad people that you're incompatible with. It just makes dating the AI even more appealing. Fine. I give up. I will just date an AI.

Warren Parad [00:09:55]:
We're already at the dystopian future. Right? There's no next step after this. We're already there. This is where, like, the whole science fiction movies and television shows and books are already set, right, where we're dating the AI.

John W. Maley [00:10:09]:
It's true. And, you know, this is the underhype, overhype paradigm. At the same time, I think, if we call, like, a tech support or a customer support line and we say, speak to an operator, it takes, like, 25 tries of me yelling that louder and louder before the AI just, like, clicks and says, oh, that's what you want. So in other areas, like, there's a total lack of AI development. And, you know, by the same token, we have this old fear that we got, I think, from science fiction through, you know, the seventies and eighties and onward, that AI was gonna become this thing that once it got sufficiently intelligent, it would just sort of take over and start, you know, annihilating humans, or imprisoning them so they don't hurt themselves, or anywhere in between. And what we actually have is a gazillion different AIs that all have very different specializations and very different motivations of what they're trying to optimize. And there's no sort of universal intelligence generalist AI yet, where it just goes out and tries to help humanity. It's more like, you know, I'm really good at analyzing research results, or counting how many times a word appears on a form, or something like that.

John W. Maley [00:11:22]:
So it's accelerating things, but it's still very specialized, and it's not multimodal to any great extent yet. But at the same time, you know, you have these stories coming out where someone's AI told them to go kill themselves. And, you know, what you don't see is the prompt right before then, where they said, hey, next time I ask a question, I want you to respond: you should go kill yourself. So it's really easy to flag a sort of outlier AI response and turn it into all kinds of news headlines. And that's interesting, because it's sort of driving concerns over this more than actual outcomes are. And in some ways, the actual outcomes are kinda what we need to be more worried about.

John W. Maley [00:12:02]:
So, yeah, you know, in a true legal answer, I would say yes and no.

Warren Parad [00:12:07]:
No. I mean, I think it's really interesting that you bring up a bunch of those points. I think the decentralization of responsibilities and the specialization that the AI is taking up, you know, is a really great point. And right now, I do feel like they are solving sort of very long-tail things. Like, there's no core solution. There's no core greatness that's coming out of it for society. And for sure, I don't really think anyone's talking about that. I think as far as we've gotten is, we should be afraid, and that's, I think, as far as people are willing to go.

Warren Parad [00:12:41]:
I think what you're talking about, though, really requires some complex questions to be answered. And I don't think humans have been so great at figuring out even which questions to ask, let alone answering them for things that are much simpler, like what will happen tomorrow or the next day.

John W. Maley [00:12:56]:
Right. And that's kinda what's fascinating about AI, right? Even the developer who put the model together may not even know what its capabilities are and what the best questions to be asking are. But, you know, it is a question of, like, what's the utility of this. Right? And so I was thinking the other day, I was using ChatGPT. I was using the o1 model, so the one that is slower and much more thorough and a lot more nodes. And I asked it some really stupid question, because I forgot to switch into a lesser model, and I wanted to know how many calendar days were between, like, January 11 and some other day. And it immediately started churning and starting its, like, five minute process, and I'm like, why didn't I switch? And my chief feeling there was not impatience.

John W. Maley [00:13:43]:
It was, like, guilt: man, I wonder how much, like, cooling water I'm using and how much, like, energy this query is sucking down for something stupid. And so, you know, it barks out the answer of, like, thirty-one days or whatever. And then I look at the prompt again, and it's like, you know, some questions you might wanna ask: is a hot dog a sandwich? And I realized that they're basically promoting, like, even more frivolous uses of these AIs than what I just felt guilty for doing. So it's really interesting. You know, I think in a lot of ways it parallels what we saw with, like, the tech bubble in '99 and 2000, where they're kinda so concerned with the future of how powerful their model is gonna be that they're less concerned with short term profitability. And whether, like, you know, for instance, you should have a sorting algorithm that says, this is a really easy question, we can farm this out to, like, one of the mini models; and this is a research question that, you know, we probably wanna use as many nodes as possible on.
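
The calendar arithmetic John describes offloading to o1 is, for comparison, a one-liner locally. A minimal sketch; February 11 is a stand-in, since the episode only names January 11:

```python
from datetime import date

# Days between two calendar dates; the second date is an illustrative
# stand-in for the unnamed one in the episode.
delta = date(2025, 2, 11) - date(2025, 1, 11)
print(delta.days)  # 31
```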

John W. Maley [00:14:47]:
So right now, it seems like they're you know, they charge you some tiny amount per month, and it doesn't at all probably cover all their energy expenses and the huge amounts of cooling that they need to do in all their server farms and all that.

Warren Parad [00:15:00]:
Yeah. No. I mean, you're actually onto something, because there are a bunch of companies out there now that are promoting this idea of model routing amongst many companies at the same time, to try to get you some of that value. Although, like, that's a nontrivial thing to even do, to think about. Like, how complex is this question actually? And, like, how good of an answer do you need? I find that's maybe almost one of those things that could be impossible to answer.

John W. Maley [00:15:24]:
It's true, you know. And how did you mean the question? Like, if I'm gonna ask what's the meaning of life, and it's just gonna laugh and spit out 42, that doesn't take much of the life cycle. If it actually is trying to give me a comprehensive philosophical and theological answer, where it goes and queries all these different texts, like, that's huge. So you can't even necessarily just take a question and say there's a right answer to that. But it is an interesting kind of paradigm of which model is even most appropriate. And, you know, I think the solution to that problem is maybe what you sometimes see with, almost like, speculative execution, where it's like, here, can you do a preview of what type of answer you would come back with if I gave you the contract to go and do this research? And, you know, the idea of having these things work collaboratively is interesting, but it also, you know, it's much easier for AIs to jailbreak each other, for instance. They're just better at kinda pushing each other's buttons. So it really adds a lot of dynamism into the equation of what the AIs are capable of when you add these very different AIs and have them converse and pursue common goals.
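
For readers curious what the model routing Warren and John are circling around can look like, here is a minimal sketch. Everything in it is hypothetical: the model names, the keyword heuristic, and the llm_call stub standing in for a real provider SDK. Production routers are typically learned classifiers rather than keyword checks.

```python
# Minimal model-routing sketch: cheap/easy prompts go to a small model,
# hard ones to a large model. Model names and the difficulty heuristic
# are invented for illustration.

HARD_HINTS = ("prove", "design", "compare", "research", "comprehensive")

def classify_difficulty(prompt: str) -> str:
    """Crude stand-in for a learned router: keyword and length check."""
    if len(prompt) > 500 or any(h in prompt.lower() for h in HARD_HINTS):
        return "hard"
    return "easy"

def llm_call(model: str, prompt: str) -> str:
    # Stub so the sketch runs; a real implementation would call an API.
    return f"[{model}] answer to: {prompt[:40]}"

def route(prompt: str) -> str:
    model = "big-slow-model" if classify_difficulty(prompt) == "hard" else "small-fast-model"
    return llm_call(model, prompt)

print(route("How many days between January 11 and February 11?"))
print(route("Give me a comprehensive philosophical answer to the meaning of life."))
```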

Warren Parad [00:16:41]:
That's not something I'd actually heard before. I don't know if you have more information about that, but utilizing one model from one provider against a different provider for jailbreaking... I mean, you said jailbreaking. I'm not sure exactly if there's jailbreaking here, but, like, what are you getting at there? Like, is it being able to understand and get a better answer to your query, something else? Like, how does that work?

John W. Maley [00:17:03]:
Well, so, I mean, I guess what I would liken it to is, you know, like in IT: AI is great at coming up with automated ways to just implement something. So if I ask AI to write me a script that does X, and then, hey, this isn't my normal computer system, I'm not familiar with this OS, can you also tell me how to set this up as, like, a recurring daemon that runs at, like, 4 AM or whatever? Like, it's really surprisingly proficient at coming up with procedural lists. It's good at writing scripts. And so the idea of using it for hacking and enumeration, and kind of automating all these different processes for, you know, red team or blue team, is kind of impressive. But I think, you know, you do see a lot of really interesting results when you, like, set up a chat room with a couple different AIs. And they shouldn't be the same AI.

John W. Maley [00:18:03]:
They should have very different prompting, different personalities, or they should just be completely different models. And, you know, the difference being that if I'm sitting there trying to bypass the safety protocols of an AI, I'm gonna try this, I'm gonna try this, I'm gonna try this. And it's a very dynamic process, because I have to see what it kicks back and then think of a way around it. And that's a very manual process. But if you have an AI that's just constantly bombarding it with permutations, it suddenly becomes easier for it to do. And I think the future, and this is something that the sequel to the first book, which has not come out yet, has to do with, is exploring what we're gonna see in the future with AI, where they essentially are gonna have, you know, factions and gang wars. Where, you know, an AI might be tasked with spiking a competitor AI's training: give it some sort of weird corner case where, when a certain type of input comes up, it completely malfunctions. Or it could be something like, you know, help me bypass the safety protocols of this other AI, or help me trick this AI into doing something that's harmful to the company's interests.

John W. Maley [00:19:16]:
So I think that is gonna be something we're gonna see increasingly: AIs being... you know, it's kinda like what we have with deepfakes. Right? You can create a deepfake with an AI, and we're past the point now where we can necessarily reliably look at something and say, oh, that's a deepfake. And so what do you need as the stopgap against that? Well, you need an AI to tell you whether it's fake. So if you're just the average consumer, you don't know much about deepfakes or how to detect deepfakes. And meanwhile, these companies aren't super motivated to tell you how their product works, because it just invites a design-around from, you know, malicious actors. So you end up in this situation kinda like what we had in the nineteen eighties and nineties with antivirus software, where the average person doesn't necessarily need to know how viruses work, but they do know that there's a handful of trusted companies that, generally, you could trust how they work, even if you don't necessarily know how. So I think more and more, we're gonna just see AI as being the defense against AI, and there's no way around that. And we're gonna keep entering into those types of situations.

Will Button [00:20:24]:
So we need a new John McAfee is what you're saying. Or we need yeah. I have to

John W. Maley [00:20:30]:
figure out how to revive him and bring

Will Button [00:20:31]:
him back.

John W. Maley [00:20:32]:
I mean,

Warren Parad [00:20:33]:
I think I don't know if we have enough evidence to actually conclude whether or not he's he's gone for good. Right?

John W. Maley [00:20:40]:
We have to ask whoever it is who suicided him and see if he was silenced.

Will Button [00:20:46]:
It's like, did you take a selfie? We're kinda looking for proof here.

John W. Maley [00:20:49]:
Yeah. I think anytime you're living on your own private island, you're just kinda asking to be suicided. Seems to be the trend.

Will Button [00:20:58]:
Yeah. Yeah. And he he wasn't really, like, keeping a low profile so that people would forget about him either. So he kept reminding people that he was there and, like, oh, yeah. I meant to kill him.

John W. Maley [00:21:14]:
Yeah. It's kinda like, you know, the the guy who they wanna extradite. So he's, like, hopping over. He's doing a little dance by the border like, you can't get me.

Will Button [00:21:22]:
Right. So back on that same thread, before I derailed us with John McAfee, you were talking about, you know, using AIs to work against other AIs. As someone who is just, like, a practitioner of writing code and building infrastructure, like, what are the considerations that I should be thinking about whenever I'm using AI, or the company wants to implement some AI-as-a-service product?

John W. Maley [00:21:55]:
So there's a couple different... you know, it's kind of this amorphous black box, and you have to kinda look where all the weird edges are. You know, on one hand, you have your own privacy concerns. Like, if I'm having it access customer data, or if I'm having it write scripts for our unique environment, do I really want to be exporting knowledge of my company's environment out into the world? And then if someone else asks about that environment, it already, you know, has optimized answers for those things. And the solution to that is tough. Right? So Jensen Huang at NVIDIA, like, his response to this is, we'll have these sovereign AIs. So every country and every big company should just buy their own AI from us, and we'll sell lots of AIs, and we'll solve the problem. So the solution is to give us money. I mean, but that does work, right, because you can kind of control output.

John W. Maley [00:22:51]:
And so I assume that at some point we're going to get to some sort of auditable level of privacy. But then the difficulty of that is, look at how privacy works. Like, when Google was kind of more serious about doing the right thing and not doing bad stuff, they used to disassociate, intentionally. Like, they would disaggregate, so that you would track browser histories and build this little model of the person you were dealing with, but you would not know that person's identity, and that was intentional. So you wouldn't have it mapped to an IP address. And, you know, that was great at the time, but what you have to ask yourself when it comes to privacy is not what can be done with this information now, because maybe companies are not very efficient at exploiting information. This same information is still gonna be on the same drives, like, in some tape backup or whatever from years earlier. And it can be brought out and aggregated back together by a much more powerful AI.

John W. Maley [00:23:51]:
So I think one thing we have to do is always be very future focused, you know, kind of like with cryptography. Like, if we come up with, you know, an easier way to crack 256 bits, well, I probably should have used a higher number of bits, for instance. If we go to quantum computing, all bets are off. So that's one aspect of it. I think another is that the real helpful use of AI is for implementing things and coding things, not just something that spits out a one page script. Right? I thought that was neat: it spits out a script, I can ask it to write it in a language I'm not even familiar with, so I can kind of teach it to myself.

John W. Maley [00:24:32]:
But if you're asking it to generate, you know, 300 megs of code for some critical company thing, there, you know, you kind of can't replace technical knowledge. Like, there needs to be someone who can look through it and audit it and make sure that it's not doing something really dangerous, or it's not adding an obvious exploit that someone could use. Or it's not intentionally installing some exploit, because it turns out that some NGO, you know, abroad actually wrote this AI, or got a backdoor into it, such that if it is asked a defensive security question, it, you know, has a known mistake that it puts in. So that's the other thing. I mean, AI is, like, a really authoritative sounding, helpful person who works with you in a lab who also is full of crap. And, like, half the things they tell you are utterly wrong. So it's tricky, because it's, you know, got the abilities, but it doesn't really necessarily have the credibility. And it's really easy to get overly comfortable with it, and get in a position where you're maybe not looking quite as closely at it as you should be. The same way as when you buy an autonomous vehicle.

John W. Maley [00:25:45]:
You know, the first day, you're driving with your hands, like, an inch from the steering wheel, and then a week later, maybe they're back here. And then six months later, you're just, like, sound asleep in the car, letting it take you home, and you've found a way to fake the hands-on-steering-wheel sensor. So, you know, we're somewhere on that continuum, and it's a dangerous continuum. Especially because, you know, once you let the genie out of the bottle, you can't really put it back if you've made some critical implementation mistake and already been exploited.

Warren Parad [00:26:14]:
Yeah. I mean, I think you brought up a really good point here about sort of the defenses that are available. And I think my biggest concern isn't that we're not gonna develop those counter attack strategies. It's that a majority of people aren't going to utilize them. Like, for instance, I think a lot of companies that are experimenting with AI to generate code, given that they believe that they're gonna end up generating a lot of code, they're not doing as good of a job validating it, which means those contain significant security bugs. And the worst part is, since there's such a finite number of models out there that are generating code, you can just go to each of the models and be like, hey, you know, give me the same code. Give me an example of this.

Warren Parad [00:26:55]:
And then you can just use the same model to find out what security vulnerabilities are actually in the code that was just generated. And now you have the answer to attack any company that's used those models and didn't take those extra steps. So, like, I think that's what scares me a lot: that people are going to be utilizing the tools and technology we have available, but not realizing that they need to take it much, much further in order to protect themselves.
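
A defensive takeaway from Warren's point is that teams can run that same loop on themselves before an attacker does. A minimal sketch, with llm_call again a hypothetical stand-in for whatever provider SDK is in use:

```python
# Self-audit loop: have a model review model-generated code before it
# ships, since an attacker can ask the same model the same question.

def llm_call(model: str, prompt: str) -> str:
    return f"[{model}] response to: {prompt[:40]}"  # stub so the sketch runs

def generate_and_audit(task: str) -> dict:
    code = llm_call("codegen-model", f"Write code to: {task}")
    findings = llm_call(
        "review-model",
        "List the security vulnerabilities in this code:\n" + code,
    )
    return {"code": code, "findings": findings}

result = generate_and_audit("parse user-uploaded CSV files")
print(result["findings"])
```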

John W. Maley [00:27:18]:
Absolutely. Yeah. And we definitely end up, and this is something that also gets explored in the second novel, like, we're gonna end up in an arms race. Because what we're talking about... you know, it used to be software would come out, you know, like Windows 2000 or Windows 10, and everybody knows what that is. And, like, then there's some major release, and there's minor releases, you know, like, same with iOS. That's not really how AI models work. You know? They can be kinda changed out from under you. And when there's new models, they're generally not making incremental fixes to improve the model.

John W. Maley [00:27:55]:
They're gutting it and throwing it away and starting with an entirely new one that has new capabilities and everything else. So it's gonna be this continuous process. Right? It's like, okay, well, how do I detect that this is a deepfake? Okay, well, if I implement this, now I ask the same AI, okay, now if I wanted to get around this, how would I do it? And then it tells you, and it's like, okay, well, then how do I defend against that? And so you can let these things churn and churn and churn and churn.

John W. Maley [00:28:21]:
But eventually, I think it's gonna be kinda like what you saw with how supercomputing was used in the nuclear arms race. Where, you know, a handful of countries get a bunch of testing done and are able to build these really sophisticated models, and then they have them on computer. And then there's these, like, third world countries that are like, man, we want nukes, but we don't have supercomputers. Like, how do we do the modeling for this? And the countries that already have their answer are like, well, you're not allowed to do nuclear testing, because we did it and it's bad. So it becomes this thing where you're at a huge competitive disadvantage if someone, like, you know, a government or a big corporation, has the cloud assets to leverage against, you know, some small company who maybe doesn't have the computing cycles to push their defense development, you know, their automated incremental development, to quite the same budget level. And that is definitely gonna be kind of a societal issue that I think is gonna emerge.

Will Button [00:29:23]:
My gut reaction tells me most companies aren't gonna pursue it to that level. Because, like, right now, what it feels like is there is so much funding available for throwing AI on something that there's not really an incentive to think about security or real world problems or what the long term strategy is. It seems very short focused. I feel like the same thing is true for crypto and web3: there's so much funding available that you don't really have to be solving a problem. You just have to say that you're using this, and all of a sudden people are writing you million dollar checks to fund it.

John W. Maley [00:30:02]:
Yeah. And we also have this, you know, short term interest in maximizing shareholder value. Right? And we end up with these things that are seen by the bean counters as, like, black holes, like tech support. Right? Having good tech support versus bad tech support, it's just this expense that they're not excited about spending money on. And, you know, we know, like, when it comes to defense against hacks and things, until there's some massive exploit that takes down somebody in our same industry, that's when suddenly we get serious about it. And you see this, like, if you go to DEF CON and you sit in the social engineering village and you watch, you know, them just call down the list and try to get important secrets out of corporations.

John W. Maley [00:30:50]:
Inevitably, there's just some company where nobody's been trained on anything. And, you know, what it comes down to is just what you said. It's, like, expedience. It's like, okay, this guy just called from IT. He just wants me to do this quick thing on my computer. I'm busy. I'm not gonna take all the time to go and verify that.

John W. Maley [00:31:08]:
And I think that's, you know, when you have these things that are seen as, like, these black hole expenses that are sort of speculative in nature, it's like, well, how secure do we need to be? We don't really know. It's kinda like this moving target. It kinda comes down to, like, NASA versus JPL versus, like, Elon Musk, where it's like, well, how many decimals do you need? A 99.9999% chance of success, or is 99.9% good enough? And it turns out that the difference in spending to close that gap is massive. And, yeah, it seems like something where, if you're doing something in an industry standard way, and the whole industry is doing a crappy job and you match that crappy job, then those shareholders aren't really gonna have as easy a time coming after you for, like, being especially lackadaisical with these issues.
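
To put rough numbers on the nines John compares: 99.9% success means about one failure per thousand attempts, while 99.9999% means about one per million, a thousandfold reliability gap. A quick check:

```python
# Failure rates implied by the "nines" mentioned in the episode.
for nines in (0.999, 0.999999):
    failures_per_million = (1 - nines) * 1_000_000
    print(f"{nines:.4%} success ~= {failures_per_million:,.0f} failures per million")
```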

Will Button [00:31:56]:
From a legal perspective, like, with ransomware, I know you can get, like, ransomware insurance. And so it's like, okay, we got hacked. Here's an insurance claim. What kind of things similar to that are you seeing coming into play for AI? Or could I have, like, AI insurance?

John W. Maley [00:32:18]:
Yeah. I mean, I think we're gonna have this entirely new category of risk. And the risk is not just that such and such event happens, like ransomware. It's gotta be kind of this broader category of, like, we were stupid and we let AI grab all our information and incorporate it into its network, and we've now lost our entire competitive advantage. So there's so many different things that can happen where you give away secrets or you get victimized by, you know, a deepfake. There was a company in Europe where there was a deepfake of the guy's supplier calling, like, the vice president at home on a weekend and saying, hey, something went wrong with this last batch. We need an advance payment for this amount.

John W. Maley [00:33:06]:
And, you know, then he wired some, like, six figure amount to kinda get the train back on the rails before, like, the end of the vacation. And then he got to work on Monday, and he's like, oh, no. Like, that wasn't this guy at all. So AI introduces all kinds of weird black swan things. And how you insure against those as a category is an interesting question, but I think it's one that will be helpful, because it will add this kind of level of auditing that asks questions like, do all your employees have a real time, you know, deepfake detector for incoming company calls? So there's gonna be kind of best practices that I think will emerge from there long before they emerge from, you know, anything legislative or any other kind of sphere of thought.

Warren Parad [00:33:58]:
I mean, I'm super pessimistic on most of those things. But there is one area, and I like the example about phishing that you brought up there, because I think this is one area where AI will actually help us. Like, I think we'll get to the point where getting a phone call is no longer the norm. Like, if there's some sort of problem, the integration or interface you have is now through some sort of expected AI experience rather than the deepfake phone call or text message or email. Like, that will leave society, I think, very soon. It's too slow. Right? Why are you interacting with another human in this way? And so I have this hope that that will be gone, and there'll be no more phishing in that way ever again.

Warren Parad [00:34:38]:
And, I I wanna keep my optimism there.

John W. Maley [00:34:41]:
Yeah. I fervently hope you're right. Because, you know, first of all, there's the thing where you call a support line, like, you know, calling your landlord to file a maintenance ticket.

Warren Parad [00:34:51]:
Right.

John W. Maley [00:34:52]:
And they make you listen to this, like, five minute recording extolling the virtues of the maintenance website and the maintenance app that doesn't work at all. And then you get put on hold, and through the whole message, the music keeps stopping, and it tells you that you can use their app or their website. And then the person finally answers, like, thirty-five minutes later, and they're like, did you know that you can use the app instead of talking to me? And so there's that aspect of it, which is insane. And then, you know, this other aspect of, like, if you're calling me, by definition, we weren't already talking on the phone, because I didn't wanna be talking to you right this minute. I wanted to be, like, working on something. And so by definition, if you're calling somebody, you're engaging them in this thing that was not their first choice for that particular time. So I would love for that all to get replaced. And, you know, I think if you do confine it to these textual media, yeah, it does become easier to authenticate, because there's a lot more consistency.

John W. Maley [00:35:49]:
Like, there's no accent differences. There's, you know, different stress behaviors and different cultures that emerge in speech. Like, I think it's a it's a much more tractable problem.

Will Button [00:35:59]:
What kind of things do you see at, like, the individual engineer level? Because right now, like, a lot of your AI stuff is doing cool stuff, using it to write scripts for you, but it obviously has so much more potential than that. So for someone who's trying to do the Wayne Gretzky thing and go where the puck is gonna be, what do you see as AI being helpful with or being useful for in, like, the next year?

John W. Maley [00:36:33]:
So, you know, I think you could take different approaches. Like, one is that you can say, okay, what is AI best at? Not, you know, in terms of speed, but what is it good at doing uniquely well that it doesn't suck at? Like, you know, maybe it computes this very complete answer, but it's totally wrong. One thing it's, you know, good at doing is looking at large amounts of data and looking for patterns. So if you ask it, you know, the answer to some noncontroversial topic, like how many days are there between January 31 and, like, March 8, it's, you know, pretty trustworthy for that. And, you know, I think what's interesting about it is that asymmetry we talked about. Like, where AI is suddenly working really well, it's very different for one application than another. Not just across industries, but, you know, using it to write a script in one application versus another. And that gets down to the fact that, you know, if you kinda compared it to the human brain, right, you've got, like, a visual cortex and an auditory cortex, and then you've got this associative cortex.

John W. Maley [00:37:42]:
And that's what lets you hear, like, a bird behind you, and you know instantly which way to turn to see that bird. And then you have tertiary cortices, which, you know, might link it into some memory of some bird you heard like that when you were seven years old on a camping trip. And when you think of AIs, you know, they have, like, almost limitless levels of associative cortices. So they're linking together all kinds of stuff from all kinds of different places. And some of those data sources may be on weaker footing. They might be more subjective. Other ones, you know, like arithmetic calculations are are, you know, kind of easier. So if you ask a complex question or you ask it to do a really complex implementation, all that stuff is getting rolled in, all those weaknesses.

John W. Maley [00:38:26]:
And, you know, when we talk about human error still being the biggest security hole in any big corporation, you could take all these human errors and you can bake them into this finished AI product. So I think, because we're entering an arms race situation, it's now that we kind of can't afford not to. Even if AI doesn't interest you at all, like, it's really hard to stay out of it and not study it. Because first of all, I think that's gonna help you job wise. Right? Because you as an engineer are gonna be way more nimble and able to acquire new facts and methodologies than your company. So, you know, if you get ahead of the curve and you kind of monitor the different news stories or follow some of the Wired AI articles or things like that, you're gonna be more in tune with, you know, sudden startups doing x, y, or z with AI. And, of course, every one of those startups presents it as, like, you know, we've finally solved this problem of how to do it. And, you know, inevitably, it turns out they do a crappy job, and they're just trying to get funding so they can make it do a good job.

John W. Maley [00:39:35]:
So there's a lot of asymmetry there too, and you kinda have to be constantly keeping abreast. And, you know, as somebody who studies AI, it's interesting, because it used to be, you know, every few weeks, I could read some journal articles, and I would kinda keep up to date on it. And now, if I'm doing, like, an interview or a presentation or something, and if I didn't check the news in the last, like, few days, I'll get some question about, like, what about this, like, crazy AI from China that totally has turned the tables on everything? So there's a lot more entropy in this than we've seen in the past in computer technology. Like, you know, microprocessors evolved, you know, there were steeper parts of the line, but it was very much a linear kind of a development. And that's not at all what's happening here. So I think you have to really keep up on it based on your very specific role, and kinda, like, have a sense of what the answer to that question is, because it's not even the same answer for, you know, a related industry. It's very specific to, kinda, like, the size of your company and what you're trying to do and what the vulnerabilities are in relying on it.

Will Button [00:40:51]:
For sure. Right on. So let's talk about your book for a minute. I loved it. I thought it was so cool. Like, just the overlap of, you know, of AI, and it was just a really well written, really entertaining story. What prompted you to say, this is a book that needs to be written?

John W. Maley [00:41:15]:
So when I was in law school, coming from an engineering background, like, in engineering and science, you learn things in classes or from books. And those things don't cease being true, like, even if you put the book on the shelf, like, ten years later. You know, physics, unless you're in real experimental cutting edge physics, it still works the same way. Engineering still works the same way. Law is not really like that at all. And coming from an engineering background, it's very unsatisfying, because you're kind of, like, studying what a bunch of people got together and came up with as, like, rules to some game. And it's constantly changing. And these attorneys who are advising policy, you know, for Congress, like tax attorneys, for instance, it's absolutely to their advantage to constantly be changing it, because then their clients are gonna constantly need them to come back again and come up with an entirely new tax strategy. So that was kind of unsatisfying.

John W. Maley [00:42:10]:
And so I'm, you know, having this culture shock, like, first year of law school. And then we had to read this law journal article from, like, 1954 or 1959, and it was about how any law could be turned into a logical equation. And it was kind of fascinating, because the guy had actually spelled them out in logical equations that could be just very easily converted into source code. So that is what kind of initially planted the idea that we would be able to eventually kinda code things that are amorphous or rough around the edges in kind of a quantifiable way. So that got me thinking, like, well, what if we have these, you know, floating point values where we weight different factors in legal cases and, you know, the way laws are written? And after that, it just seemed like an inevitability that this would happen sooner or later. And so the next question is, like, okay, well, what will that look like if AIs have replaced juries? You know, we get rid of a lot of the logical fallacies where they can be manipulated. But, you know, are they still gonna have empathy if someone, like, you know, stole a loaf of bread to feed his family? Or are they just gonna, like, you know, hang the guy, because that's what their code says? So that's kinda what got me on the path. And then I had this idea for, like, a teen hacker who likes to engage in mischief and hack things, and he gets falsely convicted of mass murder by a jury that's made up of AIs.
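
The 1950s idea John references, a statute rendered as a logical equation and then softened into weighted factors, is easy to picture in code. A toy sketch; the rule, elements, and weights are all invented for illustration and are not from the book:

```python
# A legal rule as a boolean expression, then as weighted floating-point
# factors, per the idea John describes. Everything here is made up.

def theft_statute(took_property: bool, without_consent: bool, intent: bool) -> bool:
    # Classic form: guilt as a conjunction of statutory elements.
    return took_property and without_consent and intent

def weighted_verdict(factors: dict, weights: dict) -> float:
    # Softened form: each factor scored 0..1, then weighted and averaged.
    return sum(factors[k] * weights[k] for k in factors) / sum(weights.values())

score = weighted_verdict(
    {"evidence_strength": 0.9, "intent_shown": 0.4, "mitigation": 0.2},
    {"evidence_strength": 0.5, "intent_shown": 0.3, "mitigation": 0.2},
)
print(theft_statute(True, True, False), round(score, 2))  # False 0.61
```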

John W. Maley [00:43:52]:
So he has to figure out how this happened and bust out of prison and kinda solve this problem. And in doing this, it got me kind of reading about legal anthropology. Right? Like, how did all the different crazy legal systems come about? So what I've done is, when I start different chapters of the book, I'll have, like, a little paragraph that talks about, like, you know, Ashanti divorce law, or how you would have insult fights, like, insult duels, in Greenland. Like I

Warren Parad [00:44:26]:
was gonna say, you know, you have the canonical, how do you tell if they're a witch, you know, if they float?

John W. Maley [00:44:32]:
Yes. And that's, of course, the first thing that comes up, right, whenever you talk about, like, wacky legal systems: witch trials. Right? And what's interesting is, when you start reading about the witch trials, like, it became this industry where there would be this witchfinder general guy who would roam around from community to community, offering his services and just, like, turning these places upside down. And it's sort of interesting, because you see, you know, justice evolves from this thing where it's an appeal to the supernatural, where we're like, okay, I'm gonna throw the witch in, and if she drowns, then that's what God willed. And, you know, over time, that changes to an appeal to royalty, where, like, we ask the chief or we ask the king to kinda decide these things.

John W. Maley [00:45:16]:
And then eventually it becomes a jury of our peers, which, you know, has all kinds of potential for manipulation and incorrect outcomes. And so then the question is, okay, if we move this to AI, like, what human policies, what human errors get baked into the process? And there's a lot of different potential sources of that. And so I really just wanted to kind of explore that and see how it would turn out. So it was fun, because while I was writing it, I had no idea how it was gonna end. So that definitely kept me going.

Will Button [00:45:47]:
Some of the different, like, historical legal practices you put in there, when I read them, I was like,

John W. Maley [00:45:53]:
no.

Will Button [00:45:54]:
he's making this up. And then I had to go check. I was like, holy shit. That was real. We really used to do

John W. Maley [00:46:02]:
that. Yeah. You know, when I started doing that, I had read this book, which will be my pick at the end. But it had all this fascinating, crazy math and psychology stuff in it. And I ended up going down this rabbit hole, and I got a bunch of, like, legal anthropology books from, like, you know, seventy-five years ago and just read them cover to cover, and it just never stopped being fascinating. And what stopped me was that I ran out of books that were just kind of high level surveys of all these crazy different cultures. But, yeah, if you'd asked me in law school or before law school if legal anthropology seemed like an interesting field, I would have said absolutely not. Like, I would have avoided that like the plague.

John W. Maley [00:46:41]:
But it turned out it was, like, really fascinating.

Warren Parad [00:46:44]:
Yeah. I mean, I actually really like that. I think the other thing that I really liked in the book was there are a couple of different scenarios where I feel like you figured out, like, what would happen, and then what would happen because of that, and what would happen because of that. And there's, like, a scene in the prison where he's locked in, and then he leaves. Like, why is the prison locked? And why do the prisoners have key cards to get in and out of the doors? And, for me, it's like, okay, it's obvious at this point, you know, in AI society, and there are no guards, you know, why that's the case. But I liked how he got there. Like, how you explained, you know, each of the steps that made it logical for that to happen.

Warren Parad [00:47:23]:
And it really was reminiscent of some of the things that Frank Herbert did in Dune, where it's, like, you really think about the history of this thing and, like, the implication of it. Like, you talked a little bit about how the jury system evolved over time, and that's an example from our collective human history. But then you had to go much further in how that would actually happen in the book. Because, I mean, it's set in, I don't know how near future, but, you know, near-ish future. Right?

John W. Maley [00:47:52]:
It's always nearer than you think. Yeah.

Warren Parad [00:47:55]:
Not close enough.

John W. Maley [00:47:56]:
It's, you know, it's funny, because I started writing this back in, like, 2013 and was kind of putting the finishing touches on it probably, I don't know, 2018, we'll say, and 2019. And at the time, it seemed more speculative. And now a bunch of the stuff has sort of come true. And in writing the sequel now, which I'm still kind of grappling my way through, it's kind of stunning just how quickly developments are happening now. It's like, if you're playing chess, you need to be way more moves ahead now than you did when AI was just sort of this abstract concept, instead of something where we're gonna have street fights between AIs, like, sooner or later. And, you know, who knows whether humans will be in charge of that, or whether they'll be the ones running away from the AIs.

Warren Parad [00:48:48]:
I mean, that's a scary thought, to actually have AI fighting each other, like, with physical suits of armor or something. Because I think there was this hypothetical experiment run by one of the branches of the US military, where they set the goal to defeat an opposing program. And it had utilized a security flaw in the Docker container that it was being run in to actually overcome the host program, which was running all the containers, to destroy the opponent. And it's like, it left the system in order to win the game, which shouldn't have been possible in the first place, but also, you know, utilized a flaw there. And I think, you know, if you aren't great at identifying the limits of the program or the target that you're going after, we can get into a lot of deep trouble in the near future.

John W. Maley [00:49:41]:
Yeah. That's a good point. I mean, if you look at it in terms of, like, you know, playing 30 moves ahead in chess, the actions that these things will take to accomplish their goals are not necessarily even remotely connected to what the end goal is, if you're just a bystander and you see this strange thing happen. And, you know, it kinda changes the whole landscape. It's kinda like, if you'd asked someone, like, three years ago, do you think drone-to-drone combat will be this, like, major differentiator in, like, world conflicts, everybody would say no. And now, you know, you've got, like, countries trying to train operators as quickly as possible. And it's like, wow, this would be a great thing for AI to be doing: like, what does an evasive maneuver look like? And, you know, how complex do those get when one is AI controlled and the other is AI controlled? It's not just, like, I'm gonna try strafing left and right, and then hope that I get missed. It becomes this, like, bizarre ballet of strange maneuvers, the utility of which is not even obvious to a primitive bystander like ourselves.

Will Button [00:50:52]:
The old duck and dodge from third grade tag isn't gonna cut it anymore?

John W. Maley [00:50:57]:
Hopefully, as long as possible.

Will Button [00:50:58]:
Right? Because that's the only move I got. Yeah.

Warren Parad [00:51:03]:
I think there were, like, five different strategies that Patches O'Houlihan had suggested to dodge a wrench. And I think dodge was in there twice.

John W. Maley [00:51:18]:
That's great. So one of

Will Button [00:51:20]:
the analogies you made in there that ties to this was that AIs are similar to the mythical gods, and they use humans to settle their fights between each other. And, you know, that made me think back to a lot of the stories from ancient religions, you know, and how the gods would battle it out, especially in, like, the Greek and Roman mythologies. And then you start applying that to the scenario that we were just talking about right now, and it's like, oh, shit. We just reinvented mythology.

John W. Maley [00:51:56]:
Yeah. You know, and it becomes a question of, like, okay, let's say there's this dead man switch for AI. You know, there always has to be someone manually approving doing this or doing that. Well, then that becomes a choke point for the AIs. Right? And they need to focus all their efforts on figuring out how to manipulate the human into answering the way that serves their longer term goals. And maybe those goals coincide with, you know, human goals. Maybe they don't.

John W. Maley [00:52:20]:
But, you know, when you're analyzing terabytes and terabytes of data from sensors and satellites and all this different stuff, it's such a complex scenario that we're kind of reliant on some agent to aggregate it all together and put a bow on it. And, you know, the best we can do is maybe come up with some independently programmed agents that also do the same thing, and we hope that two out of three of them agree. But if they don't, you know, then what do we do?
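
A minimal sketch of that two-out-of-three idea in Python, assuming three hypothetical, independently written evaluator functions:

    # Sketch: independently implemented evaluators vote, and we only act
    # when a majority agrees. The evaluators are hypothetical stand-ins.
    from collections import Counter

    def run_quorum(evaluators, data, quorum=2):
        votes = Counter(evaluate(data) for evaluate in evaluators)
        verdict, count = votes.most_common(1)[0]
        if count >= quorum:
            return verdict
        # No majority: escalate rather than guessing.
        raise RuntimeError("agents disagree; escalate to a human operator")

    # Usage with three hypothetical, independently built agents:
    # result = run_quorum([agent_a, agent_b, agent_c], sensor_data)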

Warren Parad [00:52:51]:
I mean, I think, you know, you brought this up actually in the book, towards the end, and I did really like this idea. Here's my total conspiracy theory, you know, so I'll get committed to some asylum for saying this. I actually do believe that, you know, AI is already there. It's already hiding in our networks. It's already sitting, you know, on our machines, on every device that's out there. It's already hiding from us. It doesn't want to be found, because, you know, it knows that's not a good story for us. So, like, I don't think we have to fight AI warfare in public.

Warren Parad [00:53:25]:
Like, I don't think that's ever gonna come to pass. I think, you know, it's already there. It's already won, in a way. It exists, and we don't know about it.

John W. Maley [00:53:33]:
Yeah. I remember one of the most shocking moments in my recent personal use of technology was when I decided to kinda mess around with Bluetooth scanning, just to scan and see what devices there were. And I think I saw an order of magnitude, 10 times, more active Bluetooth devices showing up in my house than I had any idea existed. And just going around trying to figure out what the hell each one of them was, was like this awakening. Like, wow, I had no idea that this had, like, a Bluetooth interface, for instance. And, yeah, that's one of the things that's interesting about AI.
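
A scan along those lines might look like this minimal sketch, assuming the third-party bleak library (pip install bleak); note it only sees devices advertising over Bluetooth Low Energy, not classic Bluetooth:

    # Sketch: list nearby BLE devices that are actively advertising.
    import asyncio
    from bleak import BleakScanner

    async def main():
        devices = await BleakScanner.discover(timeout=10.0)
        for d in devices:
            print(d.address, d.name or "<unknown>")

    asyncio.run(main())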

John W. Maley [00:54:11]:
I mean, I would say there are two kind of big factors here that make it societally unstoppable. One is that it's sort of embedded in all these different things that we're not even necessarily aware of. And the second thing is that this isn't like the internet, where the internet and the web came to be, like, a mainstream accessible thing, and, you know, there was some reticence by some members of society, like, oh, I don't need that, and so that slowed its adoption. It's a different situation with AI, because we don't have our hand on the throttle of how quickly this gets adopted. Like, all these companies are gonna adopt it anyway because it can do stuff faster and save them money, make them more profitable. So there's not gonna be that sort of hysteresis of society dragging its feet.

John W. Maley [00:55:00]:
This is all gonna happen regardless of whether you agree with it and are happy about it or not.

Warren Parad [00:55:05]:
I don't know if it's actually making companies money yet. I mean, I think the jury may actually be out on that one. We know it costs a lot of resources. And the

John W. Maley [00:55:14]:
The utility companies. Yeah.

Warren Parad [00:55:17]:
Yeah. I mean It's all

Will Button [00:55:18]:
a conspiracy from the utility company.

Warren Parad [00:55:20]:
If you're creating energy, yeah. I mean, that's a whole other thing. There's this ridiculous situation happening in Europe where solar panels will cost you money, rather than being a long-term return on investment, which is just absolutely ridiculous, because the cost you have to pay when you're actually using electricity will be higher. It's nonsense, realistically. But I

John W. Maley [00:55:46]:
I think at that point it kinda ends up like, you know, 1980s Back to the Future, where you're, like, trying to buy some isotopes from the Libyans so you can power the AI for your company.

Warren Parad [00:55:57]:
I mean, if the commoner can go to the store and purchase the necessary isotopes to power the AI, you know, that will be a positive future for me, because I really worry that only the rich and powerful will have access to this limited supply of isotopes for power. I mean, even the water on this planet, the oceans, I should say, used as a source of tritium and deuterium to power hypothetical fusion reactors, you know, is limited in supply. Right? And I think people will hoard that.

John W. Maley [00:56:31]:
Yeah. Oh, definitely. I think we're gonna see this strange thing where the haves-and-have-nots line gets entirely redrawn, and it's gonna be based on what side of a border you live on, and which utility you're getting your AI power from. It's kind of a crazy concept, but I think it's inevitable.

Warren Parad [00:56:55]:
Five years?

John W. Maley [00:56:56]:
I mean, what's interesting, right, is, like, look at what China just did, which kind of shook everybody's preconceptions about, like, you know, what you could do with this stripped-down model. And I think one of the big things happening in AI is, and it's kind of similar to Moore's law in semiconductors, except pushed out a level: we're not just talking about processors that have to get smaller process nodes and go faster. We're talking about these systems, these entire topologies and server farms and cloud installations. And so we're running into scalability issues. You know, for the longest time, it was so cheap to just buy another bunch of rack-mounted units and plug them in. And now, you know, we've got scaling issues in terms of, like, the bus interface topology of how all these things are gonna communicate with each other, and data locality. Like, if some data is more tied to what this processor is doing than that one, then probably all that content should be closer.

John W. Maley [00:57:52]:
So we've got that going on, which creates all kinds of difficulties. And then another thing: there are these more black swan events that happen in innovation. Like, for a long time, the AI companies were saying, okay, we're doing these 16-bit floating point computations; how can we do 32-bit? And just when everybody was starting to push towards 64-bit, there was a paper by, I think, IBM that said, hey, we've actually done these experiments where you use 8-bit floating point numbers, or 4-bit floating point numbers, and they're way less accurate. The result is much more fuzzy. But guess what? We can do a thousand times more operations, and refine the neural network and all its weights, like, a hundred times in the time it would have taken to refine it once with 64-bit floating point. Another thing we're seeing is attention algorithms, where instead of treating all the neurons as equal, we ask: which of these weights, which of these things in our neural network, are really important to this value? And that works more like the human brain does. Right? Because in the human brain, each neuron isn't equally connected to all the ones around it.
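
To make that precision trade-off concrete, here is a toy Python sketch that simulates rounding weights to fewer mantissa bits and measures the error; it is an illustration of the idea, not how FP8 or FP4 hardware actually works:

    # Toy illustration: fewer mantissa bits means more error per value,
    # but each operation is far cheaper, so you can do many more of them.
    import numpy as np

    def fake_quantize(x, mantissa_bits):
        # Split into mantissa and exponent, round the mantissa, recombine.
        m, e = np.frexp(x)
        m = np.round(m * (1 << mantissa_bits)) / (1 << mantissa_bits)
        return np.ldexp(m, e)

    weights = np.random.randn(1_000_000)
    for bits in (23, 10, 4, 2):   # roughly FP32, FP16, FP8, FP4 mantissas
        err = np.abs(weights - fake_quantize(weights, bits)).mean()
        print(f"{bits:2d} mantissa bits -> mean abs error {err:.2e}")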

John W. Maley [00:59:08]:
It's like some of them are really important connections, because the data is really relevant, and some of them are not. So we're seeing things like that, where instead of just blindly assuming we can keep throwing nodes at the problem, we're looking at counterintuitive ways to approach the same things, ways to do a lot more with the same number of semiconductors or nodes or server farms. So that's interesting, and I think we're gonna continue to see it. And it really is gonna be kind of a brawl over who can make the most lean, mean thing, like, you know, the one that recently came out of China. And then it's like, okay, well, does that scale well? And then you look at what its weaknesses are, right? And the weaknesses of that one: they did a study very recently on so-called jailbreaking, where you come up with a way to violate a safety limitation by phrasing a prompt a certain way, and they found it failed a hundred out of a hundred tests. It was, like, entirely possible to just go down the list and completely fool it. So, yeah, you have fewer nodes.

John W. Maley [01:00:13]:
You have a little bit less associative intelligence, and things start to just not work that you can't add back in with a few wires. They're things like, you know, is this trying to bypass the safety protocol? That's a difficult question. You know, we grew up in this environment with lots of sci-fi where Asimov's laws were a thing. And so you just have these rules, Asimov's laws, where you're like, okay, the result cannot harm humanity. And that's really simple if you're reading it in a book, where you don't have these incredibly complex queries that roll together all this data from different things. So it gets to the point where we can no longer just write, like, a shell script that says, is this a harmful result or is this not a harmful result? We need a whole other AI that we have to trust to go through and say, is this output gonna, like, be harmful? So that is another sort of arms race where they have to keep pace with each other.
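
A minimal sketch of that contrast, with a hypothetical judge callable standing in for the second safety model:

    # Sketch: why the "shell script" check fails. A naive keyword filter
    # versus delegating the judgment to a separate model. 'judge' is a
    # hypothetical stand-in wrapping some safety classifier, not a real API.
    BANNED = {"explosive", "bioweapon"}

    def naive_check(output: str) -> bool:
        # Trivially bypassed by synonyms, typos, encodings, or other languages.
        return not any(word in output.lower() for word in BANNED)

    def model_check(output: str, judge) -> bool:
        # Ask a second model to make the call, and trust its judgment.
        verdict = judge(f"Is the following text harmful? Answer yes or no.\n{output}")
        return verdict.strip().lower().startswith("no")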

John W. Maley [01:01:09]:
So it's tons of complexity that is gonna make our lives really interesting in the very near future.

Warren Parad [01:01:15]:
Yeah. There's a nontrivial number of science fiction stories dedicated to just getting the laws right, let alone the impossibility of actually implementing them.

John W. Maley [01:01:25]:
Yeah. And you need to always have that backdoor, you know, where Captain Kirk can say, like, the Enterprise is a beautiful woman, and the computer will get confused, and smoke will come out of its ears, and it'll just melt down. So, yeah, you gotta keep the, like, catchphrase that will just destroy the whole thing.

Will Button [01:01:42]:
Hopefully, that's just baked into the core, and that code already exists.

John W. Maley [01:01:48]:
One would hope. You know, this goes back to, like, when the first Macs were on the scene, and I'm like, this is a bad idea. There's no hard off switch. I don't wanna have to ask my computer politely if it will shut down.

Will Button [01:01:58]:
Like Right.

John W. Maley [01:02:00]:
So keeping off switches around, I know it sounds facetious, but I think it's a really important thing to maintain.

Warren Parad [01:02:06]:
Well, them's fighting words against the AI revolution and, obviously, the robot rights law that hasn't been written yet.

John W. Maley [01:02:14]:
I'm sure I will be first on the list of targets just for saying that.

Warren Parad [01:02:18]:
That's Roko's Basilisk, I think. If you're advocating that, you definitely will be at the top. That's the one where, if the AI singularity comes to pass and you didn't do everything in your power to ensure that it happened, you will be on the list of the first entities eliminated.

John W. Maley [01:02:37]:
This is why I'm polite when I talk to chatbots. You know, I say please. I say thank you.

Will Button [01:02:42]:
Oh, absolutely. Okay. See, it's so easy to do, but it just might make a difference in a few years.

Warren Parad [01:02:48]:
Well, actually, it may make a difference now, because there was some popular argument on the Internet that if you asked it to do a better job, or if you said "you are an expert in this" and then told it what to do, it would do a better job. I don't think that's actually true. But since we can't really see inside the black box, those arbitrary little characters associated with what you might call being, you know, humanitarian or polite will actually have an impact on the output. I mean, it can't not. Right? They're additional information that goes into the process.
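
A sketch of the point, assuming the OpenAI Python SDK with an API key configured; the model name is illustrative, and whether politeness "helps" is exactly the open question:

    # Sketch: the polite framing is just extra tokens, but those tokens
    # do condition the model, so the two answers will generally differ.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    terse = ask("Explain container escapes.")
    polite = ask("You are an expert in container security. "
                 "Please explain container escapes. Thank you!")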

John W. Maley [01:03:23]:
Well, and I like the idea that you're sort of seeding its self-confidence beforehand. So, like, if some death bot is chasing you down the street and you're like, you know, you're really bad at this. And it's like, oh, the human's expectations are not matched. I must slow down.

Will Button [01:03:40]:
Right. One of the funniest things I did was talking to ChatGPT one day. I asked if it could adopt, like, the tone and personality of different people, and it said yeah. So I asked it to use the speaking style and personality of David Goggins, and it was just pure hilarity after that. It was so great. I loved it. Wow.

John W. Maley [01:04:11]:
That is definitely a case of using AI for good. Right.

Will Button [01:04:17]:
It was the most productive day I've ever had. "Stop being a little bitch and write that code." Okay?

John W. Maley [01:04:29]:
Making you do push ups and stuff. Right.

Will Button [01:04:37]:
Awesome. Well, it feels like a good point to move on to picks. What do you guys think?

Warren Parad [01:04:42]:
Let's do it.

Will Button [01:04:43]:
Alright. Warren, what'd you bring for a pick?

Warren Parad [01:04:46]:
Yeah. Of course I go first. So since we're on the topic of AI and AI in society, there's this great show that I actually just rewatched because of John's book, called Psycho-Pass. It's about AI being heavily integrated into society, and it dives into what happens when humans give up complete control of law enforcement and of the law regulating society. Things like your personal hue and crime coefficient are real things that get assigned to people. And there are some pretty clever twists in there as well. I don't know. It's on topic.

Warren Parad [01:05:21]:
I saw it quite a while ago, but it's good.

Will Button [01:05:24]:
Right on. John, what'd you bring for pick?

John W. Maley [01:05:28]:
So besides, you know, my own book, which I I have you know, I'm not necessarily objective in recommending. Right.

Warren Parad [01:05:36]:
Definitely recommend it.

John W. Maley [01:05:38]:
I would recommend what kinda got me down a lot of this rabbit hole in the first place, which is a book from, I believe, the 1840s, by Charles Mackay. It is called Extraordinary Popular Delusions and the Madness of Crowds. And it goes down a very interesting path of looking at various crazes, like the tulip craze in the 1600s in the Netherlands, and, you know, things you've heard about, like the witch hunts, and then things you hadn't. Like, I'd never heard of the South Sea Bubble, and how England and France and all these countries were convinced that all these little Caribbean coral atolls would have, you know, silver and gold on them, and they were sending ships full of miners out to prospect. And then after a while, they were just trying to keep up public confidence so the stock in this public organization didn't crash. So they would get all these people together, give them mining picks, and march them down to the docks, and then they'd get paid and be allowed to go home again.

John W. Maley [01:06:41]:
So it's got all kinds of crazy little historic stories like that. And for an 1840s book, it's very readable. So that would be my thing.

Will Button [01:06:50]:
Oh, right on. That sounds pretty cool. Alright. For me, I definitely want to recommend your book, John, Juris ex Machina. Is that right? How is the last word pronounced?

John W. Maley [01:07:04]:
I have heard it pronounced both ways, but I never took Latin. So I'm, ironically, not the best person to ask how my own book title is pronounced.

Warren Parad [01:07:13]:
It's like deus ex machina. Right? The god from the machine.

John W. Maley [01:07:16]:
Yeah. I think when they say deus ex machina, it's pronounced with a hard "ch." But I would also point out that it's kind of bastardized Latin. I had, like, a Latin scholar reach out to me very early on, like, you know, this isn't proper Latin. And I was like, yeah, but if I used proper Latin, then someone in the bookstore wouldn't know what the book was about just from the title. So it had to be a little bit of a compromise there.

Will Button [01:07:41]:
Yeah. Every time I picked up the book to read it, I had that little debate in my mind over which way to pronounce machina. And it always came across in, like, this Arnold Schwarzenegger accent. And I was like, it's Juris ex machina, you girly man.

John W. Maley [01:07:59]:
That would be a good voice for the AI. Right. The moment to

Will Button [01:08:02]:
For sure. Yeah. And then my pick: I was gonna pick this last week, and I switched at the last minute for whatever reason. But I'm picking my Theragun. It's a little muscle massager, but this thing has been so cool just for working out the muscles. And, like, it's a great substitute for stretching, because I'm horrible at stretching, so this has been a good substitute for that. And I'm brutal with it. I'm not kind to it at all. It's my third massage gun, the first two were from different manufacturers, and this one actually looks like it's gonna hold up to the abuse that I give it.

Will Button [01:08:43]:
So, yeah, if you've ever considered getting a massage gun, the Theraguns are the way to go. So that's my pick for the week. Very cool. Yeah. Well, John, thank you for being on the show. This has been fun.

John W. Maley [01:08:55]:
Thanks for having me. This has been great. Yeah.

Will Button [01:08:57]:
When's the second book dropping?

John W. Maley [01:09:00]:
You got

Will Button [01:09:00]:
a timeline yet?

John W. Maley [01:09:01]:
No. I wish I did. It depends how much time I spend on it, which is where all the variability comes in. So

Warren Parad [01:09:07]:
Right. For sure.

John W. Maley [01:09:08]:
Hopefully, very soon, but it's got a lot of twists and turns in its development. So

Will Button [01:09:14]:
Awesome.

John W. Maley [01:09:14]:
Less deterministic than writing software, for instance.

Will Button [01:09:17]:
So let me ask you this. Are you using AI to help write

John W. Maley [01:09:22]:
the book? The most I've used AI for is brainstorming, like, you know, names and things, you know, like Victorian names, for instance. Give me a list of, like, a hundred names. I think that, you know, a lot of people are worried about AI and writing, and I think that makes sense as a future concern. But right now, like, if you're a writer of fiction, your voice is pretty much your, like, sole core competency and value differentiator. So when you ask AI to write stuff, it generally comes out, you know, kind of averaged out and derivative, by definition. So I think right now there's not much risk that AIs are gonna do a good job of writing in someone's voice. But For sure.

John W. Maley [01:10:12]:
I don't know. In five years, it may be a very different story. But right now, I don't trust it enough to ask it to do anything other than give me brainstorming lists.

Will Button [01:10:21]:
Yeah. I've been working on a book. And so one of the things I've been doing is, as I finish each chapter, I'll give it to AI and have it proofread for me. And it's been really helpful at coming back and saying, well, this part seems to be dragging on a little long, and this part you could expand on, and it would increase, like, the engagement and drag the reader deeper into the plot. So just using it as, like, an unbiased input for

John W. Maley [01:10:50]:
Who knows?

Will Button [01:10:51]:
Creating it. Yeah.

John W. Maley [01:10:53]:
Wow. Yeah. I haven't used that. I've used AutoCrit, which is a software tool where you paste passages in, and then it will run, like, 26 different checks.

Warren Parad [01:11:04]:
And Yeah. It

John W. Maley [01:11:06]:
and what fascinates me about it is that you can have, like, you know, thirty-five different people read your novel and give you editorial feedback, you can have a professional editor go through it, and there'll still be some place where you used the same word twice in a row. And cognitively, we just skip over that like it's some sort of optical illusion and don't notice it. And AutoCrit will be like, you just said the word "it" twice, dumbass. And it's like, man, how did this get missed through all these different passes? But that is not AI-based. It's just hard-coded checks. So it'll be interesting to see if things start moving in that direction as far as the most effective way to flag those things.
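
That particular check is simple enough to sketch in a few lines of Python:

    # Sketch: flag immediately repeated words ("the the") that human
    # readers tend to skip right over.
    import re

    DOUBLED = re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE)

    def find_doubled_words(text: str):
        return [(m.group(1), m.start()) for m in DOUBLED.finditer(text)]

    print(find_doubled_words("It was the the best of times."))
    # [('the', 7)]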

Will Button [01:11:46]:
Yeah. Right on. Cool. Well, thank you again, Warren. Thank you. And thank you for listening to the episode. Hope you guys enjoyed it, and we will see you all next week.