AI in Security: Revolutionizing Defense and Outsmarting Attackers in the Digital Era - ML 153

Special Guests: Daniel Miessler

Show Notes

Michael Berk and Ben Wilson join cybersecurity expert Daniel Miessler to delve into the cutting-edge world of AI and cybersecurity. They discuss the evolving tactics of attackers, from specialized targeting to AI-driven data collection. The episode tackles dynamic risk assessment, the arms race between attackers and defenders, and the role of open-source models in security.
They explore AI's potential to monitor, defend, and even augment human efforts against security threats, touching on both the opportunities and ethical challenges. They also examine AI's role in protecting against social media scams and phishing attacks, envisioning a future where AI acts as our digital guardian.
Whether you're in cybersecurity, development, or simply curious about AI's impact on security, this episode is packed with valuable insights. Stay tuned for a fascinating discussion!

Transcript

Michael Berk [00:00:05]:
Welcome back to another episode of Adventures in Machine Learning. I'm one of your hosts, Michael Berk, and I do data engineering and machine learning at Databricks. And I'm joined by my amazing co-host.

Ben Wilson [00:00:15]:
Ben Wilson. I do capacity planning at Databricks.

Michael Berk [00:00:19]:
Today, we are speaking with Daniel Miessler. He started his professional career in the US Army and then left to attend university. After graduating, he held a variety of security-related positions: practice principal for a software security testing service offered by Hewlett Packard, head of business intelligence for information security at Apple, and head of vulnerability management at Robinhood. Currently, he works at Unsupervised Learning, an organization that he founded, which focuses on increasing global security via software offerings and educational content. So, Daniel, kicking it off with a soft question: what personality traits make for a good security engineer?

Daniel Miessler [00:01:05]:
Yeah. Great question. Personality traits. Okay. I immediately went to skills, but personality traits, definitely curiosity, and definitely tenacity. I would say, that combination of, I guess, curiosity, passion, and discipline. So it's like I want to get this thing done, but it's driven by the interest of why things broke in the first place and the interest of, like, how could you best fix it.

Michael Berk [00:01:38]:
Got it. Yeah. So the basis of that question is one of my first engagements at Databricks, I was put on a security audit for a customer in Australia, actually. And this customer was like, hey. Make sure we don't have any security vulnerabilities. And I had done the onboarding. I had done the training, and I was just very terrified of missing something because security needs to be airtight. And if you miss a vulnerability, that's potentially extremely problematic.

Michael Berk [00:02:05]:
So how do you think about creating 100% coverage in a security audit or a system analysis like that?

Daniel Miessler [00:02:16]:
I don't, actually. I think we should accept that that's not gonna happen and decide where to spend our valuable time. And I think it all comes down to having a really good methodology and a really good understanding of, like, the business side of the risk. So you basically have to know what you're actually afraid of. One of my favorite pieces of trivia about security is the fact that it's a compound word from Latin: se is without, and cura is worry.

Michael Berk [00:02:52]:
Nice.

Daniel Miessler [00:02:53]:
So you wanna be able to produce this feeling of calm in someone. So the goal is not to remove all problems. The goal is to produce an environment in which somebody can prosper and work. Right? If you're doing this for a business, you want them to be able to run with scissors quickly, with new ideas that they came up with, and not harm themselves. Another good analogy for this is F1 cars, or any race car, but F1 is the extreme example. Brakes are there to make the car go faster without killing the crowd and the driver. So it's like, look, relax. Everything is fire retardant.

Daniel Miessler [00:03:52]:
Relax, we have the best seat belts. Relax, we have the best helmet. Relax, we have the best brakes, the best tires. Now go be crazy. So I think when you approach a security assessment, the real question is: what level of risk do we need to get to? How safe do we need to be, versus how unsafe are we able to be and still operate? Now let's build a methodology that predictably gets us to that level of risk. And those methodologies will be different if you're at, like, Los Alamos or something and you've gotta secure this really mission-critical thing that you know is gonna be attacked, versus some random piece of consumer software. That's what determines the methodology. And then it's just a matter of executing on that methodology really well.

Daniel Miessler [00:04:50]:
But first, you have to understand the scope of the problem, then understand what a methodology would look like that gets you to the acceptable level of risk.

Michael Berk [00:04:59]:
Got it. So it sounds like most other projects in business: you understand the business context, and specifically for security, the risks associated with a vulnerability; then you list out the potential solutions and do an ROI calculation of what each is worth. Is that roughly how you would go about it?

Daniel Miessler [00:05:26]:
I would say less ROI, more like a risk assessment. And I do this based on threat scenarios, so, like, a threat analysis. You basically say, here's what I'm worried about happening: all the different ways that an attack could happen, or an accident, or something, and here are the things I could do to reduce the risk of those things. Right? But most importantly, you're calculating, if this did happen, how bad would it be? And that's what determines how much effort you're gonna put into finding that problem or fixing that problem. And that's what I use to make the methodology for assessing the security of these things. Because, like we were talking about before we started recording, I want to be able to flex from 5 security questions: I'm only gonna do 5 things on this thing; what are the most important five things to test? I did this thing I called ATM.

Daniel Miessler [00:06:30]:
This was probably 7, 8 years ago. I made this little methodology called ATM. I forget what the acronym stood for, but it allowed me to say, if I only have 5 minutes to test this application, and I only have open source tools, and I'm not very skilled, what are the 5 things that I do? And I could change those variables. I could say, I have 7 days to test this application, I have all the skills available, and I have the full set of closed source and open source tools, and it blossomed out the whole methodology. I could slide the slider of time, and it would take things out. I could slide the slider of skill level, and it would take things out or add them. So that's the way I dynamically think of it. It's just, like, what's the risk of the thing

Ben Wilson [00:07:22]:
and how much time do I have

Daniel Miessler [00:07:23]:
to assess it? What are the most important things to look for in that time?
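A rough sketch of what a slider-driven methodology like ATM could look like in code; the test catalog, skill tiers, and value scores below are invented for illustration, not Miessler's actual checklist:

```python
# Hypothetical sketch of a slider-driven test-selection methodology like ATM.
# The catalog, skill tiers, and value scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class SecurityTest:
    name: str
    minutes: int            # how long the test takes
    skill: int              # 1 = novice, 3 = expert
    open_source_only: bool  # True if doable with free tooling
    value: int              # rough risk-reduction payoff

CATALOG = [
    SecurityTest("check default credentials", 5, 1, True, 9),
    SecurityTest("scan exposed ports and services", 10, 1, True, 8),
    SecurityTest("test auth on critical endpoints", 30, 2, True, 9),
    SecurityTest("review session management", 60, 2, True, 7),
    SecurityTest("fuzz input handling", 120, 2, True, 6),
    SecurityTest("manual logic-flaw hunting", 480, 3, False, 8),
]

def plan(time_budget_min: int, skill: int, closed_source_tools: bool) -> list:
    """Slide the time/skill/tooling sliders; the plan grows or shrinks."""
    eligible = [t for t in CATALOG
                if t.skill <= skill and (t.open_source_only or closed_source_tools)]
    # Highest payoff per minute first, then greedily fill the time budget.
    eligible.sort(key=lambda t: t.value / t.minutes, reverse=True)
    chosen, used = [], 0
    for t in eligible:
        if used + t.minutes <= time_budget_min:
            chosen.append(t.name)
            used += t.minutes
    return chosen

# "A few minutes, low skill, open source only" vs. "7 days, expert, everything":
print(plan(15, skill=1, closed_source_tools=False))
print(plan(7 * 24 * 60, skill=3, closed_source_tools=True))
```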

Michael Berk [00:07:28]:
Got it. And then one more question about this framework. So the risk of a thing is fixed and defined. How do you think about the probability of something occurring and estimating that?

Daniel Miessler [00:07:44]:
Yeah. I don't feel like I've ever really solved this one in a tangible way. I think I've gone to relying on intuition combined with empirical data collection. And specifically not using hard probabilities, like a FAIR system or something like that. My personal belief, and I'm sure this will ruffle several people, my personal belief is that that's not real science. It has the appearance of real science, and that is not true. And I've seen deployments of these types of systems where they try to put new science on those things. I actually think the science is sound.

Daniel Miessler [00:08:34]:
I think where it collides with reality is where the problem is. I've seen people try to implement something like FAIR, for example, that does this. They put probability and impact in dollars, and they bring an army of, like, 10 consultants into a giant company: 18 months, 24 months, 36 months. The bills keep going up, and when they ask them, okay, now give me the spreadsheet, give me the calculation, they're like, still need to collect more data. Still need to collect more data. So it just doesn't quite get you there. I think the better thing to do is to say, what does reality look like, and let's do a rough estimation based on that. And what you're gonna end up with is something that looks more like a CIA system: highly likely, somewhat likely.

Daniel Miessler [00:09:32]:
They have those sorts of, qualitative things as opposed to quantitative.
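A minimal sketch of that qualitative approach, using CIA-style buckets instead of decimal probabilities; the categories, weights, and thresholds here are illustrative only:

```python
# Illustrative qualitative risk matrix: named buckets, not decimal probabilities.
LIKELIHOOD = {"remote": 1, "somewhat likely": 2, "likely": 3, "highly likely": 4}
IMPACT = {"low": 1, "moderate": 2, "severe": 3, "catastrophic": 4}

def priority(likelihood: str, impact: str) -> str:
    """Map two qualitative judgments to an action; thresholds are invented."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 12:
        return "fix now"
    if score >= 6:
        return "schedule remediation"
    return "accept and monitor"

# Fast to lock in, so the team can start working immediately:
print(priority("highly likely", "severe"))      # fix now
print(priority("somewhat likely", "moderate"))  # accept and monitor
```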

Michael Berk [00:09:40]:
Interesting. Yeah. That makes sense, because these events by definition are super rare, so you don't have data. And, basically, abstracting away a lot of the complexity, so it's just these categories, would simplify things a lot.

Daniel Miessler [00:09:54]:
That's right. And I think, in my opinion, it's the only one that works, and most importantly, it's fast. So you can actually lock it in and start doing work, as opposed to a lot of these people that I'm talking about. They're like, a coup is 3.7 likely to happen in Dubai. And I'm like, 3.7? That's impressive. They're like, no, let me show you. Let me show you.

Daniel Miessler [00:10:21]:
And then they start talking about all the math, and I'm like, guys, I don't think that's real.

Ben Wilson [00:10:32]:
Yeah. Speaking from the perspective of somebody who was on the receiving end of one of those consulting reports, in order to secure and lock down a system, we had the same sorts of questions for those people.

Daniel Miessler [00:10:45]:
Yeah.

Ben Wilson [00:10:45]:
We're like, yeah, thanks for pointing out all of these crazy scenarios and the likelihoods of them, which we don't really agree with. It seems like you just copied this from some template that you had done for some other industry, and we don't even process that sort of data. We don't have these sorts of systems, but cool, thanks for the report. But you did miss this thing: we gave you access to our database, and you did miss the fact that when you started your project 6 months ago, this other team was storing first name, last name, credit card numbers, and CVV numbers in a database. We don't do that anymore.

Ben Wilson [00:11:31]:
But that's not on your report at all. And that's a huge security risk. You know, the axiom that I always went by, which is kind of flippant, is that nobody's gonna pursue something that's not worth their while, like an attacker. So if you're some small company that's, like, selling things to consumers, no hacker is gonna spend, you know, 30 days going after you if in the first hour they don't see something. But if they do, they're like, hey, they're wide open here. Now I know I'm gonna start digging.

Ben Wilson [00:12:14]:
But compare that to, you know, some sort of stock trader, or something dealing with finance. You probably have a lot of actors going after that and trying to poke holes all the time, because the gains and rewards from that are potentially huge if they can find something. Or some company that they know is collecting data on politicians or government figures or government operatives somewhere. You know, okay, I wanna get a list of names and who these people are. That's ripe for the picking for state actors and such. But nobody cares about your website that's selling shoes, unless you're doing something stupid like exposing credit card numbers through an API.

Daniel Miessler [00:13:03]:
Yeah. I agree with you, mostly. I think the new economy of attackers slightly changes that calculus, however, because what happens is, it's all just seen as, like, surface area that could be covered, and what they have is initial access brokers. So you have someone who specializes in getting into companies, and then what they do is they open the door and they're like, okay, everybody in. And they start bringing in these crews. One crew specializes in carpets, one crew specializes in paint, one crew specializes in... they're just like, yeah, come on in. And these different groups come in and spread out, and one of them is looking for access to Okta. One of them is looking for access to blah blah blah.

Daniel Miessler [00:13:53]:
And they're gonna go find and get that thing and then sell that one little thing. So the things that you don't think are useful or valuable inside of your company, down to the smallest thing, there's a whole economy for them. There's a group that only wants the set of emails it can get from this company, because that's useful to an email broker. They're not there for anything else. In fact, if they saw Bank of America credentials or something, they'd just be like, oh, that's someone else's job. I'm here for emails. So just imagine that there's, like, 20 different threat actors let in the door by the initial one, and they're all looking for 20 different things to go sell to this larger economy on the dark web.

Michael Berk [00:14:38]:
That's super interesting. But it would make sense. If you're doing a Mission Impossible heist, you wouldn't go collect the $100 bills and the diamonds and the gold watches. You would have a specific target that you go directly towards, because you have a fixed amount of time.

Ben Wilson [00:14:54]:
So you're going in to unlock the back door, basically. Yeah. There's a backdoor

Michael Berk [00:15:00]:
guy. There's a diamonds guy. There's a watches guy or girl.

Daniel Miessler [00:15:03]:
That's right. That's right. But I think your point, Ben, holds when it's an individual attacker, because they're looking for their thing.

Ben Wilson [00:15:14]:
Right.

Daniel Miessler [00:15:14]:
And their thing is not likely to be a small company. Their thing is likely to be bigger targets because they're just one person. But when it's a giant economy, then it doesn't matter. Cool.

Michael Berk [00:15:25]:
That's a super interesting thought.

Ben Wilson [00:15:27]:
And scary.

Daniel Miessler [00:15:28]:
Yeah. Very scary. Yes.

Ben Wilson [00:15:30]:
So what do you think about this? It's maybe slightly off topic, but also on topic for our podcast and yours. If we were to ask one of the big closed-source LLMs out there right now, like GPT-4,

Michael Berk [00:15:47]:
Mhmm.

Ben Wilson [00:15:48]:
But, like, hey, what are the best ways for me to test security access into a closed system? I've never asked it that, but it's probably gonna report back, like, I can't answer questions like this, or it'll be theoretical or academic. But if you ask it, like, hey, I want a script that will do penetration testing on this type of network protocol, it's probably not gonna answer it. It's like, I'm not gonna give you the tools to do something illegal, sort of thing. But with open source models that are of sufficient complexity, what do you think the impact on these sorts of dark-web operating groups is gonna be when somebody figures out how to take, you know, the next big open source model that's, say, a 2 billion parameter model, or, sorry,

Ben Wilson [00:16:44]:
not 2 billion, a 2 trillion parameter model that's as large or as powerful as GPT-4, and they train it on nefarious data, like, here's all the security exploits and a bunch of zero-day exploits that nobody's really used yet, but we have, you know, the knowledge of them, and then see what it can come up with. Do you see that as a potential game changer for the security world?

Daniel Miessler [00:17:13]:
Yeah. Great question. So, yes and no. It's a bit counterintuitive here, so let me see if I can weave around this. Basically, imagine that you fed a 2 trillion parameter model all of human health data, every single doctor report, everything, and you're like, look, I need to find out how to live longer. And it's looking at every single person on the planet who's ever lived, their genome, everything. It would come back and it would say: diet and exercise.

Ben Wilson [00:17:57]:
Mhmm.

Daniel Miessler [00:18:00]:
Similar to what you asked, the brokers who are attacking companies right now, they already have a list of the best ways to get in. So the LLM is actually not gonna help them that much. It might occasionally say, oh, you know, think about this. But the way to get into a company is to hack somebody who has access already. That's basically diet. The other way is to, like, use email to spoof them so they click on a thing, and that's exercise. So there's only a certain number of answers that everyone's gonna converge on. However, if you can collect mass amounts of data about a specific enterprise, like all their Slack messages, if you basically dump all their internal databases...

Daniel Miessler [00:18:55]:
And this would be in the distant future when you have, you know, millions upon millions of tokens. But if you could take, like, some significant portion of, like, a data lake for a company and drop it in and just be, like, write me code that will automatically exploit this company.

Ben Wilson [00:19:12]:
Mhmm.

Daniel Miessler [00:19:14]:
That's where you start to weaponize it in the way that you're saying. So I don't think it'll help with the tactics themselves, like how you would go about doing hacks, because that's already locked in: diet, exercise, and calorie restriction, that type of thing. However, for contextual attacks against very specific companies and targets, the biggest thing it's gonna be able to help with, going back to what we were talking about before about compression, is you'll be able to ask it: I have 1 hour to get into this box, and it's only between 3 and 4 AM on a Sunday. What should I do? Now it's gonna be brilliant. Now it's gonna be like, you could do 30,212 things, but you should probably do this first. Yeah.

Ben Wilson [00:20:03]:
Do these things, these are probably gonna give you access to the critical database that you would feed into a different LLM, where you could ask the question: how should I bet against this company based on their plans for the next 6 months that are in this data somewhere? And then you go take that information and, you know, buy a bunch of their stock or create a fire sale on them, which is gonna create some serious problems for them, like maybe as a competitor or something.

Daniel Miessler [00:20:38]:
Yeah. That's a great point, because think of it this way: a really, really smart AI, let's say it's, you know, superhuman, like 140 IQ, 180 IQ. You might ask, how do I break into this thing at 3 AM on Sunday? And it might say, is that the right question? What are you really trying to do to this company? And you're like, oh, shit. Yeah. I guess I'm trying to make money off hacking them. And it's like, well, do you wanna make money, or do you wanna hack them? Because there's other ways to make money against them. Right?

Ben Wilson [00:21:15]:
Right.

Daniel Miessler [00:21:16]:
The other thing it could do is, like, oh, you're starting your hack at 3 AM. Well, in this data lake, I found out that the system you were thinking about targeting is not actually on at 3 AM on Sunday. Right? So that's the type of shaping it could do, because it has that context.

Michael Berk [00:21:35]:
But how does it get that context? Like, sort of, if you have the data lake dump, aren't you already done? You have all the information you need?

Daniel Miessler [00:21:43]:
No. No. Because you're human. You can't look at it. I'm talking about, like, terabytes of data. Right? What would you do with gigabytes or terabytes of data sitting in a data lake? You're just like...

Ben Wilson [00:21:58]:
But how does

Michael Berk [00:21:59]:
the LLM get it is my question?

Daniel Miessler [00:22:01]:
The attacker would have to provide it. I mean, that's part of the actual attack: the attacker purchased data from all these different locations that did the harvesting, and the harvesters don't actually know how to hack, but they bring this thing in. So then the LLM submits this hand-built "here's what I think you should do" to the human hacker.

Michael Berk [00:22:32]:
I see. Okay. So it's like 2 breaches almost. It's a breach in service of another more targeted breach?

Daniel Miessler [00:22:38]:
Yeah. I mean, that's what usually happens these days. When you find out you're compromised, you go look for all the other ways you're compromised, and by different actors. And oftentimes it's multiple people who have been in there or are still in there. Not always, but oftentimes.

Ben Wilson [00:23:00]:
And this is just blowing my mind with what a security response would be like 10 years from now. Because right now, you go in and you're like, alright, the firewall is not good. Or, hey, somebody was a victim of phishing, and now you see all of this access over the last 6 weeks to systems that technically they have access to. But for the last 4 years they've never logged into these, and they certainly weren't downloading data from them. And they're not even capable of understanding how to download from these systems, but they, for some reason, have access to them.

Ben Wilson [00:23:38]:
And you start identifying all that stuff. I'm thinking about, what's it gonna be in 10 years? Do you have to change protocols? Do you have to sit there and be like, hey, now that all this information's out there about the actual systems architecture of how we run our back end, we now have to move to a different database stack, do a data migration, and then lock things down so that just the topography of what was released is no longer a threat vector for us? Do you think stuff like that is what a security response would be? Like, hey, we gotta lift and shift. Or like, hey, all this data in AWS got compromised, we now have to move to Azure.

Daniel Miessler [00:24:23]:
I don't think so, because, one, that'll be difficult, and maybe there was a reason you were on AWS versus Azure in the first place. But two, they're already gonna have the plans for Azure as well. They're already gonna know Azure inside and out. I think what's more likely to happen is that you have dueling banjos between armies of agents. So, basically, you have your army of agents internally, blue team, that are just monitoring every single thing. This is the coolest thing about AI in my mind when it comes to security, both as a society and also technical security: the best thing AI is good at is doing something that was not being done before.

Ben Wilson [00:25:08]:
Mhmm.

Daniel Miessler [00:25:08]:
Looking at moles on people's arms. How many billions of people need moles looked at who can't get them looked at right now? Therapy, tutoring, monitoring open source projects for weird changes, monitoring open source projects for malicious code or dangerous code or vulnerable code. So I think you'll basically have your own agents running inside your company, looking at every single log entry, every single opening, every single configuration, every single piece of code that gets committed, watching to see if something bad is happening or something bad could happen. And it's constantly formulating a list of how it would attack us and what we should do to fix it, and recommending fixes. And if you go far enough into the future, like 5 years, 10 years, it'll just be fixing those things. It'll be like, hey, I found this thing. Oh, hey, I fixed it. Right? Now, the dueling banjos is the attackers trying to do the same thing from the outside, but they shouldn't have as much access to that data unless they're right there inside with you with access to the same stuff.

Daniel Miessler [00:26:25]:
So they should be more on the outside, guessing. Okay? And out there, their agents are also looking for a new port that opened, a new database that opened that you forgot about. But ideally, if your agents on the inside are watching closely enough, they see that database port open before the attacker does, and they shut it down. And the security team can't do this. A security team with 50 people can't watch millions of ports and Kubernetes pods opening and closing constantly. It's just too much churn. It's too much data. But a thousand AI agents can. They don't sleep.

Daniel Miessler [00:27:03]:
They don't rest. They just watch. So I think that becomes the game of, like, that back and forth between blue and red.
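As a toy illustration of the watching-agents idea, here is a minimal port watcher in Python; the approved baseline, host, and alerting are invented for the sketch, and a real agent fleet would watch logs, configs, and pods the same way:

```python
# Toy blue-team watcher: diff listening ports against an approved baseline
# and flag anything new. Baseline, host, and alerting are invented; a real
# deployment would watch much more than ports, continuously.
import socket
import time

APPROVED_PORTS = {22, 443}  # hypothetical baseline
HOST = "127.0.0.1"

def listening_ports(host: str, candidates=range(1, 1025)) -> set:
    found = set()
    for port in candidates:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.05)
            if s.connect_ex((host, port)) == 0:  # 0 means something answered
                found.add(port)
    return found

while True:
    for port in listening_ports(HOST) - APPROVED_PORTS:
        # A fuller agent could close the service here instead of just alerting.
        print(f"ALERT: unapproved port {port} open on {HOST}")
    time.sleep(60)  # the watcher never really sleeps
```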

Michael Berk [00:27:13]:
What about the companies that don't have the budget to support that? Are they just screwed?

Daniel Miessler [00:27:19]:
No, I think it'll be extremely cheap. Now, not the super advanced cluster of red-teaming or blue-teaming agents; that'll probably be more expensive. But to have a bunch of AIs watching logs, doing all that stuff, I mean, we're gonna have that in the next 6 months to a year. Realistically, we have it now; it's just not widely distributed. I think it'll be a trivial amount of money to have these things watching you at all times.

Daniel Miessler [00:27:53]:
Not only that, it'll just come as part of the platforms. So, like, when you deploy Microsoft stuff, it'll be like, oh, yeah, spin up this many agents. It'll cost you, whatever, $12 a month.

Michael Berk [00:28:07]:
So it'll be like cross-region replication for data: click a button, pick how much money you wanna spend on said button, and it'll go. Will it be owned by the clouds, theoretically, or do you think it would be an independent offering, or both?

Daniel Miessler [00:28:23]:
The way I see that working is everything starts off as, like, an idea, it becomes, like, an app, and then it ends up in a platform. The question is just how long that takes. So I think what's likely to happen is, when Microsoft, say, deploys something for you, all this infrastructure, they'll just be like, hey, give me your list of things you're worried about. Actually, we're bringing our own list of things you should be worried about, but let us know what you're worried about. And by the way, there's 10,000 AI agents crawling around your thing all the time. And if it sees any indication that any of these bad things are happening, it'll alert

Ben Wilson [00:29:04]:
you. I mean, we kinda have a version of that today with Copilot. You're in VS Code, as you're typing. Happened to me yesterday. I'm a back end developer, but I'm exploring front end stuff just to learn and, like, build some new stuff. And I was writing some React code yesterday. I'm not super familiar with that framework.

Ben Wilson [00:29:30]:
And I made, like, an error. Of course, you know, a TypeScript violation, and the linter warned me. It's like, do you wanna fix this? And I was like, yeah, Copilot, go ahead. And the response actually surprised me, because I'm on the beta of their most recent one. And it provided this paragraph of text that kind of explained why it fixed what it did.

Daniel Miessler [00:29:59]:
And it,

Ben Wilson [00:30:00]:
it was basically like, hey, you're not consuming the index here in this list iterator, and this is bad for performance, but there's also a potential way that somebody can mess with this data that you're processing. So don't consume this JSON raw and override the HTML, because of this attack vector. I was like, I had no idea that was a thing, but it's

Daniel Miessler [00:30:26]:
like

Ben Wilson [00:30:26]:
a SQL injection, but with, like, JSON that's passed in. And it responded, it said, do you wish to fix this? And I was like, yes. Clicked on it, rewrote the code. And I looked at it, I'm like, oh, so it's gonna stringify it, then process it, and kind of do it in a sandbox mode. I'm like, that's pretty cool.

Ben Wilson [00:30:47]:
The clever bot. And then I told one of my buddies who's helping me with front end development right now. He's like, yeah, there's a better way to do that. And he showed me a code snippet, and he's like, you could just not do any of that and directly do it this way. And I'm like, oh, so just use the component directly. He's like, yeah, that's way more secure.

Daniel Miessler [00:31:09]:
I'm like, cool.

Ben Wilson [00:31:10]:
So I learned. But these tools, they adapt to it. You know, they have that contextual knowledge of, hey, this is a bad idea, and we can use that here and now, today. We have to prompt it, or we have to make an egregious code mistake. But, yeah, I don't think we're far off from exactly what you said, integrated into a tool like VS Code: when you go to commit, it does a code scan, if you, you know, enable it, and it says, hey, I know you haven't touched this code in 14 months in this one module, but there's a CVE issued about this, you know, 2 weeks ago.

Ben Wilson [00:31:52]:
So I've rebooted for you. Do you wanna accept my changes? Yes or no?
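The bug class Ben is describing, untrusted JSON rendered straight into markup, is the HTML cousin of SQL injection. A minimal Python illustration of the escape-before-render fix (the payload is invented):

```python
# The markup cousin of SQL injection: untrusted JSON rendered straight into
# HTML executes attacker-controlled markup. Escape before you render.
import html
import json

untrusted = '{"name": "<img src=x onerror=alert(1)>"}'  # attacker-controlled payload
data = json.loads(untrusted)

# Unsafe: f"<div>{data['name']}</div>" would emit live attacker markup.
safe_fragment = f"<div>{html.escape(data['name'])}</div>"
print(safe_fragment)  # <div>&lt;img src=x onerror=alert(1)&gt;</div>
```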

Daniel Miessler [00:31:57]:
Yeah. And now imagine that those agents are not only in your IDE, but they're also in your build system. They're also crawling around in your Slack. They're also in your data lake. They're also watching every single log that's coming into Splunk or whatever aggregation location, and they're just constantly watching, fixing, and streaming information to, like, the security leaders and then the business leaders. And they know the threshold of when they should alert you and when they shouldn't, what they should fix by themselves versus ask you. So if I'm sharing my password with you in Slack, it'll just, like, pop up a thing and be like, hey, you shouldn't do that. I just reset your password.

Daniel Miessler [00:32:41]:
Don't do that again. By the way, I'll let your boss know. It's just gonna be everywhere, fixing everything all the time, knowing that the attacker is trying to do the same exact thing with their bots all the time. So what's exciting about it is that it's gonna fill in the cracks, the things that just aren't being looked at. Like, the biggest way companies get hacked right now is devs go out, they spin up boxes, they put real data in them, and they leave them online and just forget. And in a large company, this is easy to do, really easy to do, and it's hard to constantly monitor what you have facing the Internet at all times, every single port, every single IP. It's not hard for AI. Super easy for AI.
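As a toy illustration of that kind of guardrail, a first-pass secret screen in front of a smarter detector might look like this; the patterns and the reset/notify hooks are hypothetical placeholders, not any particular product's API:

```python
# Minimal sketch of a chat guardrail: screen messages for obvious secrets,
# then trigger remediation. Patterns and handler functions are placeholders;
# a production system would add entropy checks and an LLM classifier.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),              # shape of an AWS access key id
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
]

def reset_credentials(user: str) -> None:
    print(f"[action] rotating credentials for {user}")  # placeholder hook

def notify(user: str, msg: str) -> None:
    print(f"[dm to {user}] {msg}")                      # placeholder hook

def scan_message(user: str, text: str) -> None:
    if any(p.search(text) for p in SECRET_PATTERNS):
        reset_credentials(user)
        notify(user, "Looks like you shared a secret, so it has been rotated.")

scan_message("chris", "here you go: password = hunter2")
```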

Ben Wilson [00:33:32]:
Yeah. We don't even have the ability to do that at Databricks anymore. Years ago, like, Michael, if you had joined when I joined, you could have done that. Spun up an entire AWS account, basically, that was tied to the Databricks field account. You could start up any service you wanted.

Ben Wilson [00:33:52]:
Now good luck. Even in the engineering side, everything that we do is sandboxed, and there's timeouts, like aggressive, sometimes annoying timeouts. But once you think about it from a security perspective, you're like, oh, that's why they cycle my keys every 2 hours. Yeah. Yep. It's probably smart. So, yeah, you have a fixed window to spin up something, you know, dangerous. And when that window stops, everything shuts down.

Ben Wilson [00:34:18]:
You have to reinitiate from scratch, and there's not even a mode for you to type in a password. It doesn't exist. Everything is through configuration scripts, through automated systems that use build servers, basically, that you don't have direct access to. So it's so locked down. And when you're working on something, you're like, this is gonna take me 12 hours to fix, and you end up having to restart 6 times while working on it. You're like, man, this sucks.

Daniel Miessler [00:34:54]:
You know what's really exciting about that? Have you all heard of Chaos Engineering? I assume you have, from, like, Netflix.

Ben Wilson [00:35:01]:
Mhmm.

Daniel Miessler [00:35:03]:
I think they had Chaos Monkey at some point. And at the time, it was just automation, like scripts. Right? But what you'll be able to do soon is basically hire a thousand different AI agents that have a certain amount of credentials. Right? And they just go break things. They just walk down the street kicking in windows. They try to turn off services. So now you get into this Nassim Taleb antifragile concept, where they're trying to pull credentials, they're trying to hack you constantly, and the good bots are there defending and fixing and cleaning up at all times.

Daniel Miessler [00:35:45]:
And anytime the bad bots, which you've enabled, get a win, that just gets cleaned up. So when an attacker actually does break in and starts trying these things, it's like, what are you doing? We've already tried all those doors and windows, we have coverage, and we've already fixed all those things. So imagine an org that is constantly being hacked from the inside and everything's being fixed almost in real time, because AI has the time to do that. I mean, that's normally what a red team does, but it's so slow compared to that.
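A tiny sketch of that sanctioned attack-and-repair loop, with invented door names and no real attack logic, just to make the antifragile shape concrete:

```python
# Toy "dueling banjos" harness: sanctioned red agents kick doors, blue agents
# clean up. Door names and the random attack are invented for illustration.
import random

# True means the weakness currently exists.
DOORS = {"stale dev box": True, "open storage bucket": True, "weak admin password": True}

def red_agent():
    """Try a random door; return its name if it was still open."""
    door = random.choice(list(DOORS))
    return door if DOORS[door] else None

def blue_agent(door: str) -> None:
    """Fix whatever the red agent just proved was broken."""
    DOORS[door] = False
    print(f"[blue] closed: {door}")

for _ in range(10):
    hit = red_agent()
    if hit:
        print(f"[red] got in via: {hit}")
        blue_agent(hit)  # near-real-time fix: the antifragile loop

print("still open:", [d for d, is_open in DOORS.items() if is_open])
```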

Ben Wilson [00:36:28]:
A human can only type so fast and do so many things.

Michael Berk [00:36:34]:
Can you speak to the importance of open source in the security world as it relates to the arms race between red and blue?

Daniel Miessler [00:36:46]:
Yeah. I guess the main thing I would say there is that transparency of algorithm is going to be, like, the most important thing. So if everyone knows exactly what everyone else has, they know what types of vulnerabilities there are, then it's just a question of, how often are we testing for those things? What are we finding, and what are we fixing? So I feel like the more of that that's open and clear... this is very similar to encryption algorithms. The most secure ones are open. Right? Like Rijndael for AES, for example. Rijndael is an open algorithm; everyone knows how it works, but it's still secure. So I would say that is probably the future of security: you could fully describe exactly how your company works, name all your internal systems, name every interaction, publish a full architecture diagram, and you would not be any less safe. Right? So I think that speaks to the open source sort of model.

Daniel Miessler [00:38:10]:
The reason we don't do that now is because we know that the knowledge of how our stuff works is actually a danger to us. And that gap between those two should not be there.

Michael Berk [00:38:25]:
What will bridge that gap?

Daniel Miessler [00:38:29]:
Having complete coverage of your security vulnerabilities in that open model. Via agents, probably; agents are what's gonna get us there, but via whatever method. If we had 2,000-person security teams, or we just had better automation, even without AI. As long as we had full coverage, continuous coverage, never-sleeping coverage, and it's high quality, and we were sure that nothing wasn't being looked at, nothing was being ignored, then I think we could be like, yeah, we use Okta, we use Slack, we use all these different tools, this is our back end, we're an AWS shop. And, I mean, I feel like we're kind of getting there a little bit. No one's ashamed to say we use TLS 1.3. Right? So I think that's where it ends up going: everyone will be in that situation. We use Azure, we use TLS, we use the following things, and ideally, that should not help the attacker at all.

Michael Berk [00:39:45]:
Got it. Another question on that front. There's a perception that security is just a cat and mouse race, almost, where the hackers get a little better, we invest more in security, security gets better, and the hackers figure out something new, and it's just back and forth. Do you think that's a cycle that will always be perpetuated, or is there some way to stop that cycle?

Daniel Miessler [00:40:15]:
No, I think that will always be there. Well, kind of, in a very distant sense. And I would say I'm speaking about, like, Google, because the difference between Google and, you know, Jane's Bait Shop, and the IT shop at Jane's Bait Shop, which is also Jane, right? The difference between those is insane. So when you find these insider threat situations, someone was selling TPU schematics to China, and they found it inside of Google. Well, that's because they probably have one of the best, maybe the best, insider threat programs.

Daniel Miessler [00:41:02]:
So for a top-tier company like that, that has, like, the best security, the difference between red and blue, the time frame between when the attacker finds it versus when the defender finds it and cleans it up, or maybe the defender even finds it first, that gap, that window of time, is gonna be so small it's gonna approach, you know, a limit of zero. For everyone else, that gap is still gonna be large. This is where AI offers the most possible help: to close that gap, because of continuous watching and fixing.

Michael Berk [00:41:50]:
So it's almost like arbitrage in finance. Interesting. That's super cool.

Ben Wilson [00:41:59]:
Yep. I think the only way to protect yourself, or the software stack that you're working on, like, the only true protection, is to build something nobody cares about. Right? So I've talked to,

Daniel Miessler [00:42:19]:
you know,

Ben Wilson [00:42:21]:
small organizations back when I was in the field at Databricks, and people are like exactly what you asked, Michael, at the beginning about your first engagement: I need a threat model for my system. And you sit there and ask, like, well, what do you guys do, and what kind of data do you store? Like, well, we don't store any PII data. There's nothing in our database that could ever be connected to a human being or a business entity. Everything's been obfuscated, but we really wanna make sure that nobody can get in there. And I'm like, can I see it, since I'm under NDA? And they're like, yeah, here, do a select star, and you can see the data.

Ben Wilson [00:43:02]:
And I'm looking at it, like, this is a matrix of floating point numbers. I have no idea what this is for. And they're like, well, is it secure? Like, yeah. You could put this onto a public AWS bucket, and nobody would touch it, because there's no contextual information here

Daniel Miessler [00:43:26]:
to gain. Secure because useless. Exactly.

Ben Wilson [00:43:29]:
But it means something to them because of, you know, code that they have that can read it. And, like, your security model is pretty darn good here. For what that company was doing, they definitely would not want the code to be released with that data. And I was like, just make sure your repo is really locked down, like, really locked down. And they're like, oh, yeah, totally, we have to go through 5 steps just to log in to our GitHub account. Like, that's good.

Ben Wilson [00:44:02]:
And make sure that you're monitoring all access to that. But the data? I don't know anybody who would know what to do with that data other than you guys with your code base.

Daniel Miessler [00:44:14]:
Yeah. So interesting, because here's the way I believe this works today, and I've seen this in multiple places. What the attacker would do is they'd find the name of that system. Let's say it's called Charlemagne or something. They're like, okay, Charlemagne back end. The first thing attackers do now, and this is part of this whole attacker economy, is they pull Confluence, they pull Google Docs, they read everything.

Daniel Miessler [00:44:46]:
They read the onboarding docs. They figure out how to onboard onto everything. And if it's good onboarding information, it'll be like Charlemagne is the most important system ever. Oh, by the way, Charlemagne is super super secure. The back end is super obfuscated. It's not useful at all. The Charlemagne front end is located at

Ben Wilson [00:45:06]:
Yeah.

Daniel Miessler [00:45:08]:
Turns out Chris is the admin of the Charlemagne system, and Chris really likes dogs. So they write a phishing campaign about, hey, do you wanna adopt a dog? They send it to Chris. They compromise Chris's account. They go and pull all the data. They never mess with the Charlemagne system, because it's secure. They compromise Chris and get it all through the front door.

Ben Wilson [00:45:32]:
So, really, I mean, would an AI system, even if it was set up in such a way that it's scanning every email, every Slack message, every means of communicating with somebody through their work computer, that would still have to extend to a personal computer too. Because maybe Chris, you know, sometimes logs into some systems at work from his personal, like, his phone or

Daniel Miessler [00:46:01]:
Yeah.

Ben Wilson [00:46:02]:
You know, his computer at home, even though he shouldn't be doing that. Would these systems need to follow employees around on their devices, like, wherever they connect?

Daniel Miessler [00:46:14]:
Yeah. It depends on the type of security system. Some will be very focused on logs and data and stuff like that, like traditional network security. But if you have, like, an advanced security system, which all these platforms are gonna be selling, it's gonna start with startups first. It'll be that idea, application, platform progression. Right? So part of that, there's a company called ZeroFox, which I think is still around. They used to, like, defend executives against social media attacks, but it'll be the same sort of thing. So it'll be like, okay, Chris is one of the people. Find every single admin in Okta, or in every single critical system, which you could find from the documentation, and add them to the VIP list. Add all executives to the VIP list.

Daniel Miessler [00:47:06]:
Add anyone with admin credentials of any type to the VIP list. Now, for the VIP list, go build an OSINT recon protocol and profile for all of them, which does, like, a personality analysis and figures out what types of phishing emails they would click on. And now look for those types of emails being sent to them. Also give them training around not clicking on things related to that. So you have to learn the attack surface inside of your company, because you know the attacker's gonna be doing the same. Especially since Chris is on LinkedIn talking about how he works at this company and how he loves dogs.
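In rough pseudo-Python, that VIP-list step might look like the following; fetch_admins and build_osint_profile are hypothetical stand-ins, not a real Okta or OSINT API:

```python
# Hypothetical sketch: assemble a VIP list of admins, then attach an
# OSINT-derived phishing profile to each. fetch_admins and
# build_osint_profile stand in for real identity/recon integrations.
def fetch_admins(system: str) -> list:
    # Placeholder for an identity-provider query (e.g., admin role lookups).
    return {"okta": ["chris"], "aws": ["dana"]}.get(system, [])

def build_osint_profile(user: str) -> dict:
    # Placeholder for public-footprint collection (socials, bios, posts).
    return {"interests": ["dogs"], "likely_lures": ["dog rescue", "adopt a dog"]}

vip_list = {}
for system in ["okta", "aws", "ci"]:
    for admin in fetch_admins(system):
        vip_list[admin] = build_osint_profile(admin)

for user, profile in vip_list.items():
    print(f"{user}: watch inbound mail for lures about {profile['likely_lures']}")
```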

Michael Berk [00:47:52]:
Yep. Are humans gonna consistently be the weak point in security systems? Like, multifactor authentication, for instance, was a game changer to some degree. Is there gonna be something that makes humans secure?

Daniel Miessler [00:48:12]:
Oh, fascinating question. Eventually, humans will become more secure, not because they've actually gotten better, but because AI is sitting in front of them filtering. So, for example, that email, and this will be in a security system very soon, I'm actually working on something similar. The email comes into Chris's work email at his company, and it's like, hey, Chris, I see that you're a lover of animals. We're actually starting up a local dog rescue thing in your town, and we want you to be the leader of it.

Daniel Miessler [00:49:01]:
In fact, the New York Times is coming to town to talk about it, and they'd like to interview you. The AI reads that and says, oh, this is targeting Chris, and Chris will click on this. So it rewrites the email: phishing email regarding dogs, don't click. Or it doesn't even send it to him. It actually just sends it to the security team and says, Chris was phished, but we didn't pass the email on to him. Or, for example, let's say we're having this conversation in person, but we're all wearing AI glasses, and our AIs are listening, and you're trying to sell me a bridge in a state that doesn't have bridges. It's just gonna pop up and be like, hey, this person is complimenting you. This person is talking about how awesome you are.

Daniel Miessler [00:49:58]:
This person also is running from the authorities in 3 different states. Also, there are no bridges in the state that he is talking about. They're just trying to sell you a bridge. So it's not that I wouldn't have purchased it. I was about to give you the money, but this thing popped up and protected me. And it turned off my bank account, so now it won't even work, because it's protecting me from myself.
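A skeleton of that inbound-mail guardrail; classify_with_llm below is a keyword stand-in for whatever model would actually make the judgment call:

```python
# Skeleton of the inbound-mail guardrail: judge each email against the
# recipient's profile, then deliver, annotate, or quarantine. The keyword
# classifier is a stand-in for a real LLM call.
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    to: str
    body: str

def classify_with_llm(email: Email, profile: dict) -> str:
    # Hypothetical verdicts: "benign", "suspicious", "targeted_phish".
    if any(lure in email.body.lower() for lure in profile.get("likely_lures", [])):
        return "targeted_phish"
    return "benign"

def triage(email: Email, profile: dict) -> None:
    verdict = classify_with_llm(email, profile)
    if verdict == "targeted_phish":
        # Don't deliver; tell the security team instead.
        print(f"[quarantine] {email.to} was targeted; message withheld")
    elif verdict == "suspicious":
        print("[deliver with warning] prepended: phishing email, don't click")
    else:
        print(f"[deliver] {email.sender} -> {email.to}")

profile = {"likely_lures": ["dog rescue", "adopt a dog"]}
triage(Email("scammer@example.com", "chris",
             "Want to adopt a dog? Lead our new local dog rescue!"), profile)
```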

Michael Berk [00:50:23]:
So it's another sort of arms race of systems.

Daniel Miessler [00:50:29]:
Yes. Because those same AI systems are gonna be trying to compromise that person at the same time that your AI system is trying to defend you against them.

Ben Wilson [00:50:39]:
I don't think I've heard a better scenario explained that kind of captures the beneficence of advanced AI. I've heard a lot of people say, like, oh, I don't want nanny AI. I don't want a system. I've even heard it from colleagues who, at first, before they actually tried some of these new code-focused LLMs, were like, there's no way you can code better than me. And then a bunch of us are saying, like, that's not the intention. It's not replacing you. It's augmenting you.

Ben Wilson [00:51:16]:
It's helping you.

Daniel Miessler [00:51:17]:
Augmenting. Yes.

Ben Wilson [00:51:18]:
And it's saving you from the annoying process of writing docstrings. You've told me you hate doing that. It does that really, really well. So just let it do that. And then they try it a couple times, and they're like, this is awesome. That took 3 seconds.

Daniel Miessler [00:51:34]:
Right.

Ben Wilson [00:51:34]:
I'm good. I'm a believer. And I think for the laypeople who are really terrified about AI, I see it on news clips all the time, and it's almost just boring now, hearing it. Years ago, you'd see some local newscaster being like, have you seen the movie Terminator? It's like, oh, please. But even for those people nowadays, they're talking about GPT-4, OpenAI, and Cohere. You start hearing the name drops of these big, you know, model companies. And people are concerned about, oh, I don't know if we want this listening to us all the time.

Ben Wilson [00:52:14]:
It's like, your devices, the thing that is sitting right next to you on the desk, it's already listening to you. And people being concerned with having something that is watching them or listening to them, I think it becomes much more palatable for the general human on the planet when they are protected from getting their bank account drained by a scammer, you know, which happens all the time. You know, robocalls, they're getting more and more sophisticated over time. Some of these things coming out of India, those call center houses.

Daniel Miessler [00:52:53]:
Mhmm.

Ben Wilson [00:52:53]:
And you see it in the news, like, some retiree, you know, somebody's grandma just got $800,000 cleaned out in 2 weeks from some, you know, love scam or something.

Daniel Miessler [00:53:07]:
Yep.

Ben Wilson [00:53:07]:
And if people saw these systems that are in active development do something like that. Like, hey, I'm listening to your calls, and I'm gonna warn you, or I'm gonna prevent you from falling victim to this, because this person is emotionally manipulating you, and none of this is real. It's legit. The human bias that is introduced by emotions is so hard to overcome for most people, and I think that's what a lot of attackers go after. It's like, hey,

Ben Wilson [00:53:44]:
I can manipulate this person because Chris likes dogs. I'm gonna show him a picture of a sad, abused dog, and he's gonna wanna see what's going on here. And, yeah, to have a system that protects against that, I think that's gonna make everybody a believer. Like, yeah, this stuff is really powerful.

Daniel Miessler [00:54:04]:
Yeah. I think the other thing that's gonna make them a believer is just how useful it is to have a friend that's with you at all times, who knows everything about you, knows everything about your traumas, and doesn't let you talk shit to yourself. You're like, I could never do that. That guy's too smart. I can't go on that show. That podcast talks about machine learning. And it's like, no, you're cool. You can go on there.

Daniel Miessler [00:54:28]:
You know, you talk about AI as well. Like, you could do this. They'll like you. Like, it'll be cool. Just wear your blue hat. It'll be awesome. And you're like, I'm gonna stay in bed, like, I can't go to the gym. It's like, no, you know you feel better when you go to the gym. You gotta get up.

Daniel Miessler [00:54:43]:
Come on. It's gonna be fun. Alright, 1, 2, 3, get up. Right? And so that's gonna be your best friend, but now it gets an upgrade from the company, let's say it's OpenAI or whatever. Upgrade.

Daniel Miessler [00:54:55]:
Now, propaganda filter. So every time you receive propaganda, I'm gonna warn you. I might rewrite it. So it's like slowly over time, it's gonna get more and more useful. But I think the deepest one is, like, people are alone.

Ben Wilson [00:55:09]:
Mhmm.

Daniel Miessler [00:55:09]:
It's hard to build friendships, especially when you get into your thirties. And this thing is gonna know everything about you. Now, here's the flip side of this. You thought having your Social Security number compromised was bad? You thought having your bank accounts compromised was bad? This company providing this AI that is so amazing, that's integrated with your life, it's just a startup with regular people, and they run AWS, and they forget passwords, and they leave boxes online. There will be a breach of AI digital assistants at some point in the near future. And it won't be your Social Security number that you lose. It won't be your bank account that you lose. It'll be your soul.

Ben Wilson [00:55:53]:
Yeah. It's you.

Daniel Miessler [00:55:54]:
It'll be your journal where you talk shit about your friends. You're like, yeah, I don't know, you know, Michael, we do this podcast thing, and he makes me mad sometimes, and blah blah blah. And that gets released. It's on Pastebin. Michael's reading what Ben said about Michael to his AI agent in the privacy of, like, some, you know, sad time, and it's just all left out there. Your entire journal, like, your traumas, everything. And it's like, that's gonna sting.

Daniel Miessler [00:56:27]:
It won't stop people from using them. It'll just be a little tiny blip, and then it'll just the usage will go back up because it's too valuable. It's too valuable.

Michael Berk [00:56:37]:
To that point, it seems like over the past 50 years as technology has become more pervasive and entered daily life, there's a lot more of your, as you said, soul out there.

Daniel Miessler [00:56:49]:
Yes.

Michael Berk [00:56:51]:
It's sort of turning into 1984, where, like, thought crimes could become a thing. Do you think we're headed in that direction, or is there a way to protect yourself, like, throw your cell phone into the toilet and go live in Alaska on, like, a farm? Is that the answer?

Daniel Miessler [00:57:09]:
Well, a lot of people will definitely do that. I think Karpathy said something like this, which I thought was really smart: society will fragment into multiple different small societies. So you have, like, the Amish approach: I just can't do this. Me, I'm gonna go the exact opposite direction. I'm gonna have all my stuff in a personal API. It's, like, published. Maybe not my journal; I also have some private entries.

Daniel Miessler [00:57:36]:
But largely, I'm gonna secure myself through transparency, more of an open source model. As the compromises start happening, and we saw this a little bit actually with Me Too, what happens when the pendulum swings back is you realize everybody's broken.

Ben Wilson [00:57:55]:
Mhmm.

Daniel Miessler [00:57:56]:
You realize that thing that got compromised that embarrassed me? Ben is like, actually, I kind of have those as well. And Michael's like, yeah, me too. You know, in my worst moments, I kind of make those kinds of jokes as well. And pretty soon, it saturates into the entire world and everyone's like, I'm no longer shocked about any of this because it turns out that's just what humanity does. So there's no more gotchas. There's no more gotchas.

Michael Berk [00:58:27]:
That's hitting. So everybody will know that everybody... I mean, yeah, I guess that makes sense. If there's this amount of openness, the societal definition of what is taboo will shift toward what is normal. And so if normal becomes a lot more open, then we're good.

Daniel Miessler [00:58:48]:
That's right. Now, there is a flip side of that, which is the 1984 model, which is authoritarian government. Do you remember, I don't remember which Batman, I'm not into Batman, but there was one where Batman wanted to build a system that listened to every single cell phone, and he needed it to be able to go and find the bad guys. And kind of the hero of the story, his buddy who built all the tech, was like, listen, if you build this, it will be a weapon. I will build it for you, but I no longer work for you, because someone else is gonna come and use this thing as a weapon. So this amazing thing, which I am working to build for human reasons, I believe the purpose of AI is to magnify humanity. I don't care about the tech whatsoever. I care about humans interacting with humans, and AI is a way to make that happen in a better, more secure way.

Daniel Miessler [00:59:52]:
But if Mao or Stalin had this tech, you absolutely get 1984, but a scary, scary version of it, because it sees and knows everything.

Ben Wilson [01:00:08]:
Yeah. If you're not toeing the party line, you are an object of liquidation. I mean, Stalin did that anyway, with how many millions of people he killed.

Daniel Miessler [01:00:19]:
Well, and he used people to do it. He was like, go narc on your friends. But what happens when it's all the cameras and all the phones?

Ben Wilson [01:00:27]:
Exactly.

Daniel Miessler [01:00:28]:
And every single smart speaker.

Michael Berk [01:00:30]:
Terminator plus Hitler plus 1984.

Daniel Miessler [01:00:33]:
Exactly. So the trick isn't the tech. The tech can be this wonderful human thing, or it could be Stalin in 1984. And that's a question of human control of the system.

Ben Wilson [01:00:55]:
So, from having this perspective, which I also share: when you hear politicians attack technology, do you hear in the back of your head what I do? I'm like, this is a society problem, a process problem, not a technology problem. Absolutely. But when they start attacking, they're like, oh, we need to regulate this industry, and we need to control how people do these things or how they use this. You know, like, shouldn't you control how you're going to use this, as a member of the government? What's your role in this, and why are you calling this out?

Daniel Miessler [01:01:42]:
Yeah. I don't know. I feel like some brakes, or bumpers, need to be used. I just think so few people are thinking about this stuff the right way. The really scary thing to me is when I talk to friends who are not necessarily in the tech space. I come to dinner and I'm like, oh my god, what's going on? I'm building this. Like, have you seen this latest thing? And they're like, what is that? And I'm like, yeah.

Daniel Miessler [01:02:16]:
What are you building right now with AI? And they're like, you mean ChatGPT? Didn't that kind of blow over? I thought it was really hyped up, like, four months ago, and then it just kinda died down. Yeah, I don't really use it. Can you pass the salt? And I'm like, holy crap.

Ben Wilson [01:02:38]:
Yeah.

Daniel Miessler [01:02:38]:
First of all, they think AI is ChatGPT. Second of all, they think it peaked in hype and now it's going down. And I'm like, how do I talk to this person? The reason I continue talking to them is, I'm like, listen, get ready, because if you think AI is bad because it's gonna replace jobs, wait till AI is inside of a competent robot. When GPT-6 is inside of a robot that can crawl on its back under a house, take a picture of a broken pipe, and reach up and actually fix the fitting, and it has the knowledge of all pipes and all fittings, plumbers and drivers and jobs like that are no longer safe. So when I hear somebody being like, isn't AI ChatGPT, and didn't that blow over four months ago, I'm like, I fear for you. Mhmm.

Michael Berk [01:03:43]:
Well, yeah. At least in the hardware space we have a bit of a buffer, but in the software space... yeah. The nice thing is you can learn to leverage the tools. Yes. But there are a lot of people who will be left behind if they don't do that, and it's hard.

Daniel Miessler [01:04:00]:
Absolutely. I mean, that's my whole vibe right now: this transition to what I'm calling Human 3.0. I'm trying to build this structure: what does it look like to be a human that survives through this, after AI has happened? And there is a case for hope. So, AI crushed humans at chess back in the late nineties; Deep Blue beat Kasparov. And that was it.

Daniel Miessler [01:04:33]:
And what's crazy about it is humans aren't getting any better. Like, Carlsen is a little bit better than Kasparov, but not that much. In the meantime, my iPhone crushes Deep Blue. But here's what's really exciting about that. It's 2024, some 25 years later, and chess is the most popular it's ever been. In the history of all chess, it is the most popular it's ever been right now, and nobody is watching computers play computers.

Daniel Miessler [01:05:09]:
They're only watching humans play humans. And to me, that's a signal of hope: ultimately, the tech can beat us, but we still care about human-to-human interactions.

Michael Berk [01:05:23]:
But there's sort of a cutoff, because that's more art-based. The writers' strike, for instance, is a good example. I mean, AI-generated art is cool, but I typically wanna see something created by a human. In the profit-driven, cold business world, though, maybe it's a bit different.

Daniel Miessler [01:05:43]:
Well, I think the trick is combining those. Right? So right now, how many Steven Spielbergs are there? One? Twenty? A hundred? A thousand? Not many. How many people on the planet? Eight billion. Cool. How many people have had an idea for a series or a movie that's as good as or better than Steven Spielberg's? Now imagine the gates between them having that idea and being able to make a movie like Saving Private Ryan. The gates are: are you in LA? Do you know the people who can get this done? Do you have access to millions of dollars? Do you have access to the acting talent? Okay. Now remove all those gates, because the whole thing is AI-generated with AI actors, but it has just as much impact on somebody as if they watched Saving Private Ryan.

Daniel Miessler [01:06:42]:
And so it is a completely human story. They made it up. They said what the actors do. They have the same amount of control that Steven Spielberg does. But the art, the acting, the production, the visuals, the cinematography, everything: it's all done by AI. So some 13-year-old kid in Nairobi makes the next Saving Private Ryan for $38, and it crushes; it gets an Oscar. And I think that's what's exciting about the future: that won't be an AI thing. That will be a human thing, which AI made for the human.

Michael Berk [01:07:28]:
That makes sense. Sold.

Daniel Miessler [01:07:31]:
Now, what is slightly scary is, the AI that's really smart can be like, oh, that was really cool. Can I try? And you're like, okay, go for it, and it makes something even better. And then you're like, do we give it an Oscar? Should we name this thing? What do we do? Right. And that's where you gotta start wondering: even though it can make a better movie, kinda like chess, where nobody watches chess computers play each other, do you just say you don't allow them to make movies? That they must come from a human? But that gets really difficult, because the human could just basically hire one of these things and give it an idea. It goes and makes the whole thing, and then the human puts their name on it. So...

Ben Wilson [01:08:26]:
That reminds me of a conversation I had a while ago with a Databricks customer in the video game industry; they're a very big studio. They're using their own custom-built AI to build bots that play video games.

Daniel Miessler [01:08:49]:
Yep.

Ben Wilson [01:08:52]:
Years ago, people did the, hey, we built a bot that can play Pong really well, and then we built one that could play Super Mario Bros. and beat it in world-record time by doing frame analysis. Fast forward a little bit, and, yeah, we've got a bot that can play some of the most complex video games out there. Reinforcement learning systems that are trained really well with custom architectures can play a game like Call of Duty, and they don't have access to the game engine. They only have the same access that a human would have: whatever's rendered on screen and the input controls a human has. Yep. They can push the buttons faster, or move a virtual mouse, their reticle, much faster and more accurately than a human can. But what they're doing in these simulations is playing bot versus bot, not to see who wins. Well, they kinda do.

Ben Wilson [01:10:04]:
They wanna give an important rule set to a bot and say, hey, learn what you can about the engine and see if you can break this. Basically, your optimization is to get a really good kill-to-death ratio in this game, so go nuts. And it will find and exploit patterns that would take an exceptional person, somebody with high creativity, to figure out, but it's doing it through brute force, just trying stuff. And then it's like, hey, I'm now dominating, because I'm doing this really weird thing where I throw a grenade at the corner of a building, it bounces just right off this other location, and it kills the enemy every time. So it starts doing that over and over and over again.
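A rough sketch of the setup Ben describes: a bot that sees only rendered frames and emits controller inputs, optimized for kill-to-death ratio. The environment and policy below are toy stand-ins; the studio's actual harness is not public:

```python
import random

class ToyGameEnv:
    """Hypothetical stand-in for the studio's game harness. The agent
    only gets a rendered frame (faked here as random numbers) and
    submits inputs, the same access a human player has; no engine
    internals are exposed."""
    def __init__(self, max_steps: int = 1000):
        self.max_steps = max_steps
        self.t = 0

    def reset(self):
        self.t = 0
        return [random.random() for _ in range(16)]  # fake "pixels"

    def step(self, action):
        # Returns (next_frame, kills_delta, deaths_delta, done).
        self.t += 1
        kills = 1 if random.random() < 0.10 else 0
        deaths = 1 if random.random() < 0.05 else 0
        frame = [random.random() for _ in range(16)]
        return frame, kills, deaths, self.t >= self.max_steps

def kd_reward(kills: int, deaths: int) -> float:
    # The stated objective: kill-to-death ratio (+1 avoids divide-by-zero).
    return kills / (deaths + 1)

def run_episode(env, policy) -> float:
    frame, kills, deaths, done = env.reset(), 0, 0, False
    while not done:
        frame, dk, dd, done = env.step(policy(frame))  # pixels in, inputs out
        kills, deaths = kills + dk, deaths + dd
    return kd_reward(kills, deaths)

# Bot vs. bot at scale: any policy change that reliably inflates this
# reward (e.g., a repeatable grenade bounce) surfaces as an exploit.
print(run_episode(ToyGameEnv(), policy=lambda frame: "fire"))
```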

Daniel Miessler [01:10:47]:
It's a whole QA team. Right?

Ben Wilson [01:10:50]:
It's, like, they're running something like 10,000 instances of a game in parallel, every 10 minutes.

Daniel Miessler [01:11:01]:
Imagine how many human testers you would need for a QA team to be able to replicate that amount of coverage.

Ben Wilson [01:11:05]:
They know, actually. They're like, we would need a million people to do this.

Daniel Miessler [01:11:09]:
There you go. There you go.

Ben Wilson [01:11:10]:
That was, like, play testers.

Daniel Miessler [01:11:12]:
And that's before the update to the AI, which comes out next Thursday. Right?

Ben Wilson [01:11:16]:
Exactly. So, the interesting thing: I geeked out, because I'm a gamer and have been since I was a kid, and I was asking this guy. I was like, dude, have you ever played against it? He's like, oh yeah. I'm like, how'd you do? He's like, it's like playing against a cheater. You can't win against these things.

Ben Wilson [01:11:38]:
I'm like, well, what if you just don't allow it to go far enough in its training iteration cycle, and you just kinda stop it after a couple thousand rounds? He's like, well, it doesn't really work like that. After a couple thousand rounds it's pretty dumb, like it hasn't been initialized, but after a couple million rounds, then, yeah, I might kill it once. Yep. And I was like, well, one versus what? He's like, one versus, like, 299 deaths. Like, wow, that's crazy. I was like, so you can't put one of these systems into an actual video game, can you? He's like, nobody would buy the game.

Ben Wilson [01:12:18]:
Because you would just lose, and nobody wants that experience. He's like, but they're great tools, because they help us automate something that is meant for humans; we just can't play this game at this level ourselves.

Daniel Miessler [01:12:36]:
Yeah.

Ben Wilson [01:12:36]:
I was like, well, how good could it possibly get? He's like, I wish I could show you: put it into a match with you against some people, and then you could see for yourself. He's like, but I don't care how good you are at our games. You wouldn't win any of it. You wouldn't make any progress in any of our games playing against these things.

Daniel Miessler [01:12:57]:
It's like, wow.

Michael Berk [01:13:00]:
So that's how good security agents are gonna become, in your opinion, Daniel?

Daniel Miessler [01:13:09]:
I think the security agent is not quite the same metaphor, because it's not direct-on-direct, like perception and action speed. But system versus system, that's probably true. So think of security agents: China versus the United States. Okay? Give a million security agents on each side the goal of attacking and owning all the critical infrastructure of the other country. That's the level of the game; that's the big game. Because you have to find the systems, you have to hack the systems, you have to prioritize the systems. And then you have to be able to say, given a certain type of kinetic scenario, like a real war, what would I want to target to most disable the adversary? And there's no way a human team is gonna compete in that game, because there's just too much data. There are too many moving parts, and that's where agents are gonna have the massive advantage.

Daniel Miessler [01:14:26]:
I really recommend this book called Life 3.0. Have you heard of that? Mhmm. Quite amazing. It's by Max Tegmark. The opening chapter of this book is one of the most exciting pieces of AI writing you'll ever read. I won't tell you more; it's a little bit of a spoiler. But you basically want to read the first chapter. It's unbelievable, and you should just finish the whole book.

Daniel Miessler [01:14:54]:
You probably will after that. Hey, can I go grab another protein shake?

Michael Berk [01:15:01]:
Sure. I was actually gonna mention we're about at time. So...

Daniel Miessler [01:15:05]:
Oh, well, then I think we're good.

Michael Berk [01:15:07]:
Yeah. Cool. So I will wrap. We didn't even get to the question that I wanted to ask, the one we were discussing before recording, but that's what happens when you have a free-form podcast. Really, really cool discussion. A few things that stood out to me: the personality of a good security engineer is typically curiosity combined with tenacity combined with discipline; those three make a dynamic combination. The purpose of security is to feel safe, and you're not looking to have 100% airtight systems.

Michael Berk [01:15:40]:
It's more of an ROI calculation on how you should spend your time for each threat vector. For assessing the probability of a risk occurring, just assign bins of, like, super high, medium, low, etcetera. It's often not worth assigning exact probabilities to these, and doing that calculation is not a trivial matter anyway. And taking a step back to the future-of-security perspective, it's typically gonna be AI-agent based. There's gonna be a movement towards open-sourcing your organization's architecture, which will actually allow for the greatest amount of transparency but also the greatest amount of security. Currently that's not the case; if people know about your system, that's actually detrimental. But in the future, open-sourcing your algorithms and your architecture will be helpful.
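As a concrete illustration of that recap, here is a minimal sketch of coarse-bin risk triage; the bin weights, threat names, and effort figures are all made-up examples:

```python
# Coarse-bin triage: rate likelihood and impact per threat vector,
# then spend time in ROI order. Weights and threats are illustrative.
BINS = {"low": 1, "medium": 3, "high": 9}  # relative weights, not probabilities

threats = [
    {"name": "phishing",             "likelihood": "high",   "impact": "medium", "effort_days": 2},
    {"name": "exposed admin panel",  "likelihood": "medium", "impact": "high",   "effort_days": 1},
    {"name": "insider exfiltration", "likelihood": "low",    "impact": "high",   "effort_days": 10},
]

def roi(threat: dict) -> float:
    """Relative risk reduced per day of effort; a ranking aid only."""
    return BINS[threat["likelihood"]] * BINS[threat["impact"]] / threat["effort_days"]

# Highest-ROI work first: admin panel (27.0), phishing (13.5), insider (0.9).
for t in sorted(threats, key=roi, reverse=True):
    print(f"{t['name']}: roi={roi(t):.1f}")
```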

Michael Berk [01:16:34]:
And then finally, AI is gonna filter incoming content so your email is less phishing-prone. So, Daniel, if people wanna learn more about you or your work, specifically your organization Unsupervised Learning, where should they go?

Daniel Miessler [01:16:47]:
You could search for the Unsupervised Learning podcast or the Unsupervised Learning newsletter, and you'll probably find it there. It's also Unsupervised Learning on YouTube; you can find me there. I'm starting to put out more content there. And there's also one particular piece called AI's Predictable Path, which talks about a lot of the stuff we talked about today. It's actually the longest post I've ever written, like 9,000 words, and I spent something like 20 hours on the art. AI art, of course. But that encapsulates a lot of what we talked about, especially around agents, agents defending their principals and keeping them safe.

Daniel Miessler [01:17:33]:
So, you definitely wanna check that one out. Awesome.

Michael Berk [01:17:38]:
Well, it has been an absolute pleasure, and until next time, it's been Michael Burke and my co host.

Ben Wilson [01:17:42]:
Ben Wilson.

Michael Berk [01:17:43]:
And have a good day, everyone.

Ben Wilson [01:17:45]:
We'll catch you next time.