JAMISON:
There’s editing, but only to, like, turn up the volume on any mistakes that you make. [Laughter]
CHUCK:
I think we have enough now for ‘I love Linux’.
[Laughter]
[This episode is sponsored by FrontEnd Masters. They have a terrific lineup of live courses you can attend either online or in person. They also have a terrific backlog of courses you can watch including JavaScript the Good Parts, Build Web Applications with Node.js, AngularJS In-Depth, and Advanced JavaScript. You can go check them out at FrontEndMasters.com.]
[This episode is sponsored by Hired.com. Every week on Hired, they run an auction where over a thousand tech companies in San Francisco, New York, and L.A. bid on JavaScript developers, providing them with salary and equity upfront. The average JavaScript developer gets an average of 5 to 15 introductory offers and an average salary of $130,000 a year. Users can either accept an offer and go right into interviewing with the company or deny them without any continuing obligations. It’s totally free for users. And when you’re hired, they give you a $2,000 bonus as a thank you for using them. But if you use the JavaScript Jabber link, you’ll get a $4,000 bonus instead. Finally, if you’re not looking for a job but know someone who is, you can refer them to Hired and get a $1,337 bonus if they accept the job. Go sign up at Hired.com/JavaScriptJabber.]
[This episode is sponsored by Wijmo 5, a brand new generation of JavaScript controls. A pretty amazing line of HTML5 and JavaScript products for enterprise application development. Wijmo 5 leverages ECMAScript 5 and each control ships with AngularJS directives. Check out the faster, lighter, and more mobile Wijmo 5.]
[This episode is sponsored by DigitalOcean. DigitalOcean is the provider I use to host all of my creations. All the shows are hosted there along with any other projects I come up with. Their user interface is simple and easy to use. Their support is excellent and their VPS’s are backed on Solid State Drives and are fast and responsive. Check them out at DigitalOcean.com. If you use the code JavaScriptJabber, you’ll get a $10 credit.]
CHUCK:
Hey everybody and welcome to episode 184 of the JavaScript Jabber Show. This week on our panel, we have Dave Smith.
DAVE:
Hello.
CHUCK:
Jamison Dance.
JAMISON:
Hello friends and also enemies.
[Chuckle]
CHUCK:
I’m Charles Max Wood from DevChat.TV. Just a real quick plug for JS Remote Conf, if you want to submit a talk or buy a ticket. We also have a special guest this week, that’s Nik Molnar. Did I say that right?
NIK:
You did perfectly.
CHUCK:
Do you want to introduce yourself?
NIK:
My name is Nik Molnar, said the exact same way. I live in Austin, Texas. I’m a web developer and a program manager at Microsoft on the Cross-Platform and Open-Tooling Team.
JAMISON:
Can you just explain a little bit about what that team does?
NIK:
Sure.
CHUCK:
It crosses platforms and open-tooling.
JAMISON:
I don’t know what that means.
DAVE:
The name is right there on the chest. [Chuckle]
NIK:
That means that the products that team is working on are focused on being cross-platform, meaning running on Linux, OS X, and Windows kind of by default. I work on a web product, so that also means cross-browser by default. And open meaning that everything that we’re doing is open sourced or is involved with open standards, like some work with the W3C or other standardization bodies. That’s really our focus.
And then we also serve as kind of a service function at Microsoft to help other parts of the organization that think about and embrace open source, either consuming it or shipping it.
JAMISON:
So, my outside impression is Microsoft has moved to embrace that. Is that still a big transition for some people inside Microsoft or is it kind of just how things work there?
NIK:
You know, I don’t think I can speak to that expertly because I’ve only been at Microsoft for three or four months now. And it’s a huge organization, obviously. So I think that your experience may vary depending on what part of Microsoft you work in. I’m in the Cloud and Enterprise division, and that’s the group that brings you .NET, Visual Studio, and a lot of other developer-focused products. I think that division is very well-aligned in embracing open source now. So I think you should continue to see us move forward and evolve in that space.
CHUCK:
Yeah, I remember when Microsoft was like the evil empire and, you know, “Oh, Microsoft!” But it seems like they’ve really kind of opened up and said, “You know what? There’s a real ecosystem out there beyond just the Windows server and Microsoft desktop arena.” And there are a lot of things that they put out there. One of the things that I know they did – we did an episode on it on Adventures in Angular – was Visual Studio Code. And all of the people that I’ve talked to that are using it and loving it are on Macs.
NIK:
Exactly. Visual Studio Code is my team – not my immediate team, but the guys in my group put out Visual Studio Code – and it is great. I use it. Actually, you’re stealing my thunder; it’s one of my picks for today’s episode because it’s great for JavaScript. I use it for Node, for frontend stuff. It’s really fast and lightweight. I’m really enjoying using it over some of the other similar text editors like Brackets, which I was trying to use before but wasn’t super successful with.
JAMISON:
So I think we brought you on here to talk specifically about performance. Is that right?
NIK:
Yes, that is mostly what I focus on and care about. The product that I work on, which is an open source debugging and diagnostics tool called Glimpse – its main use case is around performance. So I’ve kind of been entrenched in the performance space, particularly web performance, for almost five years now.
JAMISON:
This is, I don’t know, it’s kind of a hard topic to give an introduction to. How do you introduce performance? Computers are fast but sometimes we write slow code.
DAVE:
It’s very easy. You just introduce it very quickly. That’s the most performant performance talk intro you’ve ever had.
NIK:
So, the way that I like to do it – I have two main talks that I do when I talk about web performance, each one an hour long, so obviously I don’t have time to cover all of that now. One of them really digs into how to measure web performance and what to measure. And the other one digs into, okay, here are the common problems and how you would fix them.
So we might want to start by talking about why performance even matters. Maybe your audience already agrees that it does, and we can skip that; if not, then we can dig straight into what to measure and how to measure it. Whatever you guys are more interested in, let me know.
JAMISON:
I think why it matters would be important. You might have people that think computers get faster every year – or even if they haven’t gotten faster every year, especially in the browser, the VMs get faster every year. So, I mean, you just wait and then your performance is better.
NIK:
Yeah, that is an approach. That’s an approach that worked for a long time. Unfortunately, we’re kind of at a bottleneck with web performance, specifically because the big bottleneck there tends to be latency, and latency is not really getting much better. So there are techniques that we can use to kind of get ourselves around the latency problem.
JAMISON:
So when you say latency, you mean the round-trip time to talk to a server and get a response, right?
NIK:
Yes, exactly.
DAVE:
So you’re saying that Moore’s Law does not apply to the speed of light. [Laughter]
NIK:
No, it does not.
JAMISON:
That’s just because the high frequency traders have not figured out how to make money out of making those [inaudible].
NIK:
[Laughs] Oh, trust me. The high frequency traders, they’re thinking about the speed of light all the time. When I was living in New York, I did a lot of consulting for banks. And they would literally pick their office based on how close it was to the nearest major internet hub. That was a competitive advantage for these guys.
DAVE:
Whoa! Oh my, gosh!
JAMISON:
They’re responsible for laying a lot of fiber optic cable, if I understand correctly.
NIK:
Yes, you are exactly right. Milliseconds, nanoseconds – that kind of stuff matters. It leads to some very interesting and ugly architectures, not the clean architectures that you would like to build as a pure software developer, because you’re like, “Extract this, extract that? Nope, not going to do that. In-line everything.” And it’s like, “Oh, this code is a pain to work on.”
DAVE:
Maybe that’s a good question to open up the topic. Is good, clean architecture, generally speaking, at odds with high-performance code?
NIK:
That’s a really good question. And I would have to say, to a degree, yes. There are a lot of things that you’ll do. So let’s take an example, alright?
One of the simplest things that you can do on the web, specifically, is to bundle and minify your assets. So take your JavaScript and your CSS and minify it – strip out all the whitespace – and then bundle it, meaning if you have two or three CSS files that you might have split up for modularity purposes, for example, jam those things all together.
Now, the reason I’d say it might be at odds is because what I’m describing to you there is the distribution. If the web was binary, if it was compiled, we wouldn’t think about it – we don’t really think about what compiled code looks like in a native application. We only care about the source. However, because of the way the web works and its stateless nature, where there’s a server and a client, there are a lot of times where we’re using our clients and the debugging tools built into our browsers to look at that JavaScript and that CSS. And so when we pop it open and it no longer resembles what it did when we wrote it, that does make the architecture a little less clean, because we’ve made this optimization to improve performance but now it’s a little less debuggable, maintainable, et cetera.
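To make the bundle-and-minify example concrete, here’s a toy sketch in JavaScript. It is purely illustrative – real projects use a build tool, and a real minifier must not touch string literals the way this naive one does.

```javascript
// "Bundle": concatenate several source files so the browser makes one
// request instead of several, paying the round-trip latency only once.
function bundle(files) {
  return files.join('\n;');
}

// "Minify": a naive pass that strips comments and collapses whitespace.
// Real minifiers also rename variables, remove dead code, and are careful
// not to alter string literals (this toy version is not).
function naiveMinify(source) {
  return source
    .replace(/\/\*[\s\S]*?\*\//g, '') // strip block comments
    .replace(/\/\/[^\n]*/g, '')       // strip line comments
    .replace(/\s+/g, ' ')             // collapse runs of whitespace
    .trim();
}

var moduleA = '// module A\nfunction add(a, b) {\n  return a + b;\n}';
var moduleB = '/* module B */\nfunction sub(a, b) {\n  return a - b;\n}';

// Two source files become one compact payload to ship to the client.
console.log(naiveMinify(bundle([moduleA, moduleB])));
```

The shipped file is one request and fewer bytes, but, as Nik says, what you see in the browser’s debugger no longer resembles what you wrote.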
JAMISON:
So, you start talking about latency and that’s not a thing that gets faster as processors get faster.
What do we do about that?
NIK:
Oh, we got to talk to some physicists about that. [Laughter]
NIK:
When we think about processors, let’s break down the way that a page renders. The very first thing that typically happens is your browser will issue a request. Issuing a request means going across the network. So right there, the very first bottleneck we might hit is the network. On the other side of the network, the server receives the request and it does some processing – maybe you’re using, you know, PHP or some templating language or something like that on the backend that has to build up the HTML. Once that is done – the HTML is really kind of a honey-do list, right? Every Saturday, my wife tells me here are the things that I need to do: I’ve got to go cut the grass and I need to go pick up milk and this, that, and the other. HTML is really just a to-do list for the browser.
And so the server puts together the honey-do list and it sends it down to the browser, and then the browser says, “Oh, okay. Well now, I need to go and download these JavaScript files and these CSS files and these images.” And so once again, we’re hitting the network there. And once it gets all of those assets, it has to render them or execute them or parse them based on the type of asset.
So when we talk about CPU performance and Moore’s Law, that does apply to things like server-side rendering time – the PHP chunk that I mentioned there. And it applies on the JavaScript side if you’re running custom code on the client. But all of the in-between time is latency. And so, the closer we can be to the server, or the faster that connection between the client and the server is, the better the latency.
So, let me give you an example. I just did a quick little performance audit of your guys’ website, the JavaScript Jabber site, the DevChat.TV.
CHUCK:
Oh no.
NIK:
I won’t shame you.
JAMISON:
The [inaudible] children. [Laughter]
CHUCK:
Shame away. I wrote it myself, but go ahead. [Laughs]
JAMISON:
You wrote that, Chuck? I was wondering who did that.
CHUCK:
Yeah.
NIK:
I’m also not even sure if this is like a CMS that you guys built it with, because it was DevChat.TV. Anyway, there are 17 JavaScript files there. So the round-trip . . .
DAVE:
Please Chuck! Argh! What are you doing? [Chuckles]
NIK:
So the round-trip time happens on every single one of those requests. And so, if we could eliminate latency, then that wouldn’t be as much of a problem. And when I say that we have to get physicists involved, it’s because even assuming that we’re on fiber optic – which obviously most of us are not, end to end – even on fiber optic, at this point, we’re only getting about two-thirds of the speed of light in a vacuum.
And so, there are universities working on this problem, trying to eke out additional percentage points of the speed of light through fiber, because it matters. And if you compare latency to bandwidth – there have been studies that will take a large group of websites, increase the bandwidth, and reload the websites over and over again – one megabit a second, two megabits a second, et cetera, et cetera – and see how long it takes to load each of those sites.
Bandwidth does matter for common websites up to a certain point, but then it hits a huge law of diminishing returns. Everybody is talking about “Oh, I want Google Fiber” or “I want FiOS”. And the reality is, if you’re mostly just surfing the web, you’re not going to see much of a difference between the two. That comes in when you’re watching video and doing other bandwidth-heavy things where the latency doesn’t – oh, go ahead.
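To put a number on the physical floor being described here, a quick back-of-envelope calculation. The distance is a rough illustrative figure, not a real cable route, and the two-thirds-of-c factor is the approximation from the conversation.

```javascript
// Light in fiber travels at roughly two-thirds of c, so distance alone
// sets a hard minimum on round-trip time – no amount of bandwidth helps.
var C_KM_PER_MS = 300;    // speed of light in a vacuum, ~300,000 km/s
var FIBER_FACTOR = 2 / 3; // light in fiber is roughly 2/3 of c

function minRoundTripMs(distanceKm) {
  return (2 * distanceKm) / (C_KM_PER_MS * FIBER_FACTOR);
}

// ~4,000 km is roughly New York to Los Angeles (an illustrative figure):
console.log(minRoundTripMs(4000)); // → 40 ms, before any server processing
```

That 40 ms is a floor imposed by physics; everything the server and browser do is added on top of it, per request.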
JAMISON:
Can you define what bandwidth is, too? We talked a little bit about what latency is, but just to make sure we’re all working from the same knowledge base.
NIK:
Yeah, for sure. I think of bandwidth as capacity. So bandwidth is when your city decides that to get traffic moving more quickly, they’re going to add another lane to the highway. Now instead of three cars wide at a time, it can be four cars at a time.
So it is the capacity – the number of cars that are able to travel down that street at the same time. Whereas latency is a Pinto versus a Ferrari: it doesn’t matter how many lanes there are, it’s how fast you can get from point A to point B. Does that illustration kind of clear up the difference between bandwidth and latency?
DAVE:
Yeah. Sure.
JAMISON:
Yes.
DAVE:
So, it’s bits per second versus millisecond round-trip time.
NIK:
Exactly. And it gets really confusing because if you turn on the radio or whatever and hear any of your broadband providers, they’ll tell you, “Our speed is 100 megabits a second,” or whatever it is. Well, that’s a lie. That’s not their speed. That’s their capacity. The reality is if your capacity gets choked off, it will affect your speed, or your latency, but they’re kind of two independent things that sometimes step on each other.
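A quick model makes the bandwidth-versus-latency distinction concrete. All the numbers below are illustrative assumptions – a page loaded over serial requests with no parallel connections, which real browsers do use – but the shape of the result holds.

```javascript
// Toy load-time model: total ≈ (requests × round-trip latency) + (bytes / bandwidth).
// Assumes one serial round trip per request and ignores parallel connections,
// so it exaggerates, but it shows which term dominates.
function loadTimeMs(requests, rttMs, totalKB, mbps) {
  var latencyMs = requests * rttMs;      // round trips: set by distance, not your plan
  var transferMs = (totalKB * 8) / mbps; // KB → kilobits; 1 Mbps = 1 kilobit per ms
  return latencyMs + transferMs;
}

// A hypothetical 1 MB page spread over 20 requests, 50 ms round-trip time:
console.log(loadTimeMs(20, 50, 1000, 5));   // 5 Mbps   → 2600 ms
console.log(loadTimeMs(20, 50, 1000, 100)); // 100 Mbps → 1080 ms
// A 20× bandwidth upgrade is only ~2.4× faster: the 1000 ms of latency never moves.
```

Past a certain bandwidth, the latency term is all that’s left, which is the diminishing-returns effect the studies Nik mentions keep finding.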
JAMISON:
Okay, I totally cut you off there. You’re in the middle of building up steam and then I dumped a big bucket of water on your fire.
DAVE:
Yeah and now I’m trying to look at the pulls and remember where I was going.
NIK:
Yeah. The question is…
DAVE:
Hold on, I think we’re in the middle of criticizing Chuck for writing a slow website. [Inaudible] [Laughter]
CHUCK:
I know, 17 JavaScript files. I mean, it takes 20 years, right? [Chuckle]
NIK:
Well, there are certainly things that you guys could do to improve it. I think one of your challenges is that most of those JavaScript files seem to be served from different service providers. I see a lot of Google, I see AddThis, I see Stripe, I see TrackJS, et cetera, et cetera. So you guys have extra challenges because you’re relying on services, and that’s fairly common in today’s kind of mash-up – I’m showing my age to say that word – mash-up internet where we’re cobbling together these different services.
DAVE:
So, you were going with the honey-do list, and the server gives you a bunch of HTML, and you’re waiting for it to come back and then…
NIK:
That’s where I was. Thank you so much. So everything comes back to the browser [inaudible] and all those things. So the CPU – we were talking about Moore’s Law – that applies on both the server and the client where we’re executing the JavaScript, but the network is usually the thing that you want to optimize first. The studies show that the 80/20 rule applies here: about 80% of the performance problems that you’ll see on web applications can be solved with networking optimizations. And the study was recently redone with a focus on mobile, and it goes from 80/20 to 90/10 on mobile because those networks are so deficient compared to what we’re getting on a standard kind of desktop machine.
So, for example, with these files that you guys have, I think the biggest single thing that you can do on your website is to enable caching of all of these assets. Right now, there are very few files being cached. I’m downloading them again and again each time I click on another link on the page. Whereas if I could cache them and not have to re-download them, then instead of having to deal with latency, I can just skip it, because I’m going to read them from disk instead of going across the network again.
DAVE:
So you’re talking like client-side caching?
NIK:
Yes, HTTP-level caching. And usually you just go into your web server – there’s a configuration file where you can add a response header, and that header will tell the browser, “Hey, listen. This file here, it’s good for the next week, or 10 minutes,” whatever you guys deem to be appropriate. And then the browser won’t download it again for that timeframe.
CHUCK:
Oh yeah, there’s a new episode every 10 minutes.
NIK:
Yeah, perfect. Exactly. And I would imagine you guys could get away with weekly caching, and some assets probably longer – I don’t think you change your logo very often. So, cache the logo for a month, two months, something like that.
CHUCK:
Yeah, that makes sense.
DAVE:
Yeah.
NIK:
Some people might say, “Okay, none of this really matters.” And this goes back to what we originally were thinking: why does performance matter? All the big companies have done studies on performance and what it does. So, Amazon added 100 milliseconds of load time and they lost 1% of sales.
CHUCK:
Oh, wow!
NIK:
100 milliseconds – this is a number that is so small that it’s kind of hard to think about. We don’t really deal with milli-anything much; maybe millimeters once in a while. So what I want you to do is blink your eyes. That blink just took between 300 and 400 milliseconds. So, 100 milliseconds, and Amazon lost 1% of their sales.
DAVE:
I’m literally sitting here blinking as fast as I can.
[Laughter]
JAMISON:
I’ve been training my blink reflexes for a long time, and I can tell you, I can blink in like a hundred milliseconds.
NIK:
Yeah. That’s still 1% of Amazon sales.
DAVE:
Jamison has a superpower.
NIK:
Yeah. At Google, they did this experiment where they increased the number of results shown on their search results page. They ended up adding an additional 500 milliseconds – so, half a second – and they lost 20% of their revenue in ad click-throughs because of that. So, 500 milliseconds, 20% – that’s pretty big.
And then another study was done by [inaudible]. They added 160K of hidden images to their page and their bounce rate increased by 12%. That means the number of people that went to that page and [inaudible] went up by 12%.
CHUCK:
Oh, man.
NIK:
And you know, it’s easy for me to spout figures. These companies care because they’re selling something – bottom line matters. They can tie performance to revenue. And a lot of us – myself included – I haven’t worked on a ton of e-commerce websites, but I certainly have worked on content websites and things like that. And this stuff matters for content as well. Google has added a factor into their algorithm for page speed, and the faster your site is, the higher up in the index you’ll move. So, you get more traffic if you’re faster. In fact, they’re experimenting – this popped up, I believe, in December of last year – with what I call the ‘Scarlet Letter’ of the web. They put a little red ‘slow’ icon next to websites that they deem to be too slow.
CHUCK:
Oh, wow!
NIK:
They’re [inaudible] testing that. You certainly don’t want that to show up.
CHUCK:
Yeah.
NIK:
And then lastly, the thing that I’ll say is, more and more – maybe even a majority of the time now – your website is accessed via a mobile device. I’m holding my iPhone in my hand right now, and these devices are underpowered compared to what we’ve been working on. They’ve been out for five years or so, which is quite a while in internet time. But before that, we kind of didn’t worry about performance that much because bandwidth was getting so good and Moore’s Law was there – and now it’s not, because people are using these things. And so, there are certain websites that I know of where I’ll go to read an article and I can see my battery drop 10%-15%, just because I’m reading an article and they’re doing really dumb things with their CSS and all these extra animations and things like that.
DAVE:
It’s onscroll event. I guarantee it. [Chuckles]
NIK:
Oh man, those are the worst. Honestly, honestly, that’s one of the worst things that you can do. Just [inaudible]. If you can avoid it in any way possible, talk to your designer and get rid of onscroll events.
And then lastly – I mean, those are really good audience and financial reasons to consider performance and make sure that your applications are as performant as possible. But performance should not be your number one concern, right? Donald Knuth is famous for his quote that premature optimization is the root of all evil.
And so, I kind of want to put in a little gut check for the listeners, to not just run off saying, “My gosh! We’re going to increase our sales if I go and make this thing faster.” What you really need to think about in your application is this hierarchy of needs. We’re familiar with Maslow’s Hierarchy of Needs, potentially – he’s a psychologist who said that there are certain fundamental needs that humans must meet first before they worry about more advanced needs.
JAMISON:
Yeah, it’s like the triangle with, like, food at the bottom and then…
NIK:
Exactly. Everybody [inaudible] to that foot.
JAMISON:
[Inaudible] about self-actualization on Medium at the top.
NIK:
Exactly. So there’s the UX Director at MailChimp in Philadelphia, his name is Aaron Walter. He’s proposed a similar pyramid, which is the hierarchy of needs for users of software.
And so the very first thing that your users need out of your software is for it to be functional. It has to solve a problem that they have. And then next, it needs to be reliable. It needs to do that without crashing all the time, and in a consistent manner.
I’m on Twitter quite a bit – my handle is @nikmd23. And a few years ago, you guys might remember that Twitter was so notorious for being unreliable that their mascot, the Fail Whale, kind of became famous in and of himself.
CHUCK:
[Laughs]
NIK:
During that time when Twitter had reliability problems, I didn’t really use it because it didn’t meet that second level of the hierarchy, which is reliability. Now, Walter goes on to suggest that the next level of the hierarchy is usability, and there’s been a big shift and a big focus on design and UX in our industry – I’d maybe give Apple some credit for that, with what they do with their devices – but everybody really cares about design now.
But part of usability – and this is the part that I add into the hierarchy – is performance. If it’s beautiful but I can’t access it quickly enough, then I’m not going to use it.
JAMISON:
So when I hear about performance as usability, I think of – you mentioned those onscroll things. I’ve seen a lot of beautiful parallax sites where there are amazing things happening when you scroll down the page, just these gorgeous animations, but it hijacks your scroll and then it feels broken and then I don’t like them anymore. Like, there’s a little angel flying across the background, but when I scroll down, it doesn’t scroll down. I don’t care; I don’t want that angel there. I want to scroll down. That’s why I’m scrolling down. [Chuckle]
DAVE:
You will scroll the angel and you will like it.
JAMISON:
Yeah, yeah. I didn’t press the ‘show angel’ button. I pressed the scroll down button. [Chuckle]
NIK:
I would assume that if that was the only way to get the data that you need, you might deal with it. But if there was a competitor that could give you that same information – that was functional, reliable, and usable – you would switch over.
JAMISON:
Yeah, yeah.
NIK:
Because the switching cost on the web is so cheap, right? Ctrl+L, I type in a new URL, and I’m gone. And so, you have to really be invested in getting those users.
Well, Walter’s hierarchy goes on – the top of the pyramid is that we begin to create pleasurable software. This is software that is functional, reliable, usable, and performant, but it is also pleasurable. And so you see that in really beloved products that we all talk about, like GitHub or Trello, where the personality of the team comes through in fun little ways. And so we get Octocat Jedis on 404s on GitHub, and a barking dog named Taco on Trello. We can [inaudible] that team, and it’s kind of fun to use those apps and see what happens, because they made it to the top of this pyramid and now they’re doing pleasurable things.
DAVE:
That’s a really hard place to get to, in my experience as a developer.
JAMISON:
I don’t know that I’ve ever gotten there. I was just thinking about that.
NIK:
I completely agree. I don’t know if I’ve ever gotten there either but I’m trying to get there and that’s why performance has been kind of my focus for the last five or so years.
JAMISON:
Another appealing thing about performance is that it feels like, as a developer, I have a lot more control over the performance than I do over the pleasure that people get from the app. Like, there’s a spectrum of how involved developers are with customers and project management, things like that. But I think even if you’re heavily involved in the direction of the product, if you’re the one building it, it can still feel like this amorphous mass of desires. And it’s hard to say, “Yup, I nailed it,” when you’re down in the code, I think. But performance, you can measure. You can say, “Yup, it’s faster. I did it.”
NIK:
Yeah, you’re exactly right. I think the only time that people – the developers maybe have trouble controlling the performance is when they work in a team where they are the development team and the UX team is another team. They throw things over the wall. And that, from a cultural standpoint, is dangerous. Performance is something that, ideally, the entire organization will be bought into from top to bottom.
And there’s a concept called a performance budget, where everybody agrees – just like we don’t want any bugs, and there are criteria for what makes software good quality, there are criteria for what makes software performant. So you might say – and we haven’t really talked about the different metrics that are available – but you might say something like, “Our speed index is going to be 2500 milliseconds,” and that’s the budget. And if the designer designs something that there’s no way to implement within that budget, then we all agree as an organization that we’re not going to implement that feature, at least not in the way it’s designed right now.
And so, I agree with you that developers typically have a lot more control over the performance of things, but if they’re not working closely with their designers, that can get away from them rather quickly.
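A performance budget like the one described here can be enforced mechanically, for example as a build check. This is only a sketch: the 2500 ms speed-index budget is the example number from the conversation, and the measurement itself would come from an external tool such as WebPagetest.

```javascript
// Sketch of a performance-budget gate you might wire into a build.
// The budget number is the example figure from the discussion.
var BUDGET = { speedIndexMs: 2500 };

function checkBudget(measured) {
  var overBy = measured.speedIndexMs - BUDGET.speedIndexMs;
  return {
    pass: overBy <= 0,
    message: overBy <= 0
      ? 'Within budget with ' + (-overBy) + ' ms to spare'
      : 'Over budget by ' + overBy + ' ms – fails the build'
  };
}

console.log(checkBudget({ speedIndexMs: 2300 }).message);
console.log(checkBudget({ speedIndexMs: 2900 }).message);
```

The point of automating it is cultural as much as technical: the budget fails the build for everyone, so designers and developers negotiate against the same number.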
JAMISON:
So I want to ask about a couple of things. The first one is – the examples you mentioned are performance for e-commerce websites, where there’s a very direct monetary cost to performance: as the bounce rate goes up, you make less money.
There’s a whole other world of applications, especially enterprise apps, where if your company has bought this application and you have to use it to get your job done, to some degree the performance doesn’t matter as much, right? I mean, it will make your life better if it’s faster, but if you have to use it anyway – how do you justify spending time on performance if it’s already okay-ish, in a situation like that where you’re the company making this enterprise software?
NIK:
It is certainly something that needs to be balanced and considered as ROI. So when I was consulting, I did a bunch of these corporate intranet-type applications that maybe had somewhere between 20 and 2,000 users – pretty small as use cases go. And a lot of times in those places, management would say to me, “Well, as long as the page loads in 30 seconds, it’s fine. It’s only used by 50 people.”
Now, what ‘good enough’ is in your scenario, I’m not sure. But the easy thing to do there is: if there’s an action that those 50 people have to take every day and it takes 10 seconds, but you could whittle it down to 5 seconds, it’s easy to do the math and say, “Well, these 50 people, times 5 seconds, times 365 days a year, times their average salary,” and you will get a number that’s in the thousands. Now, does it make sense? Would it be cheaper to go and fix that and whittle those 5 seconds down than the thousands of dollars we’re going to spend over the next year? And how long is the software supposed to last? Because usually those systems are built to last for 5 to 10 years. So, I think in that case, it’s very easy to make the ROI decision on whether or not you should be investing in a particular performance scenario.
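The back-of-envelope ROI math above, as a function. All the inputs are hypothetical; plug in your own numbers.

```javascript
// Yearly cost of wasted waiting time: users × seconds/day × days/year,
// converted to hours and priced at an (assumed) hourly rate.
function yearlySavingsUSD(users, secondsSavedPerDay, daysPerYear, hourlyRateUSD) {
  var hoursSaved = (users * secondsSavedPerDay * daysPerYear) / 3600;
  return hoursSaved * hourlyRateUSD;
}

// 50 users, 5 seconds saved per day, 365 days, at a hypothetical $60/hour:
console.log(Math.round(yearlySavingsUSD(50, 5, 365, 60))); // → 1521 dollars per year
```

Multiply that over the 5-to-10-year life of a typical internal system and compare it against the cost of the fix; that’s the whole ROI decision.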
DAVE:
Now, the scenario you just described sounds kind of like a linear relationship between performance and the amount of money it would be worth to whittle it down. In my experience, it seems like it’s more of a step function: at some point, the user becomes distracted and they go do something else while they’re waiting for your page to load. And then it doesn’t matter if it takes 2 seconds or 10 seconds if the distraction point is at 1 second. Do you have any experience with that, or any measurements or research?
NIK:
Yeah. I don’t know of research on calculating the ROI for when users do get distracted because they’ve switched tasks. I know generally that all of the research shows context switching is very bad and it takes a lot of time to come back to a task. The math behind that is above my pay grade and my Algebra 2 schooling. [Chuckle]
NIK:
But basically, I mean, as far as when people begin to lose interest, the studies were done in 1968 at IBM. And the same study was re-conducted by Jakob Nielson again in 1993 and in 2005. And the numbers didn’t change over those 40-odd years.
And so basically, what that comes down to is 100 milliseconds, the number that we talked about earlier. That feels instant to a user. So I’m not saying that you need to respond to a web request within 100 milliseconds because best of luck to you making that happen. But if I go on and click a button on your site, that button needs to depress. It needs to be a sad button or it needs to say loading or it needs to at least do something within 100 milliseconds to let me know that I got the gesture. And you know the sites that don’t do that because you’ll click on something and you’re like, “Oh, maybe my mouse is broken or the batteries are dying.” And you go and you try to click again. That’s the 100 milliseconds and that feels instant.
A thousand milliseconds, 1 second, is uninterrupted thought. So if I can click on that button, one Mississippi, and I’m seeing the response come back, you’re never going to lose your user. They’re still focused on the task that was at hand.
Now, the study went all the way up to 10,000 milliseconds, up to 10 seconds. At 10 seconds, not one user in any of these three study periods still had their attention on the task they were trying to accomplish. The numbers show around 3 seconds is where people will start to leave now, but at 10, for sure you’ve lost even the most patient of users.
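The thresholds Nik cites (100 ms, 1,000 ms, 10,000 ms) can be captured in a tiny helper. The bucket names here are my own shorthand for the research findings, not an official classification:

```javascript
// Classify a response time against the classic perception thresholds
// from the IBM (1968) and Nielsen (1993, 2005) studies: 100 ms feels
// instant, 1 s preserves uninterrupted thought, and by 10 s even the
// most patient user's attention is gone.
function perception(ms) {
  if (ms <= 100) return 'instant';
  if (ms <= 1000) return 'uninterrupted';
  if (ms <= 10000) return 'losing-attention';
  return 'lost';
}

console.log(perception(80));    // 'instant'
console.log(perception(800));   // 'uninterrupted'
console.log(perception(12000)); // 'lost'
```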
And so, you know, maybe in your experience, because I’m going to speak to your experience now since that’s what you asked me to do. Maybe in your experience, switching tasks and firing off something and letting it run and coming back to it later is fine. And you know what? There are tasks that just take that long. Maybe you have a huge amount of data to crunch or you’re waiting on results to come in from the field or something like that and that’s fine. But what I would recommend doing in those situations is responding quickly to acknowledge the user’s gesture. And then if it’s going to be a while, find another way of notifying them. There are notification APIs built into HTML5 now. You can email your user. Maybe you can tie in an [inaudible] in a corporate setting. Like, there are ways to draw them back into the activity and just kind of acknowledge, “Hey listen, I can’t meet these kinds of demands that Nik is talking about here. This thousand milliseconds of uninterrupted thought, that’s fine. Let me find a way to release my user’s psychic weight so they don’t have to think about this anymore and pull them back in later when I’m ready for them.”
DAVE:
Or maybe just present Flappy Bird to them to keep them busy, right?
[Chuckle]
JAMISON:
The old Chrome 404 page model.
DAVE:
Yeah. [Laughs]
JAMISON:
Hey, it’s broken but it’s entertainingly broken.
DAVE:
Yeah. [Laughter]
NIK:
That’s their personality coming through. You could definitely do that. I think calculating the ROI on that one will be a little bit easier. [Chuckle]
NIK:
I mentioned a term here called Speed Index. Is that something that you guys are familiar with?
CHUCK:
No, I’m not.
NIK:
Okay. I think that this is something important because what I find when people dig into web performance, one of the problems they run into is there are hundreds of different metrics that might or might not be useful. And so, Tim Kadlec is a well-known developer in the performance space and he’s kind of broken down all these metrics into four categories that I really like.
And so the first of the categories is quantitative metrics and these are the metrics that maybe you’ve heard of before. They’re very simple, basic numbers. So, things like page load time, page weight, number of images, number of HTTP redirects – they’re all just simple numbers. But the problem with them is if I tell you that there are 17 JavaScript files on your page, that doesn’t necessarily mean that it’s slow. We have this gut reaction that that number sounds really high. But if I tell you I have two different pages and one has three JavaScript files on it and one has two JavaScript files on it, you don’t actually know which one is faster because quantitative metrics don’t tell you anything about performance.
So the next category of metrics are rule-based metrics. And what these really kind of do is they look at your quantitative metrics. They run some analysis on top of them and apply best practices. So if you think about tools like Yahoo’s YSlow or Google’s PageSpeed, they will literally give you a grade and say, “Hey, your website is an 88 or your website gets a B.” Those are rule-based metrics. And they typically tell you, “Well, maybe you would have an A if you cache these resources or if you minified this JavaScript file,” et cetera, et cetera. But just because my website is an A, that doesn’t mean that it’s fast. It just means that I’m following the current best practices.
So people like to use rule-based metrics because it’s kind of a one number that you can rally behind and point your boss to it. But like I said, it’s not really telling you anything about the performance of the site.
For that, we have to start getting into milestone-based metrics. Milestone metrics measure the amount of time between when a navigation began and some processing event that happened in the browser. So there are a couple of famous ones for this. You might have heard of ‘Time To First Byte’. That’s from when I click on the link, how long is it until the server responds and I get the first byte of the response.
There’s also First Paint: how long until the first pixel actually renders on the screen. And then the two most famous milestone metrics, because they’re built in to all of the developer tools, are the time to DOMContentLoaded and the Time to Load. But a lot of people don’t even understand the difference between those two, DOMContentLoaded and load.
DAVE:
That’s what I was going to ask you. Go ahead.
NIK:
DOMContentLoaded is when the HTML has finished downloading and the browser has parsed it and has turned the HTML into a DOM and you are ready to begin manipulating it. It does not mean that all of the rest of the assets on the page have been downloaded. So you might still be waiting for JavaScript and for images and things like that when DOMContentLoaded hits.
But at Time to Load, the onload event, all of those assets have now finished downloading. Then maybe you’re going to go and kick off some more with JavaScript or something like that, but that’s kind of out of its purview.
And so, milestone metrics are really great because they get down to performance. I can say this took 1200 milliseconds and now it takes 800 milliseconds and it’s faster because I’m talking about time and speed again. But the problem is they’re extremely technical. Even here, I had to explain the difference between DOMContentLoaded and Onload and I don’t even know if I did that great of a job explaining it to this technical audience. So imagine having that conversation with your nontechnical manager.
Some people have tried to make custom milestone metrics which is a little bit better. So, Twitter is famous for this. They have a metric on their page which is Time to First Tweet. How long is it until the first tweet of your timeline appears? Or you can imagine YouTube having time to video playback. How long until a video starts playing? And those are business-specific, right? Twitter is in the business of serving tweets and YouTube is in the business of serving videos. And so, making those metrics specific to the business helps everybody get behind that culture and helps everybody understand, especially when you’re starting to put together something like a performance budget.
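A business-specific milestone like Twitter's "Time to First Tweet" is typically built with the User Timing API (performance.mark and performance.measure), one of the specs discussed later in the episode. The mark and measure names below are hypothetical stand-ins for whatever event matters to your page:

```javascript
// Sketch of a custom milestone metric with the User Timing API.
// 'first-tweet' is a hypothetical name standing in for the business
// event you care about (first video frame, first product card, etc.).
performance.mark('nav-start');

// ... later, when the first tweet has actually been rendered:
performance.mark('first-tweet');

// Measure the span between the two marks.
performance.measure('time-to-first-tweet', 'nav-start', 'first-tweet');

const [measure] = performance.getEntriesByName('time-to-first-tweet');
console.log(`Time to first tweet: ${measure.duration.toFixed(1)} ms`);
```

In a real page you would mark navigation start implicitly (navigation time zero) and beacon the measure's duration back to your analytics, but the mark/measure pattern is the core of it.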
Now, one of the challenges around milestone metrics is I can have a site and you guys could have a site and we could both have an Onload time of 3 seconds. But I’m kind of this old school guy; maybe I just did everything on the backend and at that 3 seconds when Onload hits, my user can see all of the content. But you guys did really cool things with Angular and React and Ember and – who am I leaving out? Whoever else.
JAMISON:
We use them all. We use all the frameworks.
NIK:
You use them all.
JAMISON:
It’s a JavaScript podcast.
[Chuckle]
NIK:
So when you guys are using [inaudible].JS, that’s Nik’s new framework, you guys are using that and at 3 seconds you hit Onload, maybe there’s not actually a lot of content there because that’s the moment in which you start Ajaxing everything in. So even though we have the exact same Onload time, the user would perceive our pages to feel different. They would perceive my page, the old-fashioned boring one maybe, to be a little bit faster.
And so the fourth category, the fourth and final category of metrics are these perceived metrics, where we try to measure the way that the user feels about how fast the page loads. And so, Google and Microsoft, there are a bunch of other companies that have tried to come up with perceived metrics that have mostly failed because they’ve been very difficult to calculate. But there’s one that’s kind of still hanging on and it’s what people are rallying around, and that’s the one that I mentioned before. That’s the Speed Index.
And so, the engine that calculates the Speed Index is at WebPageTest.org. What it does is it captures a video playback of your page loading and it looks at frames every tenth of a second and it calculates how visually complete that frame is compared to the final frame, and so it builds the score. And it [inaudible] those percentages over time and that makes a curve. So we might be at 0%, 14%, 56%, 97%, 98%, 100%. So we get this little curve and plot that out. You take the area above the curve and that’s what makes the Speed Index. That’s all very mathematical and weird-sounding but basically, it’s how much of your site got to your user how quickly.
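The arithmetic behind that is easy to reproduce in a sketch: sample visual completeness at fixed intervals and sum the area above the completeness curve. This is a simplification of what WebPageTest actually computes from video frames, using the sample percentages from the discussion:

```javascript
// Simplified Speed Index: area above the visual-completeness curve.
// `samples` are visual-completeness fractions captured every `intervalMs`.
function speedIndex(samples, intervalMs = 100) {
  let area = 0;
  for (const completeness of samples) {
    // Each interval contributes the portion of the page NOT yet visible.
    area += (1 - completeness) * intervalMs;
  }
  return area;
}

// The example progression from the discussion: 0%, 14%, 56%, 97%, 98%, 100%.
const si = speedIndex([0, 0.14, 0.56, 0.97, 0.98, 1.0]);
console.log(si); // ≈ 235 ms – lower is better
```

A page that gets most pixels on screen early accumulates very little area above the curve, which is why Speed Index rewards progressive rendering even when the final load time is unchanged.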
And so, for example, the JavaScript Jabber site has a Speed Index of around 5,000 milliseconds. I did nine loads of it and took the median run to get that 5,000 - that’s on the first load. And so, that metric is the one that I really recommend. If you’re going to think about performance, all of the other metrics might actually make sense for you, but for dipping your toes in the water, get the Speed Index and use that to measure whether you’re getting faster or slower.
You might actually add a bunch of images and JavaScript and still be able to reduce your Speed Index because the Speed Index takes everything into account – the numbers, the counts, the weights. All of that gets encompassed into one number which is a lot easier to socialize.
DAVE:
So, can you say again how does speed index know that your page is fully loaded? You know nowadays, we have lots of Ajax and stuff that manipulates the page after the initial load. How does it know? Does it just settle over time?
NIK:
When you set up WebPageTest.org to run a test run, by default, what it does is it waits until Onload happens and then it waits until there’s two seconds without any network activity. So at Onload you go on, you Ajax on a bunch of things, it will wait until all of that has finished and you haven’t requested anything else for two seconds.
However, that’s very configurable, so you can put in X number of seconds after Onload, or cut it off at DOMContentLoaded, or cut it off at Onload, whatever makes sense for your application. You can kind of tweak it to figure out what the end state is and then reverse engineer the percent completes from there.
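The "two seconds of network silence" cutoff can be sketched as a pure function: given the onload time and the timestamps at which requests completed, find the moment the page is considered done. The signature is my own invention for illustration; WebPageTest's internals differ:

```javascript
// Find the "test complete" time: the first moment after onload followed
// by `quietMs` of no network activity. All timestamps are ms since
// navigation start.
function testCompleteTime(onloadMs, requestEndTimes, quietMs = 2000) {
  const ends = requestEndTimes.filter(t => t > onloadMs).sort((a, b) => a - b);
  let last = onloadMs;
  for (const t of ends) {
    if (t - last >= quietMs) break; // a quiet gap: the page settled at `last`
    last = t;
  }
  return last + quietMs;
}

// Onload at 3000 ms, Ajax responses land at 3500 and 4200 ms, then silence:
console.log(testCompleteTime(3000, [3500, 4200])); // 6200
```

Requests that finish inside the quiet window keep pushing the cutoff out, which is exactly why a page that keeps Ajaxing things in never registers as "done" until it goes quiet.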
DAVE:
Well, that makes sense. And is this a Chrome extension, so I could use it to test apps behind a login?
NIK:
It’s not a Chrome extension. And so, there’s a publicly available [inaudible] at WebPageTest.org and that has agents that run literally all over the world. So you could run tests from Tokyo, from Moscow, from LA, from Brazil. You pick a place and that way, you can start to test what the latency looks like from these different places. And you can run it on all different browsers and even different devices. There’s some Android devices and some iPhones hooked up to the server so you can literally run that. And that’s all free.
Now your problem, if you want to get that to a site that’s behind the firewall, that’s really a bit more of a challenge. So there’s two ways that you can get around that. There’s one you can…
DAVE:
Not behind a firewall, behind a login wall. Is that what you’re thinking?
NIK:
Okay. The login wall is a little bit easier, actually. If you support HTTP basic auth, you can actually enter in a username and password on the site. I don’t recommend putting in a real username and password. Have like a test user that you use just for your tests. But you can also run a script and the script works kind of like Selenium. It’s a slightly different syntax and you’re putting the script right there on the website and it will execute it and run it.
If you’re behind the firewall, the more complicated scenario, you can run a private instance of WebPage Test. And Pat Meenan, he works for Google and he’s the main guy behind WebPage Test. He actually makes that very easy because he has AMIs available so you can get WebPage Test completely up and running on Azure and then you can just set up a tunnel between your infrastructure – I'm sorry, not Azure, on Amazon Web Services.
JAMISON:
I was going to say, “Are you just announcing that Azure supports AMIs?” [Chuckles] That would be cool.
NIK:
That would be cool. But no, that’s me working for Microsoft. Azure rolls off my tongue a bit more easily than Amazon. Either way, once you get it set up, you can have that local tunnel and then you can hit your internal infrastructure.
JAMISON:
I keep coming up with more questions and I’m writing them all down. We might not get to all of them but I want to ask about the phenomenon of most developers developing on blazing fast hardware, and how do you get a feel for how your site actually performs when you're used to working on a 10-million-core, 8,000GHz – I don’t know. Most of our computers are, I would say, faster than the average user’s computer, especially when you take mobile into account. Is it just habit, or do you just try it on different computers, or how do you avoid the ‘it feels fast on my machine’ syndrome?
NIK:
I kind of have a three-fold answer to this because ‘it works on my machine’ is a rampantly bad syndrome that happens in our industry. And the problem in that scenario is ‘my machine’. Because it’s not even necessarily that my machine is fast, because maybe some of my users have fast machines. The problem is that my machine is my machine every single day. So, it’s running the same operating system. It’s always on the same, probably very stable, connection. It’s always in the same location in the world, usually.
JAMISON:
You might even be running your server on your machine too, right? So the latency goes away.
DAVE:
Zero latency.
NIK:
Exactly. That’s why geolocation matters there too. Not just geolocation like New York to LA. But even from on top of my desk to below my desk is a bigger geo hop than everything running on the same machine.
DAVE:
Well, I’ve also got the same browser with the same browser plug-ins that can also affect performance and behavior.
NIK:
Exactly. So let’s just focus on those four variables: device type, the browser, the geolocation, and the connection speed. If we made a matrix of those four variables across our entire user base, we would come up with a test scenario so large, we probably would not be able to finish executing it for even the most simple of sites. And I think because of that problem, it’s so daunting that a lot of developers will just say, “Hey, listen. I'm going to do the best I can. I'm going to open up two different browsers, I'm going to test it and okay, we’re moving on.”
So what we really want to do is we would love to create a specific test for every combination of those variables. The guy who’s on a camel in Egypt using Firefox OS because I'm pretty sure that’s the only place Firefox OS is used. [Chuckle]
NIK:
If that guy wanted to access my website, it would be so difficult for me to create a test that showed me his performance. So instead, let’s do something radical and turn this whole thing [inaudible] and let’s let our users be our testers for us. Right now, this sounds kind of weird because you don’t think of testing – you don’t want your end users doing your testing usually, especially when we’re talking about functional testing or testing for correctness.
JAMISON:
I don’t know. It depends.
CHUCK:
I was going to say you just made a whole bunch of people happy.
NIK:
“We’re good to go.”
CHUCK:
New feature done. Bam!
NIK:
What we really want to do in the performance case is let that guy tell us how fast his experience is and to do that is actually easier than you would think.
The W3C has a web performance working group and that group is basically dedicated to solving this problem. There’s also some other problems of performance on the browser but the main body of their work is around this. And so they have introduced three different specifications that together belong to a style of performance measurement called RUM – stands for Real User Monitoring.
The scenario that you brought up with ‘this works on my machine’ – if we enable RUM with these specifications, it’s very easy. Navigation Timing, Resource Timing, and User Timing – those are the names of the three specifications. If we literally write two or three lines of JavaScript, all the milestone metrics that we’ve talked about, including the custom ones like Time to First Tweet, get gathered and can be sent back to our server for further analysis.
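Those "two or three lines" look roughly like this sketch. The `/rum` collection endpoint is hypothetical, and the extraction is pulled into a plain function so it can also be exercised against a mock entry outside a browser:

```javascript
// Pull milestone metrics out of a Navigation Timing (Level 2) entry.
function milestoneMetrics(nav) {
  return {
    ttfb: nav.responseStart - nav.startTime,
    domContentLoaded: nav.domContentLoadedEventStart - nav.startTime,
    load: nav.loadEventStart - nav.startTime,
  };
}

// In a browser: gather the real user's numbers and beacon them home.
// '/rum' is a hypothetical collection endpoint on your own server.
if (typeof navigator !== 'undefined' && navigator.sendBeacon) {
  addEventListener('load', () => {
    const [nav] = performance.getEntriesByType('navigation');
    navigator.sendBeacon('/rum', JSON.stringify(milestoneMetrics(nav)));
  });
}

// The extraction also works against a mock entry:
const mockEntry = { startTime: 0, responseStart: 120, domContentLoadedEventStart: 800, loadEventStart: 1400 };
console.log(milestoneMetrics(mockEntry)); // { ttfb: 120, domContentLoaded: 800, load: 1400 }
```

Aggregated on the server, those numbers answer "how fast is this page" for the users you actually have, not for the developer's machine.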
So the question of whether or not a page is fast or not should never be answered by you, the developer, based on your machine or even your own personal experience using the website. Instead, you can literally get real performance data from all of your users and then go and track down the pages or the usage scenarios through the application that are slow and focus on those.
And so that’s the first kind of style of performance testing which is the Real User Monitoring and it’s really good for answering how fast is a page or a scenario, and really how fast is it. Not just because my computer says it’s fast but because the guy on the camel in Egypt says that it’s fast.
The other technique is the one that we’ve kind of talked about already with WebPage Test. That’s called synthetic testing. WebPage Test lets you alter some of these variables. I mentioned you can change the device and you can change the browser and you can change the location, but you’re still only limited to what, a dozen or so locations and a handful of devices, and you’re not really getting the coverage that your user base gets. But because WebPage Test and other synthetic testing services like it use highly instrumented browsers, where they’re digging down into the networking and things like that, we can get super deep analytics and figure out, “Okay, I know that this page is slow because my RUM data told me. Let me run it through WebPage Test and figure out how I will make it faster,” because it will really show me everything that’s happening on the network and in the browser.
And so synthetic testing is good for answering ‘how do I make a page faster’ once you already know that it’s slow. So I think that that kind of goes a long way towards answering that question and making sure that it’s not necessarily the developers in charge of doing that performance testing but leveraging the user base to get that data back.
You can just use these APIs and send the data back to your own server and then store it and aggregate it and report on it however you want to, or there are services out there that will do that for you. In fact, Google Analytics, if you go in and turn on an option, will start to gather some performance information for you from real users. So, if you’re using Google Analytics, that’s probably the easiest thing to do to start getting real performance data.
CHUCK:
Alright, we only have a couple of minutes left. My question is, have you actually done this on some websites? Can you kindly give us an overview of what you found and how you approached the performance issues you found?
NIK:
Yes, I’ve done this on lots of websites. People ask me to do performance audits of their pages all the time. The best example of this will be in my picks. I have a video, it’s an hour long, where I take a website and I do a performance audit and we go through and make everything faster. And so, that might be the best way to get up and running with this. But I can quickly describe my general approach in the couple of minutes that we have here.
The very first thing that I do, if I don’t know anything about the website or it’s not necessarily a website that I own, is synthetic testing, since the RUM data isn’t there. And so, I’ll run it through WebPage Test. I’ll see what it’s telling me to do and usually I’m going to focus on the networking recommendations first.
So, the ones that we’ve kind of already talked about – minifying, bundling, compression, caching – turning those kinds of things on usually gives you a very good boost. From there, if it turns out to be a server side problem or a client side problem – the JavaScript or the backend language, the Ruby or the Python or the Node or whatever, is taking too long – that’s when you crack open the CPU profiler and you run that against your application as you make some requests to it, and you start to figure out which methods, which functions in your application are taking a long time. And then you go and start making those optimizations in your programming language of choice. So that’s kind of really the first step.
Beyond that, my approach follows a loop with four phases. The first is observe: you’re going to look at your real user metrics, whatever data you have, and so you will see what’s happening in real life.
Then you orient around that data and decide what to do: should you cut down images, should you sprite them, should you in-line them, et cetera, et cetera. And then you act and you implement the decision that you made.
And then the loop starts again. And you start observing to see if that had the effect that you wanted. And if at any point in the loop you get lost or you can’t solve the problem, you can always shortcut and go right back to the beginning. But observe, orient, decide, and act – following that OODA Loop really kind of makes sure that you’re not doing a premature optimization. Because it’s very easy to just go and say, “Let me go and optimize all of the admin pages.” Oh great, those are only accessed twice a year by your users. So maybe you don’t really need to focus on them.
JAMISON:
So I feel like you gave us kind of a broad framework for how to diagnose performance problems. And it seems like, since we’re focused so much on latency, maybe the one-sentence guide to performance is ‘send less data’. Is that what you’d say most of the techniques that you use to optimize websites come down to?
NIK:
The words that I use – and this is funny, that’s exactly what I say in my session – I say, “Do less.” It’s, “Make fewer requests. And for the requests you have to make, send fewer bytes. And on the server, do less work. Don’t go to the database as often. Cache that stuff. Make it easy. Do less.”
Yeah, you’re exactly right. That’s it. That’s the banner headline. And you know, that sounds at odds because you’re not really making the application do less. The application and the users are actually able to do more now but you know, less is more. So yeah, do less.
JAMISON:
I heard an interesting story. A YouTube engineer did a performance experiment where – I think their page weight was like 2 megabytes or something, 3 megabytes, I don’t know, higher than he wanted – and he wanted to do an experiment to get it under 100 kilobytes. And with a bunch of hacks and changing the page and taking much of the stuff out, he got it under 100 kilobytes. He ran it as an A/B test and the average latency actually went way up, and they were confused until they found out that it was because all these people from countries without great internet infrastructure could now actually watch videos on YouTube. Whereas before, they just wouldn’t get to the page at all, so they never sat there waiting for it to load because it takes forever. It opened them up to a whole new audience. It not only made it faster but it made it so more people were able to use it at all.
NIK:
That’s a cool story. That’s pretty [inaudible]. I haven’t heard that one before.
DAVE:
How interesting that they thought they had the opposite effect at first.
CHUCK:
Yeah. Alright, let’s go and get to the picks.
Before we get to picks, I want to take some time to thank our silver sponsors.
[This episode is sponsored by TrackJS. Let's face it, errors cost you money. You lose customers, server resources and time to them. Wouldn't it be nice if someone told you how and when they happen so you could fix them before they cost you big time? You may have this on your Back End Application Code but what about your Front End JavaScript? It's time to check out TrackJS. It tracks errors and usage and helps you find bugs before your customers even report them. Go check them out at TrackJS.com/JSJabber.]
CHUCK:
David, do you want to start us off with picks?
DAVE:
Oh sure, sure, sure. Alright. So my first pick will be the UtahJS 2015 Conference; the talks for the conference were just released this week. They may have come out earlier but I didn’t see them until this week. So, a lot of really good gems this year in the conference and I highly recommend that you peruse the talks. I posted a link on Twitter and we’ll put a link here in the show notes. Pretty good stuff.
Also, I wanted to pick the conference organizers who had a really cool idea of giving speakers a gift, which is not an original idea, but the gift they gave was their choice of some really fun Lego sets which I thought was super cool. After you gave your talk, you come and pick your Lego set and it was really fun. So I thought that was a really clever idea and so I want to pick it. That’s all I have this week.
CHUCK:
Alright. Jamison, what are your picks?
JAMISON:
I have four picks. The first pick is this article ES6 Overview in 350 Bullet Points. There's a ton of ES6 information out there, but this is the best source that I’ve found if you just want to know everything that’s in ES6. And it might not be 100% all of the detail in every single one of those features but it gives you all of it. All the features. So this is a good overview for that. I was showing it to a friend who’s learning JavaScript and he was kind of psyched about it. And I mean, 350 is a lot of bullet points but that’s all of it. That’s all it is.
My next pick is a comic from Saturday Morning Breakfast Cereal. It talks about the high frequency trading thing that we talked about. And if you could convince high frequency traders that the only way that they could trade was by like bringing back Mars’ dust then the space program would be awesome. I don’t know. It’s kind of a funny idea of using their great appetite for technical progress to have social progress too.
The next pick is just a link to that article that I mentioned. It’s called Page Weight Matters. It’s from 2012. So I imagine Page Weight has only gone up since then.
And my last pick, inspired by Dave, The React Rally Talks which also now features Dave’s talk at React Rally. They are all up on YouTube. So my next pick is to play those talks. If you didn’t get a chance to go, check them out. I think they’re really good. Those are my picks.
CHUCK:
Very cool. I’m going to go ahead and throw a few picks out there. The first one is a book. Actually, the first two are books. The first book is a book about money and investing and saving for the future and having a plan. And it is really well done.
It gives you a lot of information to go through. Seriously, this is not a lightweight book but at the same time, it really explains all of the things that you kind of have to know in order to successfully invest for your future. And then, to what level you want to invest for your future. So for example, if you just want to be able to kind of pay your basic bills and cover your basic expenses like a mortgage or rent and utilities and food and that kind of stuff, then you need to save to a certain level. And then if you’re looking at maybe getting some extras and things, you save to a different level. If you have kind of a couple of wish lists or what-ifs when you retire or when you start living on that money that you socked away, what you have to do to get there, there are a whole bunch of worksheets and stuff. The book is called MONEY Master the Game and it’s by Tony Robbins.
The second book I have is a historical fiction. It’s something that I read to my kids. My father-in-law bought it for my daughter for Christmas. And I read it to her, to my 8-year-old daughter and my 9-year-old son. And they really enjoyed it. It’s about the pilgrims and their journey to Plymouth Rock. It’s called Rush Revere and the Brave Pilgrims and it’s written by Rush Limbaugh.
I know some people are going to react to him because he is kind of a divisive person. Anyway, this is just pure historical fiction. I mean, he just took the story of what happened, inserted some fictional characters. They travel through time. There’s a little bit of humor in there that’s kind of at the kids’ level, and the word choices are also kind of at that lower reading level. So, I really liked it. And regardless of how you feel about Rush Limbaugh, I think these books are great for kids if you want to kind of give them some historical stuff.
And Jamison asked in the chat if Tony Robbins is a motivational speaker and my answer is yes, he is. He has several motivational books. He does events all over the country but he kind of explains why he put the book together in the book. But essentially, he had a lot of people coming to him and asking him about how he managed money. And so he went out there and actually talked to a whole bunch of people who were good at money and got their takes on things. So, there’s a lot of information, a lot of different approaches to it but it kind of gives you all the information you need to make good decisions.
Nik, what are your picks?
NIK:
Well, we’ve already mentioned one of them which is VS Code, which is the text editor that I’m really enjoying using right now. So, go ahead and check that out no matter what system you’re on. It’s something that will work for you.
JAMISON:
Does that have a VIM mode?
NIK:
I don’t know. I’m not one of those guys. I shave off my [inaudible], no offense. Ooooh… [Laughter]
CHUCK:
Why would you want a VIM mode?
JAMISON:
I have a neckbeard growing right now.
NIK:
Then send all hate mail to Jamison.
CHUCK:
[Chuckles]
NIK:
So that is my first pick. The second one is an amazing book that is actually available. You can get the print copy for some cash dollars or you can read it for free online and that is High Performance Browser Networking by Ilya Grigorik.
And so, it goes into depth on all of the things that we’ve talked about today and much more like HTTP 2.0 and Web Sockets and Server-Sent Events and even how the radios inside of your cellphones work. And so I highly recommend that book.
And then my last pick will be me. I’m plugging myself a little bit, if that’s okay. I have a couple of courses out on Pluralsight. The first one covers metrics: how to measure web performance and how to automate that, so you can just check in your code and your continuous integration server will tell you whether or not you’ve gotten faster or slower. It covers how to do that and all the different tooling around it.
And then hopefully by the time this episode is published, if not a little bit after that, I’ll have a new course out that is a deep dive into WebPage Test. And so, I’ll spend two to three hours showing off all of the features, showing you how to get it up and running on Amazon, how to script it, how to make custom metrics. The whole nine yards is basically covered, the whole thing [inaudible]. So, I’d love it if people would check those out and then give me some feedback on Twitter about whether or not they like them.
CHUCK:
Very cool. Alright, if people want to follow up with what you’re doing or just see what’s going on there or what you’re doing with your team, where do they do that?
NIK:
They should check out my blog which is NikCodes.com. I spell my name Nik. So, it’s NikCodes.com or the best place is going to be on Twitter, @nikmd23.
CHUCK:
Cool. Alright, very cool. Well, thank you for coming. We’ll go ahead and wrap the show up. We’ll catch everyone next week.
[Hosting and bandwidth provided by the Blue Box Group. Check them out at BlueBox.net.]
[Bandwidth for this segment is provided by CacheFly, the world’s fastest CDN. Deliver your content fast with CacheFly. Visit CacheFly.com to learn more.]
[Do you wish you could be part of the discussion on JavaScript Jabber? Do you have a burning question for one of our guests? Now you can join the action at our membership forum. You can sign up at
JavaScriptJabber.com/Jabber and there you can join discussions with the regular panelists and our guests.]