The Alphabet Soup of Performance Measurements - JSJ 622

JavaScript Jabber

A weekly discussion by top-end JavaScript developers on the technology and skills needed to level up on your JavaScript journey.


Published: Feb 27, 2024
Duration: 1 hour, 14 minutes

Show Notes

Dan Shappir takes the lead in explaining all of the acronyms and metrics for measuring the performance of your web applications. He leads a discussion through the ins and outs of monitoring performance and then how to improve and check up on how your website is doing.

Sponsors


Links

Picks

Transcript

 
AJ_O’NEAL: Well, hello folks and welcome back. It's a bright, beautiful day, and even more bright so and beautiful so because of Dan Shappir, who's gonna be telling us a little bit about, oh gosh, we've got six acronyms in a row here. Dan, what's our topic in summary? 
 
DAN_SHAPPIR: Yeah, I call that the web performance alphabet soup. Don't worry, I'll get into all the various acronyms during the show, but yeah, stuff like FCP and TTFB and ETC and whatnot.
 
AJ_O’NEAL: All right. For those of you following along at home, those are all letters in the alphabet. All right. Also on the show, we've got Aimee Knight. 
 
AIMEE_KNIGHT: Hello. Oh my gosh. Yeah. So don't have coronavirus, but have something. So hello from Nashville. 
 
AJ_O’NEAL: Don't listen too closely. You might catch it. And also Steve Edwards. 
 
STEVE_EDWARDS: Hello. Hello from sunny Portland. 
 
AJ_O’NEAL: And with that, this is JavaScript Jabber. Fight. Start! 
 
 
When it comes to test maintenance, the biggest complaint that teams have is flaky tests. Taiko is a Node.js library built to test modern web applications. It creates highly readable and maintainable JavaScript tests. Its simple API, smart selectors, and implicit waits all work together toward a single goal: fixing the underlying problem behind flaky tests to make browser automation reliable. Taiko is open source and free to use. Head to taiko.dev and get started. That's T-A-I-K-O dot dev. 
 
 
STEVE_EDWARDS: All right, Dan, I think that means you're on. 
 
DAN_SHAPPIR: Yeah, I guess so. So today we're going to talk about a topic that's really near and dear to my heart, which is web performance, and a whole bunch of acronyms and metrics and measurements that are associated with it. As some of our listeners may recall, my day job is performance tech lead at Wix, where I deal with the performance of millions of websites hosted on the Wix platform. Literally billions of websites, even more, hundreds of millions, I don't know. Anyway, this means that I'm constantly involved with how to best monitor and optimize performance on the web. That's like what I do day in, day out. And today I wanted to talk about performance monitoring, specifically about the alphabet soup which is web performance measurements and metrics. I mean, we mentioned a few, I'll mention them again: metrics such as TTFB, FCP, LCP, TTI, TBT, etc. Lots and lots and lots of acronyms. 
 
STEVE_EDWARDS: Dan, one more acronym. Do you have TTFN? Ta-ta for now. Ta-ta for now, from Tigger. 
 
DAN_SHAPPIR: Yeah, WTF, I guess, as well. Anyway. Dan. Go on. 
 
AIMEE_KNIGHT: I'm going to say, maybe to start us off. So I know that not everybody knows about the changes from Lighthouse v5 to v6. Would that be a good starting point to frame our conversation? 
 
DAN_SHAPPIR: I actually wanted to start a little bit back in front of that, because I really wanted to talk about what performance measurement really is and why it's important, before we get to a specific measurement tool, which is Google Lighthouse. But yeah, once we get there, that's definitely an interesting topic of conversation. So I've found, like I said, that many web developers are actually not familiar with the various performance metrics, or are confused by them. They're not sure what the metrics mean, or which metrics are the most relevant for their particular use case. In fact, it's even worse than that. I have a talk that I give which is titled, My Website is Slow, Now What? Which is essentially a laundry list of action items that you can take to improve the performance of a website or a web app. And essentially the first item on this list is to measure and monitor. And you'd be surprised by how many web developers, or companies, or teams, or whatever, neglect to measure and monitor the performance of the websites that they build at all, let alone do it on a regular basis. So, you know, we're going to be talking about all these metrics and measurements, but the most important thing is that you actually do it, regardless of which ones you ultimately choose and which tools you ultimately use. There's a quote attributed to a well-known management consultant by the name of Peter Drucker, which I really like, which is: if you can't measure it, you can't improve it. And I definitely think that measuring and monitoring performance is a prerequisite for being able to actually improve it. Otherwise, there's a very good chance that you'll waste time and effort on changes that provide very little benefit, and totally neglect significant possible gains. So folks, please, please, please measure the performance of the websites that you build. So, okay, let's say I've convinced you, hopefully I have. How do you actually do that? 
So it turns out that there are really two main ways to measure performance, if I put them into, like, categories or buckets. One is known as synthetic monitoring, and the other one is known as real user monitoring, which also has an acronym, which is RUM, R-U-M. Synthetic monitoring is about monitoring the performance of a website in a quote-unquote laboratory environment. Often it involves some sort of automation tooling, but you can really just do it manually yourself. You create some sort of a synthetic environment that simulates typical user scenarios, or maybe the worst-case user scenarios, depends on you. And then you measure the performance of your website in those environments. So say, for example, you know that a lot of your users are using mobile devices, say Android, and they're coming from, let's say, OK networks. So you might use some Android device on a 4G network, or maybe a fast 3G network, to try to measure the performance of your website. If you want to also account for the users who have lower-end devices and networks, then maybe you'll buy some cheap low-end mobile device for a hundred bucks, connect it via, like, 3G, and try to measure your performance on that. And you can also simulate those environments these days; something like the Chrome DevTools actually lets you specify both a CPU slowdown and a network slowdown, so you can actually even do that on your own computer. There are many options and tools out there for synthetic monitoring, but Google Lighthouse has become something of a standard. In fact, you can run it directly from within the Chrome DevTools, in the Audits tab, so you literally have it built into your browser. You can also integrate it into your release process using something called Lighthouse CI, which is an open-source project by Google. Lighthouse itself is an open-source project.
Or you can use it online using sites like Google PageSpeed Insights, which also has an API you can connect to, like, do it as part of your build process. As I said before, I highly recommend incorporating some sort of synthetic monitoring into your CI process if you're building any sort of a website that you hope will see some production traffic. This will enable you to specify stuff like performance budgets, which is just a fancy term for limits on various performance-related aspects of your project. For example, the total size of the JavaScript that you download. So you can say, I have a performance budget that my total JavaScript download doesn't exceed, say, 100 kilobytes over the wire. And then if you actually exceed that, then your build breaks, and you need to make hard decisions about whether to remove some JavaScript, and maybe some functionality, or to increase your limit, knowing that it will have an adverse impact on the performance that your users experience. Any questions about synthetic monitoring before I move to real user measurements? Have you guys played around with tools like that? 
 
AJ_O’NEAL: I'm definitely familiar with the stuff that's in Chrome, and it is so refreshing when you try to tell someone, hey, your site's slow, and they're like, no, it's not, and you're like, well, even if I just simulate a poor connection, it's terrible. So that's kind of what I've used those tools for quite a bit: letting other people know, like, no, really, here's how you can see for yourself that when you're not on fiber ethernet, it doesn't work that well. And then Lighthouse is something I'm fairly familiar with, though, I don't know, when I've looked at it, I've often been confused about how to take full advantage of it. 
 
DAN_SHAPPIR: Okay. That's hopefully something I'll have enough time to cover this episode. Or if not, we can do a continuation one. Okay. So in any event, like I said, the other bucket, or method, of performance monitoring is real user monitoring, or RUM, which measures the performance of pages in real live sessions. So you add some code, you instrument your code so that it actually measures and reports back the performance of actual user interactions. And you collect this information into some sort of a database that you can then run queries over. If you recall my previous JavaScript Jabber episode where I spoke about this stuff, that was episode 334, I spoke about the web performance API. That's actually how most RUM measurement tools these days gather all the information that they do. You can do it yourself, or you can get a third-party tool that will do it for you. Interestingly, Google is actually now also exposing RUM data themselves, at least from sessions that run on Chrome. There's this thing called the Chrome User Experience Report, CrUX for short, which collects performance information from Chrome itself. When you install Chrome, you have this option to opt out of Google gathering information to improve their service. If you don't, then it turns out that they also gather anonymous performance information about all your browsing. And they put this information into a database that's actually open to anybody to query. If you run BigQuery queries on it, it might cost you a bit, but, you know, anybody can do it. There are actually even tools out there that give you free access to this information. So for example, there is this free crux.run service by a company called EdgeMesh, which provides access to this information. Also, in the Google Search Console itself, they actually now have performance information from there. And Google PageSpeed Insights also shows you information that it retrieves from that database. 
So Google PageSpeed Insights now shows you both information from the synthetic test that it performs and from real user sessions. Now, there is a caveat with that, because, first of all, with CrUX, Google kind of updates it a month back. So you can do queries on the data for your sessions, but if you make changes, you'll only see the impact of those changes a month later. And likewise, they only actually maintain information about sites, or pages, that have sufficient traffic. So if you're putting up a new page, you might not actually get any data for it from CrUX, because it's just not getting enough traffic yet.
Be that as it may, there are many, many third-party services out there that you can add to your site. Obviously, they cost some money, but they can provide a lot of benefit. And I'm not going to advertise any one of them. Just do a Google search for web performance monitoring and many such services will come up. So that's RUM data. Have you guys actually used RUM in any of your projects? I can tell you that we at Wix use RUM, like, a ton. We are a very data-driven company, but we are kind of huge. I'm wondering if you guys have had the chance to use stuff like that. 
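For listeners who want to try hand-rolling RUM the way Dan describes, here's a minimal sketch using the standard Navigation Timing API. The `/rum` collection endpoint is a made-up placeholder, and a real setup would batch and sample.

```javascript
// Sketch of do-it-yourself RUM instrumentation with the web performance API:
// read the page's navigation timing entry and beacon it to your own backend.
function extractTimings(nav) {
  // All values are milliseconds relative to the start of the navigation.
  return {
    ttfb: nav.responseStart - nav.startTime,
    dcl: nav.domContentLoadedEventStart - nav.startTime,
    load: nav.loadEventStart - nav.startTime,
  };
}

if (typeof window !== "undefined" && "performance" in window) {
  window.addEventListener("load", () => {
    const [nav] = performance.getEntriesByType("navigation");
    if (nav) {
      // sendBeacon is designed to survive page unload, unlike a plain fetch.
      navigator.sendBeacon("/rum", JSON.stringify(extractTimings(nav)));
    }
  });
}
```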
 
AIMEE_KNIGHT: We don't, unfortunately, use it where we are. I want to, though. 
 
DAN_SHAPPIR: Yeah, I highly recommend it, because at the end of the day, if you're just doing synthetic measurements, then you're kind of guessing what your users are experiencing. Like I said, to begin with, I would highly recommend using something like crux.run to look at your current situation. Like I said, it's a month late, but it's better than nothing. And you can actually look at how your performance in the field changed over the past six months, I believe, using that tool. So it's actually quite informative. 
 
AJ_O’NEAL: I'm looking at it right now for my blog and I'm not sure what to make sense of all the numbers, but it's cool to see them. That's for sure. 
 
DAN_SHAPPIR: Well, they mostly show you, I'll get to what that means, but they show you stuff like your TTFB and FCP and the DCL. I'll get to that hopefully in a bit. 
 
AJ_O’NEAL: One thing that I see here is that it looks like there can be quite a variation in where things are falling in these different composite sidebar graphs. I'm not sure what to call this type of graph.
It's almost like a pie chart, but it's a bar graph instead. So it's like 10% is on the left, 50% is in the middle, 30% is on the right. 
 
DAN_SHAPPIR: Yeah, Google kind of defined the scale somewhat arbitrarily of what it means to have good, average, or bad performance. So for example, they might say that for First Contentful Paint, which we'll get to, FCP, having a value of under one second is good, having a value of one second up to two and a half seconds is average, and having a value that's over two and a half seconds is bad. 
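Bucketing a metric value into those bands is a one-liner. The 1-second and 2.5-second cut-offs below are the example numbers Dan quotes, not necessarily Google's current official thresholds, which have changed over time:

```javascript
// Rate an FCP value (in milliseconds) into the good / average / bad bands
// from the example above. Thresholds are illustrative.
function rateFCP(ms) {
  if (ms < 1000) return "good";
  if (ms <= 2500) return "average";
  return "bad";
}
```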
 
AJ_O’NEAL: So, since my site is a static site, and granted, it's with some blog template from a while back, so it probably isn't optimized, and I don't even know if it's even been minified. No, it isn't, I'm pretty sure. Anyway, I know when I've looked at Google Analytics, like half of my traffic is from India, maybe it's somewhere in that range, quite a bit of my traffic, because it's a technical blog, and I think that's why. And so I'm wondering, as I'm looking at this data, because historically, month over month, my blog hasn't changed, you know, it's the same server, I add a couple more articles now and then. But it seems like there's quite a bit of variation. You know, say, in January, lots of people were in the fast segment, and there's also a fair number of people in the slow segment. But then in November, not so many people in the fast segment, almost everybody is in the average segment. If we go back to September, almost everyone's in the fast segment. And so I don't know how to make sense of this historical data. You have any light to shed on that? 
 
DAN_SHAPPIR: Well, crux.run, being free, is somewhat limited in the amount of information that it shows. If you actually go to the CrUX database itself, you can actually query, like, you know, the geographical locations your visitors are coming from, and you can get more fine-grained information. So it can definitely be the result of, you know, fluctuations. It might be, for example, if there's a holiday in India, for example, like you said, you've got a lot of visitors coming from there, then that can obviously impact the distribution of your traffic, and consequently the performance that you see. So, yeah, just looking at the overall numbers is useful, but if you really want to dig in and make sense of them and turn some of them into actionable items, then you probably need more information. For example, your blog is out of the goodness of your heart, but if you actually make it into a business, for example, and you say, hey, I've got a lot of quote-unquote customers coming from India, then maybe it makes sense for me to have either a server in India, or to use some sort of a CDN service that accelerates access from India, so that they don't have to wait for the request to go all the way to my server in the States and then come back to their computers in India. Or maybe you'll want to replicate your server onto an Amazon cloud in India or something like that, in order to provide them with faster access. So, but yes, in order to turn it into actionable items, you do need the appropriate information. When I think about performance, you know, when I try to analyze performance and what performance actually means, I like to use a model that's called the RAIL performance model, which was also created by Google. Google will actually feature quite a bit here, related to web performance, and good on them for doing that. 
I think we're really lucky that the good of the web happens to, at least currently, align with the benefits to Google's business. So they have a business interest in improving the web. Hopefully this will continue. In any event, they have this model, which they proposed or created, called RAIL, which stands for Response, Animation, Idle, and Load. So response deals with measuring how quickly a web page reacts when you interact with it. So, like, if there's a button, you click on the button, how fast do you actually get some visual response to that click? Now, don't confuse response with responsiveness, which deals with how pages react to different display sizes. This is about just reacting to user interactions on the page. So having good performance means that when a user interacts with some element on the page, you want the page to respond to that interaction in under 100 milliseconds. According to research that's been done, that gives your users, your visitors, the feeling that the response is kind of instantaneous. Now, it doesn't mean that the actual operation that they're looking for happened within that 100 milliseconds. They might be doing some really complex operation that you need to send back to your backend and perform some lengthy computation or database lookup or whatever. I'm talking about the fact that they need to see some visual indication that the system received their input and is reacting to it. It can be even something like popping up some sort of a spinner, if that's all you can do. But anyway, like I said, you want to do that within 100 milliseconds. Now, to be able to do that within 100 milliseconds, it means that the actual event handler should not take longer than 50 milliseconds, and that any background operation that the browser is performing should be broken up into segments that are also not longer than 50 milliseconds. So that if a user happens to click right at the beginning of such a segment,
They have to wait up to 50 milliseconds for that segment to finish, and then another 50 milliseconds for the event handler to finish. All in all, 100 milliseconds or less. Well, preferably less. This kind of has to do with how browsers work. As you know, browsers are single-threaded, so if the browser is busy handling some sort of a JavaScript task, it can't react to your clicks or whatever until that JavaScript finishes. The browser is effectively frozen. I'm sure we're all familiar with that. So, yeah. And whenever a background task like that takes longer than 50 milliseconds, that's referred to as being a long task. And we will see that long tasks are featured quite a bit in the performance metrics, once I start elaborating on them. Animation is the A in RAIL. It deals with how smoothly the browser performs transitions and animations. For animations to appear visually smooth, you want them to happen at 60 frames per second, or FPS. And since the browser itself needs something like, more or less, 6 milliseconds to render a frame, that only leaves, like, 10, 11 milliseconds max for your code to generate the content for the frame. And that's definitely not a lot of time. But that's a whole big discussion in and of itself, how to do efficient and effective animations on the web, and I'm not going to talk about this today. Maybe this is a topic for some future discussion. I is for Idle, which isn't really a measurement. It just means that you should schedule background tasks for when the browser is idle, and break them down into segments that are shorter than 50 milliseconds each.
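The segmenting Dan describes, keeping each chunk of background work under roughly 50 ms and yielding in between, can be sketched like this. The 40 ms budget is an arbitrary safety margin, and a real implementation might prefer requestIdleCallback or a worker:

```javascript
// Break a long batch of work into short segments, yielding to the browser's
// main thread between segments so user input can be handled in between.
function processInChunks(items, handleItem, budgetMs = 40, done = () => {}) {
  let i = 0;
  function runChunk() {
    const start = Date.now();
    // Work until this segment's time budget is used up.
    while (i < items.length && Date.now() - start < budgetMs) {
      handleItem(items[i++]);
    }
    if (i < items.length) {
      setTimeout(runChunk, 0); // yield, then continue in the next segment
    } else {
      done();
    }
  }
  runChunk();
}
```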
For that purpose, you can use the requestIdleCallback DOM API, which I think, Aimee, you recently found out isn't available on all browsers. 
 
AIMEE_KNIGHT: Unfortunately, yes. 
 
DAN_SHAPPIR: Yeah, it can be simulated, kind of, with setTimeout, but yeah. It's really a shame that it's missing. Basically, you give it a callback, and it will schedule your callback for when the browser seems idle. But really, these days, it's actually better to move lengthy background operations into workers, so that they execute completely off of the browser's main thread and don't block it at all. If you recall, this is something we had Surma from Google as a guest explaining in detail, on episode 393. It was an excellent episode, which unfortunately I wasn't able to participate in. I think I only joined around episode 400, but yeah, it was a really good one. 
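The setTimeout-based simulation Dan mentions is usually written as a small shim along these lines. Note this is a rough stand-in, not a spec-accurate polyfill:

```javascript
// Use the native requestIdleCallback where it exists; otherwise fall back to
// setTimeout with a fake deadline object, the widely used shim pattern.
const ric =
  (typeof requestIdleCallback !== "undefined" && requestIdleCallback) ||
  function (cb) {
    const start = Date.now();
    return setTimeout(() => {
      cb({
        didTimeout: false,
        // Pretend up to ~50 ms of idle time remains, mirroring the real API.
        timeRemaining: () => Math.max(0, 50 - (Date.now() - start)),
      });
    }, 1);
  };

// Usage: ric((deadline) => { /* do work while deadline.timeRemaining() > 0 */ });
```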
 
AJ_O’NEAL: I think it's got to be either Gmail or Slack, which neither of those seem like they should be intensive, but they're the things that seem like, if I look at my browser usage, they're the things that are pegging it, right? And then the other thing is, like, maybe a news site, because it loads 600 ads. Now, I use an ad blocker so that that doesn't crash my browser anymore, but it's actually fairly common for a news site to crash the browser because it's loading just too many ads. And then beyond that, I mean, I'm looking at a lot of blog articles and shopping on Amazon. I listen to music online sometimes. I don't play games online. What are the types of applications where you see these types of metrics that you're talking about, you know, having to schedule background tasks in idle time? Where do they fall in, like, either what you think a lot of the people that are listening are building, or that, you know, we experience day to day? 
 
DAN_SHAPPIR: So we had some conversations in which we spoke, not necessarily on the air, about boot camps, for example, and we talked about the fact that people at boot camps usually learn something like React. And if you're building a webpage using React, there's a good chance that you'll run into long tasks, because React either renders or hydrates. And unless you're using some really new React technology like React Suspense, that React rendering operation is a blocking operation. And if you've got a sophisticated or complex UI, which a lot of websites now have, that can actually take a fairly long time to render. Any old website that uses React can easily have long tasks. And as for animations, I think we've all seen animations that stutter or aren't smooth on websites. 
 
AJ_O’NEAL: And like 90% of the time, that's just the loading animation. You know, it's like... 
 
DAN_SHAPPIR: Yeah, but what does it matter? It is what it is. For example, these days you have this situation, on mobile in particular, where people have come to expect really smooth and responsive behavior when they're using their mobile devices, because they're used to native apps. And then when they use web apps or web pages, they don't get that, because people build really heavy websites for not necessarily a good reason. 
 
AJ_O’NEAL: Again, what comes to my mind on my phone is anytime I'm reading a news article, like, the thing just comes to a halt. Like, I can barely get it to scroll up. 
 
DAN_SHAPPIR: Yeah, again, but again, if you even go into some, you know, store, you do some online shopping, and for some reason you do it from your phone, then you will find that with a lot of online retailers, the web performance that they provide isn't necessarily that great. Again, especially on lower-end devices. Anyway, which brings us to the final letter in RAIL, which stands for Load, and this is what most web developers currently focus on when they decide to work on web performance. So Load, the loading time, is kind of front and center for most web developers. That's because we've come to learn that lengthy load time means a high bounce rate. Just to clarify, I assume a lot of our listeners are familiar with it, but I'll clarify it nonetheless. A bounce is when somebody visits your site or your page and then leaves without clicking anything. So if you want the physical analogy, it's kind of like somebody walking into a store, a physical store, looking around, and then turning around and leaving without speaking to anybody, without trying anything on, without actually checking anything, certainly without buying anything. So you've managed to get somebody to your site, that person actually started loading your site, and then just left without doing anything, for some reason. So you've apparently disappointed them in some way. And it turns out that a slow-loading site is a great way to disappoint users, cause them to leave, and increase your bounce rate. So, unfortunately, load performance is one of those things that is easy to experience, so you can go to a website and say, hey, this website loads quickly, or this website loads slowly. But it's much more difficult to actually describe. For example, when does a website actually finish loading? 
When the content is visible? When some of the content is visible? When all of the content is visible? Maybe just when the main content is visible? But then, how do you define the main content? For example, AJ, you gave the example of news sites, where a lot of the content, sometimes even most of the content, is ads. Obviously, visitors don't care about the ads finishing loading. So you kind of want to maybe disregard the ads, from the visitor's or the user's perspective, when you're thinking about when the page is loaded. But if you're using some sort of a synthetic tool to measure performance, how will that tool distinguish between the actual interesting content and the noise? And maybe you actually want to base your measurements on interactivity.
When are some input elements interactive? Or maybe when all the input elements are interactive? So how can you even tell when all the elements on a page are visible and interactive? The JavaScript in the page can keep creating additional elements, and hiding and showing stuff. So how do you know when it's the end of the load process, and the page starts, like, doing its stuff? And as I previously explained, it's not enough that the elements just become interactive, like, that you attach the event handlers to the various input elements. You also want to handle user input quickly, like I said, preferably in under 100 milliseconds. How do you know that you've reached that point? Are you going to simulate a click at each and every element, and at each and every stage of the loading process, and measure how long it takes to respond? You know, obviously that's not something you can feasibly do. So, because of all this, instead of thinking about a page load as a specific point in time, like saying, I've gotten to this point and now the page is loaded, we should really think about this as a sequence of events or milestones. And on top of that, instead of measuring concrete occurrences, we often have to make do with various heuristic metrics, which gets us to all those terms that I threw out at the beginning of our conversation. And that's why we have so many performance metrics, and why an important step in performance monitoring is deciding which metrics are relevant to you and which metrics you should focus on, depending on your particular use case. 
 
 
In JavaScript, there's always something new to learn: frameworks, technologies, tools, updates. It's a lot of work to stay up on JavaScript. Educative.io helps with that. Their platform is made from the ground up with software engineers in mind. Instead of making you scrub back and forth through videos and spend hours on setup, their courses are text-based and feature live coding environments, so you can skim back and forth like a book and practice in browser as you learn. One course that I recommend, if you've been laid off during the coronavirus scare, is the JavaScript interview handbook. These courses cover topics from JavaScript, machine learning, Kubernetes, and much more. And each course has a free preview, so you can poke around free of charge. On top of that, you can visit educative.io slash jabber to get 10% off any course or subscription. Check it out today. 
 
 
DAN_SHAPPIR: So, before I start going down the list, any comments or questions? Okay, then let's run down this alphabet soup. So, when reviewing the different performance metrics, I like to proceed more or less sequentially from the session start, as time progresses. Roughly, this translates to first looking at the metrics that measure the network performance, then to metrics that cover visibility, and finally to the metrics that deal with interactivity, because usually that's how things go. First, the stuff needs to be downloaded over the network, from the web server or whatever. Then the stuff becomes visible, the browser renders the various elements. And finally, we hook up the various event handlers and the JavaScript finishes running, so that the site can actually respond quickly to user interaction. So, starting with the network stuff, as I said, one of the first metrics to look at is Time to First Byte, or TTFB. Basically, that's the time from when the user, let's say, clicked on a link in a Google search for a particular website, and until the first byte is received by the browser from that website. So it incorporates stuff like the DNS lookup, to figure out the IP address of the server you're connecting to from its domain name. It involves establishing a TCP connection, if you're not yet using QUIC. And if it's, you know, a secure connection, as it should be, there's an SSL handshake that needs to happen. 
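Those TTFB components can be read out of the browser's Navigation Timing entry. A sketch, using the standard PerformanceNavigationTiming fields:

```javascript
// Break TTFB down into the phases Dan lists: DNS lookup, TCP connect, and
// TLS handshake, from a PerformanceNavigationTiming entry.
function connectionBreakdown(nav) {
  return {
    dnsMs: nav.domainLookupEnd - nav.domainLookupStart,
    tcpMs: nav.connectEnd - nav.connectStart,
    // secureConnectionStart is 0 for plain-HTTP or reused connections.
    tlsMs: nav.secureConnectionStart > 0 ? nav.connectEnd - nav.secureConnectionStart : 0,
    ttfbMs: nav.responseStart - nav.startTime,
  };
}

if (typeof window !== "undefined" && "performance" in window) {
  const [nav] = performance.getEntriesByType("navigation");
  if (nav) console.table(connectionBreakdown(nav));
}
```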
 
AJ_O’NEAL: You just mentioned QUIC. I was under the impression that that was still very experimental and very few web servers had something in place to be able to handle it. Like for example, I don't think Node can handle it. And I don't. I don't know about the Apache. 
 
DAN_SHAPPIR: No, it can probably handle everything, anything. Somebody probably wrote a module for it, I wouldn't be surprised. But if, for example, you're using the Google CDN to deliver your static assets, they use QUIC, and if your browser supports QUIC, they will use QUIC. 
 
AJ_O’NEAL: of course, because they invented it, but like the average, 
 
DAN_SHAPPIR: But it's now really part of HTTP/3. So eventually we will get it everywhere. It's not that TCP is bad. TCP is great. The web was built on TCP, the internet was built on TCP, and the web is built on the internet, so it's built on TCP. But the behavior of our networks has changed since the 60s and 70s, whenever TCP was invented. So, stuff like, I'm not going to go into that, because these are really big topics, and our listeners can definitely find great talks about this stuff on YouTube, for example, or articles they can find if they Google for it. But QUIC is basically built on UDP instead of TCP, and it just appears to be more appropriate to modern networks than TCP is. For example, TCP starts with a fairly small window, which then grows as it sees the quality of your network connection. So it's kind of throttling itself for the first packets, because back in the day, networks were much slower than they are today. And QUIC does this better. So, in any event, TTFB, just as I said, talks about how quickly the bytes start arriving at the browser after the user clicked the link. And it mostly has to do with, for example, just making sure that you place your server as close as possible to where your users are. So we were talking before about the fact that you, AJ, have a lot of visitors from India. So maybe it makes sense for you to also have a server in India. It can be a virtual server, of course. Or just making sure to use some sort of a CDN. If you look at solutions like Gatsby.js or Next.js or Netlify, all of these guys, all these companies, they actually have solutions where you can statically generate your HTML files and then push them onto CDNs, which makes the delivery of these files down to the browsers a lot faster and more efficient, because they're just that much closer to the actual user devices. The request needs to go through far fewer hops to actually get to the data. 
So we had time to first byte. And then we have kind of a time to last byte, which is when the HTML finished downloading. That's not such a common metric to use. More commonly, people use something called DOMContentLoaded, or DCL, which means that the HTML has loaded and has been parsed. You can actually have a JavaScript event handler for that. For those of us who remember jQuery, jQuery's ready handler actually fires at DCL. So when you look at something like the CrUX report and they show you the TTFB metric, let's say they also show you the DCL metric. This way you know when your HTML started arriving and when your HTML finished arriving and also finished parsing, which is also kind of dependent on the size and complexity of your HTML. It used to be that the main metric for performance was onload, and now it's hardly used anymore. It's not that interesting. That's because modern websites are so dynamic, with single-page applications and JavaScript that keeps on running and downloading more and more stuff even after the page is loaded. So onload is not really that useful anymore. But just to touch on it, the main difference between DCL and onload is that DCL just looks at the HTML itself, and onload also waits for all the resources that were requested while the web page was still downloading. So for example, if you have an image tag inside the HTML, and that image tag starts downloading an image as soon as it's received by the browser, then onload will also wait for that image to finish downloading before it's fired. If, on the other hand, you have some JavaScript that runs later and creates an image element, or assigns a new image URL to an existing image element, and that JavaScript runs after the HTML finished downloading, it's not actually counted for onload. So that more or less covers the network part of the performance metrics, which moves us over to the visual parts, or the visible parts. The first metric there is something called First Paint, or FP, which is just when a pixel changed as a result of the HTML. I don't know if you've noticed, but it used to be that when you started navigating to some webpage, let's say again from a Google search, the page would immediately become white, and then eventually you would get the actual page that you were navigating to. Browsers have kind of changed their behavior there. What they do now is keep the old page content around until they actually get the HTML from the new page, start parsing it, and are able to render it. So there's more continuity between pages. This is especially useful if you have a multi-page application for your site. As you're moving between the pages in your site, you don't see those white gaps in between the pages. They're not as obvious. But still, once the HTML arrives, the previous content is erased, you get that white background, and then pixels start to be drawn. And when the first pixel is drawn, that's First Paint. It will often be something like a background color or whatever, so it doesn't provide a whole lot of value, it just shows the visitor that something's happening. Because of that, this is not a metric that's so commonly used. A much more commonly used one is something called First Contentful Paint, or FCP, which means that the first pixel of some content is drawn. And content is an image, or text, or an SVG, or a canvas. Those are basically the things that count as content. 
So if it's just a pixel that's part of a background color, that's not counted for First Contentful Paint, but if it's part of an image, then it is counted. Often, though, I see that FP and FCP are pretty close to each other, maybe even the same, if the first pixel drawn is part of an image, for example. One thing I would say is that if you're considering text, it's often not enough that the HTML with the text has arrived; you actually need to wait for the font to download. Because unless you use the newish font-display CSS property, Chrome, for example, if you've got a custom font associated with the text, will not show the text until either the font has arrived or some internal timeout is reached, in which case it will go to a fallback font. You can now tell the browser to immediately go to the fallback font, but then that can cause a sort of flash effect when the browser switches fonts. So which one is preferable to you depends on your own design considerations. 
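In the browser, FP and FCP are exposed as 'paint' performance entries. A hedged sketch; the helper function is hypothetical, but the entry names are the standard ones the browser reports:

```javascript
// Pull first-paint and first-contentful-paint timestamps out of a list
// of 'paint' performance entries. The helper name is ours; the entry
// names are the standard ones.
function paintTimes(entries) {
  const byName = {};
  for (const e of entries) byName[e.name] = e.startTime;
  return {
    fp: byName['first-paint'],
    fcp: byName['first-contentful-paint'],
  };
}

// Browser usage sketch:
// new PerformanceObserver((list) => {
//   console.log(paintTimes(list.getEntries()));
// }).observe({ type: 'paint', buffered: true });
```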
 
AJ_O’NEAL: That's something I'd say I see like every day, the font flash. And it kind of feels amateurish to me. Feels like I'd rather you just picked Helvetica. I would not have known. 
 
DAN_SHAPPIR: Yeah, but you're not a designer. 
 
AJ_O’NEAL: No, I like, I love fonts. I love them, like, but you have to have some sort of value being brought with a font. And most of these sites, I don't think that they really have any value being brought with the font. Like if they had a designer that chose that font on purpose, great, but I don't think that's the case with most of these sites. I think most of them is just like, Oh cool, Google fonts, let's pick one. And it's not, 
 
AIMEE_KNIGHT: But yeah, I guess I would say too, I mean, this is something that I'm hitting where I'm at right now. Like, to AJ's point, I think this is where dev and design/UX really need to work together, because branding is important, but you don't want it to come at the cost of page speed, because of how closely that's tied to revenue. 
 
AJ_O’NEAL: And I see stuff fail all the time, because it seems to me, like if you just pound refresh on any site that's loading Google Fonts, if you let it reload like 20 times, it'll fail at least once. That seems to be a very fragile area of the web, the font loading. 
 
DAN_SHAPPIR: Yeah, but look, there are workarounds for this sort of thing. Like I said, there's a CSS property which can actually let you control the behavior, at least somewhat. So you can either have it wait a little bit for the font that you want, but then go to the fallback font if downloading the font takes too long; or it can immediately go to the fallback font and then switch to the custom font when the custom font finally arrives. There's even the option of having it use the fallback font for the first session, download the custom font and keep it in the browser cache, so that the next time you visit, it comes from the cache, arrives immediately, and the browser uses the custom font. So the first time you visit a certain site, you'll get the fallback, but for every consecutive visit, you'll get the custom font. Look, it's like Aimee said, it's a part of the brand. Marketers and designers agonize over this sort of thing. They will tell you that they want to convey the spirit and the intent and whatever, and it has to be on brand. And it's not something that we can just say, oh, you know, bah humbug, and ignore. 
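The property being described here is font-display. A sketch of the three behaviors Dan walks through; the font name and file path are placeholders:

```css
@font-face {
  font-family: "BrandFont";                       /* placeholder name */
  src: url("/fonts/brand.woff2") format("woff2"); /* placeholder URL */
  /* fallback: wait briefly for the custom font, then use the fallback
     swap:     show the fallback immediately, switch when the font arrives
     optional: use the fallback for this visit; the downloaded font sits
               in the cache so subsequent visits get the custom font */
  font-display: optional;
}
```

Which value is right depends on the design trade-off discussed above: `swap` favors showing text fast at the cost of the flash, `optional` favors visual stability at the cost of the first visit's branding.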
 
AIMEE_KNIGHT: But if it comes at the cost of conversion, then we don't have jobs to design things. 
 
DAN_SHAPPIR: You can also, like I said, there are various workarounds that you can do. You can use preloading to try to get the fonts down faster. You can even go as far as inlining the font as data URLs inside the HTML itself. There are various tricks and hacks that you can use, but at the end of the day, it's all about compromises. That's our life as developers and people working in tech. Continuing from First Contentful Paint, the next one is First Meaningful Paint. And that's when the browser paints the pixels that actually have a meaning for you as a site visitor. For example, if you're visiting, let's say, a weather website, then the pixels that you care about are the pixels that show the temperature and whether it's going to rain or not. You don't really care about the ads. You don't even care about the logo of the site. All of those pixels count for First Contentful Paint, but they're not meaningful for you. The problem with First Meaningful Paint is that, at the end of the day, it's really subjective. Certainly generic tools like Google Lighthouse don't really know what's meaningful in any website out there. They can try to guess, there are certainly heuristics around it, but any heuristic is really problematic in this context. Because of this, First Meaningful Paint is kind of being deprecated in favor of a new metric, which I'll get to in a second. I just want to say, though, that if, for example, you know that a particular image on your website is the meaningful content, you can just put an onload on that image and measure the meaningful paint of that image yourself. So like I said, because of this problem with meaningful paint, Google recently proposed to the W3C Web Performance Working Group a new measurement called Largest Contentful Paint, or LCP. 
Basically, they figured out that the most meaningful content is probably going to be the biggest content on the page. So the biggest image or the biggest headline, that's probably going to be the most important one. Largest Contentful Paint is basically just an event that you can get from the Web Performance API that fires whenever something bigger is drawn than whatever was drawn before, which is a really powerful mechanism, but that also highlights its limitation. First Contentful Paint is accurately defined: a particular pixel is the first pixel, and any pixel drawn afterward is not going to be the first pixel. Largest Contentful Paint raises the question of when do I stop? If something big is drawn, maybe something even bigger is still being downloaded and will be drawn later. And sometimes things are intentionally delayed. By the way, all of the stuff that I'm talking about, all these painting operations, I neglected to mention, they're for stuff that is above the fold. So anything that is below the fold doesn't count for FP, FCP, or... 
 
AJ_O’NEAL: What does that mean, above fold, below fold? 
 
DAN_SHAPPIR: So you know how, maybe if we're old enough to remember, newspapers were once really, really big. When our dads used to go to the bathroom with a newspaper, they would have to fold it so that they could hold it while they were sitting there comfortably. And as a result, when newspaper people thought about where to put the most important content, they wanted to put it above where people used to fold the newspaper. So the most important stuff was at the top of the front page, above where people would fold the front page. The less important stuff was below that fold. Even less important stuff was on the back of the newspaper, and the least important stuff, that you really didn't care about, was on some center page. So above and below the fold in browsers now mean the content that's visible when the page loads, before the user does anything like scroll or zoom out or whatever. What's initially visible in the browser's viewport, that's the content that's above the fold. Anything that you need to scroll down to get to, that's below the fold. A long story for a fairly simple term. 
 
AIMEE_KNIGHT: I was just going to add really quickly to the Largest Contentful Paint. I don't think that's something PageSpeed Insights is reporting on just yet, because it's part of Lighthouse v6, but if you go to web.dev, they have code provided to you that you can inject into basically the render pipeline of your app, and it will tell you what the largest node is. 
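The kind of snippet being described boils down to a PerformanceObserver for largest-contentful-paint entries. A hedged sketch; the helper is hypothetical, and it relies on the fact that the browser emits a new entry each time a bigger candidate is painted, so the last entry is the current one:

```javascript
// The last largest-contentful-paint entry reported is the current
// largest candidate; pull its timestamp from a list of entries.
function lcpTime(entries) {
  const last = entries[entries.length - 1];
  return last ? last.startTime : undefined;
}

// Browser usage sketch:
// new PerformanceObserver((list) => {
//   console.log('LCP candidate at', lcpTime(list.getEntries()), 'ms');
// }).observe({ type: 'largest-contentful-paint', buffered: true });
```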
 
DAN_SHAPPIR: Actually, even PageSpeed Insights already collects this information, they just don't display it yet. So if you use the PageSpeed Insights API, you will actually get the Largest Contentful Paint. But yes, they've announced for v6, which was already supposed to come out, I think, but it'll be there when it gets there, that they're kind of eliminating First Meaningful Paint, which is still there, and they'll replace it with Largest Contentful Paint. So going forward, that's the metric to use. And like I said, it's actually already supported in the Web Performance API. So I assume that the various tools that you can use to gather performance information from real user sessions will also start reporting Largest Contentful Paint. There are two more measurements that I wanted to mention in the context of visibility, one older one and one newer one. The older one is something called Speed Index, or SI. That's not one that Google invented. That's one that they inherited. Think about it this way. Let's say you have a page that loads and needs to render some really complex scene. It can work one of two ways. You can have nothing, nothing, nothing, nothing, and then bam, the entire scene. Or you can have the scene build up gradually. Let's say they finish at the same time: in one case there was nothing and then suddenly you got everything, and in the other case it built up gradually. From the performance perspective, building up gradually is preferable, because the user has something to see, hopefully something that's contentful and maybe even meaningful, while the page is building up. You know, one of the things that really annoys me about some news apps, even native apps, is that they don't show the content at all until after the ads are loaded. They have a spinner or something, so that until everything in the page is downloaded, they don't show you anything. 
And that's really annoying to me, because I'm waiting on content that I don't care about, while I could have already been reading the content that I do care about. So in terms of performance, you do want that gradual buildup. And that's what Speed Index measures. It kind of takes screenshots of the browser as it builds up the display and sees how much content is available at each point in time. And based on that, it gives you a score.
And because of that, because of this whole process of taking screenshots and whatnot, the Speed Index or SI score is really only relevant for synthetic tests, because you can't really take those screenshots in real user sessions. But in Google Lighthouse, for example, that's one of the measurements they currently use. A newer measurement that they recently introduced, and one that can be measured in the field, is called Cumulative Layout Shift. I'm sure you've all encountered the situation where you were reading something, and then, because something additional was downloaded, everything got pushed down, and suddenly you lost your spot in whatever it is that you're reading, because something on top just finished downloading. It can be even worse if you're trying to click on something. In the document that explains this measurement, they actually have this animated GIF or video that shows a really amusing scenario where somebody is trying to cancel a purchase, but because something appeared and pushed everything down, they click the purchase button instead by accident, because it got pushed down the instant that they clicked. So obviously, you want to avoid that. So, Cumulative Layout Shift, or CLS for short, is a measurement that combines how much area of the screen is being shifted with how far it is shifted. It's a combination of these two values, and you want it to be as small as possible. You don't want stuff jumping around as the user is going through your page. 
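CLS as described reduces to a running sum over layout-shift entries, skipping shifts caused by recent user input. The helper name is ours; the entry fields are the standard ones, and each entry's value already combines the "how much area" and "how far" factors mentioned above:

```javascript
// Sum layout-shift scores, excluding shifts that happened right after
// user input (the browser flags those, since they are expected).
function cumulativeLayoutShift(entries) {
  return entries
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}

// Browser usage sketch:
// new PerformanceObserver((list) => {
//   console.log('CLS so far:', cumulativeLayoutShift(list.getEntries()));
// }).observe({ type: 'layout-shift', buffered: true });
```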
 
AIMEE_KNIGHT: So just recently, and thanks to Dan, I've been able to chat with him and get his insight, which I immensely appreciate. Some of the tricks that I've been doing: the first one was, because I work for an e-commerce company, we have a lot of different tracking pixels and all kinds of stuff. We also have a chat box that is pretty heavy on initial page load. I couldn't use requestIdleCallback because it wasn't available. What I did do, since that was a huge chunk blocking the page from being usable, is delay it to only load once the user has actually interacted with the page, so that the page itself can load more quickly. Another thing: just make sure you're auditing all your caching headers, especially if you're using third-party services. We at work use Zeit, and the app was not always on Zeit, so we had some headers that were actually interfering with the default caching that Zeit gives you. And the last thing that I've been working on, which seems a little bit more creative: because we're building out multiple properties, we have some reusable chunks of React code in a monorepo that aren't necessarily needed on the page. So using dynamic imports in JavaScript, I'm only loading those things if a certain user interaction requires them, and that's also cut things down pretty drastically. 
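The delay-until-interaction trick described here can be sketched as a memoized loader plus one-shot event listeners. All names below (the chat widget module, the event list) are hypothetical placeholders, not the actual code being discussed:

```javascript
// Wrap an expensive loader so it runs at most once, no matter how many
// events fire. The loader (a dynamic import, a script injection) is a
// placeholder for whatever heavy work is being deferred.
function lazyOnce(loader) {
  let promise;
  return () => (promise ??= loader());
}

// Browser usage sketch, deferring a hypothetical chat widget until the
// user first interacts with the page:
// const loadChat = lazyOnce(() => import('./chat-widget.js'));
// ['pointerdown', 'keydown', 'scroll'].forEach((type) =>
//   window.addEventListener(type, loadChat, { once: true, passive: true })
// );
```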
 
DAN_SHAPPIR: So basically you're doing tree shaking and lazy loading of stuff only as needed. By the way, I completely agree with you about the caching. One of the easiest or quickest wins is just making sure that all your resources have the appropriate cache headers. I've seen so many cases in which stuff that's totally static is either not cached or cached for a really short duration. Another quick win that's similar to that is making sure that all your downloaded content is compressed. I mean, every browser these days supports Gzip, and even Brotli for even greater compression. So downloading your HTML or JavaScript or CSS without compression is just, like, sad. There's no reason not to take advantage of that. But yeah, I totally concur with you. There are a lot of wins that can be achieved. We spoke with Bruce Lawson a couple of shows back about semantic HTML. I forget the number of the show. You can check this out later. But he was talking about the picture element, for example, as a means of downloading appropriately sized images. Because, let's say, the mobile screen is going to be smaller than the desktop screen, so why would you want to download a huge image for your mobile device? You can use media queries to download properly sized images. Using that, you don't even need JavaScript. One thing, Aimee, that you discussed on a couple of shows is the ability that now exists in some browsers to lazy load images, so that images that are below the fold don't download until the user actually scrolls to them. And all you need to do in order to get that is just to add an attribute to your image tag. You don't need to write any JavaScript or anything for it. So yeah, there are a lot of wins that can be achieved without too much effort, especially if you know what to look for, and then you can measure where you were before these changes and where you are after them. 
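On the cache-header point, here is a sketch of the kind of policy being described, under the assumption that your build tool fingerprints asset filenames with a content hash. The function and the hex-hash regex are ours, not any framework's API:

```javascript
// Choose a Cache-Control value: fingerprinted assets (e.g. app.3f9a1c2b.js)
// are safe to cache "forever" because a new deploy produces a new name,
// while HTML should be revalidated on each visit so deploys show up
// promptly. The hash heuristic is an assumption about your build tool.
function cacheControlFor(path) {
  const fingerprinted = /\.[0-9a-f]{8,}\./i.test(path);
  return fingerprinted
    ? 'public, max-age=31536000, immutable' // one year
    : 'no-cache';                           // store, but always revalidate
}
```

You would apply this in whatever server or CDN configuration you use, alongside enabling Gzip or Brotli compression for text resources.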
And, you know, if you're looking for a raise, for example, it's really useful to have a graph that shows how much you improved something the next time you're negotiating your salary. 
 
AIMEE_KNIGHT: And I guess, yeah, since I'll have to hop off, my pick is just going to be the web.dev docs. Not a lot of people know about them necessarily, and they're absolutely amazing. 
 
DAN_SHAPPIR: Yeah. 
 
AIMEE_KNIGHT: I pick Dan as well. 
 
DAN_SHAPPIR: Thank you very much for that. 
 
AJ_O’NEAL: And two seconds, just want to point out something. You can use picture now. It is backwards compatible with other browsers, because your default image inside of your picture tag is an img tag, which means that users that don't have supporting browsers will have no deficit, and all users that do have a supporting browser will get the benefit. 
 
DAN_SHAPPIR: And supporting browsers are, like, any browser from 2016 on; you will definitely get a lot of benefit from stuff like that. But I totally agree with you, Aimee, about web.dev. In fact, more or less all the terms that I'm talking about have definitions on web.dev. So if somebody wants to reread a definition, or wants to see images or graphs that show what these terms are, or needs more clarification, they can definitely find it on web.dev. It's an excellent website. So I guess I'll resume. I've finished, more or less, the visual metrics, so now it's time for the metrics that measure interactivity. The most explicit one, you could say, is simply called Time to Interactive, or TTI, which looks at when the page becomes consistently interactive. And by consistently interactive, it means that you can expect a reasonable reaction: that all the elements that should respond to user input have the event handlers associated with them, and that they'll respond relatively quickly to that user input. The problem is how to measure that. Now, Google has a fairly complex and sophisticated heuristic algorithm inside of Lighthouse. I have to say that I'm not such a huge fan of that algorithm, and I think it can be improved. And maybe I'll do that, or at least I'll propose an alternative to the existing algorithm. Let's see how they respond to that. But in any case, it's really only appropriate for synthetic tests, because measuring it in the field is kind of problematic. For example, if the user interacts with the page, that actually impacts the measurement, so you might get a wrong TTI value simply because the user interacted with the session. That makes this measurement kind of problematic, and like I said, it's mostly used in synthetic tests. A newer measurement is something called TBT, or Total Blocking Time. For this measurement, recall that I talked about long tasks. 
Those are those segments where the browser is busy for more than 50 milliseconds, which means that if the user does some sort of interaction during them, it's highly likely that we won't be able to respond in under 100 milliseconds. TBT looks at all the long tasks that happen up until Time to Interactive, looks at how much longer than 50 milliseconds each of them was, and sums this all up. So it's again a heuristic value that gives some sort of measurement of how likely or unlikely the web page is to quickly respond to user interactions. And the last measurement that I want to talk about, because I really went through a whole lot, is again a relatively new one called First Input Delay, or FID. That's a measurement that's intended for the field. It looks at when the user interacted for the first time with something in the session. For example, if there's a button, when the user clicked on that button. So it looks at when the first interaction occurred relative to the start of the session, and how long it took the browser to respond to that interaction; by respond, they're looking at how long it took until the browser could actually begin processing that interaction. So there are three things you can get from FID. How many sessions actually have an FID: you can say that those that don't have an FID, that's your bounce rate. How long into the session FID happened: because people probably don't interact with your site before it's visually sufficiently complete for them to interact with. And the length of that delay until the browser responded to the interaction. And as I keep harping, you want that to be under 100 milliseconds, or as close to that as possible. 
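Both TBT and FID as described reduce to small computations over performance entries. A hedged sketch with hypothetical helper names; only the entry fields are standard:

```javascript
// Total blocking time: for each long task, count only the portion that
// exceeds the 50 ms budget, and sum those excesses.
function totalBlockingTime(longTaskEntries) {
  return longTaskEntries.reduce(
    (sum, task) => sum + Math.max(0, task.duration - 50),
    0
  );
}

// First input delay: the gap between the user's first interaction and
// the moment the browser could begin processing its event handlers.
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// Browser usage sketch for FID:
// new PerformanceObserver((list) => {
//   for (const e of list.getEntries()) console.log('FID:', firstInputDelay(e));
// }).observe({ type: 'first-input', buffered: true });
```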
So those are the values that you can get from FID. The final thing that I wanted to say about metrics is that you can also create custom performance metrics for yourself that match your own particular use case. For example, there's a well-known story that when Twitter went through their big performance push, they created their own custom metric called Time to First Tweet, where they measured how long it took from when the page started until the first tweet became visible. For them, obviously, when you visit Twitter, that's what you really care about: how quickly you see the top tweet in your feed. So that's the thing that they measured. But let's say you're building an online store; then you may decide that one of the most important metrics for you is how long it takes until the visitor can make a purchase. For example, when does the Buy Now button become interactive and quickly responsive? That might be your metric for deciding how well your web page performs. Or again, you can use a whole combination of all the various metrics that I mentioned, along with your custom ones. Now, just to finish off, if you use a tool like Lighthouse or PageSpeed Insights, then you get this sort of score. And the score is just a weighted average of some of these metrics. They use a certain combination in Lighthouse version 5, which is what you currently get. But like I said, they're about to release Lighthouse version 6, and they intend to make some changes there, use different metrics and different weights. So it's likely that we'll see, all of a sudden, different scores for anybody who's using these tools when they do that. We will see if it's more accurate. I certainly hope so. That more or less covers all I had to say. I know that I said quite a bit. 
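Custom metrics like the Time to First Tweet described above can be sketched with the User Timing API. All the mark and measure names below are made up for illustration, not Twitter's actual code:

```javascript
// Record a custom milestone and measure the time from a start mark to it.
// Newer browsers (and Node) return the PerformanceMeasure directly from
// performance.measure(); older browsers would need getEntriesByName().
function recordMilestone(startMark, milestoneMark, measureName) {
  performance.mark(milestoneMark);
  const m = performance.measure(measureName, startMark, milestoneMark);
  return m.duration;
}

// Usage sketch: mark the start of the page, then mark when the hero
// content (the "first tweet" equivalent) becomes visible:
// performance.mark('page-start');
// ...render the hero content...
// const ms = recordMilestone('page-start', 'hero-visible', 'time-to-hero');
```

Marks and measures recorded this way also show up in DevTools performance traces and can be shipped to your analytics alongside the standard metrics.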
 
AJ_O’NEAL: All right. Enough said, certainly. Well, thanks, Dan. Thanks for that in-depth explanation. It was really well prepared and researched, and I think there was a lot of value there, so I hope that our listeners got that as well. As always, feel free to tweet at us with more questions and comments. We'd love to come back around to some of these topics and discuss them again with more information. So don't forget to check us out on Twitter. We're at JS Jabber these days. 
 
DAN_SHAPPIR: Yeah. And also, this topic is definitely my passion, and like I said, it's what I do day in, day out. So if anybody wants to reach out to me, the best place to reach me is also on Twitter. I'm just Dan Shappir on Twitter. So feel free to reach out. 
 
AJ_O’NEAL: That's with two P's and no E at the end. 
 
DAN_SHAPPIR: Correct. 
 
Are you stuck at home climbing the walls when you should be hanging out with a community at the latest conference to get cancelled? Are you wondering where to hear your JavaScript heroes like Amy Knight and Douglas Crockford and Chris Heilman? After the cancellations, I decided to put on a JavaScript conference for you online. I invited my favorite folks from around the web and got them to come speak at an online event just for you. Go to jsremoteconf.com and check out our speakers and schedule. The conference is on May 14th and 15th. The call for proposals is open until March 31st. Come join us at an online conference that we guarantee will keep you safe and keep you informed. jsremoteconf.com. 
 
 
AJ_O’NEAL: All right, well thanks very much for being our guest today and do you have any picks for us?
 
DAN_SHAPPIR: Yes, I actually do have a few picks, actually just two, really. The first pick is that a few days ago I found out that there's this new accessibility feature in the Chrome DevTools. I want to highlight that you can actually simulate vision deficiencies from within Chrome DevTools. So if you want to see how people who are, let's say, colorblind or have blurred vision or whatever perceive your website, you can now actually simulate that and see it for yourself. Right now, it's not in the production version of Chrome; you can find it in Canary. You just open the DevTools, go to the bottom of the Rendering tab, and you will find a dropdown there where you can choose what kind of vision deficiency you want to check, and just see how the browser window adjusts. It's really, really cool and really, really useful, and I'm really happy that Google has done that. So that's my first pick. And for my second pick, you guys keep picking TV shows, and I never did, so I decided to pick one as well. The one that I want to pick is one that I'm really, really enjoying right now. I really love it. It's Better Call Saul. I was a huge fan of Breaking Bad. It's up there, I would guess, in my top three shows of all time. And Better Call Saul is just awesome. When it started, I was really worried that they couldn't live up to Breaking Bad, but they have. It's really different. The pace is completely different, and the characters, there's a lot of similarity, but the main character is certainly different from the main character of Breaking Bad. It's a totally different personality, but the show is still just so great. I'm really, really loving it. I'm really enjoying it. And I can't recommend it highly enough. So those are my picks. 
 
AJ_O’NEAL: Awesome. So in terms of media, I have got to pick Brandon Sanderson's The Way of Kings. I haven't watched that dirty, filthy soft porn show myself, but some people say it's practically a similar idea to Game of Thrones, but in a family-friendly way, where, you know, yeah, there's a little bit of blood, but there's no lewdness to it. So I can't say for sure, but it seems like the two have some level of comparison. I just love it, oh my gosh. And Brandon Sanderson is just such an amazing author. I would have never picked fantasy as a genre that I was interested in, but the way that Brandon Sanderson writes is so methodical and logical that the fantasy he writes feels more like science fiction.
 
DAN_SHAPPIR: It's interesting that you use that as the comparison. I've read it as well, and I've also enjoyed it a whole lot. I just want to make sure people are aware that the series is not yet complete. I think he's supposed to write quite a number of additional books there. Hopefully this won't end up like what's going on with Game of Thrones, where, you know, it never actually happens and we're kind of stuck. 
 
AJ_O’NEAL: So, Brandon Sanderson has a good track record of rotating through his novels and completing them. It may be a decade before the series is complete, because it seems like this one is his masterpiece, something he's been working on here and there, letting it float around in the back of his head since, like, the 90s. And he's finally gotten it to a point where, over the last several years, he's been publishing. But I have faith that Brandon Sanderson will actually complete his works, because he's demonstrated that in the past. And he's talked about his methodology of how he writes his books. From what I understand, he writes the outline of the series so that the plot points are developed, and then starts writing each book and revises as he goes. So he makes a point of talking about how he knows where the story is going before he publishes the first book, which I think is not the case with some other authors; The Name of the Wind comes to mind. 
 
DAN_SHAPPIR: Yeah, he's stuck. I understood that he wrote the book and then threw it away or something 
 
AJ_O’NEAL: Supposedly the third book, he wrote it and didn't like it and is revamping it. But I don't know, it started so strong. The Way of Kings is what I wanted out of The Name of the Wind, really. That's what I thought I was getting into. It has a very similar vibe and feel and storytelling technique, jumping back and forth between time periods. So to me, the books feel very similar, except that with The Way of Kings, I just have confidence that the whole thing is going to come to a conclusion, whereas The Name of the Wind just started becoming tangent after tangent after tangent. And then it's like, okay, well, even if you could wrap this all up, does it mean anything anymore? 
 
DAN_SHAPPIR: So, yeah, I know what you mean. It's kind of like Lost, the TV show, in a sense: you create this really amazing yarn, but then you have a problem pulling all the threads together.
And another issue that I have, even with that book, is that by the time the next book comes out, so much time will have elapsed that I might start forgetting the plot. 
 
AJ_O’NEAL: I don't know what the plot is. My wife and I started reading it last year, and then I kind of felt like, I don't know if this guy's going to deliver, so I became less invested in it. She read through the second book on her own and kind of gave me the recaps, and every time she told me what was going on, I'm like, this just seems like it's getting less and less focused. Anyway, not to bag on it, because I think the premise was awesome and, you know, he could tie it up really well. Maybe the execution of the third book is why he's waiting on it so long, to get it just right. So maybe he'll just nail it and hit it out of the park. Hopefully he will. 
 
DAN_SHAPPIR: Yeah, one last thing I'll say about that is that it's set in the same universe as his other books, the so-called Cosmere. If anybody wants to go through a series that he did start and then finish, they can look at his excellent Mistborn series as well. 
 
AJ_O’NEAL: Yeah, Mistborn is a wonderful work. And in some ways I prefer Elantris, which I think was more his entry into his own... you know, like his first popular book that he did himself. But Elantris is very interesting. It takes like five chapters to get into, because they rotate through the characters. It takes you those five chapters just to understand, oh, this is the place that we're in, and this is how these characters relate to each other. And you can start to imagine a path where they start converging, that it's not just three separate stories, but for the first few chapters it's just three completely different stories. That was a new way of storytelling for me. The Way of Kings kind of bounces back and forth. It's not methodical, like this chapter is about this character, this chapter is about this character, but it does bounce back and forth, where it's like, here's two chapters about this character, now here's a chapter about this character. We're gonna skip this other character for a little bit, because nothing's going on in their world right now. We've said all we need to say about them, and then we come back to them ten chapters later. 
 
DAN_SHAPPIR: So I'll throw in another retroactive pick. If you've listened to my other picks throughout all the episodes of JS Jabber that I've participated in, I've actually described various fantasy classics, fantasy books that I really, really love. So if you go back, or even just look at the picks section on the JS Jabber website, you will find quite a number of really excellent fantasy books, most of them complete series, that I would definitely highly recommend you check out. So if you're into that kind of stuff, you know, please do.
 
AJ_O’NEAL: And then, you know, since I haven't babbled on long enough, I'm also gonna pick Taco Bell. If you don't have a Taco Bell in the country where you live, let me tell you, you are missing out on the finest American misrepresentation of Latin food that you could possibly be missing out on. Ah, 97-cent burritos have never tasted so good. And I just can't stomach the authentic stuff in comparison. I really don't like most Latin food, but I do love Taco Bell. And there are a couple of American-Latin fusion restaurants that I do really like, because, you know, they do the things that Americans like: they have nice presentation, and they put mole sauce on things that maybe mole doesn't traditionally go on. Which, by the way... yes, yes, I forget how to say chicken, but you know, the chicken fingers with the vinegar tomato sauce. Anyway, that's all. Thanks for listening in. Thanks to everybody that was on the show today that has already left now. 
 
DAN_SHAPPIR: Yeah, and we wore them out, we wore them out, we wore them down. 
 
AJ_O’NEAL: We'll catch you next time on a new episode of JS Jabber! 
 
DAN_SHAPPIR: Bye! 
 
AJ_O’NEAL: Adios! 
 
Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. To deliver your content fast with CacheFly, visit C-A-C-H-E-F-L-Y.com to learn more.