Opinionated Core Web Vitals - JSJ 647

Show Notes

Dan Shappir takes the lead this week to discuss Core Web Vitals and how Google is pushing the web to be faster.
He leads Chuck, Aimee, and AJ through the ways that developers can measure and improve the performance of websites based on the statistics specified by Google as components of Google rankings.


Transcript


CHARLES MAX_WOOD: Hey everybody and welcome back to another episode of JavaScript Jabber. This week on our panel, we have AJ O'Neal. 

AJ_O’NEAL: Yo, yo, yo. Coming at you live from Pleasant Grove. 

CHARLES MAX_WOOD: Aimee Knight. 

AIMEE_KNIGHT: Hey, hey from Nashville. 

CHARLES MAX_WOOD: Dan Shappir. 

DAN_SHAPPIR: Hi from Tel Aviv. 

CHARLES MAX_WOOD: I'm Charles Max Wood from devchat.tv. And Dan, you were kind of filling us in a little bit before the show talking about Core Web Vitals, is that what that's called? 

DAN_SHAPPIR: Yeah, that's what I thought would be interesting to speak about. It's not like we haven't spoken about it before. I actually was a guest on a previous show that was all about the alphabet soup of performance metrics, like all the different acronyms. And later on, we had Rick Viscomi from Google talking about how they collect this information. And then we had Martin Splitt from Google talking about how they use this information, at least to an extent. But I still think that there's a lot to discuss here and to take apart, especially given that I'm going to take this kind of an opinionated view of this whole thing, very non-corporate. So everything that I'm saying is my own opinion and has nothing to do with my employer or anything like that. Hopefully I won't get into any trouble. 

 

This episode of JavaScript Jabber is brought to you by DigitalOcean. DigitalOcean recently announced their new App Platform service, which is a solution to build modern cloud-native apps. With App Platform, you can build, deploy, and scale apps and static websites quickly and easily. Simply point to your GitHub or GitLab repository and let App Platform do all the heavy lifting, as it has support for Node.js, Python, Go, PHP, Ruby, static sites, Docker, and container images. DigitalOcean runs App Platform on their own infrastructure, so your costs are significantly lower than with other products. It's built on top of DigitalOcean Kubernetes, providing a smoother migration path so you can take more control of your infrastructure setup. As a listener of JavaScript Jabber, you can get started for free. Better than free, because DigitalOcean is giving you $100 credit when you go to do.co slash jabber. Again, go to do.co slash jabber to get your $100 free credit on DigitalOcean's new App Platform. We want to thank DigitalOcean for sponsoring this episode.

 

CHARLES MAX_WOOD: Yeah. So what are Core Web Vitals? I think that's a good place to start.

DAN_SHAPPIR: Well, yeah, that's a good place to start. So it turns out that Google thinks that the web should be faster. That is, when people visit websites, the websites should load faster, should respond faster, and overall provide a better experience to those visitors. I like to say that we are kind of lucky that the good of the web, not always, but in many cases, aligns with Google's bottom line. They make most of their money, most of their revenue, from ads. I think Google is the biggest ad company in the world or something like that. Maybe Facebook might be giving them a run for that position. But other than that, I think they're the biggest by far. And most of that, not all of it, but most of that, I think, is coming from ads on websites. So Google has this incentive that more people browse the web, because then more people would be exposed to their ads and Google makes more money. And that means that they want a better web, a web that people spend more time on. And for me, as a proponent of the open web, that's really a good thing. So I'm really happy that this alignment has happened, and whatever their motivation, as long as they're doing good things, well, I'm happy about it. And when Google tries to push the industry in what they consider to be the right direction, they have these two whips that they can use to prod us along, to get us going. And one huge whip is Google Search. All of us want our websites to rank, because if you don't rank, then you don't exist. And if they make something a ranking factor or ranking signal, and then go public about it, telling everybody that that's the case, well, people have this huge incentive to invest in that aspect of their website. And that's one thing that they can do and have done in the past. And the other whip that they hold is the Chrome web browser, which I think is the most popular web browser. At least that's what I see when I look at the Wix statistics about the browsers that our visitors are mostly using. And that's also a whip because they can adjust the Chrome UI to encourage certain things in websites. And I'll give an example. A few years ago, Google decided that the web needs to be HTTPS everywhere, that if we want the web to be good, it needs to respect privacy, it needs to respect security, and for that it needs to be HTTPS. Now, you know, I still remember, not so long ago, if you would go to some online store or whatever, it would be just HTTP, and it wouldn't turn to HTTPS until you were actually at the checkout, and then, let's say, it might redirect to PayPal or something like that. And only then would it actually become HTTPS. But all the rest of it, the entire process, was over HTTP, which from my perspective is also problematic, because I don't want people to know what I'm shopping for. And Google rightfully concluded that that's a bad thing and that everybody should be HTTPS all the time. I hope everybody agrees with me here. And what they did is exactly use these two whips that they had. So in search, they basically told people that HTTPS is going to be a ranking factor. That all other things being equal, if there are two sites that have equivalent content and authority, the one that uses HTTPS will rank over the one that just uses HTTP. Now, to be fair, it doesn't even need to be true. It's sufficient that they say that that's what they're going to do. 
And that really gets a lot of people going. And the other thing that they did is that they modified the Chrome UI, initially to put this kind of a greenish background behind HTTPS URLs and a reddish background behind HTTP URLs. I think that's gone now. They have this lock icon. I don't think they use the colors anymore, but they got it into people's heads that HTTP was dangerous, that it's not a good thing, and that you should prefer HTTPS. And they used Chrome for that purpose. So by using these two things, these two whips, like I said, or prods, they got us all moving in the right direction. And now they are effectively doing, or trying to do, the exact same thing with regard to performance. And that's where Core Web Vitals come into the picture.

CHARLES MAX_WOOD: Yeah, that makes sense. It's funny too, because I remember when they made some of those changes, people were, especially with your HTTPS example, a lot of folks were frustrated because they were just content sites and things like that, and they didn't want to have to go to HTTPS. But yeah, I agree with you that that makes sense. It's interesting that Google has put themselves in a position to sort of push some of these ideas or push some of these technological choices, right? Where I think a lot of other companies would have just let things coast, right? It would have been like, well, whatever you want to do, right? And so it would have been up to the public instead of up to folks like Google to push some of this stuff. 

AJ_O’NEAL: I don't want to get us off track, so just don't answer if it does. But what is the incentive for Google wanting people to have HTTPS? I get why it's good for me, but I don't get why it's good for them. Well, ad revenue, ad revenue. They're not getting those malicious ad-hijacker things. Is that it? 

DAN_SHAPPIR: Yeah. Look, I haven't tried to do a deep and specific analysis of that, but I tend to agree. But like I said at the beginning, Google wants the web to be a pleasant and a safe place that people want to use. Like, you wouldn't go shopping at a store where there are a lot of pickpockets hanging around and maybe stealing your stuff or whatever. You want to go somewhere where you feel safe while doing the shopping, and they want that to be the experience when you're online. So that would be my guess. But like I said, I think that we're kind of lucky in that overall, the good of the web in most cases, not all cases, and we shouldn't always count on it to be true, but in a lot of cases, the good of the web and the financial benefit of Google currently align, which is lucky for us. It doesn't align that much with Apple, for example, which is really unfortunate for us. 

CHARLES MAX_WOOD: So I guess the point, you know, coming back to the web vitals and performance is, again, they want it to be a good experience where people are happy to come to the web. And so they're encouraging people now on performance in the same way they did for HTTPS in the sense that, yeah, it's good for the web. It's good for the consumers of the web. And so Google is going to push it forward because more people on the web is good for them. 

DAN_SHAPPIR: Yes, at least that's how I see it. And again, this is my opinion. Like I said, this is going to be an opinionated episode, and that's my understanding of the situation. But it also raises the problem that they were facing, because with HTTPS, it's really clear cut: either the site uses HTTPS or it doesn't. And it's very easy for them to see that that's the case. In the case of performance, how do you actually determine if a website has good performance or not? And what does good performance actually constitute? How do you measure it? And what do you actually measure? What might be good performance for one type of website might be bad for a different type of website. Or it's also dependent on geographic location, because the web, let's say, in Canada is much faster than the web, let's say, in India. So it gets much more complicated when you look at that factor of performance. And that's kind of the reason that we had that big episode a while back where I literally listed, for an entire hour and a bit of that entire recording, a whole bunch of performance metrics, because there are a lot of different measurements that you can take when you're trying to gauge the performance of a particular website. So it really becomes tricky. So let me put it differently. When Google decided that they were going to focus on performance, they concluded, and again, rightfully so in my opinion, that it has to be metrics that are valid, that actually indicate something real about the performance of a website. But they also can't have too many metrics. There are a lot of aspects of web performance that you can measure, but they're not going to focus on everything all at once. Instead, they're going to focus on a particular subset of things, what they would consider to be the most important things, because the vast majority of web developers and designers and SEO people, etc., that are out there are not performance experts, don't want to be, and shouldn't be. And if you came at them with something like 20 different metrics that they need to now measure and analyze and optimize, it's just not going to happen. So what Google did essentially is this two-step process. First of all, they came up with something which they called Web Vitals, which is a larger set of performance metrics. And then out of these, they distilled three metrics, which they refer to as Core Web Vitals, which are the ones that they're going to focus on for now, or at present. And what they also said is that every year, they're going to look at the three Core Web Vitals, see if they're still the ones that you need to focus on, and potentially modify them, reevaluate them, maybe even replace them with other metrics if they see a need. They announced this set of three metrics back in the beginning of 2020, I think, and so far they haven't changed them. So for now, at least, they're sticking with it. And these three Core Web Vital metrics are LCP, or largest contentful paint, FID, which stands for first input delay, and CLS, which stands for cumulative layout shift. And these are the ones that they're focusing on. And these are the ones that they're now starting to use as a ranking signal in Google search, which means that the values that get measured for your website for these metrics can impact the ranking of your site, at least in mobile searches. 
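For listeners who want to see what collecting these three metrics in the field looks like, here is a minimal sketch using Google's open-source web-vitals library. The function names follow its v3 API (earlier versions exposed getLCP and friends, so check the version you install), and the /analytics endpoint is a placeholder:

```js
// Sketch: field measurement of the three Core Web Vitals with the
// web-vitals library (https://github.com/GoogleChrome/web-vitals).
import { onLCP, onFID, onCLS } from 'web-vitals';

function sendToAnalytics(metric) {
  // metric.name is 'LCP' | 'FID' | 'CLS'; the value is milliseconds for
  // LCP and FID, and a unitless score for CLS. Endpoint is hypothetical.
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  navigator.sendBeacon('/analytics', body);
}

onLCP(sendToAnalytics);
onFID(sendToAnalytics);
onCLS(sendToAnalytics);
```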

CHARLES MAX_WOOD: So you kind of made this point before, Dan, and I'm a little curious to see how this goes, but you basically said not all internet connections are created equal, and not all devices are created equal either. And so I'm wondering exactly how do you measure these numbers, right? Because it looks like it's... I pulled up one website where they kind of explained what these three measures are. And it's, hey, it has to be 2.5 seconds or four seconds or, you know, 100 milliseconds, 300 milliseconds kind of thing. They've got these benchmarks. So is it just when the bot hits it or? 

DAN_SHAPPIR: That's an excellent question. A lot of people initially assume that it's going to be the Google bot or something, that Google is going to hit your website, measure how long it takes for the bot to do whatever, and those are the numbers that they're going to use. And that's totally not the case. People were also looking at measurement tools that Google put out there, like Google Lighthouse and Google PageSpeed Insights, and thought, hey, they're probably going to be using Lighthouse to measure your website and use that number. Well, again, that's totally not the case. So I mentioned we had Rick Viscomi on a previous episode, and Rick is the person at Google who manages a project called CrUX, which stands for Chrome User Experience Report. And here's a reminder of what this thing is. It turns out that when you install Chrome on your device, or if it comes pre-installed, unless you opt out, Chrome gathers data about all the web surfing that you're doing. Do people still use the term web surfing? I don't know. All the websites that you visit, all your web sessions, Google collects information about that, anonymous, of course, and it gets sent up to the mothership. So it gets collected in this huge Google database. And that includes performance information. So Chrome measures the time that it takes for your website to load, effectively. And that measurement, or those measurements, actually, because it's more than one number, get put into this database called CrUX. Now Google then uses this information as the input, or as an input, more accurately, to the Google ranking algorithm. So the Google ranking algorithm receives a ton of signals from a lot of different sources and the various analyses that they perform on a website. They look at things like the quality of the content, the backlinks, the authority, et cetera. Well, now they have another ranking signal which incorporates the performance data that they get from this CrUX database, which means ultimately that the number that they use, or the numbers that they use, in order to determine whether your website has good performance or not, is based on the actual experience of real users who are using Google Chrome as their browser. Unfortunately, they're not looking at people who are using other browsers. So Safari users, for example, have zero impact on this ranking signal. Likewise Firefox users, or even Edge users, even though Edge is using the same Chromium engine as Chrome, because it's a Chrome thing, not a Chromium thing. 
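As an aside, the CrUX data Dan describes is also publicly queryable. Here is a hedged sketch of fetching the field percentiles for an origin from the CrUX REST API; the API key is a placeholder you would create in the Google Cloud console, and the exact response shape is worth double-checking against the current docs:

```js
// Sketch: querying the Chrome UX Report API for real-user field data.
const API_KEY = 'YOUR_API_KEY'; // placeholder

async function queryCrux(origin) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ origin }),
    }
  );
  const data = await res.json();
  // Metrics are aggregated over the trailing 28 days of Chrome sessions;
  // each one includes a histogram and percentile data (e.g. the p75).
  console.log(data.record.metrics.largest_contentful_paint.percentiles);
}

queryCrux('https://example.com');
```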

AJ_O’NEAL: You forgot Brave. 

DAN_SHAPPIR: Brave certainly does not send data to the Google servers. I think Brendan Eich would become apoplectic if it happened. 

CHARLES MAX_WOOD: I know, right? 

DAN_SHAPPIR: I think they're introducing their own search engine even. 

AJ_O’NEAL: And it works remarkably well, actually. I think that the Brave Search is getting me really good results for the type of searches that I do, which I'm surprised by. 

DAN_SHAPPIR: I think that I don't. Is it something that they developed, or I think they bought some company or merged with some company. I'm not sure. Anyway, it's kind of off topic. We should probably bring somebody from Brave to talk about that. It does sound like an interesting topic, but anyway, I digress. So as I mentioned, there are these three main metrics, which are the Core Web Vitals. The first one I mentioned is LCP, which is largest contentful paint. It measures the time from the start of the session until the biggest content element within the initial viewport is rendered. Usually, that would be an image, but it might also be text. Interestingly, SVGs don't count. My guess as to why is that SVGs would be too easy to fake. Here's the thing about Core Web Vitals that Google needs to take into account. On the one hand, these are supposed to be metrics that, putting Google aside, if you optimize them, if you improve them, you will get a better website. I mean, getting good performance is not just about the rank. Like I said, the rank is kind of the whip. Getting good performance is also about the user experience. If you have good performance, you'll have better engagement. You'll have a lower bounce rate. Those are the things that should really be of interest to a site owner. If you have an online store, even if people find it, but then it loads so slowly that they just leave without buying anything, what have you gained? You want people to engage with your content, and that, you know, amongst other things, requires good performance. So in an ideal world, with the Core Web Vitals, you would just think about the visitor experience. But once Google has made them a ranking signal, they also need to take into account that people will do bad things in order to improve their rank. It's unfortunate, but it's well known that it happens. So think about me putting, I don't know, an SVG box around my entire viewport just to create this big visual element, because it's as big as the entire viewport, but it's effectively empty. It's this one-line SVG, and it's embedded into the HTML, so it will be the first thing to render. And I would get a really great LCP score, even though I've literally done nothing to actually improve the actual user experience. Now, I'm not saying that this is the reason why Google has done it, but that's my guess. Or let's put it this way. They have another metric that's a Web Vital but not a Core Web Vital, which is first contentful paint, which measures when the first element of content is rendered on the page. And that one does take SVGs into account. So it's interesting that for the first contentful paint, they do take SVGs into account, but for the largest contentful paint, they actually don't. Okay, it is what it is. So an LCP would be either the biggest image or the biggest piece of text, whichever is the bigger one of them. In the case of text, it's the bounding box that contains the text. So if the text, let's say, is inside a div, it would be the enclosing rectangle of that div.

CHARLES MAX_WOOD: All right. So when you say largest, you're saying by real estate on my page. 

DAN_SHAPPIR: Yes. Although in the case of images, they also factor in the image quality. So if you've got a low-res image that you then stretch to cover a large area, they will kind of factor it down to account for the fact that it's low res. 

CHARLES MAX_WOOD: Right. 

DAN_SHAPPIR: Interesting. Anyway, they're also playing with these metrics. They're trying to make them more useful. Some people are kind of annoyed by it, because people are working hard on optimizing these metrics, and then Google comes along and modifies how they actually measure one of them. So that can be annoying or confusing. For example, in Chrome 88, they modified the LCP metric to ignore full-viewport images. So if an image covers the entire viewport, they actually ignore it. And the motivation was that such an image is probably a background image, and consequently it's not the main content of the page, and therefore they decided to ignore it. By the way, the whole reason for picking largest contentful paint is that they ran tests. They sat people in front of computers, at least that's what they told me, had them load websites, and had them click a button when they thought the page had finished loading. And what they found is that in most cases, the point in time where people clicked the button correlated with when the largest piece of content was rendered in the browser window. So that's where this metric came from. A big problem with LCP, though, is that you never know, unless the content covers the entire viewport, in which case, like I said, for images it's actually ignored, you never know if a bigger piece of content is not going to occur a little bit later. So think about it. Let's say the page is loading and it draws something. That's the largest contentful paint up until that point in time. But then something bigger might be drawn a second later, which then becomes the largest contentful paint. So the question is, when do you stop? And that's one of the problematic things about this measurement. So they stop when the user interacts with the page. If the user scrolls or clicks, they say, okay, if the user does something with the page, it means that from the visitor's perspective, the page is loaded. So, you know, no point in waiting for more stuff. But that's one of the problems with LCP. And that's one of the reasons why, when it's measured in lab conditions, by a tool like Lighthouse, you might get really different results than what you might get in the field, because it might be stopping at a different point. And that's a problem with it. There are other issues with it as well, but those are the main ones. 
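For the curious, the browser exposes the same succession of LCP candidates Dan describes through the Performance API. A minimal sketch; each entry is a new "largest so far", and the final one before user interaction becomes the reported LCP:

```js
// Sketch: observing LCP candidates as bigger content renders.
const lcpObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.element is the DOM node Chrome picked; entry.startTime is
    // milliseconds from navigation start.
    console.log('LCP candidate:', entry.element, entry.startTime);
  }
});
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```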

CHARLES MAX_WOOD: I guess my question here is, let's say that I have a content website, I don't know, that has podcasts on it, maybe. And I care about my Google ranking, and so I want to improve this. How do I know what the thing is that they're measuring? Right? Because, I mean, ideally, what I'd like to do is I'd like to say, oh, they're measuring when I load this image or this whatever, right? When I load in this piece. And I'd like to be able to optimize that. And then maybe something else takes that on, or I don't know. I'd like to know which piece to optimize. Is there a good way to know that? 

DAN_SHAPPIR: Yeah. So there are actually two ways to go about it. One way is the lab way, or the synthetic way. Say you're working on your website or a new version of your website. You still don't have actual visitors or people coming to this new version, but you want to compare it to the current version to see whether you're improving or regressing or whatever. So you can use a lab tool like Google PageSpeed Insights. You put in the URL for the existing site, you put in the URL for the new site, and it essentially just simulates a session in a certain configuration. For example, PageSpeed Insights actually simulates a mobile device, the Moto G4, on a 3G network. And then it runs a test for a couple of seconds, loads the site, sees what's the largest piece of content within the initial viewport, and basically measures how long until that got rendered. And you can compare the value that it gets for the existing site with the value that you're getting for the new version that you're currently building. And not only that, it will actually show you which of the elements on the page is the one that it picked as the largest contentful paint. So it might be an image. And then you can go and check and say, hey, I'm actually loading this huge PNG. If I convert it to a JPEG instead, it would be a tenth of the size and it will download so much faster. Or maybe I can even use one of the newer formats like WebP or AVIF and make it even smaller than that. Or maybe I'll change the design of my page a little bit on mobile, so that instead of an image being my largest contentful paint, I'll make my headline text be slightly bigger, and then it will be my largest contentful paint. And maybe instead of a web font, I'll use one of the quote-unquote system fonts like Arial or even Times New Roman or whatever, so it doesn't even need to download the font. And then I would get a really, really good LCP value because it's just text. So one way to go about it is a lab tool or a synthetic tool, like PageSpeed Insights, like Lighthouse. The other way you can go about it, if your site gets sufficient traffic, is that you can actually get data from that same CrUX database that Google uses for ranking. And Rick actually touched on that. So it's interesting. First of all, they don't put every website into the CrUX database. You need to get a certain amount of traffic before they even put you in the database. So just so you know, if you create a new website that doesn't yet have any traffic, you're not going to get this boost for performance, even if your website is really fast, until you get sufficient traffic to actually get into that database. And sufficient traffic, I think, is like a couple of hundred visitors or thousands of visitors a day or something like that. So you do need a certain minimal amount of traffic before they even consider you. You won't get penalized, but you won't get the boost, not because you're not fast enough, but because you just don't yet have enough traffic to even be in that database. But suppose you are in that database. Well, there are two ways for you to look at the values that Google is calculating for you. One way is using Google PageSpeed Insights. In the middle of the page, they have a section called field data, and they will actually show you the values that you got over the past 28 days. So it's sort of an aggregate value. 
And what they're looking at is the 75th percentile. So for example, for LCP, a good value is two and a half seconds or less. So they look at the 75th percentile, that is, the value that is slower than 75% of your users and faster than your slowest 25% of users, and see if that falls in the good range, which is 2.5 seconds or less, or in the quote-unquote needs improvement range, which is between two and a half and four seconds, or in the poor range, which is slower than four seconds. If it falls within that poor range, you won't get a boost for that metric. If you fall within the needs improvement range, you will get a certain boost that's relative to how good you are. So it's like a linear growth sort of a thing. They're not giving the exact formula, but let's assume that it's some sort of linear growth. So the closer you are to the good section, the more boost you will get. And once you get to the good section, it plateaus. Once you get to that 2.5 seconds, you get the maximum boost. And if you manage to improve it from 2.5 to 1.5, you won't get any higher boost for your ranking. You might get happier users on your page, but in terms of the ranking, that's the maximum boost that you will get for that particular metric. So that's LCP. 
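Dan's format-conversion suggestion from a moment ago can be done with progressive enhancement, so browsers without AVIF or WebP support still get the JPEG. A sketch, with placeholder file names and dimensions:

```html
<!-- Sketch: serve the smallest format the browser supports for the LCP
     image; browsers fall through to the first <source> they understand. -->
<picture>
  <source srcset="hero.avif" type="image/avif">
  <source srcset="hero.webp" type="image/webp">
  <img src="hero.jpg" width="1200" height="600" alt="Storefront hero">
</picture>
```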

CHARLES MAX_WOOD: So I guess I have another question related to this, and that is, for a long time I've kind of focused my concerns over performance on, say, the backend, right? And so however long it takes to render HTML, blah, blah, blah. And this comes primarily because my background's mostly Ruby on Rails, right? And so it's almost all server rendered, right? But now I'm thinking, okay, well, what if I put in, say, a Stimulus JS or a React or Angular or something, right? How does that affect this, right? Does it kind of add up the time it takes, including loading in whatever components I've put into my website so that this thing shows up or doesn't, and including all the CSS? And does it also include the time it takes to go back to the server and say, okay, now I need this stylesheet, now I need this JavaScript, and stuff like that? Or is it just the paint time? 

DAN_SHAPPIR: So the answer to that is yes. But before I go there, I just wanted to finish one point. So I mentioned that you can get the field data through Google PageSpeed Insights. I just want to mention that there's another source that's really useful for getting the field data, and that's the Google Search Console. Most people who have a website for professional reasons usually use the Search Console to make sure that Google properly sees their website. And you now have a Core Web Vitals section, panel or tab, call it whatever, within your Search Console. They will actually show you the current situation broken down by pages. They will say, pages of this type have, let's say, poor LCP, or they need improvement for CLS. Again, this data is coming in from the CrUX database, I think. It doesn't really matter. At the end of the day, it's the same source of data. It's that data that they collect from actual Chrome sessions. In this case, I think they actually do have a daily moving average, but it's still averaged over a 28-day period. So even if you made a significant improvement, take into account that it'll take some time for Google to actually notice it. But now, going back to your question. And the answer is that yes, modern webpages are complicated beasts. That's one of the reasons for this podcast, I think. And there are a lot of moving parts. There's the backend, there's the network, whether or not you use CDNs. There's the media that you're downloading to the browser, its format. Are you downloading it via a CDN, or using something like Cloudflare or something like that? How much JavaScript are you running in your browser? All these things impact performance. And since Core Web Vitals try to measure performance, all these things impact Core Web Vitals. The bottom line in terms of LCP is that you want to get your primary content down in front of the visitor as quickly as possible. So for example, if it's an image, it's better if the HTML that is downloaded from the server already has that image src inside of it, rather than you first having to download, let's say, React and run React on the client side. Because if all you're actually serving is just blank HTML, you know, the empty div, and you're totally constructing your entire user experience on the client side, you're probably going to get poorer performance. And that's just a reflection of reality, because your visitors are getting poorer performance. It might be easier for you to build your website this way, but at the end of the day, I think what matters most should be the experience of our visitors and customers and whatnot. So yes, you do need to take into account your server time. If you're serving really dynamic content, then you need to think about your database queries. Or maybe, alternatively, you can use something like a static site generator, like a JAMstack sort of thing, and you're still doing all these database queries, but you're doing them at build time rather than at runtime. And then the page just gets generated and pushed into a CDN, and then it's delivered really, really quickly. So yes, there are a lot of moving parts in modern websites, and you need to think about them. What can I say? Yeah. 
So moving on to the second Core Web Vital, which is FID, or first input delay. That one measures the time from the first interaction that the visitor has with a web page, let's say it could be a mouse click or button press, whatever, until the browser can process that request. Which means that if there's, let's say, some client-side JavaScript associated with that mouse click, then until the browser can run that JavaScript. Or if it's some sort of a default action, then until the browser can instigate that default action. So that's what gets measured by first input delay. And the things that can cause a lengthy first input delay are, for example, if you're running a whole bunch of JavaScript inside the browser. And browsers, well, they're not as single-threaded as they used to be, but they're still pretty single-threaded. So if your main thread is busy because it's running a lot of JavaScript, and because of that, if the user clicks on something, then nothing happens, then you'll get a poor FID. And it only measures the first one, that's the first input delay, I guess because you don't get a second chance to make a first impression. And yeah, that's what it measures. And the thing here is that it really annoys people when they try to interact with a user interface and that user interface does not respond to their interactions. The example that I like to give is when you're standing in front of an elevator and you press the button, you don't expect the door to open immediately. So you don't actually expect the operation that you requested to instantly finish. You're happy if it happens, but usually it doesn't happen. But you do expect the button to instantly turn on. And you do expect the numbers to start counting towards your floor. And if you click the button and nothing happens, people do this sort of thing, it's called rage clicks, they start going tap, tap, tap, tap, tap. It's the same thing with a web interface or app interface, especially on mobile devices. People have become really used to this kind of an instantaneous response, that when you tap something, it responds to your tap. If it doesn't, well, it can drive you nuts. That is the thing that first input delay tries to measure. 
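Like LCP, FID can be observed directly in the browser. A minimal sketch of the underlying measurement, using the first-input entry type:

```js
// Sketch: measuring FID by hand. processingStart - startTime is the gap
// between the user's first interaction and when the main thread was free
// to start running its event handler.
const fidObserver = new PerformanceObserver((list) => {
  const [entry] = list.getEntries();
  console.log('FID:', entry.processingStart - entry.startTime, 'ms');
});
fidObserver.observe({ type: 'first-input', buffered: true });
```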

AJ_O’NEAL: In the case of flat design, the thing I think is really annoying about this is that pretty much all visual cues are eliminated. And when I say flat design, I mean most flat design, not the theoretical best flat design that could occur. They get rid of the shadows, they get rid of the colors. Yeah, super frustrating. 

DAN_SHAPPIR: Oh, I totally agree. 

 

Are you ready for Core Web Vitals? Fortunately, Raygun can help. These modern performance metrics play an important role in determining the health of your website, which is why Raygun has baked them directly into their real user monitoring tools. Now you can see how your Core Web Vitals scores are trending across your entire website in real time and drill into individual pages to focus your efforts on the biggest performance gains. Unlike traditional tools, Raygun surfaces real user data, not synthetic, giving you greater insights and control. Filter your scores by time frame, browser, device, geolocation, whatever matters to you most. And what makes Raygun truly unique is the level of detail they provide so you can take action: user session data, instance-level diagnostics of every page request, and a whole lot more. Visit raygun.com today and take control of your Core Web Vitals. Plans start from as little as $8 per month. That's raygun.com for your free 14-day trial. 

 

DAN_SHAPPIR: I'll give an example from a Wix tool. We have the Wix editor where you build your website. And when you click the save button, the save operation is a lengthy operation. It can't be made instantaneous. But like I said, people don't expect the elevator door to open immediately. So when we start the save operation, we do need to provide immediate feedback to the user that their save request is being processed. It could be some sort of a spinner, a color, something. And you're totally correct that if I build a user interface where the visual cues or the visual feedback are so minimal that many users might not even notice them, you're doing damage. It's frustrating. That's totally true. So just to finish that point on FID, as with LCP, it's kind of this green, yellow, red street light, where the green part, the good part, is from zero to a hundred milliseconds. So anything that's a hundred milliseconds or faster counts as good FID, up to 300 milliseconds counts as needs improvement, and anything above 300 milliseconds counts as poor. And that's based on a lot of user interface research that has been done over the years that shows that people consider anything better than a hundred milliseconds to be effectively instantaneous. 
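To make the save-button pattern concrete, here is a hedged sketch: acknowledge the click immediately, then do the slow work. The saveButton, spinner, and saveDocument names are hypothetical:

```js
// Sketch: respond to the click within ~100 ms, even though the save
// itself (the "elevator ride") takes much longer.
saveButton.addEventListener('click', async () => {
  saveButton.disabled = true;
  spinner.hidden = false;      // immediate visual feedback
  try {
    await saveDocument();      // the lengthy operation
  } finally {
    spinner.hidden = true;
    saveButton.disabled = false;
  }
});
```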

AJ_O’NEAL: And I missed part of that. You want it to be at least a hundred milliseconds, right? 

DAN_SHAPPIR: At most, at most. 

AJ_O’NEAL: No, at least. 

DAN_SHAPPIR: No, you want it to be a hundred milliseconds or less. 

AJ_O’NEAL: No, at least. But if that happens, no, nobody wants it to be less because 

CHARLES MAX_WOOD: When you click, it needs to respond within 100 milliseconds. 

AJ_O’NEAL: Okay, so are we saying respond as in some change? Okay, yes, yes, agreed. Agreed. Agreed. But like, like if a transition is less than 100 milliseconds, it looks really jarring. 

DAN_SHAPPIR: Yeah, but the transition needs to start. 

AJ_O’NEAL: Yes, agreed. Agreed. Okay. Okay. 100% agree. 

DAN_SHAPPIR: And again, they do the same sort of a calculation on the ranking signal. So each one of these Core Web Vitals is now calculated independently. Initially, when Google presented this model, there was the assumption that you're only going to get the boost if you're green, that is, good, and that you're only going to get the boost if you're good or green for all three. And recently, they came out and said, no, each one of these metrics is going to be measured independently, and you're going to start getting a boost even if you're in the needs improvement range, and you're going to achieve the maximum boost once you reach that good range. And then, like I said, it plateaus. I guess the reason that they've said that, and they haven't been clear about it, so it's not like they said we changed our minds, because they were never really explicit about it before, but that's what most people understood. And I'm guessing that Google just looked at the results out there and said, hey, if we do it that way, then too many people won't get any boost, and people would be just totally discouraged and not try to improve anything. So at least give them the incentive. If they're really bad on one of the metrics but might be good on the others, at least give them an incentive to improve what they can improve, at least for now. So they're going to measure each one of these metrics independently. 

CHARLES MAX_WOOD: So I can imagine people here, we kind of talked about managing what that largest thing is on your page so that it's something that will load fast. And I can imagine something here where you create, or kind of steer people toward, something that you know you can get them to click on, that will start to interact quickly, right? 

DAN_SHAPPIR: Yeah. So there are good ways to improve the FID, and then there are, I don't know, less good ways to improve the FID. So let's say I'm building this. 

CHARLES MAX_WOOD: My loading spinner is going to come in so dang fast. 

DAN_SHAPPIR: Yeah. So let's say that I'm building an online store. And obviously the thing that I want people to click on is the buy button. And I want the buy button to be really quickly responsive, because if people get frustrated while pressing that button, it's a really bad experience for your store. And let's say I do have poor FID. So the good way to fix it is to make my code better. Instead of downloading, I don't know, a meg of JavaScript in order to get my page up and running, maybe I can download 100K of JavaScript to get my page up and running. Maybe I can download my JavaScript in parts, so that in order to get that button up and running, I only need 20K of JavaScript to be downloaded, parsed, and run, and I can download the rest when somebody actually needs it, and so on and so forth. So that would be the good way of improving my FID. The not-so-good way would be to hide the button until I finish downloading all my JavaScript, and then there'll be nothing to press until I'm ready for you. I could do that, and then people might just bounce off of the page. But if they do click, they will get a good FID value. So like I said, there are good ways to improve things and there are not-so-good ways to improve things. But the best way to improve FID is, like I said, by reducing the JavaScript payload that is needed to support the initial interactions, and also to avoid, let's say, really complicated CSS that could create a bottleneck in terms of the rendering of the page, and stuff like that. But the one thing that I can say here is that in most cases that I've seen, unless people do really bad things, FID is usually pretty good. If I look at the other metrics, people are having a lot of problems with CLS and people are having a lot of problems with LCP. Most websites that I've seen that actually make it into the CrUX database usually have pretty good FID. Unless you're doing really bad stuff with JavaScript, like I said, downloading a full meg of JavaScript, gzipped, just to get the ball rolling, then you've got a problem. But if that's not the case, then you should be okay for FID. Which brings us to the last metric, which is CLS. And this is an interesting one. That one actually tries to measure what is called visual stability. I'm sure you've all encountered this situation where you've been reading, let's say, an online article or a blog post or whatever, and you're in the middle of reading a paragraph, and then the page jumps because it just finished loading an image and that image pushed everything down. Or maybe it replaced an ad, and the new ad has a different size from the previous ad, so things get either pulled up or pushed down, and you've lost your position in the page. And that's obviously a very poor user experience. I hate it when it happens, and it definitely happens on a lot of news sites. So what CLS measures is how much things on the page jump around not in response to direct user interactions. So whenever something on the page moves, first of all, they check to see if there was a user interaction in the past half second or so. And if there was, then they ignore it, because they say, well, there's a chance that it's in response to that user interaction, we'll let it slide. But if it's not that, if the user is just reading, then they look at the size of the thing that moved and at how far it moved. And they calculate a number based on that, like sort of a ratio number. 
How much of the viewport has moved, and how far has it moved? And it's cumulative in the sense that it's not like LCP, which is looking for the largest paint and ignoring the others, or FID, which looks at the first interaction and then ignores subsequent interactions. It continues to measure these movements. So the more such movements that you have on the page, the bigger the value that you will get. And it just keeps on running and running and running throughout the entire session. Now, they've changed it recently, and I'll explain how they changed it in a second, but I just wanted to verify that that's clear, because it's kind of a complicated metric. And it's also complicated because, unlike the other metrics, it doesn't measure time. In fact, it doesn't really have any sort of units. It's just a number. 
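Here is a hedged sketch of that accumulation using the browser's Layout Instability API; the hadRecentInput flag implements exactly the half-second input exclusion Dan mentions:

```js
// Sketch: accumulating CLS by hand. Each layout-shift entry carries a
// unitless score (impact fraction x distance fraction of the viewport).
let clsScore = 0;
const clsObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Shifts shortly after user input don't count toward CLS.
    if (!entry.hadRecentInput) {
      clsScore += entry.value;
      console.log('CLS so far:', clsScore);
    }
  }
});
clsObserver.observe({ type: 'layout-shift', buffered: true });
```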

AJ_O’NEAL: So remind me again, how are you getting all of this deep dark secret knowledge? 

DAN_SHAPPIR: Actually, Google have been fairly forthcoming about these metrics. So there, for example, there's a site called web.dev, which is run by Google. And it has articles explaining these three metrics. And Google have been fairly forthcoming about it. There's still a lot of confusion out there because, you know, whenever there's a technical topic, there's bound to be confusion about it. And Google have been making changes as they go along, and as they receive feedback from the field, from partners and whatnot. But they have been as forthcoming as they could be about the stuff that they're actually measuring. 

CHARLES MAX_WOOD: Yeah, I have a question. That's actually what I've been looking at, and I just posted the links we'll put in the show notes. So I'm looking at the cumulative layout shift that you're talking about, Dan, and I've been to some pages, and I've had some of my pages do this too, right? Where things kind of get loaded in, some images or things like that, on the initial load, a lot of stuff will shift, right? But then after that, it's pretty stable. So how much room do you get on this, right? Do they give you any grace at the beginning, or is it just, hey, 

DAN_SHAPPIR: no.

CHARLES MAX_WOOD: So it's, it's got to come in pretty solid then. 

DAN_SHAPPIR: So for example, you were mentioning images, and that's a great example. A lot of people have issues around images, because you put in an image tag and initially there's no space reserved for that image. Then the image downloads, and only when the image finishes downloading and the browser can actually parse it does the browser see the size of that image, and it shifts everything in order to make room for it. And it has to do with the fact that in browsers, with HTML, everything kind of flows. So if you push something in the middle, it pushes everything aside or down in order to make room for that thing. So what you should be doing is putting the expected dimensions of that image in the image HTML tag. You can actually put width equals and height equals, and then it will reserve that blank space for that image and won't need to shift things around in order to make room for it. Or you could use CSS for that as well. But the point is that you want to reserve space for things so that you don't need to shift things around. Now, it can get really complicated in some cases, for example with ads, because a lot of times you have very little control over ad sizes. But maybe that'll force the industry to kind of standardize on ad sizes, I don't know. Now, what Google have done is they've made a recent change where they've introduced a measurement window, which means that they kind of divide your session into five-second segments, measure the CLS within each five-second segment, and report the biggest CLS that they found. And the reason that they've done that is that they were getting pushback that long-lived sessions were getting really bad CLS scores simply because they were running for a really long time. So for example, people would open, I don't know, let's say a Facebook web page, have it open for an entire day, and it would get this huge CLS value at the end of the day because of a million tiny shifts that nobody ever actually noticed, which accumulated over the entire day. So in order to remedy that, they recently modified it, as I explained, to work in this sort of windowed kind of manner. 
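The width/height fix Dan describes looks like this; the file name and dimensions are placeholders, and the inline CSS keeps the image responsive while preserving the reserved aspect ratio:

```html
<!-- Sketch: width/height let the browser reserve the right amount of
     space before the image downloads, so nothing shifts when it lands. -->
<img src="article-photo.jpg" width="800" height="450"
     alt="Photo accompanying the article"
     style="max-width: 100%; height: auto;">
```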

AJ_O’NEAL: But it seems like I'm not too concerned with valueless applications that just scroll things all day. I would be happy if those got penalized. That seems like the right thing to do. Do we really want TikTok and Facebook to be ranking higher because they're programming people to be ADD? 

DAN_SHAPPIR: No, to be fair, I don't think Facebook has a ranking issue. I don't think they care that much how high they rank on Google. 

AJ_O’NEAL: I don't think they need Google at this point. I think if Facebook did not come up in a Google search, they probably would have zero change. 

DAN_SHAPPIR: I think they intentionally don't come up in Google searches. Can you actually search for things in Facebook?

AJ_O’NEAL: I guess not anymore. 

CHARLES MAX_WOOD: You used to be able to. 

AJ_O’NEAL: Yeah. Cause they're kind of competitors in the sense that they're both in ads, and Facebook's goal is to keep everything off of Google. Because, you know, to be fair, Google has been playing dirty and ripping people off of their ad revenue by putting snippets in the search results and stuff like that. 

DAN_SHAPPIR: It's an interesting discussion and I won't go there. No, it's a really interesting discussion, but I think it's beyond the scope. But just to finish on that CLS topic, I gave one example, which is reserving space. Another one that I want to mention, which might help people improve their CLS values, is that when you use CSS animation, it's really important what you animate. So let's say, for example, you want to move something around on your page and you're using CSS animation for that. If you're animating the actual X and Y of that element, you will get a really big CLS value. What you want to animate instead is the transform X and Y of that element. And the reason is that when you use transformations, you're not causing reflows. So the stuff on the page doesn't actually move, and therefore there's no layout shift. And that's what you want to animate. So let's say you have a cookie banner, actually something that we had at Wix. We added, you know, GDPR and everything, so we added a cookie banner, and we suddenly saw, and we also measure, obviously, the Core Web Vitals for all our sessions, we noticed a jump in the CLS. And it turns out that the developer who created this cookie banner just animated on the X and Y. And we just modified it to animate on the transform X and Y. Visually, there was absolutely no difference. If anything, it was slightly smoother. And the CLS issue was fixed. So if you're using animations, it's really important that you animate on transformations rather than, for example, on the position or the size of the element. I hope that was clear.
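A hedged sketch of the two approaches for a cookie banner like the one Dan describes; the class names are made up. Animating transform moves the element without triggering layout, so it does not register as a layout shift:

```css
/* Not great: animating a layout property. Each frame triggers reflow,
   and the movement can count toward CLS. */
.cookie-banner--bad {
  position: fixed;
  bottom: -80px;
  transition: bottom 300ms ease-out;
}

/* Better: animating transform. No reflow, GPU-friendly, no layout shift. */
.cookie-banner--good {
  position: fixed;
  bottom: 0;
  transform: translateY(100%);
  transition: transform 300ms ease-out;
}
.cookie-banner--good.visible {
  transform: translateY(0);
}
```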

CHARLES MAX_WOOD: It makes sense to me. And I think the reason that we're kind of going through this is because, yeah, my tendency was just like, okay, I need this to move or slide in or slide out or whatever, so just move it or slide it in or out, right? And then I would manage some of that by just managing X and Y, or managing the height, right? Just changing the height. And so thinking through this and going, okay, by managing the transform CSS, or the transform options like you're talking about, if I don't get penalized for that, then that's definitely the way I want to go, so that I can maintain the user experience that I'm building, but at the same time not get penalized for it by Google. 

DAN_SHAPPIR: And it results in an actual better user experience, because the transform, like I said, doesn't force the browser to do a reflow. So it's better, it's smoother, it's GPU accelerated, it uses less battery on mobile devices. So it's better all around, so it's Google pushing things in the right direction. And you know what? In most cases, if you're using a CSS library or some animation library, it's a good bet that that's what they're doing anyway. So unless you're creating your stuff by hand, in which case watch out for it, if you're using some sort of a library, you're probably getting the correct behavior out of the box. But you can always use a tool like Lighthouse, which I mentioned before, or PageSpeed Insights: add this component, throw it in there, measure it. And if you suddenly see that your CLS has jumped, it actually shows you, as with LCP, PageSpeed Insights will actually show you which elements on the page caused the shift. You can say, oh, okay, I see that this animated thingy is causing a shift. Let me check my animation. Maybe I'm doing it wrong. 

CHARLES MAX_WOOD: Makes sense. Aimee, Steve, you guys have been pretty quiet. Anything you want to add or chime in with? 

AIMEE_KNIGHT: Yeah. I was trying to debate when the best time to add this was. So a couple of things that I wanted to add. The first thing is, Dan knows, because I kind of talked to him a good bit about this at my prior job, but this is a really good thing for people to work on if you're in a very product-heavy environment, because it is a good argument to product-type people, especially if you have a consumer application, to get the bandwidth to work on this kind of stuff, which is pretty fun too. So that was just a little tangential thing I was going to add. But then I feel like the more important thing I was going to add is... So at my last job, one thing that we did, which was pretty fun and also very beneficial: you can get really, really, really fine-grained, as Dan was mentioning, with your Lighthouse scores, and automate regressions against those. So you can get super fine-grained on every single one of these measurements. You can set what you're... We were using GitHub Actions that I set up, but based on if you merge a pull request or put up a PR or something like that, probably for this you'd want to do it when you put up a PR. But there are some plugins, I think sponsored by the Google team, for GitHub Actions where you can automate this stuff. And I'd highly recommend doing that. 

DAN_SHAPPIR: Oh, I totally agree, there are various tools out there. So GitHub Actions is one option. There are other tools out there. It's essentially called performance budgets. You want to have this as a part of your CI/CD process. So for example, at Wix, whenever we build a new version of a component, we have, let's say, a test site or a test page, and we look at the JavaScript download size. So we check to see that the size of the JavaScript download doesn't increase, or doesn't increase too much. And we also check the Lighthouse score, and potentially other metrics, and see if there's a regression in the score. And if there's a regression in the score, then the build is broken and needs to get fixed before you can actually deploy. So yeah, it's a totally automated process. 
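As one concrete way to wire up such a performance budget, here is a sketch of a Lighthouse CI configuration (a lighthouserc.js file for the @lhci/cli tool). The URL, run count, and thresholds are illustrative, and the assertion names should be checked against the version you install:

```js
// Sketch: failing the build when lab Core Web Vitals numbers regress.
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:8080/'], // page under test (placeholder)
      numberOfRuns: 3,                 // smooth out run-to-run noise
    },
    assert: {
      assertions: {
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift':  ['error', { maxNumericValue: 0.1 }],
        'total-byte-weight':        ['warn',  { maxNumericValue: 300000 }],
      },
    },
  },
};
```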

CHARLES MAX_WOOD: So what tool are you using for that in your CI/CD? 

DAN_SHAPPIR: In our case we're using in-house developed tools, but there are plenty of other tools out there. Like Aimee said, you can... Yeah, go for it, Aimee. 

AIMEE_KNIGHT: Yeah, I was going to say, we were using a GitHub Action. I can try to find it before picks and drop it in the show notes. 

DAN_SHAPPIR: Yeah, so Google had something called LightWallet, but I think they might have renamed it. So basically, there are plenty of solutions out there, a lot of them built on top of Lighthouse, that just turn it into something that you can incorporate into your CI/CD process. So just search for performance budget or automate Lighthouse or stuff like that, and you should be able to find something. And the other thing that I wanted to mention in this context is that it's a good idea, if you have enough traffic, if you're big enough, to also look at actual field data. Now, CrUX is a good place to start. It's built into your Search Console, you can get it in PageSpeed Insights, and it's free, and you can even look at how well you're doing compared to your competitors. But if you want to improve your current situation, then that 28-day cycle might be just too long. And then you might want to integrate some sort of a third-party solution. I think you mentioned that we might have people from Raygun on an upcoming episode. I think there are also other tools out there. SpeedCurve comes to mind. New Relic, I think, has something. So there are plenty of tools out there that, obviously, they're not free, but that you can incorporate into your... or at least maybe there are also free solutions out there. But you can incorporate them into your website and start collecting real-time data, including Core Web Vitals. And then you can do all sorts of sophisticated segmentations, get faster feedback, and also be able to do stuff like A/B tests in the field and see how that impacts your Core Web Vitals score. 

CHARLES MAX_WOOD: Yeah, I was going to ask, because both Raygun and Sentry are sponsors. I haven't looked to see if they incorporate these numbers, but I would imagine that if they don't have them in there now, they will soon, because this is something people are going to care about.

DAN_SHAPPIR: I think they both do actually. Yeah. So I think I more or less covered the things I wanted to cover. Unless there are any other questions, I guess we can move to picks. 

CHARLES MAX_WOOD: Yeah, I think we've covered pretty much everything there is. All right, well, let's go ahead and do some picks then. 

 

Did you work your tail off to get that senior developer gig, just to realize that senior dev doesn't actually mean dream job? I've been there too. My first senior developer job was at a place where all of our triumphs were the boss's and all the failures were ours. The second one was a great place to continue to learn and grow, only for it to go under due to poor management. And now I get job offers from great places to work all the time. Not only that, but the last job interview I actually sat in was a discussion about how much my podcast had helped the people interviewing me. If you're looking for a way to get into your dream job, then join our Dev Heroes Accelerator. Not only will we help you get the kind of exposure that makes you attractive to your dream employer, but you'll be able to ask for top dollar as well. Check it out at devheroesaccelerator.com.

 

CHARLES MAX_WOOD: AJ, why don't you start us out? 

AJ_O’NEAL: All right, I'll start us out with a great technical pick: classless CSS. I picked this before, but I'm picking it again because it's just really, really good. Basically, it's a buzzword that's not as buzzy as other buzzwords, but classless CSS means CSS that just works. CSS that you could use for a blog, or for Markdown, or for other things where, as the name says, you don't have to use classes to get the effect. You just include the thing on your page, and your page instantly looks better without learning anything, without having to know about grids. It's just classless CSS. There's a repository that mentions a bunch of them, and I'm going to link to that. I'll also try to link to a couple that I specifically like, because the repository is literally just 30 different classless CSS styles, and most of them suck, but two or three of them are just on point. I am also going to pick One Finance. I used to use Simple, and as all Simple users know, it's now gone, because it was acquired by a company that was acquired by a company that was then acquired by yet another company. So it's gone, and I've switched over to One Finance. The key difference is that it lets you have basically as many accounts as you want without any overdraft fees or monthly fees or any of that, to help you organize stuff. So instead of giving the cable company your main bank account, you can create an account that you give only to the cable company. It makes it easier to budget and easier to not worry about giving away an account number, because anybody that's had a bank account long enough knows that giving away your account number bites you in the butt eventually. Somebody starts doing something shady, like your internet or phone company adding extra charges. Anyway, it's cool, and I'm including one of those refer-a-friend links, so I think I get 50 bucks and you get 50 bucks. If that sounds useful to you, definitely check it out. Also, JCS Criminology is a YouTube channel I've been following lately. It's kind of like a criminology podcast, maybe PG-13, maybe MA, because it's about real criminals and real crime. It's a dude analyzing the psychology of people and pointing out things like, oh, you notice in this video, when they're doing this, notice how their eyes shift; this is generally an indication of such and such. He actually covers a couple of people who are innocent as well. I just love those psychology things. And after that, I'm still doing my Beyond Code thing. The course isn't developed yet, but I've been doing a lot of live streams and little videos. There's an auth library I've been working on; it's at 40-plus hours of live streaming now, so if you want to watch me code, you can. And I'm going to get back to the little short videos teaching programming concepts as well. I've just been overburdened with the other things that I do. Anyway, those are all my picks.

CHARLES MAX_WOOD: All right, Amy, what are your picks?

AIMEE_KNIGHT: Okay, so I was able to find these, and they do different things. The first one is by the Lighthouse team, and it just runs based on whatever hooks or events are happening in your repo, I don't know what they technically call them in Git, and runs a Lighthouse test against that branch. That's the first link I'll drop. The one I was specifically thinking of is one where you can set up budgets and, just like you would with a test, fail the build if, when it runs against your branch, you drop below the budget you've specified. That's assuming you have your PRs set up so there's some sort of working URL you can go to when you put one up, but I'm getting in the weeds. I'll drop a link to both of those, and that's it for me.
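
A sketch of what such a budget can look like as a Lighthouse CI lighthouserc.js config, which the GitHub Actions mentioned here typically build on. The preview URL and the thresholds are placeholders to adapt:

```js
// lighthouserc.js: Lighthouse CI configuration that fails the build
// when a branch drops below the performance budget.
module.exports = {
  ci: {
    collect: {
      // A working preview URL for the PR branch (placeholder).
      url: ['https://preview-1234.example.com/'],
      numberOfRuns: 3, // the median of several runs smooths out variance
    },
    assert: {
      assertions: {
        // Overall performance category score, on a 0..1 scale.
        'categories:performance': ['error', { minScore: 0.9 }],
        // Budgets for individual Core Web Vitals.
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }], // ms
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
      },
    },
  },
};
```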

CHARLES MAX_WOOD: All right, Dan, what are your picks? 

DAN_SHAPPIR: OK, so my first pick is something that I actually should have mentioned while we were talking about it. Google recently, like a couple of days ago at the time of this recording, built a Data Studio dashboard on top of the CrUX API, which enables you to graph Core Web Vitals-related stats for various technologies. So if, for example, you want to compare the performance of all the sites out there that use React to all the sites that use Angular, you can do it based on the data in their CrUX database. Obviously, the CrUX database doesn't contain every website; it contains something like the top 10 million websites according to the traffic that Chrome sees. But it's still fairly interesting. You can do all sorts of comparisons between different technologies and different products and different CMSs, for example, and stuff like that. So I'm posting the link here; it's an interesting tool. Another thing that I wanted to post: I recently watched this excellent video called Math Has a Fatal Flaw, and it's amazing. It talks about Gödel, the self-referential paradox in set theory, and the incompleteness theorem. The entire video is like half an hour, and the amount of information it covers in such an understandable, accessible way is astounding. Somebody in the comments literally said, I'm a university professor, and this half-hour video is like an entire course. So it's an amazing video, and I highly recommend it. I'm going to post a link to that here as well. And the last thing is something where I'm actually kind of asking for help, as it were. I'm running out of stuff to watch. I'm unable to find good stuff on TV anymore, like on Netflix and such; I don't know, the quality just isn't there. We watched Mare of Easttown, which was really good, with Kate Winslet, and I recommend it. But now that it's done, I don't know what to watch. So if anybody has a suggestion, hit me up on Twitter or something and let me know what you recommend for us to watch.
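
For anyone who wants the raw numbers behind that dashboard, the same CrUX data is queryable directly. A minimal sketch against the CrUX API, where YOUR_API_KEY is a placeholder for a key from the Google Cloud console:

```js
// Query the CrUX API for an origin's 75th-percentile LCP,
// computed over the trailing 28-day window.
async function getCruxLcp(origin) {
  const response = await fetch(
    'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_API_KEY',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ origin }),
    }
  );
  const data = await response.json();
  // p75 LCP in milliseconds for this origin's real Chrome traffic.
  return data.record.metrics.largest_contentful_paint.percentiles.p75;
}

getCruxLcp('https://example.com').then((p75) =>
  console.log(`p75 LCP: ${p75} ms`)
);
```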

AJ_O’NEAL: Anime, always anime. 

DAN_SHAPPIR: Which anime? Most animes are stupid.

AJ_O’NEAL: Well, Death Note, Fullmetal Alchemist, Sword Art Online. 

DAN_SHAPPIR: I didn't say all anime, I said most anime. There are a couple of good ones, but most of them, I lose interest within like the first episode. 

AJ_O’NEAL: We'll talk, Dan. 

AIMEE_KNIGHT: Dan, you better be careful and lock your doors at night now. 

DAN_SHAPPIR: Again, I didn't say all anime, I said most anime. 

AJ_O’NEAL: Well, most content is crap. 

DAN_SHAPPIR: Exactly. 

AJ_O’NEAL: Most anime falls within most content. We just talked about set theory.

DAN_SHAPPIR: I totally agree. 

CHARLES MAX_WOOD: All right, I'm gonna throw in some picks. 

DAN_SHAPPIR: If somebody can point me, sorry, if somebody can point me at the good anime, I'd appreciate it. 

CHARLES MAX_WOOD: All right, I've got another call I need to get to, so I'm going to do this real fast. I started reading another book, and some of you may or may not have heard of it. It's Atlas Shrugged, and I have to say, I'm about a quarter of the way through it. And by reading, I mean listening to it on Audible. I love this book. So far, it is just amazing. Isn't Ayn Rand something we're supposed to read in our teens or something? It was never something we had to read in any of the classes I took. But I'll tell you, I really identify with the characters in there, and with some of the stuff she talks about, I just look at the world today and go, oh yeah, this is applicable. So anyway, it's making me think, which is always a positive thing. So I'm going to pick that, and then go check out the Dev Influencers podcast at devinfluencers.com/podcast. And yeah, that's what I got. So we'll go ahead and wrap up here. Thanks, Dan. This was really interesting.

DAN_SHAPPIR: My pleasure. And as usual, for anybody who's interested in performance-related stuff or JavaScript stuff, hit me up on Twitter. I'm DanShappir on Twitter. I usually follow back, so go for it.

CHARLES MAX_WOOD: All right, folks, we're going to wrap up here. Until next time, Max out!

DAN_SHAPPIR: Bye. 

 

Bandwidth for this segment is provided by CacheFly, the world's fastest CDN. Deliver your content fast with CacheFly. Visit c-a-c-h-e-f-l-y dot com to learn more.

               